Asymmetric norm

In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm.

Definition

An asymmetric norm on a real vector space $X$ is a function $p:X\to [0,+\infty )$ that has the following properties:
• Subadditivity, or the triangle inequality: $p(x+y)\leq p(x)+p(y){\text{ for all }}x,y\in X.$
• Nonnegative homogeneity: $p(rx)=rp(x){\text{ for all }}x\in X$ and every non-negative real number $r\geq 0.$
• Positive definiteness: $p(x)>0{\text{ unless }}x=0.$
Asymmetric norms differ from norms in that they need not satisfy the equality $p(-x)=p(x).$ If the condition of positive definiteness is omitted, then $p$ is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: for $x\neq 0,$ at least one of the two numbers $p(x)$ and $p(-x)$ is not zero.

Examples

On the real line $\mathbb {R} ,$ the function $p$ given by $p(x)={\begin{cases}|x|,&x\leq 0;\\2|x|,&x\geq 0;\end{cases}}$ is an asymmetric norm but not a norm. In a real vector space $X,$ the Minkowski functional $p_{B}$ of a convex subset $B\subseteq X$ that contains the origin is defined by the formula $p_{B}(x)=\inf \left\{r\geq 0:x\in rB\right\}$ for $x\in X.$ This functional is an asymmetric seminorm if $B$ is an absorbing set, which means that $\bigcup _{r\geq 0}rB=X$ and ensures that $p_{B}(x)$ is finite for each $x\in X.$

Correspondence between asymmetric seminorms and convex subsets of the dual space

If $B^{*}\subseteq \mathbb {R} ^{n}$ is a convex set that contains the origin, then an asymmetric seminorm $p$ can be defined on $\mathbb {R} ^{n}$ by the formula $p(x)=\max _{\varphi \in B^{*}}\langle \varphi ,x\rangle .$ For instance, if $B^{*}\subseteq \mathbb {R} ^{2}$ is the square with vertices $(\pm 1,\pm 1),$ then $p$ is the taxicab norm $x=\left(x_{0},x_{1}\right)\mapsto \left|x_{0}\right|+\left|x_{1}\right|.$ Different convex sets yield different seminorms, and every asymmetric seminorm on $\mathbb {R} ^{n}$ can be obtained from some convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with convex sets that contain the origin. The seminorm $p$ is
• positive definite if and only if $B^{*}$ contains the origin in its topological interior,
• degenerate if and only if $B^{*}$ is contained in a linear subspace of dimension less than $n,$ and
• symmetric if and only if $B^{*}=-B^{*}.$
More generally, if $X$ is a finite-dimensional real vector space and $B^{*}\subseteq X^{*}$ is a compact convex subset of the dual space $X^{*}$ that contains the origin, then $p(x)=\max _{\varphi \in B^{*}}\varphi (x)$ is an asymmetric seminorm on $X.$

See also
• Finsler manifold – smooth manifold equipped with a Minkowski functional at each tangent space
• Minkowski functional – function made from a set
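As a quick numerical illustration of the dual-unit-ball formula above (a minimal sketch, not part of the original article): for a polytope $B^{*}$ the maximum of $\langle \varphi ,x\rangle$ is attained at a vertex, so the square with vertices $(\pm 1,\pm 1)$ reproduces the taxicab norm, while an off-center set such as a triangle gives a genuinely asymmetric seminorm.

```python
from itertools import product

def p(x, vertices):
    """Support function of the convex hull of `vertices` evaluated at x:
    p(x) = max over phi in B* of <phi, x>.  For a polytope the maximum is
    attained at a vertex, so checking the vertices suffices."""
    return max(sum(f * xi for f, xi in zip(phi, x)) for phi in vertices)

# B* = square with vertices (+/-1, +/-1)  ->  p is the taxicab norm
square = list(product([-1, 1], repeat=2))
for x in [(3, -4), (-2, 5), (0, 0)]:
    assert p(x, square) == abs(x[0]) + abs(x[1])

# B* = triangle with vertices (0,0), (1,0), (0,1)  ->  asymmetric seminorm
triangle = [(0, 0), (1, 0), (0, 1)]
print(p((1, 2), triangle), p((-1, -2), triangle))  # 2 and 0, so p(x) != p(-x)
```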
# Fitness function for TSP

The fitness function is a crucial component of a genetic algorithm. It evaluates the quality of a solution and determines how likely it is to be selected for reproduction. In the context of TSP, the fitness function measures the total distance of a tour, so a lower fitness value corresponds to a better solution.

To calculate the fitness of a tour, you can use the following formula:

$$\text{fitness} = \sum_{i=0}^{n-1} d(\text{tour}[i], \text{tour}[i+1])$$

where $d(a, b)$ is the distance between cities $a$ and $b$, $n$ is the number of cities, and the tour list ends with the starting city so that the return leg is included.

To implement the fitness function in Python, you can define a function that takes a tour (a list of city indices) and a distance matrix as input, and returns the fitness value.

```python
def fitness(tour, distance_matrix):
    # Sum the distances between consecutive cities; the tour is expected
    # to end with the starting city, so the return leg is included.
    return sum(distance_matrix[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
```

## Exercise

Calculate the fitness of the following tour for the TSP:

Tour: [0, 1, 2, 3, 0]

Distance matrix: [[0, 10, 20, 30], [10, 0, 25, 35], [20, 25, 0, 15], [30, 35, 15, 0]]

### Solution

The fitness of the tour [0, 1, 2, 3, 0] is $10 + 25 + 15 + 30 = 80$.

# Genetic algorithms for TSP

Genetic algorithms (GAs) are a class of evolutionary algorithms inspired by the process of natural selection. They are commonly used for optimization and search problems. In the context of TSP, GAs can be used to find approximate solutions to the problem.

The key components of a genetic algorithm for TSP are:

- Initialization: create a population of random tours.
- Selection: select tours based on their fitness.
- Crossover: combine the genes (city indices) of two selected tours to create new offspring.
- Mutation: randomly change some genes in the offspring.
- Replacement: replace the worst tours in the population with the new offspring.

In Python, you can implement a genetic algorithm for TSP using the `deap` library, which provides a framework for genetic algorithms. Because a tour is a permutation of city indices and shorter tours are better, the individuals are permutations, the fitness is minimized, and permutation-preserving crossover and mutation operators are used.

```python
import random

from deap import base, creator, tools

# Distance matrix for the 4-city example (replace with your own data)
distance_matrix = [[0, 10, 20, 30],
                   [10, 0, 25, 35],
                   [20, 25, 0, 15],
                   [30, 35, 15, 0]]
n_cities = len(distance_matrix)

# A tour is a permutation of city indices; shorter is better, so minimize
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

# Initialize the population with random permutations of the cities
toolbox = base.Toolbox()
toolbox.register("indices", random.sample, range(n_cities), n_cities)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.indices)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

# Evaluation: total tour length, including the return leg to the start city
def evaluate(individual):
    return (sum(distance_matrix[individual[i]][individual[(i + 1) % n_cities]]
                for i in range(n_cities)),)  # DEAP expects a tuple of fitness values

# Register permutation-preserving operators
toolbox.register("mate", tools.cxOrdered)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.register("evaluate", evaluate)

# Run the genetic algorithm
population = toolbox.population(n=50)
cxpb, mutpb, ngen = 0.7, 0.2, 50

for ind in population:
    ind.fitness.values = toolbox.evaluate(ind)

for _ in range(ngen):
    # Selection: choose the next generation and clone it
    offspring = [toolbox.clone(ind) for ind in toolbox.select(population, len(population))]

    # Crossover on consecutive pairs
    for ind1, ind2 in zip(offspring[::2], offspring[1::2]):
        if random.random() < cxpb:
            toolbox.mate(ind1, ind2)
            del ind1.fitness.values
            del ind2.fitness.values

    # Mutation
    for ind in offspring:
        if random.random() < mutpb:
            toolbox.mutate(ind)
            del ind.fitness.values

    # Re-evaluate individuals whose fitness was invalidated
    for ind in (ind for ind in offspring if not ind.fitness.valid):
        ind.fitness.values = toolbox.evaluate(ind)

    # Replacement
    population[:] = offspring

best = tools.selBest(population, 1)[0]
print(best, best.fitness.values[0])
```
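As a quick usage check (a hypothetical run, assuming the 4-city distance matrix defined above), the `evaluate` helper can be applied directly to the tour from the earlier exercise; note that it takes the open permutation and adds the return leg itself.

```python
example_tour = [0, 1, 2, 3]              # evaluate() adds the return leg 3 -> 0
print(evaluate(example_tour)[0])         # 80, i.e. 10 + 25 + 15 + 30
```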
# Implementing the genetic algorithm in Python

To implement the genetic algorithm for TSP in Python, you can use the `deap` library, which provides a framework for genetic algorithms. The key steps are:

1. Define a class to represent a tour.
2. Register functions for initialization, crossover, mutation, selection, and evaluation.
3. Initialize the population.
4. Run the genetic algorithm by iteratively selecting tours, performing crossover and mutation, and replacing the worst tours with the new offspring.

These four steps are exactly what the complete `deap` example in the previous section does, so that code can be reused here unchanged.

# Selection process in genetic algorithms

In genetic algorithms, selection is the process of choosing tours based on their fitness. The most common selection methods in GAs are:

- Tournament selection: select tours by randomly choosing $k$ tours from the population and keeping the best one.
- Roulette wheel selection: assign each tour a probability of being selected proportional to its fitness, and then select tours based on these probabilities (a standalone sketch of this method is given at the end of this text, after the real-world applications section).

In the example above, tournament selection is used.

# Example of TSP solution using genetic algorithms

Let's consider an example of a TSP problem with 4 cities: A, B, C, and D. The distance matrix is as follows:

```
   A   B   C   D
A  0  10  20  30
B 10   0  25  35
C 20  25   0  15
D 30  35  15   0
```

Using the genetic algorithm for TSP in Python, we can find an approximate solution to this problem. The resulting tour may look like this: [0, 1, 2, 3, 0], with a total distance of 80.

# Advanced topics in genetic algorithms for TSP

In practice, genetic algorithms for TSP can be further improved by using advanced techniques, such as:

- Multi-objective optimization: optimize multiple objectives simultaneously, for example minimizing the total distance while also minimizing the number of vehicles in routing variants of the problem.
- Local search: improve a solution by searching within its neighborhood (for example, 2-opt moves).
- Hybrid algorithms: combine genetic algorithms with other optimization techniques, such as simulated annealing or tabu search.

# Optimizing the genetic algorithm for TSP

There are several ways to optimize the genetic algorithm for TSP:

- Increase the population size: a larger population can lead to better exploration and exploitation of the search space.
- Adjust the mutation rate: a higher mutation rate increases diversity in the population and helps escape local optima, but if it is set too high it can disrupt good solutions and slow convergence.
- Use a more advanced selection method: tournament selection can be replaced with other selection methods, such as roulette wheel selection or best-fit selection.

# Real-world applications of genetic algorithms for TSP in Python

Genetic algorithms for TSP have been successfully applied in various real-world problems, such as:

- Vehicle routing problems: find optimal routes for delivering goods to multiple destinations.
- Project scheduling: determine the best sequence of tasks to minimize the total completion time.
- Resource allocation: allocate resources (such as workers or machines) to tasks in a way that minimizes the total completion time.

In these applications, the genetic algorithm can be used to solve the TSP as a subproblem and optimize the overall solution.
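For completeness, here is a minimal sketch of the roulette wheel (fitness-proportional) selection mentioned in the selection section above. This is an illustrative standalone function rather than part of the `deap` pipeline; because TSP is a minimization problem, the selection weights are taken as the inverse of the tour length.

```python
import random

def roulette_select(population, tour_lengths, k):
    """Fitness-proportional selection for a minimization problem:
    shorter tours receive proportionally larger selection weights."""
    weights = [1.0 / length for length in tour_lengths]
    return random.choices(population, weights=weights, k=k)

# Hypothetical usage: four candidate tours and their total distances
tours = [[0, 1, 2, 3], [0, 2, 1, 3], [0, 3, 1, 2], [0, 3, 2, 1]]
lengths = [80, 110, 110, 80]
parents = roulette_select(tours, lengths, k=2)
```

Note that `deap` also ships a built-in `tools.selRoulette`, but it assumes fitness values are to be maximized, which is why tournament selection is the more convenient choice for tour minimization.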
On the bilinear Hilbert transform along two polynomials
by Dong Dong
Proc. Amer. Math. Soc. 147 (2019), 4245-4258

We prove that the bilinear Hilbert transform along two polynomials $B_{P,Q}(f,g)(x)=\int _{\mathbb {R}}f(x-P(t))g(x-Q(t))\frac {dt}{t}$ is bounded from $L^p \times L^q$ to $L^r$ for a large range of $(p,q,r)$, as long as the polynomials $P$ and $Q$ have distinct leading and trailing degrees. The same boundedness property holds for the corresponding bilinear maximal function \[ \mathcal {M}_{P,Q}(f,g)(x)=\sup _{\epsilon >0}\frac {1}{2\epsilon }\int _{-\epsilon }^{\epsilon } |f(x-P(t))g(x-Q(t))|dt.\]

Dong Dong
Affiliation: Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, Maryland 20742
Email: [email protected]
Received by editor(s): January 25, 2017
Published electronically: June 27, 2019
Additional Notes: The author acknowledges the support from the Gene H. Golub Fund of the Mathematics Department at the University of Illinois.
Communicated by: Alexander Iosevich
Journal: Proc. Amer. Math. Soc. 147 (2019), 4245-4258
MSC (2010): Primary 42B20; Secondary 47B38
DOI: https://doi.org/10.1090/proc/14518
Decolorization and detoxification of triphenylmethane dyes by isolated endophytic fungus, Bjerkandera adusta SWUSI4 under non-nutritive conditions
Tiancong Gao1, Dan Qin1, Shihao Zuo1, Yajun Peng1, Jieru Xu1, Baohong Yu1, Hongchuan Song2 & Jinyan Dong1,3

Biodecolorization by microorganisms is a promising treatment technique because it appears to be environmentally safe. In the present study, the decolorization and detoxification of cotton blue, crystal violet, malachite green and methyl violet by endophytic fungi were investigated. Preliminary screening indicated that SWUSI4, identified as Bjerkandera adusta, showed the best decolorization of the four TPM dyes within 14 days. Optimization further demonstrated that the decolorization rate by live cells of isolate SWUSI4 could exceed 90% within 24 h when 4 g of biomass was added to 100 mL of dye solution at a concentration of 50 mg/L under shaking (150 rpm) conditions. Analysis of the decolorization mechanism showed that removal by isolate SWUSI4 involves both biosorption to the biomass and/or enzymatic degradation. Biosorption of the dyes was attributed to binding to hydroxyl, amino, phosphoryl, alkane, and ester–lipid groups based on Fourier transform infrared (FTIR) analyses. The biodegradation potential of SWUSI4 was further suggested by changes in the peaks of the ultraviolet–visible (UV–vis) spectra and by the detection of manganese peroxidase and lignin peroxidase activities. Finally, the phytotoxicity test confirmed that the toxicity of the TPM dyes after treatment with SWUSI4 was significantly lower than before treatment. These results indicate that the endophyte SWUSI4 could be used as a potential agent for TPM dye adsorption and degradation, thus facilitating the study of plant–endophyte symbiosis in bioremediation processes.

As one of the most important groups of synthetic dyes, triphenylmethane (TPM) dyes such as cotton blue (CB), crystal violet (CV), malachite green (MG) and methyl violet (MV), used in the textile, leather, cosmetic, pharmaceutical and paper industries, pose a direct threat to the environment owing to the toxicity and carcinogenicity of their degradation products (Daneshvar et al. 2007; Chaudhry et al. 2014; Barapatre et al. 2017; Przystas et al. 2018). As a consequence, effective removal of hazardous TPM dyes from aqueous solutions, as well as their detoxification, is crucial. Over the past decade, bioremediation, especially with microorganisms such as fungi, has been considered an excellent alternative, since removal proceeds by a green, low-cost and eco-friendly technology compared with traditional physico-chemical techniques (Kaushik and Malik 2009; Ali 2010; Turhan et al. 2012; Marcharchand and Ting 2017). Fungi remove dyes through two main mechanisms: biosorption and/or biodegradation (Ali 2010; Marcharchand and Ting 2017; Chen et al. 2019a, b, c). In most cases, the ability of fungi to degrade dyes has been primarily attributed to their enzymes, such as manganese peroxidase (MnP), lignin peroxidase (LiP) and laccase (Lac) (Zhuo et al. 2011; Barapatre et al. 2017; Wang et al. 2017). This has been demonstrated in widely distributed fungi, for example Phanerochaete chrysosporium (Bumpus and Brock 1988), Trametes versicolor (Casas et al. 2009), Irpex lacteus (Yang et al. 2016), Coriolopsis sp. (Chen and Ting 2015), and Pleurotus ostreatus (Morales-Alvarez et al. 2018).
To the best of our knowledge, many previous studies have focused on using fungi (especially white-rot fungi) to decolorize dyes (Yang et al. 2009; Almeida and Corso 2018). However, the use of endophytic isolates for the decolorization of TPM dyes has been less explored. Endophytic fungi can metabolize organic contaminants and assist in plant growth, thus facilitating the phytoremediation of polluted environments (Shang et al. 2019). Additionally, the application of fungal biomass in dye wastewater treatment technology has attracted increasing attention (Kaushik and Malik 2009; Chen et al. 2019a, b, c). Hence, endophytic fungi may be promising candidates for the bioremediation of soil, water, and other polluted media. Based on the above reasons, the aim of this work was to study the potential use of endophytic fungi isolated from the plant Sinosenecio oldhamianus as biological agents to remove TPM dyes. The ability of an endophytic isolate to decolorize TPM dyes under non-nutritive conditions was evaluated. By studying the effects of decolorization conditions (fungal biomass, initial dye concentration, shaking and static incubation), UV–vis absorption spectra, Fourier transform infrared (FTIR) spectra, enzyme activities and phytotoxicity, this work may facilitate a better understanding of the role of endophytic fungi in the phytoremediation of TPM dyes.

Endophytic fungi

The endophytic fungi used in this study were isolated from the roots of Sinosenecio oldhamianus. Plant samples were collected from the area surrounding a factory at Shapingba (106.46° E, 29.65° N) in Chongqing, China. The samples were chosen randomly, regardless of their age or size. All samples were immediately brought to the laboratory and stored at 4 ℃ in a refrigerator. Each sample tissue was used for the isolation of endophytic fungi within 24 h of collection. The endophytic isolates were obtained using the methods described by Huang et al. (2015). Briefly, the roots of the Sinosenecio oldhamianus samples were thoroughly washed in running tap water to remove debris and then air-dried naturally. Subsequently, the samples were surface sterilized individually with 75% ethanol for 5 min and rinsed with sterile distilled water 3 times, followed by immersion in 0.1% (v/v) mercuric chloride (HgCl2) for 3 min. Afterwards, the samples were transferred to potato dextrose agar (PDA) medium (supplemented with 60 mg/mL of streptomycin and 100 mg/mL of ampicillin). The inoculated plates were incubated at 28 ℃ for 2–15 days to allow the growth of endophytic fungal hyphae, and checked regularly until pure cultures were obtained.

Screening of decolorized strains

The isolated endophytes were grown on mineral basal medium (MBM) agar plates and preliminarily screened for their ability to decolorize four types of TPM dyes (CV, MV, MG and CB). The MBM contained (g/L): FeSO4·7H2O, 0.01; ZnSO4·7H2O, 0.01; MgSO4·7H2O, 0.5; CuSO4·5H2O, 0.05; KCl, 0.5; K2HPO4, 1; NaNO3, 3; starch, 10; and agar, 18. Each TPM dye (CV, MV, MG and CB) was added to the MBM medium to a final concentration of 100 mg/L. The agar plates were inoculated with a 5-mm2 agar plug from a 7-day-old fungal culture and incubated at 28 ℃ for 14 days, and the colonies that showed the largest decolorization zones were finally selected. Un-inoculated plates with the respective dyes were used as controls. Each isolate was prepared in triplicate.
The fungal isolate with the highest decolorization potential against TPM dyes was identified taxonomically based on its morphological characteristics and by comparison of ITS sequences, as described by Qin et al. (2018). Briefly, the morphological appearance of the selected fungal isolate was characterized based on mycelium color, growth pattern, and the morphology of vegetative spore structures. For molecular identification, genomic DNA was extracted from 1 g of mycelia chilled in liquid nitrogen using the CTAB method (Zhang et al. 2006). The extracted fungal DNA was then PCR-amplified using the universal primers ITS1 and ITS4 under the following conditions: an initial denaturing step at 94 °C for 3 min, followed by 32 amplification cycles at 94 °C for 30 s, 56 °C for 30 s, and 72 °C for 90 s, and a final extension at 72 °C for 10 min. PCR products were then purified and sequenced. The fungus was classified by comparing its ITS sequence with the data available in NCBI using a BLAST search (https://www.ncbi.nlm.nih.gov/). The resulting sequences were aligned with the Clustal X software (Larkin et al. 2007), with gaps treated as missing data. A phylogenetic tree was built by the neighbor-joining method using MEGA 6.0 software, with 1000 bootstrap replications to assess the reliability of the inferred tree (Tamura et al. 2011).

Dye decolorization batch experiments under non-nutritive conditions

Fungal biomass was first produced by inoculating 100 mL potato dextrose broth (PDB) with three mycelium plugs (0.8 cm in diameter) to generate sufficient biomass for the dye decolorization experiments. The inoculated PDB was incubated as a standing culture (28 ± 2 °C) for 5–7 days, and the biomass was subsequently filtered through sterile filter paper; the fresh biomass was then harvested and washed with sterile distilled water. Each TPM dye (CV, MV, MG, and CB) was weighed and dissolved in 100 mL autoclaved distilled water to produce the dye solutions. To analyze the effects of different conditions on the efficiency of TPM dye decolorization, batch experiments were conducted at different fungal biomass inoculum sizes (1–8 g), initial dye concentrations (50–250 mg/L), and static (0 rpm) versus shaking (150 rpm) conditions, using the decolorization percentage as the index. Untreated (non-inoculated) dye solutions served as controls. The samples were centrifuged at 5000 rpm for 10 min. The supernatants were measured by monitoring the absorbance of each dye in the culture medium at its respective maximum absorption wavelength (590 nm for CV, 585 nm for MV, 617 nm for MG and 599 nm for CB) using a UV–vis spectrophotometer (Libra S12, Biochrom). The dye removal potential was expressed as the decolorization efficiency (DE, %), calculated as follows (a short computational sketch of this calculation appears below, after the mechanism-analysis subsection):
$$ {\text{DE}}(\% ) = \frac{{{\text{initial absorbance}} - {\text{observed absorbance}}}}{{\text{initial absorbance}}} \times 100. $$

Analysis of the decolorizing mechanism

Analysis of biomass absorption and enzymolysis contributions to the decolorization of TPM dyes

TPM dye solution (50 mg/L) was mixed with fungal biomass (4 g) under shaking (150 rpm) conditions at 30 ℃, and the decolorization rate was determined as in Sect. 2.4. Control experiments with dead biomass, sterilized at 121 ℃ for 30 min, were also carried out (Ting et al. 2016; Wang et al. 2017).
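As referenced above, a minimal computational sketch of the decolorization-efficiency formula follows; the absorbance readings here are hypothetical and are not taken from the study.

```python
def decolorization_efficiency(initial_abs, observed_abs):
    """DE (%) = (initial absorbance - observed absorbance) / initial absorbance * 100."""
    return (initial_abs - observed_abs) / initial_abs * 100

# Hypothetical absorbance values at a dye's maximum absorption wavelength
print(decolorization_efficiency(1.25, 0.10))   # 92.0, i.e. about 92% decolorization
```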
UV–vis spectra of dye solution

UV–vis spectral analysis was performed to determine the possible occurrence of dye removal after treatment with the fungal strain, by comparing the change in the peaks of dye samples at the initial stage of the experiment and post-treatment (from Sect. 2.5.1). Absorption peaks for each TPM dye were detected by UV–vis spectroscopy (TECAN®, Infinite M 200 plate reader) at wavelengths of 400–800 nm. Spectral peaks were plotted and compared (Kalpana et al. 2012).

Fourier transform infrared spectroscopy of fungi

FTIR analysis was performed to characterize the functional groups present on the cell walls of live and dead cells of the fungal biomass. To confirm the presence and role of functional groups on the fungal biomass in biosorption, FTIR spectra of the biomass samples (from Sect. 2.5.1) before and after biosorption were obtained using an FTIR spectrometer (Perkin Elmer, USA) at ambient temperature. Data were collected within the mid-infrared region from 4000 to 400 cm−1 (20-scan speed) (Chew and Ting 2016; Munck et al. 2018).

Enzyme assays

Enzyme activities were estimated spectrophotometrically using the optimized crude supernatant fractions. The control was prepared by inoculating PDB with 5 mycelium plugs (5 mm2) and incubating with agitation (150 rpm, 30 ± 2 °C). Enzyme activity was measured after 24 h in the treatment and control groups. MnP activity was determined by the oxidation of MnSO4 (Asgher et al. 2016). The reaction mixture had a final volume of 2 mL containing 0.05 M MnSO4 in 0.1 M sodium acetate buffer (pH 5) and supernatant (enzyme sample). The reaction was initiated by the addition of hydrogen peroxide at a final concentration of 0.5 mM. The mixture was incubated for 10 min at 25 ℃ and the increase in absorbance was measured at 470 nm. LiP activity was measured by the oxidation of veratryl alcohol to veratryl aldehyde. The final 3.0 mL reaction volume contained 0.2 M tartaric acid, 10 mM veratryl alcohol, 2 M H2O2 and supernatant (enzyme sample). Controls were maintained without the addition of H2O2. The samples were incubated at 25 ℃ for 15 min and the increase in absorbance was monitored spectrophotometrically at 310 nm (Takamiya et al. 2008). The Lac assay was performed by detecting the oxidation of 2,2′-azinobis (3-ethylbenzthiazoline)-6-sulfonate (ABTS) in control and treated samples via the colorimetric change at 420 nm (Chen and Ting 2015a, b). The reaction mixture contained 5 mM ABTS in 0.1 M sodium acetate buffer (pH 5.0) and supernatant (enzyme sample). One enzyme unit was defined as the amount of enzyme required to catalyze the conversion of 1 micromole of substrate per minute at 25 °C.

Phytotoxicity studies

The toxicity of the TPM dyes before and after treatment was assessed against two commonly cultivated crops, Vigna radiata and Zea mays. About 5 mL of each 100 mg/L TPM dye solution, with and without fungal treatment (biomass 2 g, 150 rpm, 30 ± 2 °C), was used for the assay. Ten seeds of each plant were sown in Petri plates, with sterile distilled water as a control, and cultured at 28 ℃ for 5 days (Kalpana et al. 2012). Phytotoxicity was evaluated on the basis of the germination percentage (%) and the shoot (plumule) and root (radicle) lengths measured after incubation for 5 days. All data in this paper are the mean values (± SD) of three independent replicates. SPSS 19.0 software was used for statistical analysis.
Statistically significant differences among the experimental treatments were analyzed using one-way analysis of variance (ANOVA) followed by Duncan's test at the 0.05 probability level.

Screening of TPM dye-decolorizing strains

In this study, 14 strains of endophytic fungi were isolated from the roots of Sinosenecio oldhamianus. The decolorization capability of these endophytic isolates was assessed on MBM plates with 4 types of TPM dyes (CV, MV, MG and CB). The tested fungal strains exhibited differing potentials for TPM dye decolorization on solid medium. It is worth noting that only one endophyte, isolate SWUSI4, was able to decolorize all four different TPM dyes and also demonstrated the highest decolorization potential (Additional file 1: Table S1 and Fig. S1). This result is consistent with most observations (Jasinska et al. 2012; Yang et al. 2016; Marcharchand and Ting 2017), suggesting that different fungi have different dye decolorization abilities. At the same time, the decolorization potential is related to the adaptability and activity of the selected strains. Our results clearly indicated that strain SWUSI4 had the capability to decolorize all four tested TPM dyes on solid culture (Additional file 1: Table S1 and Fig. S1). Generally, strains isolated from contaminated areas have been reported to have a strong tolerance to contaminants and potential for their remediation (Yang et al. 2009; Almeida and Corso 2018). This study is also the first to report the effectiveness of an endophytic fungus from dye-contaminated Sinosenecio oldhamianus and its potential in decolorizing TPM dyes. Hence, isolate SWUSI4 was selected for further taxonomic identification as well as a more detailed decolorization study.

Identification of isolate SWUSI4

Morphologically, isolate SWUSI4 produces white, fast-growing colonies with a diameter of 8.34 cm (± 0.26) after 5 days of growth on PDA at 28 ºC; the aerial mycelium is abundant and woolly, at first white but later becoming yellowish, with a white reverse (Additional file 1: Fig. S2A, B). Advancing hyphae are dichotomously branched, 4.54 µm (± 1.76) in diameter, and air-borne hyphae divide into one-celled arthroconidia which remain cylindrical or become ellipsoidal or slightly barrel-shaped, 4.08 (± 0.81) × 6.20 (± 2.16) µm (Additional file 1: Fig. S2C). This is consistent with the morphological description of B. adusta R59 in the published literature (Kornillowicz-Kowalska et al. 2006). For isolate SWUSI4, the ITS (5.8S rDNA) region was sequenced and the isolate was classified in the genus Bjerkandera on the basis of neighbor-joining analysis compared with other similar fungal strains (Fig. 1). This identity was supported by a nucleotide sequence homology of 99% (E value of 0.0; 99% query coverage) with B. adusta strain MJ01 deposited in GenBank (accession number HQ327995.1) and B. adusta strain SM27 deposited in GenBank (accession number KU055647.1). Based on this molecular taxonomic investigation, strain SWUSI4 was identified as B. adusta, and its nucleotide sequence was submitted to GenBank under accession number MN640911.

Fig. 1 Neighbor-joining tree based on the ITS rDNA sequence of the fungus SWUSI4 and its closest ITS rDNA matches in GenBank

Dye decolorization activities of B. adusta SWUSI4 under different conditions

Effects of biomass on dye removal

The effect of biomass dosage was evaluated at a dye concentration of 100 mg/L (100 mL) at 30 ℃ for 14 days.
As revealed in Fig. 2a, the application of 4 g of biomass was sufficient to achieve decolorization efficiencies (DE %) of 85% for CV and 95% for MG, while CB and MV required 6 g to achieve maximum DEs of 89% and 92%, respectively. For CV and MG, the maximum DE was observed at a biomass dosage of 6 g, but there were no conspicuous differences between biomass dosages of 6 g and 4 g (Fig. 2a). The benefit of using more biomass has also been reported in other studies (Kaushik and Malik 2009; Wang et al. 2017; Bankole et al. 2018; Almeida and Corso 2018), and is primarily attributed to the larger number of binding sites and greater amount of enzymes, which give rise to biosorption and biodegradation, respectively.

Fig. 2 Optimum decolorization conditions for isolate SWUSI4. a Influence of biomass (1, 2, 4, 6 and 8 g, respectively) of SWUSI4 on the decolorization of CV, MV, MG and CB. b Influence of initial dye concentrations (50, 100, 150, 200 and 250 mg L−1, respectively) on the decolorization of CV, MV, MG and CB. c Influence of static and shaking conditions on the decolorization of CV, MV, MG and CB. Note: Means with the same letters and captions are not significantly different at the honestly significant difference (HSD (0.05)). Mean comparisons are made for means with the same fonts. For example, a mean designated "A" compared to another mean with "A" in the Duncan grouping is not significantly different, while "A" compared to "B" and "C" is significantly different. Similarly, means with "a" are not significantly different compared to "a" but are for "b" and "c". "**" indicates significant difference based on t-test (**p < 0.01, ***p < 0.001). Bars indicate standard deviation of the mean (± SD)

In this study, TPM dyes were effectively decolorized by the endophytic fungus (isolate SWUSI4) under non-nutritive conditions. Interestingly, the successful decolorization of TPM dyes by isolate SWUSI4 is similar to that of other environmental isolates, such as Trichoderma asperellum and Penicillium simplicissimum, revealing the nature of fungi as dye degraders (Chen and Ting 2015a, b; Marcharchand and Ting 2017). We therefore compared the decolorization rates of endophytic B. adusta SWUSI4 with strains reported previously. For example, Marcharchand and Ting (2017) reported that CV, MV, MG and CB (100 mg/L) were decolorized up to 11, 67, 76 and 57%, respectively, by Trichoderma asperellum within 336 h (14 days). Another non-white-rot fungus, Penicillium simplicissimum, showed 76, 79, 54 and 64% decolorization of CV, MV, MG and CB (100 mg/L) within 336 h (14 days), respectively. The endophytic fungus SWUSI4, by comparison, achieved decolorization efficiencies for CV, MV, MG and CB (100 mg/L) of 72%, 81%, 91% and 70%, respectively, within 14 days. Thus, the decolorization efficiency of endophytic strain SWUSI4 for CV, MV, MG and CB is generally greater than that of the above-mentioned control strains. In general, decolorization efficiency rises with increasing biomass dosage at a given dye concentration. However, in the present study, when the biomass was increased from 6 to 8 g, the DE declined significantly, from 84 to 74% (CV), 92 to 83% (MV), 96 to 82% (MG) and 89 to 77% (CB), respectively (Fig. 2a). This result showed that a higher biomass (6 or 8 g) suppressed decolorization efficiency, which was attributed to the initial dye concentration in the culture medium rather than to the dosage of inoculum (Chen et al. 2015a).
Hence, the results obtained from this investigation demonstrate that the biomass of an endophytic fungus not only can decolorize TPM dyes but, at a suitable dosage, can also be employed efficiently as a low-cost and eco-friendly biosorbent for TPM dye removal.

Influence of initial dye concentrations on decolorization

The effect of the initial dye concentration on the decolorization ability of SWUSI4 was studied by adding fungal biomass (2 g) to CV, MV, MG and CB solutions (100 mL) with different initial concentrations (50, 100, 150, 200 and 250 mg/L). The decolorization efficiency (DE) was calculated after 14 days at 30 ℃. For the four tested TPM dyes, it was evident that the DE declined as the initial dye concentration increased. At a dye concentration of 50 mg/L, SWUSI4 achieved 85, 90, 94 and 80% DE for CV, MV, MG and CB, respectively (Fig. 2b), whereas at an initial dye concentration of 250 mg/L the DE reached only 34%, 37%, 67% and 35% for CV, MV, MG and CB, respectively (Fig. 2b). Similar results have been reported by Lin et al. (2010), who found that the decolorization efficiency of Mucoromycotina sp. declined with increasing initial dye concentrations. The implications of high initial dye concentrations agree with most investigations (Chen and Ting 2015; Wang et al. 2017; Almeida and Corso 2018), suggesting that the toxicity of the dye becomes more pronounced at higher concentrations, which may suppress microbial growth. In order to improve the decolorization ability, a 50 mg/L TPM dye solution was chosen as the optimum dye concentration.

Effect of shaking and stationary conditions

To determine the effects of static and shaking conditions on decolorization, the experiment was conducted under two different conditions (0 rpm and 150 rpm) by treating 100 mL of dye solution with 2.0 g of fungal biomass at an initial dye concentration of 100 mg/L, with the other experimental conditions remaining constant. Comparatively, SWUSI4 was more effective in decolorizing TPM dyes under the shaking condition than under the static condition (Fig. 2c). The DEs for CV, MV and CB were significantly higher under shaking, with means of 72%, 81% and 70%, compared with 27%, 47% and 58% under static conditions, respectively (Fig. 2c). Nevertheless, the DE of MG under the shaking condition was similar to that under the static condition (90% vs. 91%) (Fig. 2c). According to previous reports, the higher decolorization under shaking than under static conditions depends primarily on oxidative reactions by key enzymes such as LiP and MnP (Shedbalkar et al. 2008; Zhuo et al. 2011). Besides, it has been reported that shaking conditions were better for faster and more complete adsorption and decolorization of MV and CV by Coriolopsis sp., as well as of CB by Penicillium simplicissimum KP713758 or Coriolopsis sp., compared with static conditions (Chen and Ting 2015). In other cases, however, the decolorization process does not require oxygen and most probably involves reductive reactions by a different set of reductases. We assume that the discrepancy in decolorization between static and shaking incubation conditions is related to the fungal species. All in all, our results demonstrate that decolorization can be more efficient under shaking incubation conditions.
Biomass absorption and enzymolysis contributions of B. adusta to the removal of TPM dyes

To evaluate the contributions of biodegradation and biosorption, the decolorization of TPM dyes was performed separately with live cells and dead cells under the optimized conditions. As shown in Fig. 3, when TPM dye solutions (50 mg/L) were mixed with fungal biomass (4 g) under shaking (150 rpm) at 30 ℃ for 7 days, the decolorization efficiencies of live cells for MG, MV, CB and CV were 97%, 94%, 94% and 92%, respectively. By contrast, dead cells showed decolorization capacities of 72%, 71%, 64% and 53% for MG, MV, CB and CV, respectively. Furthermore, decolorization by live cells of B. adusta SWUSI4 was rapid compared with dead cells: live cells achieved DEs of 91%, 94%, 96% and 93% for CV, MV, MG and CB within 24 h, whereas dead cells achieved only 45%, 63%, 68% and 55%, respectively, within the same period (Additional file 1: Fig. S3). In general, a higher DE by live cells as opposed to dead cells has also been reported in other studies (Ting et al. 2016; Przystas et al. 2018; Chen et al. 2019a, b, c). This has been primarily attributed to biodegradation by live cells, because they can produce ligninolytic enzymes such as MnP, LiP and Lac (Srinivasan and Viraraghavan 2010; Marcharchand and Ting 2017; Munck et al. 2018).

Fig. 3 Influence of live cells and dead cells on the decolorization of CV, MV, MG and CB by SWUSI4 at 30 ± 2 °C. "***" indicates significant difference based on paired t-test (p < 0.01). Bars indicate standard deviation of the mean (± SD)

It is well known that decolorization occurs first by adsorption onto the fungal mycelium and is then followed by the action of enzymes in live cells (Parshetti et al. 2011), whereas decolorization depends merely on absorption once the mycelia are dead (Wang et al. 2017). On the other hand, according to Casas et al. (2009), the occurrence of biosorption can be inferred from dye-colored fungal cells after decolorization. In our study, after decolorization the dead cells were stained the same color as the corresponding TPM dyes, whereas the dye solutions treated with live cells turned lighter (Additional file 1: Fig. S4). This result indicates that live cells may degrade the dyes to a certain extent through their enzymes. Thus, for live cells, absorption presumably played the major role in decolorization, while degradation also made a definite contribution.

UV–vis analysis

As shown in Fig. 4, for treatments with both live and dead cells of B. adusta SWUSI4 for 7 days, the absorbance peaks were clearly reduced or had disappeared after decolorization. According to previous reports (Ting et al. 2016; Ortiz-Monsalve et al. 2019; Munck et al. 2018), the disappearance or reduction of dye peaks can be attributed to enzymatic biodegradation or to biosorption by the biomass. In our results, the complete disappearance of the maximum absorption peaks was clearly observed for CB and MG treated with live cells of B. adusta (Fig. 4a, c). For CV and MV treated with live cells, the corresponding maximum absorption peaks decreased dramatically in intensity after treatment with SWUSI4 (Fig. 4b, d). For CV, MV, MG and CB treated with dead cells of B. adusta, the maximum absorption peaks remained detectable after 7 days (Fig. 4e–h).
Furthermore, the degree of decrease of the maximum absorption peaks appears to be proportional to the decolorization percentage of the corresponding dyes, especially from day 0 to day 1, irrespective of whether live or dead cells of B. adusta SWUSI4 were used. It has previously been reported that a decrease in absorbance peaks and the appearance of new peaks reflect the removal of dye via biodegradation leading to biodecolorization, while dead cells reduce the absorption peak via biosorption (Asad et al. 2007; Chen and Ting 2015). Taken together with the decolorization percentages, the UV–vis analysis indicates that the decolorization of the four tested TPM dyes by live cells of isolate SWUSI4 involved both biodegradation and biosorption, while dead cells acted only via biosorption.

Fig. 4 UV–vis spectra derived from a CB + live cells, b CV + live cells, c MG + live cells, d MV + live cells, e CB + dead cells, f CV + dead cells, g MG + dead cells, h MV + dead cells. Analysis was conducted at the start of the experiment (day 0) and at day 1 and day 7 post-treatment, respectively

FTIR analyses

In the present study, the FTIR analysis revealed no significant differences in the number of functional groups present on the cell walls of the live-cell and dead-cell forms of SWUSI4. The primary functional groups include hydroxyl, amino, phosphoryl, alkane, and ester–lipid groups (Tables 1 and 2). For live cells, treatment with CV, MV, MG and CB caused peaks to shift at 3394 cm−1 (representing O–H and N–H groups), 2368 cm−1 (C=C stretching of ester), 1654 cm−1 (C=O stretching and N–H deformation of the amide I band), 1076 cm−1 (denoting C–C, C=C, C–O–C and C–O–P groups of polysaccharides) and 528 cm−1 (nitro compounds and disulphide groups) (Table 1). Interactions between the phosphoryl group and TPM dyes may have occurred, according to the shift of the peak at 1076 cm−1 and the disappearance of the peak at 1149 cm−1. Furthermore, another peak at 1251 cm−1 (also denoting the phosphate group) was masked after CV treatment and shifted to 1261 cm−1 after MG treatment, although no change in this peak was observed for CB- and MV-treated live cells. In addition, a new peak at 1741 cm−1 (C=O group) was detected in all dye-treated live cells. By contrast, the involvement of the C–H stretching vibrations at 2926 and 2339 cm−1 and of the amide III group at 1456 cm−1 in dye adsorption was not prominent (Table 1). On the other hand, dye-treated dead cells displayed changes in vibrational frequencies similar to those of live cells, though with more new peaks appearing (Table 2). The peaks at 3415, 2368, 1404 and 1033 cm−1 shifted after exposure to the four dyes. In addition, a weak shift from 2926 to 2924 cm−1 (C–H asymmetric stretching) was observed for CV-, MV- and MG-treated dead cells. All four dyes caused the disappearance of two peaks at 1327 and 775 cm−1 and the emergence of a new peak at 1741 cm−1 (C=O stretching of ester). Further, a new peak at 2857 cm−1 (C–H stretching) was detected in CB-treated dead cells, whereas the peak at 927 cm−1 was masked in all treated dead cells except those exposed to MG. The MV and MG treatments resulted in the emergence of a new peak at 1342 cm−1 (amide III) and shifting of the existing peaks at 1456 and 1404 cm−1 (C–N stretching) (Table 2). Additionally, the CV and CB treatments led to the emergence of a new peak at 1344 cm−1 and shifting of the existing peak at 1404 cm−1.
Table 1 Exemplary FTIR band positions (cm−1) of live cells of isolate SWUSI4 before and after treatment with CV, MV, MG and CB

Table 2 Exemplary FTIR band positions (cm−1) of dead cells of isolate SWUSI4 before and after treatment with CV, MV, MG and CB

The changes in vibrational frequencies recorded by FTIR analyses of the chemical surface composition of dye-treated B. adusta confirmed the involvement of biosorption in dye removal. For example, Yang et al. (2011) showed that the biosorption of Acid Blue 25 by dead (autoclaved) Penicillium YW 01 involved amine, amide and carboxyl groups. For another fungus, Aspergillus fumigatus, the removal of Acid Violet 49 by dead cells (killed by freezing) was attributed to amino, carboxyl, phosphate and sulfonyl groups (Chaudhry et al. 2014). Similarly, Chen et al. (2019a, b, c) reported that the removal of TPM dyes by live and dead (autoclaved) cells of Penicillium simplicissimum involved amino, hydroxyl, phosphoryl and nitro groups, among others. This is evidence that killing the cells by autoclaving did not have severe implications for the functional groups on the cell wall, as most of the major functional groups were still detected. These may also be typical functional groups on the cell walls of a variety of fungi (Marcharchand and Ting 2017; Chew and Ting 2016). Additionally, the number of functional groups does not necessarily correlate with dye-removal potential, since the number of functional groups present on the cell wall is almost the same for the live-cell and dead-cell forms of SWUSI4. This further suggests that the higher decolorization efficiency of live cells may be attributed to the role of secreted enzymes. Nevertheless, positively and negatively charged groups are important for attracting basic and acidic dyes through electrostatic attraction, which is the basis of the biosorption mechanism of dye removal (Marcharchand and Ting 2017). Therefore, the involvement of the chemical groups of SWUSI4 in the removal of TPM dyes substantiates biosorption as part of the dye-removal mechanism.

To gain additional insight into the enzymolysis contribution of SWUSI4 to dye removal, the activities of Lac, MnP and LiP were monitored after 24 h in the treatment and control groups. As shown in Table 3, our results demonstrated that SWUSI4 produced more LiP and MnP in the presence of TPM dyes, as their levels were significantly induced compared with the controls. In contrast to LiP and MnP, Lac activities were significantly lower than in the control. The levels of MnP and LiP were significantly higher than the Lac levels for all TPM dyes, indicating that SWUSI4 may rely more on LiP and MnP for biodegradation. Additionally, MnP and LiP activities in MG were higher than those in MV, indicating that the relative contribution of each enzyme to dye decolorization differs for SWUSI4.

Table 3 Activities of laccase (Lac), lignin peroxidase (LiP) and manganese peroxidase (MnP) assayed from isolate SWUSI4 cultures exposed to TPM dyes compared with the control (in potato dextrose broth only)

Generally, the relative contributions of LiP, MnP and Lac to the decolorization of dyes may differ for each fungus (Srinivasan and Viraraghavan 2010; Wang et al. 2017). Previously, Phanerochaete chrysosporium has been reported as a LiP producer able to decolorize CV (Bumpus and Brock 1988), Irpex lacteus as an MnP producer able to decolorize MG (Yang et al. 2016; Duan et al.
2018) and Pleurotus ostreatus as a Lac producer able to decolorize MG and CV (Morales-Alvarez et al. 2018). Besides the fungus itself, the contribution of these enzymes to the biodegradation process depends highly on the type of dye (Al Farraj et al. 2019). For instance, Chen and Ting (2019) revealed that manganese peroxidase activities were significantly enhanced in response to MG, whereas only tyrosinase activities were higher when the fungus was inoculated into MV and CB. Similarly, in Penicillium simplicissimum (isolate 10), higher levels of LiP were detected in cultures supplemented with MV and CV (Chen and Ting 2015). For these reasons, we studied the relative contributions of the LiP, MnP and Lac produced by endophytic isolate SWUSI4 to the degradation of TPM dyes. Overall, strain SWUSI4 showed higher MnP and LiP activities than Lac activity in this study. As such, MnP presumably played an important role in the decolorization, with LiP also contributing, and these enzymes can therefore drive the biodegradation of TPM dyes to some extent.

Phytotoxicity test

Parshetti (2010) reported that plants belong to the group of sensitive indicators of remediation. Therefore, two common crops, Vigna radiata and Zea mays, were used for the toxicity tests in this study. The germination rate and the shoot and root lengths of the germinated seeds were recorded and are presented in Table 4. The seed germination rate of Vigna radiata was not affected by CB (100%), but was inhibited by CV, MV and MG (85, 90 and 80%, respectively). Similarly, germination of Zea mays in CV, MV, MG and CB was 70, 80, 60 and 90%, respectively, compared with sterile distilled water (Table 4). Meanwhile, after treatment with isolate SWUSI4, seed germination of Vigna radiata reached 100% for all four tested TPM dyes compared with the untreated dye samples. Similarly, seed germination of Zea mays increased to 93, 95, 90 and 100% for CV, MV, MG and CB after treatment, respectively. On the other hand, compared with the untreated TPM dyes, the significantly enhanced growth of the shoots and roots of Vigna radiata and Zea mays suggests reduced toxicity after treatment (p < 0.05) (Table 4). In addition, Vigna radiata was less affected than Zea mays, suggesting that Zea mays might have a higher sensitivity to dye toxicity than Vigna radiata. On the whole, the results suggest that all four tested TPM dyes were toxic to both plants, while the metabolites formed after treatment were less toxic or nontoxic, which signifies the detoxification of the TPM dyes by B. adusta SWUSI4 (Chen et al. 2019a, b, c).

Table 4 Results of phytotoxicity tests of the TPM dyes and their metabolites produced by the fungus SWUSI4 on Vigna radiata and Zea mays

This study is the first to report the effectiveness of endophytic fungi isolated from Sinosenecio oldhamianus in decolorizing TPM dyes (CV, MV, MG, and CB). Among them, isolate SWUSI4 (B. adusta) had the best decolorization ability for the four tested TPM dyes. The decolorization efficiency of SWUSI4 was influenced by the initial dye concentration, fungal biomass, and shaking. The effective biosorption of TPM dyes by isolate SWUSI4 is mainly attributed to the predominance of hydroxyl, amino, phosphoryl, alkane and other groups on the surface of the cell wall of SWUSI4.
Biodegradation played an important role in dye decolorization, which resulted in reduced absorbance peaks for dyes, LiP and MnP levels were significantly induced, and degraded products were also found to be less toxic compared with before degradation. Hence, this work indicated that B. adusta SWUSI4 has potential application prospects for the biodecolorization and detoxification of TPM dyes. Data availability statements All data generated or analyzed during this study are included in this published article and its additional files. TPM: Triphenylmethane Crystal violet MV: Methyl violet MG: Malachite green CB: Cotton blue LiP: Lignin peroxidase MnP: Manganese peroxidase Lac: Laccase Potato dextrose ITS: Internal transcribed spacers DNA: CTAB: Cetyltrimethylammonium bromide NCBI: ABTS: 2,2′-Azinobis (3-ethylbenzthiazoline)-6-sulfonate Decolorization efficiency FTIR: Fourier transform infrared UV–vis: Ultraviolet–visible Farraj A, Elshikh DA, Khulaifi MS, Hadibarata MM, Yuniarto T, Syafiuddin A (2019) Biotransformation and Detoxification of Anthraquinone Dye Green 3 using halophilic Hortaea sp. Int Biodeter Biodegrad 140:72–77. https://doi.org/10.1016/j.ibiod.2019.03.011 Ali H (2010) Biodegradation of synthetic dyes—a review. Water Air Soil Pollut 213(1–4):251–273. https://doi.org/10.1007/s11270-010-0382-4 Almeida EJ, Corso CR (2018) Decolorization and removal of toxicity of textile azo dyes using fungal biomass pelletized. Int J Environ Sci Technol 16(3):1319–1328. https://doi.org/10.1007/s13762-018-1728-5 Asad S, Amoozegar MA, Pourbabaee AA, Sarbolouki MN, Dastgheib SM (2007) Decolorization of textile azo dyes by newly isolated halophilic and halotolerant bacteria. Bioresour Technol 98(11):2082–2088. https://doi.org/10.1016/j.biortech.2006.08.020 Asgher M, Ramzan M, Bilal M (2016) Purification and characterization of manganese peroxidases from native and mutant Trametes versicolor IBL-04. Chin J Catal 37(4):561–570. https://doi.org/10.1016/S1872r2067(15)61044S1 Bankole PO, Adekunle AA, Govindwar SP (2018) Enhanced decolorization and biodegradation of acid red 88 dye by newly isolated fungus, Achaetomium strumarium. J Environ Chem Eng 6(2):1589–1600. https://doi.org/10.1016/j.jece.2018.01.069 Barapatre A, Aadil KR, Jha H (2017) Biodegradation of malachite green by the ligninolytic fungus Aspergillus flavus. Clean-Soil Air Water 45(4):1–12. https://doi.org/10.1002/clen.201600045 Bumpus JA, Brock BJ (1988) Biodegradation of crystal violet by the white rot fungus Phanerochaete-Chrysosporium. Appl Environ Microbiol 54(5):1143–1150 Casas N, Parella T, Vicent T, Caminal G, Sarra M (2009) Metabolites from the biodegradation of triphenylmethane dyes by Trametes versicolor or laccase. Chemosphere 75(10):1344–1349. https://doi.org/10.1016/j.chemosphere.2009.02.029 Chaudhry MT, Zohaib M, Rauf N, Tahir SS, Parvez S (2014) Biosorption characteristics of Aspergillus fumigatus for the decolorization of triphenylmethane dye acid violet 49. Appl Microbiol Biotechnol 98(7):3133–3141. https://doi.org/10.1007/s00253-013-5306-y Chen SH, Chew YL, Ng SL, Ting ASY (2019a) Biodegradation of triphenylmethane dyes by non-white rot fungus Penicillium simplicissimum: Enzymatic and toxicity studies. Int J Environ Res 13(2):273–282. https://doi.org/10.1007/s41742-019-00171-2 Chen SH, Cheow YL, Ng SL, Ting ASY (2019b) Removal of triphenylmethane dyes in single-dye and dye-metal mixtures by live and dead cells of metal-tolerant Penicillium simplicissimum. Separ Sci Technol. 
https://doi.org/10.1080/01496395.2019.1626422 Chen SH, Ting Y (2015) Biosorption and biodegradation potential of triphenylmethane dyes by newly discovered Penicillium simplicissimum isolated from indoor wastewater sample. Int Biodeterior Biodegrad 103:1–7. https://doi.org/10.1016/j.ibiod.2015.04.004 Chen SH, Su A, Ting Y (2015) Biodecolorization and biodegradation potential of recalcitrant triphenylmethane dyes by Coriolopsis sp. isolated from compost. J Environ Manage 150:274–280. https://doi.org/10.1016/j.jenvman.2014.09.014 Chen SH, Cheow YL, Ng SL, Ting ASY (2019c) Mechanisms for metal removal established via electron microscopy and spectroscopy: a case study on metal tolerant fungi Penicillium simplicissimum. J Hazard Mater 362:394–402. https://doi.org/10.1016/j.jhazmat.2018.08.077 Chew SY, Ting ASY (2016) Common filamentous Trichoderma asperellum for effective removal of triphenylmethane dyes. Desal Water Treat 57(29):13534–13539. https://doi.org/10.1080/19443994.2015.1060173 Daneshvar N, Ayazloo M, Khataee AR, Pourhassan M (2007) Biological decolorization of dye solution containing Malachite Green by microalgae Cosmarium sp. Bioresour Technol 98:1176–1182. https://doi.org/10.1016/j.biortech.2006.05.025 Duan Z, Shen R, Liu B, Yao M, Jia R (2018) Comprehensive investigation of a dye-decolorizing peroxidase and a manganese peroxidase from Irpex lacteus F17, a lignin-degrading basidiomycete. AMB Express 8:1–16. https://doi.org/10.1186/s13568-018-0648-6 Huang Q, An H, Song H, Mao H, Shen W, Dong J (2015) Diversity and biotransformative potential of endophytic fungi associated with the medicinal plant Kadsura angustifolia. Res Microbiol 166(1):45–55. https://doi.org/10.1016/j.resmic.2014.12.004 Jasinska A, Rozalska S, Bernat P, Paraszkiewicz K, Dlugonski J (2012) Malachite green decolorization by non-basidiomycete filamentous fungi of Penicillium pinophilum and Myrothecium roridum. Int Biodeter Biodegrad 73:33–40. https://doi.org/10.1016/j.ibiod.2012.06.025 Kalpana D, Velmurugan N, Shim JH, Oh BT, Senthil K, Lee YS (2012) Biodecolorization and biodegradation of reactive Levafix Blue E-RA granulate dye by the white rot fungus Irpex lacteus. J Environ Manage 111(2012):142–149. https://doi.org/10.1016/j.jenvman.2012.06.041 Kaushik P, Malik A (2009) Fungal dye decolourization: recent advances and future potential. Environ Int 35(1):127–141. https://doi.org/10.1016/j.envint.2008.05.010 Kornillowicz-Kowalska T, Wrzosek M, Ginalska G, Iglik H, Bancerz R (2006) Identification and application of a new fungal strain Bjerkandera adusta R59 in decolorization of daunomycin wastes. Enzyme Microbial Technol 38(5):583–590. https://doi.org/10.1016/j.enzmictec.2005.10.009 Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H (2007) Clustal W and Clustal X version 2.0. Bioinformatics 23:2947–2948. https://doi.org/10.1093/bioinformatics/btm404 Lin YH, Chen L, He XB, He YQ, Zhou X (2010) Biodegradation of aniline blue dye by a fungus Mucoromycotina sp. HS-3. Microbiol China 37(12):1727–1733 Marcharchand S, Ting ASY (2017) Trichoderma asperellum cultured in reduced concentrations of synthetic medium retained dye decolourization efficacy. J Environ Manage 203:542–549. 
https://doi.org/10.1016/j.jenvman.2017.06.068 Morales-Alvarez ED, Rivera-Hoyos CM, Poveda-Cuevas SA, Reyes-Guzman EA, Pedroza-Rodriguez AM, Reyes-Montano EA, Poutou-Pinales RA (2018) Malachite green and crystal violet decolorization by Ganoderma lucidum and Pleurotus ostreatus supernatant and by rGlLCC1 and rPOXA 1B concentrates: molecular docking analysis. Appl Biochem Biotechnol 184(3):794–805. https://doi.org/10.1007/s12010-017-2560-y Munck C, Thierry E, Grassle S, Ting SH (2018) Biofilm formation of filamentous fungi Coriolopsis sp. on simple muslin cloth to enhance removal of triphenylmethane dyes. J Environ Manage 214:261–266. https://doi.org/10.1016/j.jenvman.2018.03.025 Ortiz-Monsalve VS, Poll P, Jaramillo-Garcia E, Pegas Henriques V, A J, Gutterres M, (2019) Biodecolourization and biodetoxification of dye-containing wastewaters from leather dyeing by the native fungal strain Trametes villosa SCS-10. Biochem Eng J 141:19–28. https://doi.org/10.1016/j.bej.2018.10.002 Parshetti GK, Parshetti SG, Telke AA, Kalyani DC, Doong RA, Govindwar SP (2011) Biodegradation of crystal violet by Agrobacterium radiobacter. J Environ Sci 23(8):1384–1393. https://doi.org/10.1016/S1001-0742(10)60547-5 Parshetti GK, Telke AA, Kalyani DC, Govindwar SP (2010) Decolorization and detoxification of sulfonated azo dye methyl orange by Kocuria rosea MTCC 1532. J Hazard Mater 176:503–509. https://doi.org/10.1016/j.jhazmat.2009.11.058 Przystas W, Zablocka-Godlewska E, Grabinska-Sota E (2018) Efficiency of decolorization of different dyes using fungal biomass immobilized on different solid supports. Braz J Microbiol 49(2):285–295. https://doi.org/10.1016/j.bjm.2017.06.010 Qin D, Wang L, Han M, Wang J, Song H, Yan X, Duan X, Dong J (2018) Effects of an Endophytic Fungus Umbelopsis dimorpha on the secondary metabolites of host-plant Kadsura angustifolia. Front Microbiol 9:2845. https://doi.org/10.3389/fmicb.2018.02845 Shang NJ, Ding MJ, Dai MX, Si HL, Li SG, Zhao GY (2019) Biodegradation of malachite green by an endophytic bacterium Klebsiella aerogenes S27 involving a novel oxidoreductase. Appl Microbiol Biotechnol 103(5):2141–2153. https://doi.org/10.1007/s00253-018-09583-0 Shedbalkar U, Dhanve R, Jadhav J (2008) Biodegradation of triphenylmethane dye cotton blue by Penicillium ochrochloron MTCC 517. J Hazard Mater 157(2–3):472–479. https://doi.org/10.1016/j.jhazmat.2008.01.023 Srinivasan A, Viraraghavan T (2010) Decolorization of dye wastewaters by biosorbents: a review. J Environ Manage 91(10):1915–1929. https://doi.org/10.1016/j.jenvman.2010.05.003 Takamiya M, Magan N, Warner PJ (2008) Impact assessment of bisphenol A on lignin-modifying enzymes by basidiomycete Trametes versicolor. J Hazard Mater 154(1–3):33–37. https://doi.org/10.1016/j.jhazmat.2007.09.098 Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S (2011) MEGA5: Molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol 8:2731–2739. https://doi.org/10.1093/molbev/msr121 Ting ASY, Lee MVJ, Chow YY, Cheong SL (2016) Novel exploration of endophytic Diaporthe sp. for the biosorption and biodegradation of triphenylmethane dyes. Water Air Soil Pollut 227(109):1–8. https://doi.org/10.1007/s11270-016-2810-6 Turhan K, Durukan I, Ozturkcan SA, Turgut Z (2012) Decolorization of textile basic dye in aqueous solution by ozone. Dyes Pigments 92(3):897–901. 
https://doi.org/10.1016/j.dyepig.2011.07.012 Wang N, Chu Y, Wu FA, Zhao Z, Xu X (2017) Decolorization and degradation of Congo red by a newly isolated white rot fungus, Ceriporia lacerata, from decayed mulberry branches. Int Biodeterior Biodegrad 117:236–244. https://doi.org/10.1016/j.ibiod.2016.12.015 Yang XQ, Zhao XX, Liu C, Zheng YY, Qian SJ (2009) Decolorization of azo, triphenylmethane and anthraquinone dyes by a newly isolated Trametes sp. SQ01 and its laccase. Process Biochem 44(10):1185–1189. https://doi.org/10.1016/j.procbio.2009.06.015 Yang X, Zheng J, Lu Y, Jia R (2016) Degradation and detoxification of the triphenylmethane dye malachite green catalyzed by crude manganese peroxidase from Irpex lacteus F17. Environ Sci Pollut Res 23(10):9585–9597. https://doi.org/10.1007/s11356-016-6164-9 Yang Y, Jim D, Wang G, Liu D, Jia X, Zhao Y (2011) Biosorption of acid blue 25 by unmodified and CPC-modified biomass of Penicillium YW01: kinetic study, equilibrium isotherm and FTIR analysis. Colloids Surf Biointerf 88:521–526. https://doi.org/10.1016/j.colsurfb.2011.07.047 Zhang D, Yang Y, Castlebury LA, Cerniglia CE (2006) A method for the large scale isolation of high transformation efficiency fungal genomic DNA. FEMS Microbiol Lett 145:261–265. https://doi.org/10.1111/j.1574-6968.1996.tb08587.x Zhuo R, Ma L, Fan F, Gong Y, Wan X, Jiang M, Zhang X, Yang Y (2011) Decolorization of different dyes by a newly isolated white-rot fungi strain Ganoderma sp. En3 and cloning and functional analysis of its laccase gene. J Hazard Mater 192(2):855–873. https://doi.org/10.1016/j.jhazmat.2011.05.106 This work was financially supported by the Natural Science Foundation of Chongqing (cstc2017jcyjAX0225). Key Laboratory of Eco-Environments in Three Gorges Reservoir Region of Ministry of Education, School of Life Sciences, Southwest University, Chongqing, 400715, People's Republic of China Tiancong Gao, Dan Qin, Shihao Zuo, Yajun Peng, Jieru Xu, Baohong Yu & Jinyan Dong School of Energy and Environment Science, Yunnan Normal University, Kunming, 650092, People's Republic of China Hongchuan Song Key Laboratory of Plant Resource Conservation and Germplasm Innovation, School of Life Sciences, Southwest University, Chongqing, 400715, China Jinyan Dong Tiancong Gao Dan Qin Shihao Zuo Yajun Peng Jieru Xu Baohong Yu Conceived and designed the experiments: TCG. Performed the experiments: TCG, DQ, SHZ, YJP and JRX. Analyzed the data: TCG, DQ and YJP. Contributed agents/materials/analysis tools: TCG, YJP, DQ, BHY, HCS. Wrote the paper: TCG. All authors read and approved the final manuscript. Correspondence to Jinyan Dong. Additional file 1: Additional table and figures. Gao, T., Qin, D., Zuo, S. et al. Decolorization and detoxification of triphenylmethane dyes by isolated endophytic fungus, Bjerkandera adusta SWUSI4 under non-nutritive conditions. Bioresour. Bioprocess. 7, 53 (2020). https://doi.org/10.1186/s40643-020-00340-8 Received: 08 June 2020 Triphenylmethane dyes Bjerkandera adusta SWUSI4 Decolorization Phytotoxicity Refining Bioresources for Sustainable Future
HPOAnnotator: improving large-scale prediction of HPO annotations by low-rank approximation with HPO semantic similarities and multiple PPI networks Volume 12 Supplement 10 Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical genomics Junning Gao1, Lizhi Liu1, Shuwei Yao1, Xiaodi Huang2, Hiroshi Mamitsuka3,4 & Shanfeng Zhu1,5,6 As a standardized vocabulary of phenotypic abnormalities associated with human diseases, the Human Phenotype Ontology (HPO) has been widely used by researchers to annotate phenotypes of genes/proteins. For saving the cost and time spent on experiments, many computational approaches have been proposed. They are able to alleviate the problem to some extent, but their performances are still far from satisfactory. For inferring large-scale protein-phenotype associations, we propose HPOAnnotator that incorporates multiple Protein-Protein Interaction (PPI) information and the hierarchical structure of HPO. Specifically, we use a dual graph to regularize Non-negative Matrix Factorization (NMF) in a way that the information from different sources can be seamlessly integrated. In essence, HPOAnnotator solves the sparsity problem of a protein-phenotype association matrix by using a low-rank approximation. By combining the hierarchical structure of HPO and co-annotations of proteins, our model can well capture the HPO semantic similarities. Moreover, graph Laplacian regularizations are imposed in the latent space so as to utilize multiple PPI networks. The performance of HPOAnnotator has been validated under cross-validation and independent test. Experimental results have shown that HPOAnnotator outperforms the competing methods significantly. Through extensive comparisons with the state-of-the-art methods, we conclude that the proposed HPOAnnotator is able to achieve the superior performance as a result of using a low-rank approximation with a graph regularization. It is promising in that our approach can be considered as a starting point to study more efficient matrix factorization-based algorithms. Phenotypes refer to observable physical or biological traits of an organism. Revealing the relationships between genes/proteins and their related phenotypes is one of the main objectives of genetics in the post-genome era [1–3]. The Human Phenotype Ontology (HPO) [4] is a standardized vocabulary for describing the phenotypic abnormalities associated with human diseases [5]. Being initially populated by using databases of human genes and genetic disorders such as OMIM [6], Orphanet [7] and DECIPHER [8], HPO was later expanded by using literature curation [9]. At present, only small quantities of human protein-coding genes (∼3500) have HPO annotations. It is, however, believed that a large number of currently unannotated genes/proteins are related to disease phenotypes. Therefore, it is critical to predict genes/protein-HPO associations by using accurate computational methods. Currently, HPO contains four sub-ontologies: Organ abnormality, Mode of inheritance, Clinical modifier, and Mortality/Aging. As the main sub-ontology, Organ abnormality describes clinical abnormalities whose first- level children are formed by terms like abnormality of a skeletal system. The Mode of inheritance describes inheritance patterns of phenotypes and contains terms such as Autosomal dominant. The Clinical modifier contains classes that describe typical modifiers of clinical symptoms such as those triggered by carbohydrate ingestion. 
For Mortality/Aging, it describes the age of death by terms like Death in childhood and Sudden death. The Organ abnormality, Mode of inheritance, Clinical modifier, and Mortality/Aging sub-ontologies have ∼12000, 28, 100, and 8 terms, respectively. The annotations between genes/proteins and HPO terms are very sparse. Specifically, there are 284621 annotations for 3459 proteins and 6407 HPO terms, with a sparsity of 1.2%. Meanwhile, the annotations grow slowly over time: only 14820 new annotations (about 5%) were added between June 2017 and December 2017. Since genes/proteins are annotated with multiple HPO terms, the prediction can be regarded as a multi-label prediction problem. Unlike a standard multi-label setting, however, HPO terms form a hierarchical structure. This implies that once a gene/protein is labeled with an HPO term, it should also be labeled with all ancestors of this particular HPO term. In other words, when a gene/protein is not labeled with an HPO term, it should not be labeled with any of its descendants, either. That is, general terms are located at the top of the HPO structure, with the term specificity increasing from the root to the leaves. Figure 1 shows a real example of an HPO hierarchical structure (i.e., a Directed Acyclic Graph, DAG) and the scale of the sub-ontologies. An example of the HPO hierarchical tree. All parent-child relationships in HPO represent "is-a" relationships. X-linked inheritance, Abnormality of limbs, Phenotypic variability, and Age of death are examples for the sub-ontologies Mode of inheritance, Organ abnormality, Clinical modifier, and Mortality/Aging, respectively The existing computational approaches for HPO annotation prediction can be divided into two categories, namely feature-based and network-based methods. The feature-based approaches use gene/protein information as features to predict the annotations of a query gene/protein. For sparse and noisy data, incorporating auxiliary information into the original input data generally helps to improve predictive performance. For example, one of these methods, learning to rank, has demonstrated superior performance in GO annotation prediction [10]. Compared with GO annotations, however, HPO annotations are more reliable and stable. In addition, when the focus is restricted to human proteins and terms under Organ abnormality, HPO annotations are much less sparse than GO annotations. Nevertheless, few existing feature-based models take HPO-specific information into consideration, e.g., the hierarchical structure and the co-occurrence of HPO terms. The network-based approaches are more prevalent at present. In these approaches, such as random walk [11] and weighted score computation [12], multiple networks are usually integrated into a new large-scale network in order to improve the prediction. However, network-based approaches cannot perform well on sparse data. This is because of disconnected nodes, which are commonly encountered in real-world graphs, particularly for sparse data, even though such nodes can be related to each other. Prediction of the annotations between genes/proteins and HPO terms can be grouped into two categories: 1) pair prediction, which predicts the missing HPO annotations of existing proteins, and 2) prediction for new proteins, which annotates HPO terms to totally unannotated proteins. Most existing work belongs to the latter category, and few methods address the former. 
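To make the hierarchical consistency constraint described above concrete, the following is a minimal sketch of propagating a protein's annotations to all ancestor terms of the ontology DAG (the true-path rule that is also applied to the datasets later). It is our own illustration rather than code from the paper, and the toy term identifiers and mini-ontology in it are hypothetical.

```python
# Minimal sketch: propagate HPO annotations to all ancestors (true-path rule).
# The toy ontology below is hypothetical; real parent links would come from
# the HPO .obo release on the official HPO website.

def ancestors(term, parents):
    """Return the set of all ancestors of `term` in a DAG, given a
    term -> set-of-parent-terms mapping."""
    seen = set()
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def propagate(direct_annotations, parents):
    """Close each protein's annotation set under the ancestor relation."""
    full = {}
    for protein, terms in direct_annotations.items():
        closed = set(terms)
        for t in terms:
            closed |= ancestors(t, parents)
        full[protein] = closed
    return full

if __name__ == "__main__":
    # Hypothetical mini-ontology: HP:B and HP:C are children of HP:A.
    parents = {"HP:B": {"HP:A"}, "HP:C": {"HP:A"}, "HP:A": set()}
    direct = {"P12345": {"HP:B"}}
    print(propagate(direct, parents))  # {'P12345': {'HP:B', 'HP:A'}}
```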
To narrow this gap, we focus on the first category in this paper, which is also a famous task in the CAFA challenge. Existing methods for the first category have four major limitations. First, the hierarchy of HPO is completely ignored. The hierarchical structure poses a formidable challenge to a prediction: a model needs to evaluate the associations between a protein and all of its related phenotypes from the deeper levels to the root in the hierarchy. Second, the existing methods do not make full use of the potentials of Protein-Protein Interaction (PPI) networks. For example, a PPI network is modeled in the original annotation space in their models, which may not extract the information effectively. Moreover, multiple PPI networks may be derived from different sources, resulting in the data fusion. Third, only a few known associations are available for training. So they are extremely unbalanced. Specifically, more than half of the terms in HPO are used to annotate zero or only one protein. As a result, such a drastic sparsity makes prediction more challenging. Finally, existing methods usually study the sub-ontologies independently without considering the co-annotations of HPO terms. However, co-annotations are quite common in annotations. It is likely that they help improve prediction results. To address the above four problems, we apply matrix factorization to approximate a protein-HPO annotation matrix by two factorized low-rank matrices. As such, the latent factors that underlie the HPO annotations can be well captured. Since the HPO annotation matrix is binary, we choose to use Non-negative Matrix Factorization (NMF). NMF has proved to be effective for sparse problems in the field of bioinformatics [13–16]. Based on our above observations, we propose an NMF-based framework called HPOAnnotator by which to predict missing protein-HPO annotations. In essence, the key idea of our model is to factorize the HPO annotation matrix into two non-negative low-rank latent matrices, which correspond to the respective latent feature spaces of proteins and HPO terms. In addition, the graph Laplacian on PPI networks is performed to exploit their intrinsic geometric structure. Co-annotations and the hierarchical structure of HPO are also incorporated to measure HPO semantic relationships. We have experimentally validated the performance of HPOAnnotator by comparing it with the three network-based approaches, which will be reviewed in the related work. The proposed model was tested on the latest large-scale HPO data with around 300000 annotations. Experimental results clearly demonstrated that HPOAnnotator outperformed the competing methods under two scenarios: cross-validation and independent test. It indicates that a low-rank approximation and network information are effective for pair prediction. Furthermore, our case studies further provide evidence for the practical use of HPOAnnotator. Note that, the work presented in this paper is the extension of our previous work AiProAnnotator [17] (AiPA for short). The main difference between the two methods is that HPOAnnotator can seamlessly combine multiple rather than single PPI networks and then benefit from them. As mentioned before, we can group the existing approaches to HPO annotations into two categories: feature-based and network-based ones. Two well-known methods of feature-based approaches are PHENOstruct [9] and Clus-HMC-Ens [18]. 
Clus-HMC-Ens applies the decision tree ensembles, while PHENOstruct (the extension of GOstruct which was designed to predict GO annotations) relies on the Structural Support Vector Machine (SSVM). Together with HPO annotations (i.e., labels) of each protein, a feature-based method normally accepts feature vectors as the input of a classifier. The trained classifier is then used to make a prediction. The above procedure is the same for both two categories of approaches. Additionally, it is worth noting that PHENOstruct and Clus-HMC-Ens were originally developed for GO but then applied to HPO annotation prediction. In this sense, the difference between HPO annotations and GO annotations has not been fully taken into account by researchers. Relying on two networks of protein-HPO annotations and the hierarchy of HPO (or Network of HPO, called NHPO) with an optional PPI Network (hereafter PPN), the network-based approaches make predictions. The assumption behind them is that two nodes in a network should share some similarities, particular for those well-connected nodes who have more similarities. In the following, we review the three methods as representatives of network-based approaches, all of which are compared against our proposed approach in the experiments. Bi-random walk Bi-Random Walk (BiRW) [19, 20] has been demonstrated as a useful method for the bi-network prediction problem. BiRW performs random walks on the Kronecker product graph between PPN and NHPO in a way that they can be combined effectively for the protein-phenotype association prediction. The random walks iteratively performed by BiRW follow the equation: $$ \mathbf{Y}_{t} = \alpha \mathbf{P} \mathbf{Y}_{t-1} \mathbf{G} + (1 - \alpha) \mathbf{ \widetilde{Y} } $$ where α>0 is a decay factor, P and G are the normalized PPN and NHPO matrix, respectively. Yt is the estimation of associations at iteration t, and \(\mathbf { \widetilde {Y} }\) denotes the initial annotations in the training data. By introducing BiRW to capture the circular bigraphs patterns in the networks, the model can unveil phenome-genome associations over time. Dual label propagation model The label propagation-based algorithm has been successfully applied to predict phenotype-gene associations in various forms [21, 22]. With the following objective function, label propagation assumes that proteins should be assigned to the same label, if they are connected in a PPN: $$ \begin{aligned} \Psi(\mathbf{y}) &= \theta \sum_{i,j=1}^{n_{p}} \bar{\mathbf{S}}^{p} (y_{i} - y_{j})^{2} + \sum_{i} (y_{i} - \widetilde{y}_{i})^{2} \\ &= \theta \mathbf{y}^{T} \mathbf{L}_{S} \mathbf{y} + (1- \theta) \lVert \mathbf{y} - \widetilde{\mathbf{y}} \rVert^{2} \end{aligned} $$ where \(\bar {\mathbf {S}}^{p}\) is a normalized PPN defined as \(\bar {\mathbf {S}}^{p} = \mathbf {D}^{-\frac {1}{2}}\mathbf {S}^{p}\mathbf {D}^{-\frac {1}{2}}\), and D is a diagonal matrix with the row-sum of Sp on the diagonal entries. Equation 2 can be rewritten as follows: $$ \Psi(\mathbf{Y}) = \theta \text{tr} (\mathbf{Y}^{T} \mathbf{L}_{S} \mathbf{Y}) + (1 - \theta) \lVert \mathbf{Y} - \mathbf{\widetilde{Y}} \rVert_{F}^{2} $$ where tr(·) denotes the trace of matrix, ∥·∥F denotes the Frobenius norm, and LS is the normalized graph Laplacian matrix of \( \bar {\mathbf {S}}^{p}\) defined as \(\mathbf {L}_{S} = \mathbf {I} - \bar {\mathbf {S}}^{p}\). The Dual Label Propagation model (DLP) [23] extends the label propagation model by adding two smoothness terms. 
The first term imposes the smoothness in a PPN such that interacting proteins tend to be associated with the same HPO term. The second term imposes the smoothness in NHPO in a way that the connected phenotypes (parent-child pair) are encouraged to be associated with the same protein. The objective function of DLP is given as: $$ \Psi(\mathbf{Y}) = \lVert \boldsymbol{\Omega} \odot (\mathbf{Y} - \mathbf{\widetilde{Y}}) \rVert_{F}^{2} + \beta \text{tr}(\mathbf{Y}^{T} \mathbf{L}_{S} \mathbf{Y}) + \gamma \text{tr} (\mathbf{Y} \mathbf{L}_{G_{Y}} \mathbf{Y}^{T}) $$ where β,γ≥0 are tuning parameters, LS and \(\mathbf {L}_{G_{Y}}\) encode the PPN and NHPO information, respectively. Ω is the binary indicator matrix that selects only the known associations to be penalized, and ⊙ denotes Hadamard product (a.k.a entrywise product). Ontology-guided group lasso The last method to be reviewed is Ontology-guided Group Lasso (OGL) [24]. It uses an ontology-guided group norm for HPO, rather than the graph regularizer in DLP. By combining label propagation and an ontology-guided group lasso norm derived from the hierarchical structure of HPO, OGL updates estimation, according to the following objective function: $$ {\begin{aligned} \Psi (\mathbf{Y}) = \lVert \boldsymbol{\Omega} \odot (\mathbf{Y} - \mathbf{\widetilde{Y}}) \rVert_{F}^{2} + \beta \text{tr}(\mathbf{Y}^{T} \mathbf{L}_{S} \mathbf{Y}) + \gamma \sum_{i=1}^{n_{p}} \sum_{g \in \mathcal{G}_{Y}} r_{g}^{Y} \lVert \mathbf{Y}_{(g)i} \rVert_{2} \end{aligned}} $$ where β,γ≥0 are balancing factors. \(r_{g}^{Y}\) is the group weight for group g. Y(g)i selects the group members of group g from the i-th column of Y, and the smoothness is imposed through the ℓ2-norm group lasso (∥·∥2) among the members for the consistent prediction within the group. A notable difference between OGL and our model is that the estimated matrix is not factorized into low-rank matrices. One of the biggest drawbacks of network-based methods is that data sparseness has a significant impact on the performance. As mentioned before, the current HPO annotations are quite sparse. In addition, all of the network based-methods suffer the heavy computational burden, as they accept a large-scale protein-HPO annotation matrix as an input directly. Let \(\phantom {\dot {i}\!}\mathbf {Y} \in \{ 0, 1 \}^{N_{p} \times N_{h}}\) be a protein-HPO annotation matrix, where Np and Nh are the number of proteins and HPO terms, respectively. If protein i is annotated by an HPO term j, then Yij=1, and 0 otherwise. We define \(\phantom {\dot {i}\!}\mathbf {S}^{p_{k}}\) (k=1,2,⋯,t) be the networks for proteins, namely PPNs, where t is the total number of networks. \(\mathbf {S}^{p_{k}}_{i,j}\) represents the strength of the relationship between protein i and protein j in the k-th PPN. Similarly, let Sh be the network of HPO terms which is generated from an ontology structure and co-annotations, and \(\mathbf {S}^{h}_{i,j}\) is the similarity value between term i and term j. Our goal is to estimate \(\hat {\mathbf {Y}}\) given Y, \(\phantom {\dot {i}\!}\mathbf {S}^{p_{k}}\) and Sh. Our proposed method Preprocessing: generating a network from HPO The network of HPO terms, or NHPO, is derived by measuring the similarity between two HPO terms in a hierarchy. We adopt the measure proposed in [25]. Having been extensively used in natural language processing, this metric defines the semantic similarity between two labeled nodes by counting the co-occurrence frequency in a corpus. 
Specifically for HPO, the semantic similarity between two terms s and t is defined as: $$ \mathbf{S}^{h}_{s,t} = \frac{2 \cdot I(\text{mca}(s,t))}{I(s) + I(t)} $$ where I(s)= log(p(s)) and \(p(s) = \frac {\text {count}(s)}{N_{p}}\). Here, count(s) denotes the number of proteins annotated by term s and mca(s,t) is given as follows: $$ \text{mca}(s,t) = \arg \min_{k \in \mathrm{A}(s,t)} p(k) $$ where A(s,t) represents the set of all common ancestors of s and t. The weight of the edge between nodes s and t in NHPO is exactly the similarity score. The larger the number of annotated proteins shared by s and t, the higher their similarity score is. It is more likely to happen when the common ancestor of s and t is located closely. This means that Sh considers both the co-annotations of two HPO terms and their distance in a hierarchical structure. Non-negative matrix factorization The aim of Non-negative Matrix Factorization (NMF) is to find two low-rank matrices with all non-negative elements by approximating the original input matrix. In fact, the latent factors that underlie the interactions are captured. Mathematically, the input matrix \({\mathbf {Y} \in \mathbb {R}^{N_{p} \times N_{h}}_{+}}\) is decomposed into two rank-K matrices, \(\mathbf {U} \in \mathbb {R}^{N_{p} \times K }_{+}\) and \(\mathbf {V} \in \mathbb {R}^{N_{h} \times K}_{+}\). Then, finding U and V can be done by minimizing the reconstruction error which is defined as: $$ J = \lVert \mathbf{Y} - \mathbf{U} \mathbf{V}^{T} \rVert_{F}^{2}, \text{ s.t.} \mathbf{U} \geq 0, \mathbf{V} \geq 0 $$ Generally, the ℓ2 (Tikhonov) regularization is imposed to Eq. (7) so as to alleviate overfitting of U and V. Since there are unknown (missing) entries in Y, we encode the missingness with a masking matrix \(\phantom {\dot {i}\!}\mathbf {W} \in \{0, 1\}^{N_{p} \times N_{h}}\). If the annotation between protein i and HPO term j is missing, we set Wij=0. Otherwise, we set Wij=1, meaning that the element Yij is observed. Accordingly, W is also plugged as an extra input into our model. Together with the ℓ2-norm regularization terms, the objective function is refined as follows: $$ \begin{aligned} J_{\text{NMF}} =& \lVert \mathbf{W} \odot (\mathbf{Y} - \mathbf{U} \mathbf{V}^{T}) \rVert_{F}^{2} \\&+ \lambda (\lVert \mathbf{U} \rVert_{F}^{2} + \lVert \mathbf{V} \rVert_{F}^{2}), \text{ s.t.} \mathbf{U} \geq 0, \mathbf{V} \geq 0 \end{aligned} $$ where λ is a regularization coefficient. The unobserved protein-HPO associations are completed by multiplying two factor matrices, or concretely, \(\hat {\mathbf {Y}} = \mathbf {U} \mathbf {V}^{T}\). Network regularization Once we obtain the similarity matrix of HPO, Sh, we can regularize V with the help of it. The basic idea is to impose smoothness constraints on the phenotype-side factors; that is $$ \begin{aligned} & \frac{1}{2} \sum_{i,j} \mathbf{S}^{h}_{i,j} \lVert \mathbf{V}_{i} - \mathbf{V}_{j} \rVert^{2} \\ =~&\text{tr} (\mathbf{V}^{T} (\mathbf{D}^{h} - \mathbf{S}^{h}) \mathbf{V}) \\ =~&\text{tr} (\mathbf{V}^{T} \mathbf{L}^{h} \mathbf{V}) \end{aligned} $$ where Vi is the i-th row vector of V, Dh is a diagonal matrix whose diagonals are the node degrees, and Lh=Dh−Sh is the graph Laplacian of Sh. Actually, the term is exactly the vanilla graph regularizer. For proteins, multiple PPNs are derived from diverse data sources with heterogeneous properties. 
In this way, for a collective of PPNs \(\phantom {\dot {i}\!}\mathbf {S}^{p_{k}} (k = 1, \cdots, t)\), their regularizer is imposed as $$ \sum_{k=1}^{t} \text{tr}(\mathbf{U}^{T} \mathbf{L}^{p_{k}} \mathbf{U}), $$ where \(\mathbf {L}^{p_{k}} = \mathbf {D}^{p_{k}} - \mathbf {S}^{p_{k}} \) is the graph Laplacian of \(\phantom {\dot {i}\!}S^{p_{k}}\), and \(\phantom {\dot {i}\!}\mathbf {D}^{p_{k}}\) is the degree matrix. Minimization of graph-based regularization terms will lead to the learned data representations (U and V) that respect the intrinsic geometrical structure of original data spaces (\(\phantom {\dot {i}\!}\mathbf {S}^{p_{k}}\) and Sh). Note that such standard graph regularization has already been used in a variety of applications [26]. Model formulation By combining (8), (9) and (10), our model is formulated as follows: $$ {\begin{aligned} & \min_{\mathbf{U} \geq 0, \mathbf{V} \geq 0} \lVert \mathbf{W} \odot (\mathbf{Y} - \mathbf{U} \mathbf{V}^{T}) \rVert_{F}^{2} + \lambda (\lVert \mathbf{U} \rVert_{F}^{2} + \lVert \mathbf{V} \rVert_{F}^{2}) \\&\quad+ \alpha \sum_{k=1}^{t} \text{tr} (\mathbf{U}^{T} \mathbf{L}^{p_{k}} \mathbf{U}) + \beta \text{tr} (\mathbf{V}^{T} \mathbf{L}^{h} \mathbf{V}) \end{aligned}} $$ where α and β are regularization coefficients to strike a balance between the reconstruction error and graph smoothness. Model optimization Notice that the objective function defined in Eq. (11) is biconvex with respect to U and V. A very regular but effective procedure for fitting is Alternating Least Square (ALS), which alternately optimizes one of the variables by fixing the others as constants until convergence. We first hold U fixed and derive the updating rule of V. The objective function of V can be written as: $$ J(\mathbf{V}) = \lVert \mathbf{W} \odot (\mathbf{Y} - \mathbf{U} \mathbf{V}^{T}) \rVert_{F}^{2} + \lambda \lVert \mathbf{V} \rVert_{F}^{2} + \beta \text{tr} (\mathbf{V}^{T}\mathbf{L}^{h}\mathbf{V}) $$ Accordingly, the derivative of J(V) with respect to V is $$ \frac{\partial J(\mathbf{V})}{\partial \mathbf{V}} = -2 (\mathbf{W} \odot \mathbf{Y})^{T}\mathbf{U} + 2(\mathbf{W} \odot \mathbf{U}\mathbf{V}^{T})^{T}\mathbf{U} + 2\lambda \mathbf{V} + 2\beta \mathbf{L}^{h}\mathbf{V} $$ Taking the Karush-Kuhn-Tucker (KKT) complementary condition, we obtain $$ [(\mathbf{W} \odot \mathbf{U}\mathbf{V}^{T})^{T}\mathbf{U} - (\mathbf{W} \odot \mathbf{Y})^{T}\mathbf{U} + \lambda \mathbf{V} + \beta \mathbf{L}^{h} \mathbf{V}]_{ij} \mathbf{V}_{ij} = 0 $$ Now let us rewrite Lh=Lh+−Lh−, where we have Lh+=(|Lh|+Lh)/2 and Lh−=(|Lh|−Lh)/2. The multiplicative update rule of V is then: $$ \mathbf{V}_{ij} \longleftarrow \mathbf{V}_{ij} \sqrt{ \frac{ (\mathbf{W} \odot \mathbf{Y})^{T}\mathbf{U} + \beta \mathbf{L}^{h-}\mathbf{V} } {(\mathbf{W} \odot \mathbf{U}\mathbf{V}^{T})^{T}\mathbf{U} + \lambda \mathbf{V} + \beta \mathbf{L}^{h+}\mathbf{V}} } $$ Note that the problem given by (11) is symmetric in terms of U and V. Therefore, the derivation of the updating rule of U is simply the reverse of the above case. Precisely, we have $$ \mathbf{U}_{ij} \longleftarrow \mathbf{U}_{ij} \sqrt{ \frac{ (\mathbf{W} \odot \mathbf{Y}) \mathbf{V} + \alpha \sum_{k=1}^{t}(\mathbf{L}^{p_{k}-} \mathbf{U}) }{(\mathbf{W} \odot \mathbf{U}\mathbf{V}^{T}) \mathbf{V} + \lambda \mathbf{U} + \alpha \sum_{k=1}^{t} (\mathbf{L}^{p_{k}+} \mathbf{U})} } $$ Training algorithm We describe the overall framework of HPOAnnotator in Fig. 2. The procedure of our optimization process is presented in Algorithm 1. 
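To make Algorithm 1 concrete, the following is a minimal NumPy sketch of the alternating multiplicative updates in Eqs. (15) and (16), with the masked reconstruction term and the two graph regularizers. It is our own simplified re-implementation, not the authors' released code; the function name, the default hyper-parameter values, and the stopping rule (a fixed number of iterations) are ours.

```python
import numpy as np

def split_laplacian(S):
    """Graph Laplacian L = D - S split into its positive/negative parts."""
    L = np.diag(S.sum(axis=1)) - S
    return (np.abs(L) + L) / 2.0, (np.abs(L) - L) / 2.0   # L+, L-

def hpo_annotator(Y, W, S_p_list, S_h, K=100, lam=1.0, alpha=1.0, beta=1.0,
                  n_iter=200, eps=1e-12, seed=0):
    """Masked NMF with dual graph regularization; multiplicative updates
    following Eqs. (15) and (16)."""
    rng = np.random.default_rng(seed)
    n_p, n_h = Y.shape
    U = rng.random((n_p, K))          # protein factors
    V = rng.random((n_h, K))          # HPO-term factors
    Lp = [split_laplacian(S) for S in S_p_list]   # one Laplacian pair per PPN
    Lh_pos, Lh_neg = split_laplacian(S_h)         # NHPO Laplacian
    WY = W * Y
    for _ in range(n_iter):
        WUV = W * (U @ V.T)
        # Update U, Eq. (16)
        num = WY @ V + alpha * sum(Lneg @ U for _, Lneg in Lp)
        den = WUV @ V + lam * U + alpha * sum(Lpos @ U for Lpos, _ in Lp)
        U *= np.sqrt(num / (den + eps))
        # Update V, Eq. (15)
        WUV = W * (U @ V.T)
        num = WY.T @ U + beta * (Lh_neg @ V)
        den = WUV.T @ U + lam * V + beta * (Lh_pos @ V)
        V *= np.sqrt(num / (den + eps))
    return U, V   # predicted association scores: U @ V.T
```

Given the annotation matrix Y, the mask W, a list of PPN similarity matrices and the NHPO similarity matrix, the completed score matrix is then U @ V.T, which can be ranked per protein or per term as needed.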
The optimization was implemented based on the MATLAB code provided by [26]. The framework of HPOAnnotator HPO annotations Two HPO annotation datasets, released by June 2017 and December 2017, were downloaded from the official HPO website (https://hpo.jax.org/). For the sake of brevity, we call them Data-201706 and Data-201712 in the following, respectively. The true-path rule is applied here to propagate annotations, and only HPO terms with at least one related protein remain. Table 1 lists the statistics of the two datasets. Table 1 Statistics of two datasets: Data-201706 and Data-201712 According to the number of proteins annotated, we separated the HPO terms into five groups: 1 to 10, 11 to 30, 31 to 100, 101 to 300, and more than 300. Figure 3 shows the percentage of HPO terms and corresponding annotations over the five groups in Data-201706. HPO terms are divided into five groups according to the number of proteins they annotate. The number of HPO terms per group (the left-hand side of each group) and the total number of annotations per group (the right-hand side of each group) are shown for Data-201706 NHPO (Network of HPO) We downloaded the hierarchical structure of HPO from the official website. PPN (Protein-Protein Network) Four types of PPNs were used in our experiments: STRING [27] (https://string-db.org/), GeneMANIA [28] (http://genemania.org/data/), BioGRID [29] (https://downloads.thebiogrid.org/BioGRID), and Reactome [30] (https://reactome.org/download-data). Table 2 reports the statistics of these four networks. Note that STRING is the best-known PPI network and was found very useful for predicting HPO annotations in [9]. It combines diverse data sources, including co-expression, co-occurrence, fusion, neighborhood, genetic interactions, and physical interactions, by assigning a confidence score to each pair of proteins to indicate its reliability. Table 2 Statistics of PPNs of Data-201706 A preliminary test on pairs of two HPO terms in NHPO: the correlation between the number of shared proteins and the average similarity First, we grouped all pairs of two HPO terms (from NHPO) according to the number of proteins, say M, shared by the two HPO terms. For each group, we then computed the average similarity score (Sh) by NHPO over the pairs sharing M proteins. Finally, we plotted each group over the two-dimensional space of M × the average similarity score. Figure 4 shows the result. The similarity score is equal to the edge weight of NHPO. This means that the test evaluates the consistency between the similarity score and the number of shared proteins for each HPO term pair. We found some correlation between the two, which provides positive support for using NHPO for HPO annotations. Each circle is a pair of two HPO terms in NHPO sharing the same number of proteins, say M. The y-axis is the average similarity score between two HPO terms over the pairs sharing the same M, and the x-axis is M, i.e., the number of shared proteins. The red line is fitted by a linear function A preliminary test on pairs of protein-protein edges in a PPN: correlations between the average similarity by a PPN and #shared HPO Considering its comprehensiveness, we chose STRING as the subject of this test. In the first step, we grouped all pairs of proteins according to the number of their shared HPO terms, denoted as K. For each group, we then computed the average similarity score (Sp) of the STRING PPN over the pairs sharing the same number of HPO terms. 
Finally, we plotted each group over the two-dimensional space of the average score (similarity) ×K. Figure 5 shows the plotted results. The line in this figure shows that the polynomial trend line is fitted to the distributed points of the two-dimensional space. It shows a slightly positive correlation between the number of shared HPO terms and the average similarity score by a PPN. This observation validates the idea that the edges in a PPN may imply that proteins connected by the edges share the same HPO. Each circle is a pair of two proteins in STRING PPN, with sharing the same numbers of HPO terms, say K. The x-axis is the average similarity score between two proteins over those HPO terms sharing the same K, and the y-axis is K, i.e., the number of shared HPO terms. The red line shows the trend, which is fitted by a polynomial function with the maximum degree of three The performance is evaluated from three aspects. Annotation-centric measure Each annotation (or a protein-HPO term pair) is viewed as one instance. The models are evaluated using Area Under the receiver operator characteristics Curve (AUC) [31]. Considering the sparseness of protein-HPO association matrix, we measure the Area Under the Precision-Recall curve (AUPR) as well. Protein-centric measure AUCs (AUPRs) are calculated for each protein based on the corresponding predictive scores by all available HPO terms. Then the computed AUCs (AUPRs) are averaged over all proteins, resulting in micro-AUC (micro-AUPR). HPO term-centric measure We think that the term-centric measure is important. Typical scientists or biologists focus first on a certain HPO term and are interested in obtaining genes/proteins, which can be annotated by the focused HPO term. The HPO term-centric measure can be computed in a total reverse manner of the protein-centric measure, with the following two steps: 1) AUCs (AUPRs) are first computed for each HPO term; and 2) The computed AUCs (AUPRs) are averaged over all HPO terms, which result in macro-AUC (macro-AUPR). In addition, we average the computed AUCs (AUPRs) over HPO terms at only leaves of the HPO hierarchical structure. We call the obtained AUC (AUPR) leaf-AUC (leaf-AUPR). We further calculate the macro-AUCs (macro-AUPRs) for each of the five groups, which are generated by focusing on the number of annotations per HPO term (see Fig. 3). In total, (from annotation-, protein-, and HPO term-centric measures) we have the eight criteria to validate the performance. Parameter settings Our approach is compared with three network-based methods: BiRW [20], DLP [23] and OGL [24] as described in related work. Besides, we take Logistic Regression (LR) as a feature-based baseline. Note that LR classifiers are trained on each single HPO term independently, and the features are built by concatenating association scores in PPNs together. The parameter of BiRW is selected from {0.1,0.2,⋯,0.9}. Regularization coefficients (i.e., hyper-parameters) of DLP and OGL, β and γ are selected from {10−6,10−5,⋯,106}. Note that the ranges of these parameters are specified by following [23]. Our model has four parameters: K, α, β and λ, which are determined by internal five-fold cross-validation, where the training data is further randomly divided into five folds (one for validation and the rest for training). The search ranges are as follows: {100,200} for K, {2−3,2−2,⋯,22,23} for λ, {2−7,2−6,⋯,26,27} for α and β. There are several variants of our algorithm by changing the settings of hyper-parameters α and β. 
We also evaluate each of them as comparison methods. The details are as follows. NMF: α=0 and β=0 Now the model is reduced to standard NMF, and the objective function is exactly the same as Eq. (8). NMF-PPN: α≠0 and β=0 Under this setting, there is no regularization term of NHPO, but PPN has. Thus, we term this model as NMF-PPN. NMF-NHPO: α=0 and β≠0 This setting is in contrast to NMF-PPN. That is, the regularization term of NHPO is kept, while that of PPN is not. For the case of α≠0 and β≠0, there are two another variants depending on whether or not multiple PPNs are utilized. AiPA: only one PPN is utilized It is proposed in our previous study [17], which can be regarded as a special case of HPOAnnotator because only single PPN of STRING is exploited. HPOAnnotator: multiple PPNs are utilized It is our final model presented in this paper. All four PPNs are used, including STRING, GeneMANIA, BioGRID, and Reactome as described before. Two evaluation settings Under two different settings, we validate the performance of the compared methods from two viewpoints: Cross-validation over Data-201706 We conduct 5 ×5-fold cross-validation over all annotations on Data-201706. That is, we repeat the following procedure five times: all known annotations are divided randomly into five equal folds. The four folds are for training, while the remaining one is for test. After selecting the test annotation between protein p and HPO term h, all annotations between p and the descendants of term h in the hierarchical structure of HPO are removed from the training data, in order to avoid any overlaps between training data and test data. It means that we predict the annotation of protein p out of all unknown HPO terms, which is a fair and strict evaluation. Independent test by using Data-201712 HPO annotations are incomplete, due to various reasons, such as slow curation. The way of annotations might be changed over time. So we conduct additional several experiments other than regular cross-validation by using data obtained in different time periods. That is, the training data is obtained before June 2017. All annotations in Data-201706 are used for training, where an internal five-fold cross-validation is done for setting up parameter values. After training, annotations obtained from June to December 2017 are then used for testing. Predictive performance in cross-validation on Data-201706 Table 3 reports the scores of the eight criteria obtained by averaging over 5 ×5 cross-validation (25 runs in total) on Data-201706. In this experiment, we compare the nine methods in total. In particular, the four are existing methods (LR, BiRW, OGL and DLP), and another five are variants of our model (NMF, NMF-PPN, NMF-NHPO, AiPA and HPOAnnotator). Note that STRING is the only PPN utilized in NMF-PPN. From the table, it clearly shows that our five methods perform better than the four existing methods. For example, our four methods achieve around 0.5 to 0.56 in AUPR, while all the scores by the existing methods are less than 0.1. In fact, our five methods perform better than the existing methods with respect to all of the eight metrics. Thus, their performance differences are very clear. We can conclude that a low-rank approximation is useful for the HPO annotation problem. Furthermore, HPOAnnotator always outperforms other variants in eight conditions among our five methods. This indicates that network information is well incorporated into our formulation. 
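As a side note on reproducibility, the sketch below (our own illustration; the function names are ours and scikit-learn is assumed to be available) shows how the annotation-, protein- and HPO term-centric measures described earlier can be computed from a ground-truth matrix and a score matrix; for brevity it omits the masking of training entries that the actual evaluation applies.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def centric_scores(Y_true, Y_score, axis):
    """Average AUC/AUPR over rows (axis=1, protein-centric -> micro-AUC/AUPR)
    or over columns (axis=0, term-centric -> macro-AUC/AUPR), skipping
    rows/columns whose labels are all 0 or all 1 (AUC is undefined there)."""
    aucs, auprs = [], []
    n = Y_true.shape[1 - axis]
    for i in range(n):
        y = Y_true[i, :] if axis == 1 else Y_true[:, i]
        s = Y_score[i, :] if axis == 1 else Y_score[:, i]
        if 0 < y.sum() < len(y):
            aucs.append(roc_auc_score(y, s))
            auprs.append(average_precision_score(y, s))
    return np.mean(aucs), np.mean(auprs)

def pair_scores(Y_true, Y_score):
    """Annotation-centric AUC/AUPR: every protein-term pair is one instance."""
    y, s = Y_true.ravel(), Y_score.ravel()
    return roc_auc_score(y, s), average_precision_score(y, s)
```

With the notation used above, the term-centric macro-AUC/AUPR corresponds to centric_scores(Y_true, Y_score, axis=0) and the protein-centric micro-AUC/AUPR to axis=1, where Y_score would be the completed matrix produced by a given method.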
Table 3 The results of the eight criteria obtained by 5 ×5-fold cross-validation over Data-201706 for the nine competing methods in total Table 4 lists the AUC scores obtained for the five groups divided by the number of annotations. Again, the results reported in these tables lead to the same conclusion as that in Table 3. That is, HPOAnnotator outperforms all other methods in all of the cases. A similar trend is also shown in Table 5. In summary, our approach is capable of achieving the best performance for HPO annotations under cross-validation. Table 4 Macro-AUC obtained by 5 ×5-fold cross-validation over Data-201706 for the nine competing methods Table 5 Macro-AUPR obtained by 5 ×5-fold cross-validation over Data-201706 for the nine competing methods A noteworthy point is that our method works well for HPO terms with a very small number of annotations, i.e., only one to ten annotations per HPO term. In fact, this situation is usually hard for a low-rank approximation. As HPOAnnotator has achieved the best performance, this implies that a low-rank approximation is useful for all groups, including HPO terms with a very small number of annotations. The effectiveness of individual PPNs in cross-validation on Data-201706 By using NMF-PPN, we perform a set of experiments in order to identify the most effective PPN for HPO prediction. To this end, we run NMF-PPN with a single PPN as its input at a time. The performance of NMF-PPN with each of the four PPNs is reported in Table 6. As shown in Table 6, we can conclude that STRING is the most useful PPN for predicting HPO annotations. Moreover, our model can take advantage of the different PPNs to achieve the best performance. Table 6 Performance of NMF-PPN with individual PPNs Computation times in cross-validation on Data-201706 The computation (training) times of the eight methods compared in the cross-validation are recorded, where the times are averaged over the total 25 runs (5 ×5 folds). The computation times on the same machine with the same settings are reported in Table 7. As the table shows, our four models run faster than the compared ones. In fact, they are more than eight times faster than OGL and DLP. The training data is updated periodically, so the model must be retrained with the updated data often. As such, this advantage of our models would make a difference in practice. In addition, OGL and DLP need much more memory space than the compared methods. Table 7 Training times of a single run in 5 ×5-fold cross-validation (average over 25 runs) Predictive performance in the independent test on Data-201712 Table 8 reports the AUC obtained by the experiments conducted on independent data for the eight competing methods. Among the three existing methods, DLP achieves the best performance, with an AUC of 0.8298. NMF outperforms DLP with an AUC of 0.8527, while the two variants of NMF with one network regularizer further achieve better performance with AUCs of around 0.89. AiPA achieves an AUC of 0.9187 with the STRING PPN and NHPO. Most importantly, HPOAnnotator achieves the best performance, with an AUC of more than 0.92. Table 8 AUC obtained by independent test using Data-201712 As Table 9 reports, seven out of the 30 highest-ranked predicted annotations are validated to be true according to Data-201712, which was released later. For example, protein Q02388, encoded by gene COL7A1, is actually annotated by HPO term HP:0001072 (Thickened skin), but this annotation cannot be found in the earlier release, Data-201706. 
Another example is protein Q9UBX5. According to Data-201706, it has no relationship with HPO term HP:0012638 (Abnormality of nervous system physiology), but this record occurs in the later release of the data. Table 9 Seven true predictions out of the top 30 results (by HPOAnnotator) among all newly added annotations The highest-ranked new annotation found by our model involves HP:0001072, which is known to also annotate another ten proteins, O43897, P07585, P08123, P08253, P12111, P20849, P20908, P25067, P53420, and Q13751, based on Data-201706. We find that their similarity scores with Q02388 in STRING are more than 0.9. This indicates that the interactions between Q02388 and those ten proteins in the PPNs imply a high probability of Q02388 being annotated by HP:0001072. In summary, these examples demonstrate both the effectiveness and the necessity of introducing PPI networks for predicting unknown HPO annotations. Validating false positives As mentioned before, seven of the top 30 predictions from our model have already been found in the December 2017 release of the HPO annotations. Because the curation of HPO annotations is normally slow, we believe that more of the top-ranked predictions currently counted as false positives may in fact be correct. In order to validate this assumption, we first select the rest of the top 10 predictions that have not been found in the December 2017 HPO data. Using a protein name (or its coding gene name) and an HPO term name as a query for online search engines, we then check the relevant literature and diseases for each such prediction. Finally, we manually extract from the retrieved papers the supporting evidence suggesting that a particular false positive is in fact correct. Using this manual process, we find evidence for another two predictions. Table 10 lists the PubMed IDs of the relevant literature, the relevant disease names, and the detailed evidence for each identified gene/protein-HPO term pair. The results strongly indicate that the performance of HPOAnnotator is underestimated, owing to the incompleteness of the current gold standard. Table 10 Validation of false positives in the top 10 ranked predictions A typical example demonstrating the performance of HPOAnnotator To further demonstrate the performance of our proposed method for predicting HPO annotations, we here present the different predictions made by the four methods for a typical example, protein P23434. As listed in the last row of Table 11, this protein has 10 annotations. It is interesting to note that the number of correctly predicted HPO terms gradually increases from the first row to the fourth row. Again, this indicates that network information is effective for improving the performance of predicting HPO annotations. Table 11 Predicted HPO terms of P23434 (gene name: GCSH) by our four methods based on NMF Performance comparisons focusing on Organ abnormality Most of the existing models are evaluated on separate sub-ontologies. However, considering only part of the ontology may lose information carried by the entire network, which can connect proteins or HPO terms even beyond the boundaries of two or more sub-ontologies in the network space. As such, we do not conduct the experiments on separate sub-ontologies. Instead, we focus on the major sub-ontology, Organ abnormality (the part under HP:0000118), with 6370 HPO terms, 3446 proteins and 269420 annotations in total according to Data-201706. 
A 5 ×5-fold cross-validation has been conducted by following the same splitting strategy as before. Table 12 reports the scores of the eight evaluation criteria obtained by all compared methods. The results clearly show that the performance differences among the seven cases are subtle. For example, HPOAnnotator achieves the best performance with respect to all evaluation measure except for leaf-AUC. Comparing NMF-Organ and NMF-PPN-Organ in terms of AUC, we can find that network information can help to improve the performance to a certain extent. Nonetheless, the use of both networks of PPN and NHPO might not be so effective in this scenario. Besides, it seems that the performance improvement is quite limited when we consider the whole ontology rather than individual sub-ontologies. Tables 13 and 14 list the evaluation scores of Macro-AUC and Macro-AUPR over the five HPO term groups, respectively. The trend is similar to that presented in Table 12. Again, the results show no notable difference among the compared methods. Table 12 Performance results on Data-201706 focusing on the sub-ontology Organ abnormality Table 13 Macro-AUC obtained by focusing on Organ abnormality Table 14 Macro-AUPR obtained by focusing on Organ abnormality In this paper, we have presented an approach that uses a low-rank approximation to solve the problem of the large-scale prediction of HPO annotations for human proteins. In particular, network information is used to regulate such an approximation. The network information can be derived from both sides of annotations, i.e., PPI networks, and a hierarchical structure of an ontology. In essence, we provided a low-rank approximation solution to the optimization problem of matrix factorization with a network-derived regularization. Extensive experiments on the current HPO database have been conducted to validate the effectiveness of our approach. Experimental results clearly demonstrated the good performance of the proposed method under various settings, including cross-validation, independent test, analysis on the major sub-ontology Organ abnormality, and detailed case studies. The results have validated the good effectiveness as a result of using network information and ontology hierarchical structure as regularization and a low-rank approximation for HPO predictions, even for predictions on HPO terms with a very small number of known annotations. Overall, the four important findings can be concluded from the experimental results: 1) a low-rank approximation works quite well for a large-scale HPO annotations prediction; or more generally, for multi-label classification, even for predicting labels with an extremely small number of labeled instances; 2) a hierarchical ontology structure is very useful as side information for improving the performance of a low-rank approximation; 3) PPI networks from different sources play an important role in predictions; and 4) multiplicative parameter update of a low-rank approximation (matrix factorization) is time-efficient, with around eight times faster than network-based approaches that need the huge memory because of using the original annotation matrices directly. 
AiPA: AiProAnnotator
ALS: Alternating least squares
AUC: Area under the receiver operating characteristic curve
AUPR: Area under the precision-recall curve
BiRW: Bi-random walk
CAFA: Critical assessment of functional annotation
DAG: Directed acyclic graph
DLP: Dual label propagation
HPO: Human phenotype ontology
KKT: Karush-Kuhn-Tucker
NHPO: Network of HPO
NMF: Non-negative matrix factorization
OGL:
PPI: Protein-protein interaction
PPN: Protein-protein network
SSVM: Structural support vector machine
Publication costs were funded by the National Natural Science Foundation of China (No. 61872094 and No. 61572139). S. Z. is supported by Shanghai Municipal Science and Technology Major Project (No. 2017SHZDZX01). J. G., L. L. and S. Y. are supported by the 111 Project (No. B18015), the key project of Shanghai Science & Technology (No. 16JC1420402), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab. H. M. has been supported in part by JST ACCEL (grant number JPMJAC1503), MEXT Kakenhi (grant numbers 16H02868 and 19H04169), FiDiPro by Tekes (currently Business Finland) and the AIPSE program by the Academy of Finland. The funding bodies had no role in the design of the study, in the collection, analysis, and interpretation of data, or in writing the manuscript.
School of Computer Science and Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, 220 Handan Road, Shanghai, 200433, China
Junning Gao, Lizhi Liu, Shuwei Yao & Shanfeng Zhu
School of Computing and Mathematics, Charles Sturt University, Elizabeth Mitchell Dr, Albury, NSW 2640, Australia
Xiaodi Huang
Bioinformatics Center, Institute for Chemical Research, Kyoto University, Kashiwada Gokasho, Uji, Kyoto, 611-0011, Japan
Hiroshi Mamitsuka
Department of Computer Science, Aalto University, Konemiehentie 2, Espoo, 02150, Finland
Shanghai Institute of Artificial Intelligence Algorithms and ISTBI, Fudan University, Shanghai, 200433, China
Shanfeng Zhu
Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, China
Junning Gao, Lizhi Liu & Shuwei Yao
JG and SZ jointly contributed to the design of the study. JG designed and implemented the ANMF method, performed the experiments, and drafted the manuscript. LL, SY, XH, HM and SZ helped with the result analysis and contributed to improving the writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Shanfeng Zhu.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Gao, J., Liu, L., Yao, S. et al. HPOAnnotator: improving large-scale prediction of HPO annotations by low-rank approximation with HPO semantic similarities and multiple PPI networks. BMC Med Genomics 12 (Suppl 10), 187 (2019). https://doi.org/10.1186/s12920-019-0625-1
Low-rank approximation; Protein-protein interaction networks
CommonCrawl
\begin{definition}[Definition:Ideal (Order Theory)] Let $\left({S, \preceq}\right)$ be an ordered set. Let $I \subseteq S$ be a non-empty subset of $S$. Then $I$ is an '''ideal''' of $S$ {{iff}} it is both a lower set and a directed set. That is, $I$ is an '''ideal''' {{iff}}: :$\forall x \in I, y \in S: y \preceq x \implies y \in I$ :$\forall x, y \in I: \exists z \in I: x \preceq z$ and $y \preceq z$ Category:Definitions/Order Theory \end{definition}
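A quick illustrative example (not part of the original definition page): for any $x \in S$, the principal lower set $I_x = \{y \in S : y \preceq x\}$ is an ideal of $S$, since $y' \preceq y \preceq x$ implies $y' \in I_x$, and any two elements of $I_x$ have the upper bound $x \in I_x$.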
ProofWiki
\begin{document} \begin{center} {\LARGE\bf Commensurability of lattices in right-angled buildings}\\ {\large Sam Shepherd} \end{center} \begin{abstract} Let $\Gamma$ be a graph product of finite groups, with finite underlying graph, and let $\Delta$ be the associated right-angled building. We prove that a uniform lattice $\Lambda$ in the cubical automorphism group $\operatorname{Aut}(\Delta)$ is weakly commensurable to $\Gamma$ if and only if all convex subgroups of $\Lambda$ are separable. As a corollary, any two finite special cube complexes with universal cover $\Delta$ have a common finite cover. An important special case of our theorem is where $\Gamma$ is a right-angled Coxeter group and $\Delta$ is the associated Davis complex. We also obtain an analogous result for right-angled Artin groups. In addition, we deduce quasi-isometric rigidity for the group $\Gamma$ when $\Delta$ has the structure of a Fuchsian building. \end{abstract} \tableofcontents \section{Introduction} Given compact length spaces $X_1$ and $X_2$ with a common universal cover $\tilde{X}$, it is natural to ask whether $X_1$ and $X_2$ have a common finite cover. The deck transformation groups of $\tilde{X}\to X_1, X_2$ are \emph{uniform lattices} $\Gamma_1,\Gamma_2$ in the isometry group $\operatorname{Is}(\tilde{X})$ (i.e. they act properly and cocompactly on $\tilde{X}$), and the existence of a common finite cover is equivalent to $\Gamma_1$ and $\Gamma_2$ being \emph{weakly commensurable in $\operatorname{Is}(\tilde{X})$} -- meaning that there exists $g\in\operatorname{Is}(\tilde{X})$ such that $g\Gamma_1 g^{-1}$ is \emph{commensurable} to $\Gamma_2$ (i.e. $g\Gamma_1 g^{-1}\cap\Gamma_2$ has finite index in both $g\Gamma_1 g^{-1}$ and $\Gamma_2$). Alternatively, one could start with a locally compact length space $X$ and uniform lattices $\Gamma_1,\Gamma_2<\operatorname{Is}(X)$, and ask whether $\Gamma_1$ and $\Gamma_2$ are weakly commensurable. In particular, one could ask if there is an algebraic property of the lattices that guarantees weak commensurability. If $X=\mathbb{H}^2$ is the hyperbolic plane then it can happen that $\Gamma_1$ and $\Gamma_2$ are not weakly commensurable even if the lattices are isomorphic as groups (for instance using the uncountability of the moduli space of hyperbolic surfaces of a given genus). However, if $X$ is a symmetric space associated to a semisimple Lie group with no compact factors and trivial center that is not locally isomorphic to SL$(2,\mathbb{R})$, then Mostow Rigidity \cite{Mostow73} tells us that $\Gamma_1$ is conjugate to $\Gamma_2$ if and only if the lattices are isomorphic; therefore $\Gamma_1$ and $\Gamma_2$ are weakly commensurable if and only if they are \emph{abstractly commensurable} (i.e. have isomorphic finite-index subgroups). Nevertheless, there are still many examples in this setting where $\Gamma_1$ and $\Gamma_2$ are not abstractly commensurable but have similar algebraic properties -- for instance they could be a pair of uniform lattices in PU$(n,1)$ with isomorphic profinite completions \cite{Stover19}. A very different setting is where $X$ is a locally finite cell complex, and we consider uniform lattices $\Gamma_1,\Gamma_2$ in its automorphism group $\operatorname{Aut}(X)$. If $X$ is a tree then Leighton's Theorem tells us that \emph{all} uniform lattices in $\operatorname{Aut}(X)$ are weakly commensurable \cite{Leighton82}. 
There are various results for other cell complexes giving sufficient (and sometimes necessary) conditions for $\Gamma_1$ and $\Gamma_2$ to be weakly commensurable \cite{Haglund06,Huang18,Woodhouse19,Shepherd20,BridsonShepherd21,Woodhouse21} -- we will discuss many of these later in the introduction. For many of these results the conditions for $\Gamma_1$ and $\Gamma_2$ involve the existence of \emph{separable subgroups} (see Definition \ref{defn:separable}). Separable subgroups are natural to consider in this context, because if $\Gamma_1$ and $\Gamma_2$ have many separable subgroups then one can often replace them by finite-index sublattices with certain desired properties (usually involving the geometry of $X$). On the other hand, lattices with a lack of separability properties provide a good source of counter-examples; for instance if $X$ is a product of trees then there exist some uniform lattices that are residually finite (e.g. a product of lattices) and others that are not \cite{Wise96,BurgerMozes97} -- residual finiteness is preserved by weak commensurability, so this gives examples where $\Gamma_1$ and $\Gamma_2$ are not weakly commensurable. The setting of interest to us is as follows. Let $\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I})$ be a graph product of finite groups, with finite underlying graph $\mathcal{G}$, and let $\Delta=\Delta(\mathcal{G},(G_i)_{i\in I})$ be the associated right-angled building (see Section \ref{sec:graphprod} for definitions). The cellular structure on $\Delta$ makes it a CAT(0) cube complex. Examples include products of trees and Davis complexes of right-angled Coxeter groups. The finiteness assumption for the groups $G_i$ and graph $\mathcal{G}$ ensures that $\Delta$ is locally finite. Our main theorem is as follows. \begin{thm}\label{thm:Delta} Let $\Lambda<\operatorname{Aut}(\Delta)$ be a uniform lattice. Then $\Lambda$ and $\Gamma$ are weakly commensurable in $\operatorname{Aut}(\Delta)$ if and only if all convex subgroups of $\Lambda$ are separable. \end{thm} Here $\operatorname{Aut}(\Delta)$ is the group of all cubical automorphisms of $\Delta$, not just the type-preserving automorphisms, so this theorem generalizes the work of Haglund \cite[Theorems 1.4 and 7.2]{Haglund06}. By a \emph{convex subgroup} of $\Lambda$ we mean a subgroup that stabilizes and acts cocompactly on a convex subcomplex of $\Delta$. \begin{remk}\label{remk:inthyp} The condition in Theorem \ref{thm:Delta} that all convex subgroups of $\Lambda$ are separable can be replaced with the weaker condition that all finite-index subgroups of finite intersections of $\Lambda$-stabilizers of hyperplanes are separable (see Proposition \ref{prop:separable} and the discussion preceding it). The same replacement can be made in Corollaries \ref{cor:Coxeter} and \ref{cor:Artin}. \end{remk} \subsection{Some corollaries} An important concept in the theory of cube complexes is the notion of a group acting specially on a CAT(0) cube complex (Definition \ref{defn:specially}), due to Haglund and Wise \cite{HaglundWise08}. The resulting theory has lead to many striking advancements in group theory and topology, including the resolution of the Virtual Haken Conjecture \cite{Agol13}. At the heart of this theory is the fact that a group inherits various strong separability properties if it acts specially on a CAT(0) cube complex. 
In particular, if $X$ is a locally finite CAT(0) cube complex and $\Lambda<\operatorname{Aut}(X)$ is a (virtually) special uniform lattice, then all convex subgroups of $\Lambda$ are separable. Special uniform lattices are thus good candidates for weak commensurability results. As a simple example, it follows from \cite{Leighton82,Wise06} that if $X$ is a product of trees then all special uniform lattices in $\operatorname{Aut}(X)$ are weakly commensurable. Returning to the right-angled building $\Delta$, we show in Proposition \ref{prop:Gvspecial} that the uniform lattice $\Gamma<\operatorname{Aut}(\Delta)$ is virtually special. In particular, this implies that all convex subgroups of $\Gamma$ are separable, so we deduce the ``only if'' direction of Theorem \ref{thm:Delta} (we note that the separability of convex subgroups of $\Gamma$ is also proved in \cite{Haglund08}). In addition, the above discussion implies the following corollary of Theorem \ref{thm:Delta}. \begin{cor}\label{cor:speciallattices} All special uniform lattices in $\operatorname{Aut}(\Delta)$ are weakly commensurable. \end{cor} We remark that it is unknown whether all special uniform lattices in $\operatorname{Aut}(X)$ are weakly commensurable for $X$ an arbitrary locally finite CAT(0) cube complex -- this is an open problem of Haglund \cite[Problem 2.4]{Haglund06}. One can also formulate Corollary \ref{cor:speciallattices} in terms of covering spaces as follows. \begin{cor} Let $X_1$ and $X_2$ be finite special cube complexes with universal cover $\Delta$. Then $X_1$ and $X_2$ have a common finite cover. \end{cor} If the groups $G_i$ are all copies of $\mathbb{Z}/2\mathbb{Z}$ then the graph product $\Gamma$ is called a \emph{right-angled Coxeter group} and the right-angled building $\Delta$ is called the \emph{Davis complex} associated to $\Gamma$ \cite{Davis83,Moussong88}. We thus get another corollary of Theorem \ref{thm:Delta}, which in particular answers \cite[Problem 2.2]{Haglund06} in the affirmative. \begin{cor}\label{cor:Coxeter} Let $W$ be a right-angled Coxeter group with associated Davis complex $X$, and let $\Lambda<\operatorname{Aut}(X)$ be a uniform lattice. Then $\Lambda$ and $W$ are weakly commensurable in $\operatorname{Aut}(X)$ if and only if all convex subgroups of $\Lambda$ are separable. \end{cor} \begin{remk}\label{remk:otherDavis} The Davis complex $X$ is sometimes defined as the CAT(0) cube complex whose 1-skeleton is the undirected Cayley graph of $W$ with respect to the standard generating set \cite[Proposition 7.3.4]{Davis08}; but taking the cubical subdivision recovers the definition of the Davis complex given above, so Corollary \ref{cor:Coxeter} holds for both versions of the Davis complex. \end{remk} Corollary \ref{cor:Coxeter} generalizes work of Woodhouse \cite{Woodhouse21}, who considers the case where the defining graph of $W$ is the 1-skeleton of a certain kind of Kneser complex. Haglund also has a result similar to Corollary \ref{cor:Coxeter} \cite[Theorem 1.7]{Haglund06}, which applies to certain hyperbolic but non-right-angled Coxeter groups with two-dimensional Davis complexes. Similar to the class of right-angled Coxeter groups is the class of right-angled Artin groups. 
A \emph{right-angled Artin group} $A$ is defined as the graph product of finitely many copies of $\mathbb{Z}$, and it is the fundamental group of a certain finite non-positively curved cube complex called the Salvetti complex \cite{Charney07}; in particular, the universal cover $X$ of the Salvetti complex is a CAT(0) cube complex admitting a proper cocompact action of $A$. The cube complex $X$ is not the right-angled building associated to the graph product structure on $A$ (the latter is not even locally finite) but it does coincide with the Davis complex of a certain right-angled Coxeter group $W$ \cite{DavisJanuszkiewicz00} (this uses the notion of Davis complex from Remark \ref{remk:otherDavis}), and $A$ is commensurable to $W$ as a lattice in $\operatorname{Aut}(X)$. Therefore we obtain the following corollary of Corollary \ref{cor:Coxeter}. \begin{cor}\label{cor:Artin} Let $A$ be a right-angled Artin group, let $X$ be the universal cover of the associated Salvetti complex, and let $\Lambda<\operatorname{Aut}(X)$ be a uniform lattice. Then $\Lambda$ and $A$ are weakly commensurable in $\operatorname{Aut}(X)$ if and only if all convex subgroups of $\Lambda$ are separable. \end{cor} This generalizes a theorem of Huang \cite[Theorem 1.5]{Huang18}, who considers the case where the defining graph of $A$ is star-rigid and has no induced 4-cycles. However, Huang's result is stronger in one sense because it shows that \emph{all} uniform lattices in $\operatorname{Aut}(X)$ are weakly commensurable. It is natural to ask when this holds in general. \begin{que}\label{que:allwc} For which right-angled buildings $\Delta$ is it true that all uniform lattices in $\operatorname{Aut}(\Delta)$ are weakly commensurable? \end{que} We do not have a complete solution, but we can answer this question for certain cases. It follows from \cite{Meier96} that $\Delta$ is hyperbolic if and only if the defining graph $\mathcal{G}$ contains no induced 4-cycle, and hyperbolic cubulated groups are virtually special by \cite{Agol13}, so Corollary \ref{cor:speciallattices} yields the following result. \begin{cor}\label{cor:hyp} If the defining graph $\mathcal{G}$ contains no induced 4-cycle, then all uniform lattices in $\operatorname{Aut}(\Delta)$ are weakly commensurable. \end{cor} We can also use Corollary \ref{cor:speciallattices} to get a positive answer to Question \ref{que:allwc} in some relatively hyperbolic cases. Indeed, Caprace gives conditions for a right-angled Coxeter group to be hyperbolic relative to a collection of parabolic subgroups \cite{Caprace15}, and it follows from \cite{Oregonreyes20,SageevWise15} that all uniform lattices in the automorphism group of the corresponding Davis complex are virtually special if these parabolic subgroups are abelian or hyperbolic. Additionally, we get a positive answer to Question \ref{que:allwc} if $\Gamma$ has finite index in $\operatorname{Aut}(\Delta)$ as then all uniform lattices in $\operatorname{Aut}(\Delta)$ are commensurable, this happens for example if $\Gamma$ is a right-angled Coxeter group with star-rigid defining graph (Remark \ref{remk:Autdiscrete}). On the other hand, there are some cases where we get a negative answer to Question \ref{que:allwc} because there is a non-residually-finite uniform lattice in $\operatorname{Aut}(\Delta)$, for example if $\Delta$ is a product of trees or the cube complex associated to a right-angled Artin group whose defining graph contains an induced 4-cycle \cite[Theorem 1.8]{Huang18}. 
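To illustrate the above dichotomy with a standard example (recorded here purely for illustration, and not used in the sequel): if $\mathcal{G}$ is a $4$-cycle with consecutive vertices $i,j,k,l$, then $\mathcal{G}$ is the join of the pairs $\{i,k\}$ and $\{j,l\}$, so $\Gamma\cong(G_i\ast G_k)\times(G_j\ast G_l)$ and $\Delta$ splits as a product of two trees; on the other hand, any finite graph of girth at least $5$, such as a $5$-cycle, contains no induced $4$-cycle, so Corollary \ref{cor:hyp} applies to the corresponding graph products.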
\subsection{Quasi-isometric rigidity} We prove that the graph product $\Gamma$ is quasi-isometrically rigid in certain cases with the following theorem. This requires the defining graph $\mathcal{G}$ to be a \emph{generalized $m$-gon}, meaning that it is connected, bipartite, and has diameter $m$ and girth $2m$. \begin{thm}\label{thm:QI} Let $\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I})$ be a graph product. Suppose that $\mathcal{G}$ is a finite generalized $m$-gon, with $m\geq3$, which is bipartite with respect to the partition $I=I_1\sqcup I_2$. Suppose that $d_1,d_2,p_1,p_2\geq2$ are integers such that every $i\in I_k$ ($k=1,2$) has degree $d_k$ and $|G_i|=p_k$. Then in each of the following cases \begin{enumerate}[label=(\roman*)] \item\label{item:i} $2< d_1,d_2,p_1,p_2$, \item\label{item:ii} $p_1=p_2=2<d_1,d_2$, \item\label{item:iii} $d_1=d_2=2<p_1,p_2$, \end{enumerate} any finitely generated group quasi-isometric to $\Gamma$ is abstractly commensurable with $\Gamma$. \end{thm} The source of this rigidity comes from the fact that the associated building $\Delta$ has the structure of a Fuchsian building in these cases. The result then follows by combining the quasi-isometric rigidity of Fuchsian buildings \cite{Xie06} with Corollary \ref{cor:hyp} (or with \cite{Haglund06} and \cite{Agol13} in cases \ref{item:ii} and \ref{item:iii}). The details are in Section \ref{sec:QI}. Theorem \ref{thm:QI} provides examples of quasi-isometrically rigid hyperbolic groups with Menger curve boundary, answering a question of Dru\c{t}u--Kapovich \cite[Problem 25.20]{DrutuKapovich18}. \subsection{Proof strategy}\label{subsec:strategy} We utilize a number of well known structures on the right-angled building $\Delta$, many of which originate from the classical theory of Tits buildings. For example, $\Delta$ is divided into a number of finite convex subcomplexes called \emph{chambers} (Definition \ref{defn:building}), and $\Gamma$ acts simply transitively on the set of chambers. Each vertex of $\Delta$ also has an associated \emph{rank} (Definition \ref{defn:rank}). The cubical automorphism group $\operatorname{Aut}(\Delta)$ might not preserve chambers or rank, but we show that the subgroup $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ of rank-preserving automorphisms does preserve the chamber structure, along with a number of other structures (Proposition \ref{prop:preserved}), and we show that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$ (Proposition \ref{prop:finiteindex}). Most of our arguments involve these structures, so we always work with $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ instead of $\operatorname{Aut}(\Delta)$. Another key definition in our argument is the notion of \emph{typed atlas} (Definition \ref{defn:typedatlas}), which can be thought of as a way of decorating $\Delta$ to make it more rigid. We show that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ acts transitively on the set of typed atlases, and that the stabilizer of each typed atlas is a uniform lattice in $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ (Proposition \ref{prop:typedatlas}). Haglund has a similar argument involving atlases rather than typed atlases \cite[Proposition 6.5]{Haglund06}. The ``if'' direction of Theorem \ref{thm:Delta} then reduces to finding a pair of typed atlases that are (virtually) preserved by $\Lambda$ and $\Gamma$ respectively. 
Constructing a typed atlas that is preserved by $\Gamma$ is relatively easy, but finding one that is virtually preserved by $\Lambda$ requires more work. We first build a groupoid that consists of one cubical isomorphism between each pair of chambers in $\Delta$. This groupoid must satisfy several other properties, including being invariant under the action of some finite-index subgroup $\Lambda'<\Lambda$. We then construct a typed atlas by specifying some decorations on a single chamber $C$ and transporting these decorations to all other chambers using the groupoid. This typed atlas may not be preserved by $\Lambda$ or even $\Lambda'$, but it will be preserved by the kernel of a certain holonomy map $\Upsilon:\Lambda'\to\operatorname{Aut}(C)$, where $\Upsilon(\lambda)$ is defined by mapping $C$ onto another chamber using $\lambda$ and then mapping back to $C$ using the groupoid (Lemma \ref{lem:Latypedatlas}). The core of the paper is devoted to building the $\Lambda'$-invariant groupoid. The strategy is to build a hierarchy of groupoids, each defined on a certain subset of chambers called a \emph{chamber-residue} (Definition \ref{defn:chamberres}). Roughly, the chamber-residues correspond to subgroups of $\Gamma$ obtained by restricting to a subgraph of the defining graph (and also the cosets of such subgroups). The hierarchy starts with finite chamber-residues corresponding to spherical subgroups of $\Gamma$, and we define groupoids on these using the actions of the spherical subgroups. The groupoid we need, defined on the set of all chambers, is obtained at the final step of the hierarchy. The groupoids built at each step of the hierarchy must be equivariant with respect to some finite-index subgroup of $\Lambda$, which necessitates using holonomy maps for chamber-residue stabilizers (these are similar to the holonomy map described in the previous paragraph but with $\operatorname{Aut}(C)$ replaced by the symmetry group of a certain finite set of groupoids). This is where we use the separability of convex subgroups of $\Lambda$. These arguments have parallels in the work of Woodhouse \cite{Woodhouse21}, who proved a result similar to Theorem \ref{thm:Delta} for a certain class of right-angled Coxeter groups, but one big difference is that Woodhouse's hierarchy only has two levels, one level corresponding to hyperplanes and another corresponding to the whole of $\Delta$, whereas our hierarchy is more complex. \subsection{Structure of the paper} Sections \ref{sec:graphprod} and \ref{sec:galleries} review some standard definitions and lemmas regarding graph products and right-angled buildings, including the notions of chambers, galleries and chamber-residues. Section \ref{sec:rank} introduces the rank and poset structures on $\Delta$, as well as a new notion called \emph{level-equivalence}, which we use to prove propositions about the rank-preserving automorphism group $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$. Section \ref{sec:hyperplanes} studies the hyperplanes in $\Delta$, and relates them to the level-equivalence classes. We also prove that $\Gamma$ is a virtually special lattice in $\operatorname{Aut}(\Delta)$, from which we deduce the ``only if'' direction in Theorem \ref{thm:Delta}. Section \ref{sec:resgroup} introduces the notion of \emph{residue-groupoids}, which are the groupoids we need for the hierarchy discussed in Section \ref{subsec:strategy}. Sections \ref{sec:hierclasses}--\ref{sec:atlas} prove the ``if'' direction in Theorem \ref{thm:Delta}. 
Section \ref{sec:hierclasses} lays the groundwork for the hierarchy of residue-groupoids by defining a hierarchy of level-equivalence classes. Section \ref{sec:hierresgroup} constructs the hierarchy of residue-groupoids. Section \ref{sec:atlas} defines the notion of typed atlases, and uses them to complete the proof that $\Lambda$ and $\Gamma$ are weakly commensurable. Finally, Section \ref{sec:QI} proves Theorem \ref{thm:QI}, identifying several cases where $\Delta$ has the structure of a Fuchsian building, and deducing that $\Gamma$ is quasi-isometrically rigid in these cases. \textbf{Acknowledgements:}\, I am grateful for the comments and suggestions of Jingyin Huang, Martin Bridson, Eduardo Oregón-Reyes and Daniel Woodhouse. \section{Graph products and right-angled buildings}\label{sec:graphprod} In this section we establish some basic definitions and lemmas, including the definitions of graph product and the associated right-angled building. Most of this content is standard and appears in \cite{Haglund06}. First we need the notion of cubical cones. \begin{nota} For $N$ a simplicial complex, we let $\bar{N}$ denote the poset of simplices of $N$, together with the empty set, ordered by inclusion. We regard $\bar{N}$ combinatorially, so an element of $\bar{N}$ is a subset of the vertex set of $N$ that is either empty or spans a simplex. \end{nota} \begin{defn}(Cubical cones)\\\label{defn:cubecone} The \emph{cubical cone} $C(N)$ of a simplicial complex $N$ with vertex set $I$ is the cube complex constructed as follows. We have a bijection $t$ from the vertex set of $C(N)$ to $\bar{N}$, called the \emph{typing map}. We refer to $t(v)$ as the \emph{type} of the vertex $v$, and the vertex of type $\emptyset$ is called the \emph{center} of $C(N)$. We have an edge in $C(N)$ joining vertices $v_1,v_2$ whenever $t(v_1)\cup\{i\}=t(v_2)$ for some $i\in I$, so the 1-skeleton of $C(N)$ corresponds to the Hasse diagram of $\bar{N}$. Whenever $J_1\subset J_2\in\bar{N}$, the induced subgraph in the 1-skeleton of $C(N)$ spanned by vertices $v$ with $J_1\subset t(v)\subset J_2$ is isomorphic to the 1-skeleton of a cube of dimension $|J_2-J_1|$; and each face of this cube corresponds to sets $J_1\subset J'_1\subset J'_2\subset J_2$ and vertices $v$ with $J'_1\subset t(v)\subset J'_2$. We can therefore define the cubes of $C(N)$ to be in correspondence with pairs of nested sets $J_1\subset J_2\in\bar{N}$. If $Q$ is the cube corresponding to $J_1\subset J_2$ then we define $\underline{t}(Q):=J_1$. For any point $p\in C(N)$, there will be a unique cube $Q$ containing $p$ in its interior, and we define $\underline{t}(p):=\underline{t}(Q)$. \end{defn} \begin{figure} \caption{An example of a simplicial complex $N$ and corresponding cubical cone $C(N)$.} \label{fig:ccone} \end{figure} \begin{lem} \cite[Lemma 3.5]{Haglund06}\\\label{lem:cubecone} The cubical cone $C(N)$ is a CAT(0) cube complex, and the link of its center is isomorphic to $N$. \end{lem} We can now define graph products and their associated right-angled buildings. \begin{defn}(Graph products)\\ Let $\mathcal{G}$ denote a simplicial graph with vertex set $I$. Suppose that for each $i\in I$ we are given a group $G_i$. 
We define the \emph{graph product of $(G_i)_{i\in I}$ along $\mathcal{G}$} to be the group $$\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I}):=\frac{*_{i\in I} G_i}{\llangle g_i g_j g_i^{-1} g_j^{-1}\mid i,j\in I\text{ adjacent}\rrangle},$$ where $i,j\in I$ \emph{adjacent} means that $i,j$ are distinct vertices joined by an edge in $\mathcal{G}$. Let $\Gamma_i<\Gamma$ be the image of the morphism $G_i\to\Gamma$, and for $J\subset I$ we denote by $\Gamma_J$ the subgroup of $\Gamma$ generated by $(\Gamma_j)_{j\in J}$, and we denote by $\mathcal{G}_J$ the subgraph of $\mathcal{G}$ induced by $J$. We say that $J\subset I$ is \emph{spherical} if any two distinct vertices of $J$ are adjacent. For $i\in I$ we let $i\dn^\perp$ denote the set of all $j\in I$ adjacent to $i$, and we let $i\dn^{\underline{\perp}}:=\{i\}\cup i\dn^\perp$. Similarly, for $J\subset I$ spherical we let \begin{equation*} J\dn^{\underline{\perp}}:=\bigcap_{j\in J} j\dn^{\underline{\perp}}\quad\text{and}\quad J\dn^\perp:=J\dn^{\underline{\perp}} - J. \end{equation*} By convention we let $\emptyset\dn^{\underline{\perp}}=\emptyset\dn^\perp=I$. Note that we always have $J\subset J\dn^{\underline{\perp}}$. \end{defn} \begin{lem}\label{lem:GammaJ} The following statements hold: \begin{enumerate} \item\label{item:GiGi} For each $i\in I$ the morphism $G_i\to\Gamma$ is injective, so we will identify the groups $G_i$ and $\Gamma_i$. \item\label{item:GJ} For $J\subset I$ the natural map $*_{j\in J} G_j\to\Gamma$ induces an isomorphism $\Gamma(\mathcal{G}_J,(G_j)_{j\in J})\to\Gamma_J$. \item\label{item:GJ1J2} $\Gamma_{J_1}\cap\Gamma_{J_2}=\Gamma_{J_1\cap J_2}$ for $J_1,J_2\subset I$. \end{enumerate} \end{lem} \begin{proof} For $J\subset I$ we have a morphism from $*_{i\in I}G_i\to *_{j\in J} G_j$ that is the identity on each $G_j$ for $j\in J$ and kills each $G_i$ for $i\notin J$. This descends to a morphism $\rho_J:\Gamma\to\Gamma(\mathcal{G}_J,(G_j)_{j\in J})$. The natural map $\Gamma(\mathcal{G}_J,(G_j)_{j\in J})\to\Gamma$ postcomposed with $\rho_J$ is the identity, so this proves \ref{item:GJ}. \ref{item:GiGi} is a special case of \ref{item:GJ}. We may now consider $\rho_J$ as a group retraction $\rho_J:\Gamma\to\Gamma_J$. For $J_1,J_2\subset I$, observe that $\rho_{J_1}\circ\rho_{J_2}$ has image in $\Gamma_{J_1\cap J_2}$ (for example by considering the image of a product of elements in the $G_i$). But $\Gamma_{J_1\cap J_2}$ is a subgroup of $\Gamma_{J_1}\cap\Gamma_{J_2}$, and $\rho_{J_1}\circ\rho_{J_2}$ is the identity on $\Gamma_{J_1}\cap\Gamma_{J_2}$, hence $\Gamma_{J_1}\cap\Gamma_{J_2}=\Gamma_{J_1\cap J_2}$. \end{proof} \begin{defn}(Right-angled building of a graph product)\\\label{defn:building} Let $\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I})$ be a graph product, let $N=N(\mathcal{G})$ be the flag completion of $\mathcal{G}$, and let $C(N)$ be the cubical cone of $N$. Recall from Lemma \ref{lem:cubecone} that we have a map $\underline{t}:C(N)\to\bar{N}$, and in this setting $\bar{N}$ is the set of spherical subsets of $I$. Consider the equivalence relation on $\Gamma\times C(N)$ defined by $$(\gamma,p)\sim(\gamma',p')\quad\Leftrightarrow\quad p=p'\text{ and }\gamma^{-1}\gamma'\in\Gamma_J\text{ for }J=\underline{t}(p).$$ The \emph{right-angled building of $\Gamma$} is the quotient $\Delta=\Delta(\mathcal{G},(G_i)_{i\in I}):=\Gamma\times C(N)/\sim$. We denote the equivalence class of $(\gamma,p)$ by $[\gamma,p]$. 
For $\gamma\in\Gamma$, the image of the inclusion $\{\gamma\}\times C(N)\xhookrightarrow{}\Delta$ is called a \emph{chamber of $\Delta$}, denoted $C_\gamma$. Letting $v_N$ denote the center of $C(N)$, we call $[\gamma,v_N]$ the \emph{center of $C_\gamma$}. We denote the base chamber $C_{1_\Gamma}$ by $C_*$ and the set of all chambers by $\mathcal{C}(\Delta)$. We define the \emph{typing map} $t:\Delta^0\to\bar{N}$ (where $\Delta^0$ is the vertex set of $\Delta$) to be the natural extension of the typing map from Definition \ref{defn:cubecone}. Again we refer to $t(v)$ as the \emph{type} of the vertex $v$. \end{defn} \begin{figure} \caption{An example of the graph $\mathcal{G}$ and a section of the right-angled building $\Delta$. In this example the groups $G_i,G_j,G_k,G_l$ have orders $2,2,3,3$ respectively. One of the chambers is shown in bold, and its vertices are labeled by their types.} \label{fig:building} \end{figure} We need the following key lemma regarding the structure of intersections of chambers. \begin{lem}\label{lem:chamberint} Let $\gamma_1,\gamma_2\in\Gamma$. Then $C_{\gamma_1}\cap C_{\gamma_2}$ is non-empty if and only if there exists $J\in\bar{N}$ with $\gamma_1^{-1}\gamma_2\in\Gamma_J$. In this case, there is a unique minimal such $J$, and we have \begin{equation}\label{chamberintersect} C_{\gamma_1}\cap C_{\gamma_2}=\{\gamma_1\}\times C_J=\{\gamma_2\}\times C_J, \end{equation} where $C_J\subset C(N)$ is the subcomplex defined by $$C_J:=\{p\in C(N)\mid J\subset \underline{t}(p)\}.$$ \end{lem} \begin{proof} If $J\in\bar{N}$ with $\gamma_1^{-1}\gamma_2\in\Gamma_J$ then $[\gamma_1,v]=[\gamma_2,v]\in C_{\gamma_1}\cap C_{\gamma_2}$ for $v\in C(N)$ the vertex with $t(v)=\underline{t}(v)=J$. Conversely, if $[\gamma,p]\in C_{\gamma_1}\cap C_{\gamma_2}$ then $\gamma_1^{-1}\gamma_2\in\Gamma_{J}$ for $J=\underline{t}(p)$. Now $\Gamma_J$ is the direct product of the subgroups $G_j$ for $j\in J$, so we may write $\gamma_1^{-1}\gamma_2=\prod_{j\in J}g_j$ with $g_j\in G_j$. Shrinking $J$ if necessary, we may assume that the $g_j$ are non-trivial. We claim that $J\in\bar{N}$ is unique minimal with $\gamma_1^{-1}\gamma_2\in\Gamma_{J}$. Indeed for any $i\in I$ we have a homomorphism $\rho_i:\Gamma\to G_i$ by killing $G_j$ for $i\neq j\in I$, and it is clear that $\rho_i(\Gamma_{J'})=\{1\}$ if $i\notin J'$. But we know that $\rho_j(\gamma_1^{-1}\gamma_2)=g_j\neq 1$ for $j\in J$, so $\gamma_1^{-1}\gamma_2\in\Gamma_{J'}$ implies $J\subset J'$ as required. We now show (\ref{chamberintersect}). Consider $p\in C(N)$ with $\underline{t}(p)=J'$. We have that $[\gamma_1,p]=[\gamma_2,p]$ if and only if $\gamma_1^{-1}\gamma_2\in\Gamma_{J'}$, but by the first part of the lemma this is equivalent to $J\subset J'$ -- i.e. $p\in C_J$. \end{proof} \begin{cor}\label{cor:cubestructure} The cube complex structure on $C(N)$ induces a cube complex structure on $\Delta$. \end{cor} \begin{remk}\label{remk:chamberneigh} If $C$ is a chamber with center $v$, then $C$ is the cubical neighborhood of $v$ -- i.e. the union of cubes in $\Delta$ that contain $v$. This is because $C(N)$ is the cubical neighborhood of $v_N$ (see Definition \ref{defn:cubecone}), and for $\gamma\in \Gamma$ the equivalence class $[\gamma,v_N]$ is a singleton, so all cubes in $\Delta$ containing $[\gamma,v_N]$ must come from cubes in $\{\gamma\}\times C(N)$. \end{remk} We will also need the following three results. 
\begin{lem}\cite[Lemma 4.4]{Haglund06}\\ The left action of $\Gamma$ on $\Gamma\times C(N)$ induces an action of $\Gamma$ on $\Delta$ by cubical automorphisms. Moreover, the projection $\Gamma\times C(N)\to C(N)$ induces a projection $\rho:\Delta\to C(N)$, invariant under the action of $\Gamma$, and two points are in the same $\Gamma$-orbit if and only if they have the same image under $\rho$. The typing map $t$ factors through $\rho$, so $t$ is also invariant under the action of $\Gamma$. \end{lem} \begin{lem}\cite[Corollary 4.9]{Haglund06}\\\label{lem:lfinite} $\Delta$ is locally finite if and only if the graph $\mathcal{G}$ and the groups $G_i$ are finite. In this case $\Gamma$ is a uniform lattice in $\operatorname{Aut}(\Delta)$, the group of cubical automorphisms of $\Delta$. \end{lem} The following is proved in \cite{Davis98} and \cite{Meier96}. \begin{prop} $\Delta$ is a CAT(0) cube complex. \end{prop} \section{Galleries and chamber-residues}\label{sec:galleries} In this section we define adjacency of chambers, galleries and chamber-residues, and prove some basic lemmas. Again most of this material is standard and appears in \cite{Haglund06}. \begin{defn}(Adjacent chambers)\\\label{defn:adjacent} We say that chambers $C_{\gamma_1},C_{\gamma_2}$ are \emph{$i$-adjacent} if $1\neq \gamma_1^{-1}\gamma_2\in G_i$. We say that $C_{\gamma_1},C_{\gamma_2}$ are \emph{adjacent} if they are $i$-adjacent for some $i\in I$. It follows from Definition \ref{defn:building} and Lemma \ref{lem:chamberint} that chambers $C,C'$ are $i$-adjacent if and only if $C\neq C'$ and there is a vertex $v\in C\cap C'$ of type $\{i\}$. \end{defn} \begin{defn}(Galleries)\\ Let $C,C'\in\mathcal{C}(\Delta)$ be chambers. A \emph{gallery of $\Delta$ joining $C$ and $C'$} is a sequence of chambers $(C_0,C_1,...,C_n)$ such that $C_0=C$, $C_n=C'$ and for each $1\leq k\leq n$ the chambers $C_{k-1},C_k$ are $i_k$-adjacent for some $i_k\in I$. For $J\subset I$ we say that $(C_0,C_1,...,C_n)$ is a \emph{$J$-gallery} if $i_k\in J$ for $1\leq k\leq n$. Note that an $\emptyset$-gallery consists of a single chamber. The product of two galleries $G\cdot G'$ is defined by concatenation, and the product of two $J$-galleries is again a $J$-gallery. The action of $\Gamma$ on $\mathcal{C}(\Delta)$ preserves $i$-adjacency for each $i\in I$, so we get a natural action of $\Gamma$ on the set of galleries, which preserves the set of $J$-galleries for each $J\subset I$. \end{defn} The following lemma follows straight from the definitions. \begin{lem}\label{lem:alljoined} The galleries joining $C_{\gamma}$ and $C_{\gamma'}$ correspond to product decompositions $\gamma^{-1}\gamma'=g_1g_2\cdots g_n$ for $g_k\in\sqcup_i G_i-\{1\}$. The corresponding gallery is $(C_{\gamma_0},C_{\gamma_1},...,C_{\gamma_n})$ where $\gamma_k:=\gamma g_1\cdots g_k$. In particular, any two chambers are joined by a gallery. \end{lem} We now describe moves that allow one to transform between any two galleries with the same end-chambers. \begin{lem}\label{lem:moves} If two galleries have the same end-chambers then one can be transformed into the other by a sequence of the following three moves (or their inverses -- which we refer to by the same names): \begin{enumerate}[label={(M\arabic*)}] \item\label{M1} Replace $(C_0,C_1,...,C_n)$ with $(C_0,C_1,...,C_k,C',C_k,...,C_n)$, where $C_k,C'$ are adjacent. \item\label{M2} Replace $(C_0,C_1,...,C_n)$ with $(C_0,C_1,...,C_k,C',C_{k+1},...,C_n)$, where $C_k,C',C_{k+1}$ are pairwise $i$-adjacent for some $i\in I$. 
\item\label{M3} Replace $(C_0,C_1,...,C_n)$ with $(C_0,C_1,...,C_k,C',C_{k+2},...,C_n)$, where $C_k,C_{k+1}$ and $C',C_{k+2}$ are $i$-adjacent, and $C_k,C'$ and $C_{k+1},C_{k+2}$ are $j$-adjacent, for some adjacent $i,j\in I$. \end{enumerate} \end{lem} \begin{proof} If two words on $\sqcup_i G_i - \{1\}$ represent the same element of $\Gamma$, then it follows from the presentation of $\Gamma$ that the first word can be transformed into the second word by a sequence of the following three moves (or their inverses): \begin{enumerate}[label={(M\arabic*$'$)}] \item\label{i1} Replace $g_1\cdots g_n$ with $g_1\cdots g_kgg^{-1}g_{k+1}\cdots g_n$, where $g\in\sqcup_i G_i-\{1\}$. \item Replace $g_1\cdots g_n$ with $g_1\cdots g_k g'_1 g'_2 g_{k+2}\cdots g_n$, where $g_{k+1}=g'_1 g'_2$ and $g_{k+1},g'_1,g'_2\in G_i-\{1\}$ for some $i\in I$. \item\label{i3} Replace $g_1\cdots g_n$ with $g_1\cdots g_k g_{k+2}g_{k+1}g_{k+3}\cdots g_n$, where $g_{k+1}\in G_i$ and $g_{k+2}\in G_j$ with $i,j\in I$ adjacent. \end{enumerate} Given a gallery $(C_0,C_1,...,C_n)$, as in Lemma \ref{lem:alljoined} there exist $\gamma\in\Gamma$ and $g_k\in\sqcup_i G_i-\{1\}$ such that $C_k=\gamma g_1\cdots g_k C_*$ for $0\leq k\leq n$. Moreover, the product $g_1\cdots g_n\in\Gamma$ only depends on the end-chambers $C_0$ and $C_n$. We conclude by the observation that transforming the word $g_1\cdots g_n$ using the moves \ref{i1}--\ref{i3} corresponds to transforming the gallery $(C_0,C_1,...,C_n)$ using the moves \ref{M1}--\ref{M3}. \end{proof} Galleries allow us to group chambers together into chamber-residues, defined as follows. \begin{defn}(Chamber-residues)\\\label{defn:chamberres} Given a chamber $C\in\mathcal{C}(\Delta)$ and $J\subset I$, the \emph{$J$-chamber-residue of $C$}, denoted $\mathcal{C}(J,C)$, is the set of all chambers that appear in $J$-galleries based at $C$. (Note that some authors refer to $\mathcal{C}(J,C)$ as just a $J$-residue, although Haglund defines $J$-residues to be slightly different yet related structures in \cite{Haglund06}.) Note that $\mathcal{C}(\emptyset,C)=\{C\}$ and $\mathcal{C}(I,C)=\mathcal{C}(\Delta)$. \end{defn} We finish the section with a number of important results about the structure of chamber-residues. \begin{lem}\label{lem:CJC} $\mathcal{C}(J,C_\gamma)=\{C_{\gamma'}\mid \gamma'\in\gamma\Gamma_J\}$ \end{lem} \begin{proof} This follows from Lemma \ref{lem:alljoined}. \end{proof} \begin{lem}\label{lem:intchamres} $\mathcal{C}(J_1,C)\cap\mathcal{C}(J_2,C)=\mathcal{C}(J_1\cap J_2,C)$ for $J_1,J_2\subset I$ and $C\in \mathcal{C}(\Delta)$. \end{lem} \begin{proof} This follows from Lemmas \ref{lem:GammaJ}\ref{item:GJ1J2} and \ref{lem:CJC}. \end{proof} \begin{cor} If $C_1,C_2\in\mathcal{C}(J,C)$ are $i$-adjacent then $i\in J$. \end{cor} \begin{lem}\label{lem:cCv} Let $v\in\Delta^0$ be a vertex of type $J$. Then the set $\mathcal{C}(v)$ of chambers containing $v$ is a $J$-chamber-residue. \end{lem} \begin{proof} Let $C_{\gamma_1},C_{\gamma_2}\in\mathcal{C}(v)$. Lemma \ref{lem:chamberint} tells us that there is $J'\in\bar{N}$ with $\gamma_1^{-1}\gamma_2\in\Gamma_{J'}$ and $J'\subset \underline{t}(v)=t(v)=J$. We deduce that $\gamma_1^{-1}\gamma_2\in\Gamma_{J}$, so $C_{\gamma_2}\in \mathcal{C}(J,C_{\gamma_1})$ by Lemma \ref{lem:CJC}. Conversely, for any other $C_{\gamma_3}\in \mathcal{C}(J,C_{\gamma_1})$ we must have $\gamma_1^{-1}\gamma_3\in\Gamma_J$, so $v\in C_{\gamma_3}$ by Lemma \ref{lem:chamberint}. 
\end{proof} \begin{lem}\label{lem:moveinJ} If $C,C'$ are two chambers in the same $J$-chamber-residue, then any two $J$-galleries joining $C$ and $C'$ differ by a sequence of moves \ref{M1}--\ref{M3} such that the intermediate galleries are also $J$-galleries. \end{lem} \begin{proof} By Lemma \ref{lem:alljoined}, a $J$-gallery joining $C$ and $C'$ corresponds to a word on $\sqcup_{j\in J} G_j-\{1\}$. Such a word is an element of $\Gamma_J$, which is isomorphic to the graph product $\Gamma(\mathcal{G}_J,(G_j)_{j\in J})$ by Lemma \ref{lem:GammaJ}, so any two such words differ by a sequence of moves \ref{i1}--\ref{i3} such that all intermediate words also have letters in $\sqcup_{j\in J} G_j-\{1\}$. And such a sequence of moves \ref{i1}--\ref{i3} corresponds to a sequence of moves \ref{M1}--\ref{M3} such that the intermediate galleries are $J$-galleries. \end{proof} \begin{lem}(Product structure of $J\dn^{\underline{\perp}}$-chamber-residues)\\\label{lem:product} Let $C\in\mathcal{C}(\Delta)$ and $J\subset I$ spherical. Then there is a bijection $$\beta:\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C)\to\mathcal{C}(J\dn^{\underline{\perp}},C),$$ such that: \begin{enumerate} \item\label{item:betasections} $\beta(C_1,C)=C_1$ for all $C_1\in \mathcal{C}(J,C)$ and $\beta(C,C_2)=C_2$ for all $C_2\in \mathcal{C}(J\dn^\perp,C)$. \item\label{item:iadjbeta} $\beta(C_1,C_2),\beta(C'_1,C'_2)$ are $i$-adjacent if and only if \begin{enumerate} \item\label{item:adj1} $C_1,C'_1$ are $i$-adjacent and $C_2=C'_2$, or \item\label{item:adj2} $C_2,C'_2$ are $i$-adjacent and $C_1=C'_1$. \end{enumerate} \item\label{item:betares} $\beta(\mathcal{C}(J,C)\times\{C_2\})=\mathcal{C}(J,C_2)$ for all $C_2\in\mathcal{C}(J\dn^\perp,C)$ and $\beta(\{C_1\}\times\mathcal{C}(J\dn^\perp,C))=\mathcal{C}(J\dn^\perp,C_1)$ for all $C_1\in\mathcal{C}(J,C)$. \end{enumerate} \end{lem} \begin{proof} Translating by $\Gamma$ we may assume that $C=C_*$. We have a product splitting $\Gamma_{J\dn^{\underline{\perp}}}=\Gamma_J\times \Gamma_{J\dn^\perp}$, so, using Lemma \ref{lem:CJC}, we may define $\beta$ by $\beta(C_{\gamma_1},C_{\gamma_2}):=C_{\gamma_1\gamma_2}$ for $\gamma_1\in\Gamma_J$ and $\gamma_2\in\Gamma_{J\dn^\perp}$. Properties \ref{item:betasections} and \ref{item:iadjbeta} follow straight from this definition, and, again using Lemma \ref{lem:CJC}, property \ref{item:betares} follows from the equations \begin{equation*} \mathcal{C}(J,C_{\gamma_2})=\{C_{\gamma_1\gamma_2}\mid\gamma_1\in \Gamma_J\}\quad\text{for }\gamma_2\in\Gamma_{J\dn^\perp}, \end{equation*} and \begin{equation*} \mathcal{C}(J,C_{\gamma_1})=\{C_{\gamma_1\gamma_2}\mid\gamma_2\in \Gamma_{J\dn^\perp}\}\quad\text{for }\gamma_1\in\Gamma_J.\qedhere \end{equation*} \end{proof} \section{Rank, poset structure and level-equivalence}\label{sec:rank} In this section we introduce the rank and poset structure for the vertices of $\Delta$, and relate these notions to the galleries, chamber-residues and typing map of the previous sections. Key to this relationship is a certain equivalence relation on the vertices of $\Delta$, called level-equivalence. The rank and poset structures are standard, but level-equivalence is a notion new to this paper. We also prove two important propositions about the group of rank-preserving automorphisms of $\Delta$. \begin{defn}(Rank and poset structure)\\\label{defn:rank} The \emph{rank} of a vertex $v\in\Delta^0$ is defined by $\operatorname{rk}(v):=|t(v)|$. Note that centers of chambers are precisely the rank-0 vertices of $\Delta$. 
Recall that the vertices of $C(N)$ are in bijection with the poset $\bar{N}$, so we get a poset structure on $C(N)$. We extend this poset structure $\Gamma$-equivariantly to $\Delta^0$: explicitly $u\leq v$ if $t(u)\subset t(v)$ and there is some chamber $C$ containing both $u$ and $v$. \end{defn} \begin{lem}\label{lem:min} The following statements hold: \begin{enumerate} \item\label{item:Q} For vertices $u,v\in\Delta^0$ we have $u\leq v$ if and only if $\operatorname{rk}(u)\leq\operatorname{rk}(v)$ and there is a cube $Q$ containing both $u$ and $v$ of dimension $\operatorname{rk}(v)-\operatorname{rk}(u)$. In this case the cube $Q$ contains a vertex of type $J$ for every $t(u)\subset J\subset t(v)$. \item\label{item:wCcapC'} If the intersection of two chambers $C\cap C'$ is non-empty, then there is $J\in\bar{N}$ such that a vertex $v\in C$ is contained in $C\cap C'$ if and only if $J\subset t(v)$. Hence the vertex $v\in C$ of type $J$ is the unique $\leq$-minimal vertex in $C\cap C'$ -- and we denote it by $v=\wedge(C, C')$ (since $\wedge$ denotes the meet in a poset). \item\label{item:stayC} Given vertices $u\leq v$, if $u$ is in a chamber $C$ then $v$ is also in $C$. \item\label{item:iadjwedge} Chambers $C,C'$ are $i$-adjacent if and only if $t(\wedge(C, C'))=\{i\}$. \item\label{item:adjwedge} Chambers $C,C'$ are adjacent if and only if $\operatorname{rk}(\wedge(C, C'))=1$. \end{enumerate} \end{lem} \begin{proof} We prove each statement in turn. \begin{enumerate} \item If $u,v$ are vertices in $C(N)$ with $t(u)\subset t(v)$, then recall from Definition \ref{defn:cubecone} that there is a unique cube $Q$ of dimension $|t(v)|-|t(u)|$ containing $u$ and $v$; moreover, $Q$ contains a vertex of type $J$ for every $t(u)\subset J\subset t(v)$. This concludes the proof of \ref{item:Q} since the action of $\Gamma$ provides type-preserving isomorphisms from every chamber in $\Delta$ to $C(N)$. \item This follows from Lemma \ref{lem:chamberint}. \item Since $u\leq v$, we know that $u$ and $v$ are both contained in some chamber $C'$. We have $u\in C\cap C'$, so by \ref{item:wCcapC'} there exists $J\in\bar{N}$ such that a vertex $w\in C'$ is contained in $C\cap C'$ if and only if $J\subset t(w)$ -- in particular $J\subset t(u)$. Hence $J\subset t(u)\subset t(v)$ and $v\in C$. \item We have already seen that distinct chambers $C,C'$ are $i$-adjacent if and only if there is a vertex $v\in C\cap C'$ with $t(v)=\{i\}$ (Definition \ref{defn:adjacent}). As distinct chambers have distinct type-$\emptyset$ vertices (i.e. distinct centers), we conclude from \ref{item:wCcapC'} that $C,C'$ are $i$-adjacent if and only if $t(\wedge(C, C'))=\{i\}$. \item The rank-1 vertices are the vertices of type $\{i\}$ for some $i\in I$, so we deduce from \ref{item:iadjwedge} that chambers $C,C'$ are adjacent if and only if $\operatorname{rk}(\wedge(C, C'))=1$.\qedhere \end{enumerate} \end{proof} We can think of rank as dividing $\Delta^0$ into a number of levels. Edges in $\Delta^0$ always go between consecutive levels, but we also want a notion of adjacency for vertices in the same level. \begin{defn}(Level-adjacency and level-equivalence)\\\label{defn:[v]} We say that vertices $v_1,v_2\in\Delta^0$ are \emph{level-adjacent} if they are contained in adjacent chambers $C_1,C_2$ respectively, with $v_1,v_2\notin C_1\cap C_2$, and with some vertex $v_1,v_2\leq u\in C_1\cap C_2$ such that $\operatorname{rk}(v_1)=\operatorname{rk}(v_2)=\operatorname{rk}(u)-1$. 
Let $\approx$ be the equivalence relation on $\Delta^0$ generated by level-adjacency, and let $[v]$ denote the $\approx$-class of a vertex $v$. We call $\approx$ \emph{level-equivalence}. \end{defn} \begin{figure} \caption{An example of the graph $\mathcal{G}$ and a section of the right-angled building $\Delta$. In this example the groups $G_i,G_j,G_k,G_l$ have orders $2,2,3,3$ respectively. One of the chambers is shown in bold, and its vertices are labeled by their types. The edges are oriented to point upwards in the poset structure, so they form the Hasse diagram for $(\Delta^0,\leq)$. Four examples of level-equivalence classes are depicted, with colors red, orange, green and blue.} \label{fig:leveladj} \end{figure} \begin{lem}\label{lem:leveladj} Vertices $v_1,v_2\in\Delta^0$ are level-adjacent if and only if they are of the same type and are contained in chambers $C_1,C_2$ respectively that are $i$-adjacent for some $i\in t(v_1)\dn^\perp$. \end{lem} \begin{proof} First suppose $v_1,v_2$ are level-adjacent. Say they are contained in adjacent chambers $C_1,C_2$ respectively, with $v_1,v_2\notin C_1\cap C_2$, and with some vertex $v_1,v_2\leq u\in C_1\cap C_2$ such that $\operatorname{rk}(v_1)=\operatorname{rk}(v_2)=\operatorname{rk}(u)-1$. Let $t(u)=t(v_1)\cup\{i\}$. Note that $i\in t(v_1)\dn^\perp$. By Lemma \ref{lem:min}\ref{item:wCcapC'}, $C_1,C_2$ must be $j$-adjacent for some $j\in t(v_1)\cup\{i\}$, and since $v_1\notin C_1\cap C_2$ we deduce that $j=i$. As $v_2\notin C_1\cap C_2$ we also have $t(v_2)=t(v_1)$. Conversely, suppose that $v_1,v_2$ are of the same type and are contained in chambers $C_1,C_2$ respectively that are $i$-adjacent for some $i\in t(v_1)\dn^\perp$. Let $u\in C_1\cap C_2$ be the vertex of type $t(v_1)\cup\{i\}$ (Lemma \ref{lem:min}\ref{item:wCcapC'}). It follows that $v_1,v_2\leq u$ and $\operatorname{rk}(v_1)=\operatorname{rk}(v_2)=\operatorname{rk}(u)-1$. And applying Lemma \ref{lem:min}\ref{item:wCcapC'} again we see that $v_1,v_2\notin C_1\cap C_2$. \end{proof} \begin{cor}\label{cor:leveltype} Any two level-equivalent vertices are of the same type. \end{cor} \begin{remk}\label{remk:extreme} If $t(v)$ is a maximal spherical subset of $I$, then $v$ is $\leq$-maximal and the class $[v]=\{v\}$ is a singleton. At the opposite extreme, we note that two rank-0 vertices are level-adjacent if and only if their corresponding chambers are adjacent (by Lemma \ref{lem:leveladj}), so the set of all rank-0 vertices defines a single level-equivalence class. \end{remk} \begin{lem}\label{lem:cC[v]} Let $v$ be a vertex of type $J$ contained in a chamber $C$, and let $\mathcal{C}([v])$ denote the set of chambers that contain a vertex in $[v]$. Then $\mathcal{C}(J\dn^{\underline{\perp}},C)=\mathcal{C}([v])$. Moreover, we have a product decomposition $$\mathcal{C}(J\dn^{\underline{\perp}},C)\cong\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C)$$ from Lemma \ref{lem:product} in which the sections $\mathcal{C}(J,C)\times\{C_2\}$ correspond to the sets $\mathcal{C}(v')$ for $v'\in[v]$ (which are $J$-chamber-residues), and the sections $\{C_1\}\times\mathcal{C}(J\dn^\perp,C)$ correspond to the $J\dn^\perp$-chamber-residues in $\mathcal{C}([v])$. In particular, the product decomposition would remain the same up to isomorphism if we chose a different level-equivalence class representative $v$ and chamber $C\in\mathcal{C}(v)$. \end{lem} \begin{proof} We know that all vertices in $[v]$ are of type $J$ by Corollary \ref{cor:leveltype}. 
If $v_1\in[v]$ is in a chamber $C_1$ and if $C_1$ is $i$-adjacent to $C_2$ for $i\in J\dn^\perp$, then the vertex $v_2\in C_2$ of type $J$ is level-adjacent to $v_1$ by Lemma \ref{lem:leveladj}. And for each $v'\in[v]$ we know that $\mathcal{C}(v')$ is a $J$-chamber-residue by Lemma \ref{lem:cCv}. Combining these two facts, we see that $\mathcal{C}(J\dn^{\underline{\perp}},C)\subset\mathcal{C}([v])$. Conversely, if $v_1,v_2\in[v]$ are level-adjacent, then they are contained in chambers $C_1,C_2$ respectively that are $i$-adjacent for some $i\in J\dn^\perp$ (Lemma \ref{lem:leveladj} again). Combined with the fact that $\mathcal{C}(v')$ is a $J$-chamber-residue for each $v'\in[v]$, we deduce that $\mathcal{C}([v])$ is contained inside a $J\dn^{\underline{\perp}}$-chamber-residue, yielding the reverse inclusion $\mathcal{C}([v])\subset\mathcal{C}(J\dn^{\underline{\perp}},C)$. The remainder of the lemma follows from Lemmas \ref{lem:cCv} and \ref{lem:product}. \end{proof} \begin{lem}\label{lem:approxjperp} If $v_1\approx v_2$ are of type $J$ and $C_1\in\mathcal{C}(v_1)$, then the intersection $\mathcal{C}(v_2)\cap\mathcal{C}(J\dn^\perp,C_1)$ consists of a single chamber. \end{lem} \begin{proof} This follows from the fact that the sets $\mathcal{C}(v_2)$ and $\mathcal{C}(J\dn^\perp,C_1)$ correspond to orthogonal sections in the product decomposition from Lemma \ref{lem:cC[v]}. \end{proof} We will need the following definition in later sections. \begin{defn}(Sets $E^-(v)$ and lower degree)\\\label{defn:E-} For $v\in\Delta^0$, let $E^-(v)$ denote the set of edges incident to $v$ that join it to a vertex of lower rank, and let $d^-(v):=|E^-(v)|$ (which in general might be $\infty$). We call $d^-(v)$ the \emph{lower degree} of $v$. Note that $d^-(v)=0$ if and only if $\operatorname{rk}(v)=0$. \end{defn} Let $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ denote the group of cubical automorphisms of $\Delta$ that preserve ranks of vertices. The following key proposition shows that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ preserves many of the structures on $\Delta$ that we have defined so far. \begin{prop}(Things preserved by $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$)\\\label{prop:preserved} The group $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ preserves the following structures on $\Delta$: \begin{itemize} \item the poset structure on $\Delta$, \item lower degrees of vertices, \item the chambers, \item adjacency of chambers, \item galleries, \item level-equivalence of vertices, \item the sets $\mathcal{C}(v)$ and $\mathcal{C}([v])$, \item the product structures on the sets $\mathcal{C}([v])$. \end{itemize} \end{prop} \begin{proof} Lemma \ref{lem:min}\ref{item:Q} implies that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ preserves the partial order $\leq$. Lower degrees of vertices are defined using the poset structure, so these are preserved as well. The centers of chambers are the rank-0 vertices, and each chamber is the cubical neighborhood of its center (Remark \ref{remk:chamberneigh}), so $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ also preserves the chamber structure on $\Delta$. Adjacency of chambers is preserved because of Lemma \ref{lem:min}\ref{item:adjwedge}, so galleries are preserved as well. Level-equivalence of vertices is defined using rank, chambers and the poset structures on $\Delta^0$, so this is also preserved -- along with the sets $\mathcal{C}([v])$. The sets $\mathcal{C}(v)$ are preserved because chambers are preserved. 
Moreover, we have that $g\mathcal{C}(v)=\mathcal{C}(gv)$ and $g\mathcal{C}([v])=\mathcal{C}([gv])$ for $v\in\Delta^0$ and $g\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$. Combining this observation with Lemma \ref{lem:cC[v]}, we deduce that an automorphism $g\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ induces a bijection $g:\mathcal{C}([v])\to\mathcal{C}([gv])$ that corresponds to a bijection of products $$\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C)\to\mathcal{C}(t(gv),gC)\times\mathcal{C}(t(gv)\dn^\perp,gC),$$ and this bijection of products preserves the factors and their order. \end{proof} In the context of Theorem \ref{thm:Delta}, we would like the lattices $\Gamma$ and $\Lambda$ to both preserve all the structures from Proposition \ref{prop:preserved}. This may not be true for $\Lambda$, but it will hold for $\Lambda\cap\operatorname{Aut}_{\operatorname{rk}}(\Delta)$, and the following proposition implies that this is a finite-index subgroup of $\Lambda$. As Theorem \ref{thm:Delta} is a statement about commensurability, there is no harm in replacing $\Lambda$ by a finite-index subgroup, so in later sections we will be able to assume that $\Lambda$ does preserve all the structures from Proposition \ref{prop:preserved}. \begin{prop}\label{prop:finiteindex} If the graph $\mathcal{G}$ is finite, then $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$. \end{prop} \begin{proof} We define a second equivalence relation $\simeq$ on $\Delta^0$ generated by the relation $R$, where $uRv$ if $u,v$ are joined by an edge path of length two, but no cube contains both $u$ and $v$. (Equivalently, $u,v$ are at distance 2 in the $\ell_\infty$ metric.) Let $u$ be the center of a chamber $C$. A vertex $w$ is adjacent to $u$ if and only if $w\in C$ and $\operatorname{rk}(w)=1$. A vertex $v\neq u$ is adjacent to $w$ if and only if it is the center of a chamber $C'$ adjacent to $C$ with $w=\wedge(C,C')$, or it is a rank-2 vertex in $C$. If $v$ is a rank-2 vertex in $C$ then there is a cube $Q$ in $C$ containing $u$ and $v$ (Lemma \ref{lem:min}\ref{item:Q}), so it follows from Lemma \ref{lem:min}\ref{item:adjwedge} that the vertices $v$ with $uRv$ are precisely the centers of chambers adjacent to $C$. We deduce that the $\simeq$-equivalence class of $u$ is the set of all rank-0 vertices. Adjacent vertices in $\Delta$ have ranks that differ by 1, so $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ is equal to the $\operatorname{Aut}(\Delta)$-stabilizer of the set of rank-0 vertices. Since $\operatorname{Aut}(\Delta)$ preserves the relation $R$ and the equivalence relation $\simeq$, we deduce that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ is the group of $g\in\operatorname{Aut}(\Delta)$ such that $\operatorname{rk}(gv)=0$ for some rank-0 vertex $v$. Fix a chamber $C$ with center $v$. The subgroup $\Gamma<\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ acts transitively on $\mathcal{C}(\Delta)$, so every left $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-coset contains an automorphism $g$ with $gv\in C$. The $\operatorname{Aut}(\Delta)$-stabilizer of $v$ is contained in $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$, so automorphisms $g$ in distinct $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-cosets have distinct images $gv$. As the graph $\mathcal{G}$ is finite, we deduce that the chamber $C$ is finite, so $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$. 
\end{proof} \begin{remk}\label{remk:Autdiscrete} If the graph $\mathcal{G}$ is finite and \emph{star-rigid} -- meaning that the only automorphism of $\mathcal{G}$ that pointwise fixes the star of a vertex is the identity -- and if the groups $(G_i)$ have order two, then $\Gamma$ has finite index in $\operatorname{Aut}(\Delta)$. In particular $\operatorname{Aut}(\Delta)$ is a discrete automorphism group (i.e. it acts properly and cocompactly on $\Delta$). The group $\Gamma$ acts transitively on the chambers, and $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$ by Proposition \ref{prop:finiteindex}, so it suffices to show that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite chamber-stabilizers. The chambers are finite (as $\mathcal{G}$ is finite), so it is enough to show that the identity is the only element of $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ that pointwise fixes a chamber. Let $g\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ pointwise fix a chamber $C$. To show that $g$ is the identity it suffices to show that it pointwise fixes the chambers adjacent to $C$ (it follows that $g$ pointwise fixes all chambers by running the argument along galleries). Let $C'$ be $i$-adjacent to $C$ and let $v:=\wedge(C,C')$. We know that $g$ fixes $v$, and that $C'$ is the only chamber $i$-adjacent to $C$ (as $|G_i|=2$), so $g$ stabilizes $C'$ by Lemma \ref{lem:min}. For $j\in i\dn^\perp$, let $u\in C'$ be the vertex of type $\{j\}$: the vertex $w\in C\cap C'$ of type $\{i,j\}$ is fixed by $g$, and $u$ is the only rank-1 vertex in $C'-C$ with $u\leq w$, hence $g$ fixes $u$. The action of $g$ on $C'$ preserves ranks of vertices, and the typing map induces an isomorphism $C'\cong C(N(\mathcal{G}))$, so the action of $g$ on $C'$ corresponds to an automorphism of $\mathcal{G}$ that pointwise fixes the star $i\dn^{\underline{\perp}}$. But $\mathcal{G}$ is star-rigid, so this automorphism must be the identity on $\mathcal{G}$, and $g$ must pointwise fix $C'$. \end{remk} \section{Hyperplanes and separable subgroups}\label{sec:hyperplanes} In this section we recall the notion of hyperplane in a CAT(0) cube complex, cast in terms of parallelism of edges, and we analyze the hyperplanes in $\Delta$. We also recall the notion of a separable subgroup, and we relate a certain separability condition involving level-equivalence classes -- which we will need in later sections -- to a separability condition involving hyperplanes. Finally, we recall the notion of a group acting specially on a CAT(0) cube complex, and prove that $\Gamma$ is a virtually special lattice in $\operatorname{Aut}(\Delta)$ -- from which we deduce the ``only if'' direction in Theorem \ref{thm:Delta}. \begin{defn}(Parallelism and hyperplanes)\\ Two edges in a cube complex $X$ are \emph{elementary parallel} if they appear as opposite edges in some 2-cube. \emph{Parallelism} on edges of $X$ is the equivalence relation generated by elementary parallelism. For each edge $e$ there is a subspace $H(e)$ of $X$ called a \emph{hyperplane}, and we say that $e$ is \emph{dual} to $H(e)$. Geometrically, the midcubes in $X$ glue together in a natural way to form their own cube complex, and a hyperplane is a component of this cube complex, with $H(e)$ being the hyperplane containing the midpoint of $e$. We refer to \cite{WiseRiches} for the precise definition of hyperplane, but the important fact for us is that two edges are parallel if and only if they are dual to the same hyperplane.
Also note that $H(ge)=gH(e)$ for $e$ an edge and $g\in\operatorname{Aut}(X)$. It will be natural in several of our arguments to consider \emph{oriented edges} (i.e. edges that come with an ordering of their vertices), and we will think of each oriented edge as pointing from an initial vertex to a terminal vertex. There is also a notion of parallelism for oriented edges. We say that two oriented edges in a cube complex $X$ are \emph{elementary parallel} if they appear as opposite edges that point in the same direction in some 2-cube. \emph{Parallelism} on oriented edges of $X$ is the equivalence relation generated by elementary parallelism. \end{defn} The following lemma enables us to label hyperplanes, and the corresponding dual edges, by elements $i\in I$. \begin{lem}\label{lem:iedge} Let $e$ be an edge in $\Delta$. Then: \begin{enumerate} \item\label{item:iedge} There is $J\in\bar{N}$ and $i\in J\dn^\perp$ such that $e$ joins vertices of type $J$ and $J\cup\{i\}$. We say that $e$ is an \textbf{$i$-edge}. \item\label{item:parisi} Any edge parallel to $e$ is also an $i$-edge. Hence $i$ only depends on the hyperplane $H(e)$, and we say that $H(e)$ is an \textbf{$i$-hyperplane}. \item\label{item:paror} If $e_1,e_2$ are parallel $i$-edges, then orienting them to have initial vertices of types $J_1,J_2$ and terminal vertices of types $J_1\cup\{i\},J_2\cup\{i\}$ makes them into parallel oriented edges $\overrightarrow{e_1}, \overrightarrow{e_2}$. \item\label{item:oneihyp} Each chamber in $\Delta$ intersects exactly one $i$-hyperplane (so intersects just one parallelism class of $i$-edges) for each $i\in I$. \end{enumerate} \end{lem} \begin{proof} Each elementary parallelism is supported on a 2-cube that lies inside a single chamber, and there is a type-preserving isomorphism from each chamber to $C(N)$, so it suffices to prove the lemma for $C(N)$ rather than $\Delta$. Statement \ref{item:iedge} holds because cubes in $C(N)$ correspond to intervals in $\bar{N}$. If $e_1,e_2$ are edges in $C(N)$ that are elementary parallel, and $e_1$ is an $i$-edge, then there is a 2-cube $Q$ containing $e_1,e_2$ as opposite edges, and the types of vertices of $Q$ make it isomorphic to the Hasse diagram of an interval in $\bar{N}$, so we deduce that $e_2$ is also an $i$-edge. Furthermore, if $e_1,e_2$ are given orientations $\overrightarrow{e_1},\overrightarrow{e_2}$ so that their terminal vertices are of types containing $i$, then $\overrightarrow{e_1},\overrightarrow{e_2}$ point in the same direction in $Q$. Statements \ref{item:parisi} and \ref{item:paror} follow. Finally, if $e$ is an $i$-edge in $C(N)$ then there is a cube $Q$ in $C(N)$ containing the center $v$ and $e$, and $e$ is parallel to the $i$-edge in $Q$ that joins $v$ to the vertex of type $\{i\}$. Hence all $i$-edges in $C(N)$ are parallel -- proving statement \ref{item:oneihyp}. \end{proof} \begin{figure} \caption{An example of the graph $\mathcal{G}$ and a section of the right-angled building $\Delta$. In this example the groups $G_i,G_j,G_k,G_l$ have orders $2,2,3,3$ respectively. One of the chambers is shown in bold, and its vertices are labeled by their types. The edges are oriented to point upwards in the poset structure, so they form the Hasse diagram for $(\Delta^0,\leq)$. A parallelism class of (oriented) $i$-edges is shown in red and a parallelism class of (oriented) $j$-edges is shown in blue.} \label{fig:iedge} \end{figure} Next we establish two basic lemmas regarding edge labels and parallelism. 
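To fix ideas before proving them, here is a minimal illustration, using only the fact (recalled above) that cubes in $C(N)$ correspond to intervals in $\bar{N}$: if $i,j\in I$ are adjacent, then the 2-cube of $C(N)$ whose vertex types form the interval from $\emptyset$ to $\{i,j\}$ has one pair of opposite edges joining the vertices of types $\emptyset,\{i\}$ and $\{j\},\{i,j\}$ -- these are elementary parallel $i$-edges -- while its other pair of opposite edges consists of elementary parallel $j$-edges.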
\begin{lem}\label{lem:ijadj} Let $v\in\Delta^0$, let $e_1$ be an $i$-edge incident at $v$, and let $e_2$ be a $j$-edge incident at $v$. Then $e_1,e_2$ form the corner of a 2-cube at $v$ if and only if $i$ is adjacent to $j$. \end{lem} \begin{proof} If $e_1,e_2$ form the corner of a 2-cube $Q$ at $v$, then the types of the vertices of $Q$ correspond to the interval from $t(v)-\{i,j\}$ to $t(v)\cup\{i,j\}$ in $\bar{N}$ (Definitions \ref{defn:cubecone} and \ref{defn:building}), so $i$ is adjacent to $j$. Conversely, suppose that $i$ is adjacent to $j$. Let $t(v)=J$. It suffices to show that there is a chamber $C$ containing both $e_1$ and $e_2$, as then we can consider the 2-cube $Q\subset C$ whose vertex types correspond to the interval from $J-\{i,j\}$ to $J\cup\{i,j\}$ in $\bar{N}$, and we observe that $e_1,e_2$ form the corner of $Q$ at $v$. If $\{i,j\}\not\subset J$, say $i\notin J$, then the other endpoint $u$ of $e_1$ is of type $J\cup\{i\}$, so $v\leq u$, and any chamber containing $e_2$ also contains $e_1$ (Lemma \ref{lem:min}\ref{item:stayC}). Now suppose $\{i,j\}\subset J$. Consider chambers $C_{\gamma_1},C_{\gamma_2}$ containing $e_1,e_2$ respectively. We have $\gamma_1^{-1}\gamma_2\in\Gamma_J$ by Lemma \ref{lem:chamberint}, so we may write $\gamma_1^{-1}\gamma_2=g_i g_j g$ where $g_i\in G_i$, $g_j\in G_j$ and $g\in \Gamma_{J-\{i,j\}}$. The other endpoints of $e_1,e_2$ are of types $J-\{i\},J-\{j\}$ respectively, so Lemma \ref{lem:chamberint} implies that $$e_1\in C_{\gamma_1g_jg}\quad\text{and}\quad e_2\in C_{\gamma_2 g_i^{-1}}.$$ But $g_i,g_j,g$ commute, so these two chambers are in fact the same chamber. \end{proof} \begin{lem}\label{lem:parallelchamber} Let $e,e'$ be $i$-edges in chambers $C,C'$ respectively. Then $e$ is parallel to $e'$ if and only if $C$ and $C'$ are contained in the same $i\dn^\perp$-chamber-residue. \end{lem} \begin{proof} First suppose that $e$ is parallel to $e'$. We wish to show that $C$ and $C'$ are contained in the same $i\dn^\perp$-chamber-residue. Since being in the same $i\dn^\perp$-chamber-residue is an equivalence relation on chambers, it suffices to consider the case where $e$ is elementary parallel to $e'$. Let $Q$ be the 2-cube containing $e$ and $e'$, and let $u,u'$ be the endpoints of $e,e'$ respectively with least rank. If we label the vertices of $Q$ by their types, then $Q$ corresponds to some interval in $\bar{N}$, and one of $u,u'$ is at the bottom of this interval -- say $u$. Then $u\leq u'$, hence $u'\in C\cap C'$ by Lemma \ref{lem:min}\ref{item:stayC}. If $u'$ has type $J$ then $i\in J\dn^\perp$ by Lemma \ref{lem:iedge}, so $J\subset i\dn^\perp$ and $C'\in\mathcal{C}(i\dn^\perp,C)$ by Lemma \ref{lem:cCv}. Conversely, suppose that $C$ and $C'$ are contained in the same $i\dn^\perp$-chamber-residue. We wish to show that $e$ is parallel to $e'$. As parallelism of edges is an equivalence relation, it suffices to consider the case where $C,C'$ are $j$-adjacent for some $j\in i\dn^\perp$. By Lemma \ref{lem:chamberint} there is an $i$-edge $e''$ in $C\cap C'$ joining a vertex of type $\{j\}$ to a vertex of type $\{i,j\}$. But by Lemma \ref{lem:iedge}, $C$ and $C'$ each intersect just one parallelism class of $i$-edges, hence $e$ is parallel to $e'$ as required. \end{proof} For a vertex $v\in\Delta^0$ recall the set $E^-(v)$ from Definition \ref{defn:E-}. These sets give us a way to characterize level-equivalence in terms of hyperplanes as follows. 
\begin{lem}\label{lem:approxhyp} For vertices $v_1,v_2\in\Delta^0$ we have that $v_1\approx v_2$ if and only if \begin{equation}\label{samehyps} \{H(e_1)\mid e_1\in E^-(v_1)\}=\{H(e_2)\mid e_2\in E^-(v_2)\}. \end{equation} \end{lem} \begin{proof} Suppose that $v_1\approx v_2$, with $t(v_1)=t(v_2)=J$ (Corollary \ref{cor:leveltype}). Let $e_1\in E^-(v_1)$ be a $j$-edge contained in a chamber $C_1$. Note that $j\in J$. By Lemma \ref{lem:approxjperp} there exists a chamber $C_2\in\mathcal{C}(v_2)\cap\mathcal{C}(J\dn^\perp,C_1)$, so in particular $C_2\in\mathcal{C}(j\dn^\perp,C_1)$. Let $e_2$ be the unique $j$-edge in $C_2$ incident to $v_2$, so $e_2\in E^-(v_2)$. But then $H(e_1)=H(e_2)$ by Lemma \ref{lem:parallelchamber}. This proves the $\subset$ inclusion in (\ref{samehyps}), and the reverse inclusion holds by symmetry. Conversely, suppose (\ref{samehyps}) holds and let $t(v_1)=J$. We know that $E^-(v_1)$ contains a subset $\{e_1^j\}_{j\in J}$ where $e_1^j$ is a $j$-edge, and we know from (\ref{samehyps}) that $E^-(v_2)$ contains a corresponding subset $\{e_2^j\}_{j\in J}$ such that $e_2^j$ is parallel to $e_1^j$. In particular, this implies that $J\subset t(v_2)$, and by symmetry we have $t(v_2)=J$. If $J=\emptyset$ then $\operatorname{rk}(v_1)=\operatorname{rk}(v_2)=0$, and $v_1\approx v_2$ by Remark \ref{remk:extreme}, so suppose $J\neq\emptyset$. Say that each $e_1^j$ is contained in a chamber $C_1^j$ and that each $e_2^j$ is contained in a chamber $C_2^j$. Lemma \ref{lem:parallelchamber} implies that $C_1^j$ and $C_2^j$ are contained in the same $j\dn^\perp$-chamber-residue. But we also know from Lemma \ref{lem:cCv} that the chambers $\{C_1^j\}$ are contained in the $J$-chamber-residue $\mathcal{C}(v_1)$, and that the chambers $\{C_2^j\}$ are contained in the $J$-chamber-residue $\mathcal{C}(v_2)$, so we deduce that \begin{equation}\label{intchamres} C_2\in\bigcap_{j\in J}\mathcal{C}(J\cup j\dn^\perp,C_1) \end{equation} for any $C_1\in\mathcal{C}(v_1)$ and $C_2\in\mathcal{C}(v_2)$. But $\cap_{j\in J}(J\cup j\dn^\perp)=J\dn^{\underline{\perp}}$, so the intersection of chamber-residues in (\ref{intchamres}) is equal to the chamber-residue $\mathcal{C}(J\dn^{\underline{\perp}},C_1)$ by Lemma \ref{lem:intchamres}. It then follows from Lemma \ref{lem:cC[v]} that $C_2\in\mathcal{C}([v_1])$, and since $v_2$ is the only vertex in $C_2$ of type $J$ we deduce that $v_1\approx v_2$. \end{proof} We now relate hyperplane stabilizers to level-equivalence class stabilizers. \begin{prop}\label{prop:hypstabs} For each level-equivalence class $[v]$, the intersection of the $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-stabilizers of the hyperplanes $\{H(e)\mid e\in E^-(v)\}$ is a subgroup of the $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-stabilizer of $[v]$, and it has finite index if the graph $\mathcal{G}$ and the groups $G_i$ are finite. \end{prop} \begin{proof} Let $A$ be the $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-stabilizer of $[v]$. For each level-equivalence class $[v]$, it follows from Lemma \ref{lem:approxhyp} that the intersection of the $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-stabilizers of the hyperplanes $\{H(e)\mid e\in E^-(v)\}$ is a subgroup of $A$. It also follows that $A$ acts on the set of hyperplanes $\{H(e)\mid e\in E^-(v)\}$. If the graph $\mathcal{G}$ and the groups $G_i$ are finite then this set of hyperplanes is finite by Lemma \ref{lem:lfinite}, so there is a finite-index subgroup of $A$ that stabilizes each of these hyperplanes. 
\end{proof} Recall the definition of separable subgroup. \begin{defn}(Separable subgroups)\\\label{defn:separable} A subgroup $H$ of a group $G$ is \emph{separable} (in $G$) if for any $g\in G-H$ there is a homomorphism $f:G\to\bar{G}$ to a finite group such that $f(g)\notin f(H)$. \end{defn} We will need the following elementary facts about separable subgroups. \begin{lem}\label{lem:separable} Let $G$ be a group. Then the following hold: \begin{enumerate} \item\label{item:H1H2} If $H_1<H_2<G$ with $H_1$ finite index in $H_2$ and separable in $G$, then $H_2$ is separable in $G$. \item\label{item:intH1} If $H_1<H_2<G$ with $H_1$ finite index in $H_2$ and separable in $G$, then there exists a finite-index normal subgroup $\hat{G}\triangleleft G$ with $H_2\cap\hat{G}<H_1$. \end{enumerate} \end{lem} When we prove the ``if'' direction of Theorem \ref{thm:Delta} in later sections, we will need the following subgroups of the uniform lattice $\Lambda<\operatorname{Aut}(\Delta)$ to be separable (in $\Lambda$): (1) $\Lambda$-stabilizers of hyperplanes, and (2) finite-index subgroups of $\Lambda$-stabilizers of level-equivalence classes. In the following lemma we give a sufficient condition for this involving intersections of hyperplane-stabilizers; in particular, this shows that the condition from Theorem \ref{thm:Delta} -- that convex subgroups of $\Lambda$ are separable -- is sufficient. \begin{prop}\label{prop:separable} Suppose that the graph $\mathcal{G}$ and the groups $G_i$ are finite, and let $\Lambda<\operatorname{Aut}(\Delta)$ be a uniform lattice. If the finite-index subgroups of finite intersections of $\Lambda$-stabilizers of hyperplanes are separable in $\Lambda$, then the finite-index subgroups of the $\Lambda$-stabilizers of level-equivalence classes are also separable in $\Lambda$. In particular, this holds if all convex subgroups of $\Lambda$ are separable. \end{prop} \begin{proof} Let $[v]$ be a level-equivalence class and let $L<\Lambda_{[v]}$ be a finite-index subgroup. Let $M<\Lambda$ be the intersection of the $\Lambda$-stabilizers of the hyperplanes $\{H(e)\mid e\in E^-(v)\}$. Let $\Lambda^{\operatorname{rk}}_{[v]},M^{\operatorname{rk}}$ be the rank-preserving subgroups of $\Lambda_{[v]},M$ respectively. Proposition \ref{prop:hypstabs} implies that $M^{\operatorname{rk}}$ is a finite-index subgroup of $\Lambda^{\operatorname{rk}}_{[v]}$. Proposition \ref{prop:finiteindex} implies that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$, so $\Lambda^{\operatorname{rk}}_{[v]},M^{\operatorname{rk}}$ have finite index in $\Lambda_{[v]},M$ respectively. Hence $L$ and $M$ are commensurable. As $L\cap M$ has finite index in $M$, the hypothesis of the proposition tells us that $L\cap M$ is separable in $\Lambda$. And as $L\cap M$ has finite index in $L$, we deduce from Lemma \ref{lem:separable}\ref{item:H1H2} that $L$ is separable in $\Lambda$. \end{proof} We recall the definition of a group acting specially on a CAT(0) cube complex \cite{HaglundWise08}, phrased in terms of parallelism of edges. \begin{defn}(Acting specially)\\\label{defn:specially} Let $G$ be a group acting on a CAT(0) cube complex $X$. 
We say that $G$ acts \emph{specially} on $X$ if the following two properties are satisfied: \begin{enumerate} \item(Acting \emph{cleanly})\label{item:cleanly} If $\overrightarrow{e_1}, \overrightarrow{e_2}$ are distinct oriented edges with common initial vertex $v$, and $g\in G$, then $g\overrightarrow{e_1}$ is not parallel to $\overrightarrow{e_2}$. \item(Acting \emph{nicely}) Suppose edges $e_1,e_2$ form the corner of a 2-cube at a vertex $v$, and suppose edges $e'_1,e'_2$ are incident at a vertex $v'$. If $g\in G$ is such that $ge_1$ is parallel to $e'_1$, and $e_2$ is parallel to $e'_2$, then $e'_1,e'_2$ form the corner of a 2-cube at $v'$. \end{enumerate} We say that a uniform lattice $\Gamma<\operatorname{Aut}(X)$ is \emph{special} if it acts specially on $X$, and \emph{virtually special} if it has a finite-index subgroup that acts specially on $X$. \end{defn} It is well known that any convex subgroup of a virtually special uniform lattice is separable (see for instance \cite[Corollary 7.9 and Lemma 9.16]{HaglundWise08}). Thus it follows from the following proposition that all convex subgroups of $\Gamma$ are separable. In particular, this implies the ``only if'' direction in Theorem \ref{thm:Delta}: $\Lambda$ and $\Gamma$ being weakly commensurable in $\operatorname{Aut}(\Delta)$ implies that all convex subgroups of $\Lambda$ are separable. \begin{prop}\label{prop:Gvspecial} $\Gamma<\operatorname{Aut}(\Delta)$ is virtually special. \end{prop} \begin{proof} Let $\hat{\Gamma}$ be the kernel of the natural map $\Gamma\to \prod_{i\in I} G_i$. We show that $\hat{\Gamma}$ is a special lattice in $\operatorname{Aut}(\Delta)$ by verifying the two properties from Definition \ref{defn:specially}: \begin{enumerate} \item Let $\overrightarrow{e_1}, \overrightarrow{e_2}$ be distinct oriented edges with common initial vertex $v$, let $\hat{\gamma}\in \hat{\Gamma}$, and suppose for contradiction that $\hat{\gamma}\overrightarrow{e_1}$ is parallel to $\overrightarrow{e_2}$. Suppose that $\overrightarrow{e_1}$ is an $i$-edge. The action of $\Gamma$ preserves types of vertices, so $\hat{\gamma}\overrightarrow{e_1}$ is also an $i$-edge, and $\overrightarrow{e_2}$ is an $i$-edge too by Lemma \ref{lem:iedge}. Let $t(v)=J$. We must have $i\in J$, otherwise the terminal vertices $u_1,u_2$ of $\overrightarrow{e_1}, \overrightarrow{e_2}$ would be distinct vertices of the same type $J\cup\{i\}$, and any chamber containing $v$ would contain $u_1$ and $u_2$ by Lemma \ref{lem:min}\ref{item:stayC}, a contradiction. Let $\overrightarrow{e_1}$ be contained in a chamber $C_{\gamma_1}$. By Lemma \ref{lem:chamberint} there is a chamber $C_{\gamma_1 g}$ containing $\overrightarrow{e_2}$ with $g\in\Gamma_J$. The edge $\hat{\gamma}\overrightarrow{e_1}$ is contained in the chamber $C_{\hat{\gamma}\gamma_1}$, so by Lemmas \ref{lem:CJC} and \ref{lem:parallelchamber} we deduce that $\hat{\gamma}\gamma_1\in \gamma_1g\Gamma_{i\dn^\perp}$. Since $\hat{\gamma}\in\hat{\Gamma}$, we know that the projection of $\hat{\gamma}$ to $G_i$ is trivial, so we deduce that the projection of $g$ to $G_i$ is also trivial. We have $g\in\Gamma_J$, and $\Gamma_J$ is the product of the groups $G_j$ for $j\in J$, so $g$ must be a product of elements in the groups $G_j$ for $j\in J-\{i\}$, or equivalently $g\in\Gamma_{J-\{i\}}$. 
The terminal vertices $u_1,u_2$ of $\overrightarrow{e_1}, \overrightarrow{e_2}$ are vertices of the same type $J-\{i\}$ in the chambers $C_{\gamma_1},C_{\gamma_1g}$ respectively, so $u_1=u_2$ by Lemma \ref{lem:chamberint} and the fact that $g\in\Gamma_{J-\{i\}}$, contradicting the distinctness of $\overrightarrow{e_1}, \overrightarrow{e_2}$. \item Suppose edges $e_1,e_2$ form the corner of a 2-cube at a vertex $v$, and suppose edges $e'_1,e'_2$ are incident at a vertex $v'$. Suppose $\hat{\gamma}\in \hat{\Gamma}$ is such that $\hat{\gamma}e_1$ is parallel to $e'_1$, and suppose $e_2$ is parallel to $e'_2$. Let $e_1$ be an $i$-edge and let $e_2$ be a $j$-edge. Lemma \ref{lem:ijadj} implies that $i$ is adjacent to $j$. The action of $\Gamma$ preserves types of vertices, so Lemma \ref{lem:iedge} implies that $e'_1$ is an $i$-edge and $e'_2$ is a $j$-edge. Applying Lemma \ref{lem:ijadj} again, we deduce that $e'_1,e'_2$ form the corner of a 2-cube at $v'$, as required.\qedhere \end{enumerate} \end{proof} \section{Residue-groupoids}\label{sec:resgroup} In this section we introduce the notion of residue-groupoids, and prove several lemmas about them. \begin{defn}(Residue groupoids)\\\label{defn:resgroup} Let $C\in\mathcal{C}(\Delta)$ and $J\subset I$. A \emph{$\mathcal{C}(J,C)$-groupoid} is a collection of maps $\phi=(\phi_{C_1,C_2})$, where $(C_1,C_2)$ ranges over all pairs of chambers in $\mathcal{C}(J,C)$ and each map $\phi_{C_1,C_2}:C_1\to C_2$ is a cubical isomorphism preserving centers, such that the following properties hold for all $C_1,C_2,C_3\in\mathcal{C}(J,C)$: \begin{enumerate} \item(Lower degree)\label{item:d-} $\phi_{C_1,C_2}$ preserves lower degrees of rank-1 vertices; \item(Identity)\label{item:id} $\phi_{C_1,C_1}$ is the identity map on $C_1$; \item(Commutativity)\label{item:compose} $\phi_{C_2,C_3}\circ\phi_{C_1,C_2}=\phi_{C_1,C_3}$; \item(Intersection)\label{item:fixint} $\phi_{C_1,C_2}$ restricts to the identity map on $C_1\cap C_2$ whenever this intersection is non-empty. \end{enumerate} If we do not wish to specify the chamber-residue $\mathcal{C}(J,C)$ then we may refer to $\phi$ as a \emph{residue-groupoid}. \end{defn} \begin{remk}\label{remk:rankgroup} Within a given chamber, the rank of a vertex is equal to the length of a shortest edge path to the center, therefore each map $\phi_{C_1,C_2}$ within a residue-groupoid preserves ranks of vertices and induces an isomorphism between the poset structures on $C_1$ and $C_2$. \end{remk} \begin{lem}\label{lem:resgroup} Let $\mathcal{C}(J,C)$ be a chamber-residue and let $\phi=(\phi_{C_1,C_2})$ be a collection of maps, where $(C_1,C_2)$ ranges over all pairs of adjacent chambers in $\mathcal{C}(J,C)$ and each map $\phi_{C_1,C_2}:C_1\to C_2$ is a cubical isomorphism preserving centers. Then $\phi$ has a unique extension to a $\mathcal{C}(J,C)$-groupoid $\bar{\phi}$ if the following five conditions hold: \begin{enumerate}[label=\wackyenum*] \item\label{item:d-'} Each $\phi_{C_1,C_2}$ preserves lower degrees of rank-1 vertices. \item\label{item:inverse} $\phi_{C_1,C_2}=\phi_{C_2,C_1}^{-1}$. \item\label{item:coi} $\phi_{C_2,C_3}\circ\phi_{C_1,C_2}=\phi_{C_1,C_3}$ for all $i\in I$ and all chambers $C_1,C_2,C_3$ in the same $\{i\}$-chamber-residue in $\mathcal{C}(J,C)$. 
\item\label{item:square} For all adjacent $i,j\in I$ and pairs of $i$-adjacent chambers $C_1,C_2$ and $C'_1,C'_2$, if the pairs $C_1,C'_1$ and $C_2,C'_2$ are $j$-adjacent then the following diagram commutes: \begin{equation}\label{square} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\phi_{C_1,C'_1}}\ar{r}{\phi_{C_1,C_2}}&C_2\ar{d}{\phi_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\phi_{C'_1,C'_2}}&C'_2. \end{tikzcd} \end{equation} \item\label{item:fixadj} $\phi_{C_1,C_2}$ restricts to the identity map on $C_1\cap C_2$. \end{enumerate} \end{lem} \begin{proof} Given an arbitrary pair of chambers $C_1,C_2\in\mathcal{C}(J,C)$, we define $\bar{\phi}_{C_1,C_2}$ to be the identity map on $C_1$ if $C_1=C_2$, otherwise we consider $(C'_0,C'_1,...,C'_n)$ a $J$-gallery from $C_1$ to $C_2$ and define $\bar{\phi}_{C_1,C_2}$ to be the following composition: \begin{equation}\label{barphidef} \bar{\phi}_{C_1,C_2}:=\phi_{C'_{n-1},C'_n}\circ\cdots\circ\phi_{C'_1,C'_2}\circ\phi_{C'_0,C'_1} \end{equation} First let's show that $\bar{\phi}_{C_1,C_2}$ is well-defined. Indeed, by Lemma \ref{lem:moveinJ}, any two $J$-galleries from $C_1$ to $C_2$ differ by a sequence of moves \ref{M1}--\ref{M3} such that the intermediate galleries are also $J$-galleries. But $\bar{\phi}_{C_1,C_2}$ is invariant under moves \ref{M1},\ref{M2},\ref{M3} precisely because of properties \ref{item:inverse},\ref{item:coi},\ref{item:square} respectively, hence $\bar{\phi}_{C_1,C_2}$ is well-defined. It is clear that $\bar{\phi}$ satisfies properties \ref{item:d-}-\ref{item:compose} of Definition \ref{defn:resgroup}, and also that it is the unique extension of $\phi$ to satisfy properties \ref{item:id} and \ref{item:compose}, so it remains to check property \ref{item:fixint}. Let $C_1,C_2\in\mathcal{C}(J,C)$ have non-empty intersection. To show that $\bar{\phi}_{C_1,C_2}$ restricts to the identity map on $C_1\cap C_2$, it suffices to check that it fixes every vertex in $C_1\cap C_2$. Let $v\in C_1\cap C_2$ be a vertex. If $t(v)=J'$ then it follows from Lemma \ref{lem:cCv} that $C_1,C_2$ are contained in the same $J'$-chamber-residue. And we already know that $C_1,C_2$ are in the same $J$-chamber-residue, so it follows from Lemma \ref{lem:intchamres} that they are in fact in the same $(J\cap J')$-chamber-residue. Let $(C'_0,C'_1,...,C'_n)$ be a $(J\cap J')$-gallery from $C_1$ to $C_2$. The chambers $C'_k$ are in the same $J'$-chamber-residue as $C_1,C_2$, so Lemma \ref{lem:cCv} tells us that $v\in\cap_k C'_k$. For each $1\leq k\leq n$ the chambers $C'_{k-1},C'_k$ are adjacent, so $\phi_{C'_{k-1},C'_k}(v)=v$ by \ref{item:fixadj}. It then follows from (\ref{barphidef}) that $\bar{\phi}_{C_1,C_2}(v)=v$ (or from the fact that $\bar{\phi}_{C_1,C_1}$ is the identity map on $C_1$ in the case that $C_1=C_2$). \end{proof} \begin{lem}\label{lem:Gammaresgroup} For each chamber-residue $\mathcal{C}(J,C)$, the group $\Gamma$ induces a $\mathcal{C}(J,C)$-groupoid $\phi$, where each map $\phi_{C_1,C_2}$ is the restriction of the unique element $\gamma\in\Gamma$ with $\gamma C_1=C_2$. \end{lem} \begin{proof} The group $\Gamma$ preserves the rank of vertices, so also preserves the lower degrees. The identity and commutativity properties for $\phi$ follow from $\Gamma$ being a group. For the intersection property, consider chambers $C_{\gamma_1},C_{\gamma_2}$ with non-empty intersection. 
Lemma \ref{lem:chamberint} gives us $J\in\bar{N}$ such that $\gamma_1^{-1}\gamma_2\in \Gamma_J$ and such that every point in $C_{\gamma_1}\cap C_{\gamma_2}$ is of the form $[\gamma_1,p]$ with $J\subset \underline{t}(p)$. The map $\phi_{C_{\gamma_1},C_{\gamma_2}}$ is given by left multiplication by $\gamma_2\gamma_1^{-1}$, and for $[\gamma_1,p]\in C_{\gamma_1}\cap C_{\gamma_2}$ we have $\gamma_2\gamma_1^{-1}\cdot [\gamma_1,p]=[\gamma_2,p]$, and this equals $[\gamma_1,p]$ by Definition \ref{defn:building} and the fact that $\gamma_1^{-1}\gamma_2\in\Gamma_J<\Gamma_{\underline{t}(p)}$. \end{proof} \begin{lem}\label{lem:stayj} Let $\phi_{C_1,C_2}$ be a map within a residue-groupoid $\phi$ such that $C_1,C_2$ are $i$-adjacent. Then for any $j\in i\dn^\perp$, the vertex $v_1\in C_1$ of type $\{j\}$ is level-adjacent to $\phi_{C_1,C_2}(v_1)$. In particular $\phi_{C_1,C_2}(v_1)$ is also of type $\{j\}$. \end{lem} \begin{proof} Let $u\in C_1$ be the vertex of type $\{i,j\}$. Then $u\in C_1\cap C_2$ by Lemma \ref{lem:min}\ref{item:wCcapC'} and \ref{item:iadjwedge}, and $v_1$ is the unique rank-1 vertex in $C_1-C_2$ such that $v_1\leq u$. Similarly, if $v_2\in C_2$ is the vertex of type $\{j\}$ then $v_2$ is the unique rank-1 vertex in $C_2-C_1$ such that $v_2\leq u$, and $v_2$ is level-adjacent to $v_1$. We know that $\phi_{C_1,C_2}$ fixes $C_1\cap C_2$ by the intersection property of residue-groupoids, so in particular it fixes $u$. Since $\phi_{C_1,C_2}$ preserves rank and poset structure (Remark \ref{remk:rankgroup}), we deduce that $\phi_{C_1,C_2}(v_1)=v_2$. \end{proof} \section{Hierarchy of level-equivalence classes}\label{sec:hierclasses} In this section and the subsequent two we prove the ``if'' direction in Theorem \ref{thm:Delta}. So fix $\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I})$ a graph product of finite groups, with finite underlying graph $\mathcal{G}$, let $\Delta=\Delta(\mathcal{G},(G_i)_{i\in I})$ be the associated right-angled building, and let $\Lambda<\operatorname{Aut}(\Delta)$ be a uniform lattice such that all convex subgroups of $\Lambda$ are separable. We wish to show that $\Lambda$ and $\Gamma$ are weakly commensurable in $\operatorname{Aut}(\Delta)$. Proposition \ref{prop:finiteindex} says that $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ has finite index in $\operatorname{Aut}(\Delta)$, so we may assume that $\Lambda<\operatorname{Aut}_{\operatorname{rk}}(\Delta)$. Thus $\Lambda$ preserves all the structures from Proposition \ref{prop:preserved}. In this section we endow the level-equivalence classes in $\Delta$ with a hierarchical structure, and we define an operator on the vertices of $\Delta$ called ascent that will help us to move up this hierarchy in a controlled manner. First we need two lemmas and a definition regarding the interaction between level-equivalence and the partial order on $\Delta^0$. \begin{lem}\label{lem:downtype} If $u_1\approx v_1$, $u_2\leq u_1$ and $v_2\leq v_1$ with $t(u_2)=t(v_2)$, then $u_2\approx v_2$. \end{lem} \begin{proof} Let $J_1=t(u_1)=t(v_1)$ (Corollary \ref{cor:leveltype}) and $J_2=t(u_2)=t(v_2)$. We know that $J_2\subset J_1$ since $u_2\leq u_1$, and $J_1\subset J_2\dn^{\underline{\perp}}$ as $J_1$ is spherical. Hence $J_1\dn^{\underline{\perp}}\subset J_2\dn^{\underline{\perp}}$. Let $C_1,C_2$ be chambers containing $u_2,v_2$ respectively. We have $u_1\in C_1$ and $v_1\in C_2$ by Lemma \ref{lem:min}\ref{item:stayC}.
Lemma \ref{lem:cC[v]} then implies that $\mathcal{C}([u_1])=\mathcal{C}(J_1\dn^{\underline{\perp}},C_1)\subset\mathcal{C}(J_2\dn^{\underline{\perp}},C_1)=\mathcal{C}([u_2])$. Now $u_1\approx v_1$, so $\mathcal{C}([u_1])=\mathcal{C}([v_1])$, and we deduce that $C_2\in\mathcal{C}([u_2])$. But the only vertex in $C_2$ of type $J_2$ is $v_2$, hence $u_2\approx v_2$. \end{proof} \begin{defn}(1-downsets)\\\label{defn:1downset} Write $\Delta^0_1$ for the set of rank-1 vertices in $\Delta$. Define the \emph{1-downset} of a vertex $u\in\Delta^0$ to be the set $${\downarrow_1}(u):=\{v\in\Delta^0_1\mid v\leq u\}.$$ \end{defn} \begin{lem}\label{lem:downset} The following hold for any vertex $u\in\Delta^0$: \begin{enumerate} \item\label{item:downsettypes} $\{t(v)\mid v\in{\downarrow_1}(u)\}=\{\{i\}\mid i\in t(u)\}$. \item\label{item:downclasses} $\{[v]\mid v\in{\downarrow_1}(u)\}$ only depends on $[u]$. \item\label{item:downsetdet} If ${\downarrow_1}(u)$ is non-empty then it uniquely determines the vertex $u$. \end{enumerate} \end{lem} \begin{proof} The inclusion $\subset$ in \ref{item:downsettypes} is immediate from the definitions. The reverse inclusion $\supset$ holds because if $C$ is a chamber containing $u$ and $i\in t(u)$ then $C$ contains a vertex $v$ of type $\{i\}$, and necessarily $v\in{\downarrow_1}(u)$. Statement \ref{item:downclasses} follows from \ref{item:downsettypes} and Lemma \ref{lem:downtype}. Finally, if ${\downarrow_1}(u)\neq\emptyset$ then we can take $v\in{\downarrow_1}(u)$, choose a chamber $C$ containing $v$, and characterize $u$ as the unique vertex in $C$ of type $t(u)$ ($u\in C$ by Lemma \ref{lem:min}\ref{item:stayC}). But $t(u)$ is determined by ${\downarrow_1}(u)$ because of \ref{item:downsettypes}, hence ${\downarrow_1}(u)$ uniquely determines the vertex $u$. This proves \ref{item:downsetdet}. \end{proof} The hierarchy on level-equivalence classes will be modeled on the following total order. \begin{lem}\label{lem:total} We can define a total order $\preceq$ on the power set of $\{1,...,n\}$ by setting $S_1\preceq S_2$ if $S_1=S_2$ or $\max (S_1\triangle S_2)\in S_2$. \end{lem} \begin{proof} This follows from the observation that $S_1\preceq S_2$ is equivalent to $\sum_{s\in S_1} 2^s\leq\sum_{s\in S_2} 2^s$. \end{proof} \begin{defn}(Hierarchy of level-equivalence classes)\\\label{den:hiertype} Write $\mathcal{L}$ for the set of level-equivalence classes and write $q:\Delta^0_1\to\mathcal{L}/\Lambda$ for the quotient map defined by $q(v):=\Lambda\cdot[v]$. For $u\in\Delta^0$ it follows from Lemma \ref{lem:downset}\ref{item:downclasses} that $q({\downarrow_1}(u))$ only depends on $\Lambda\cdot[u]$. Now fix a total order $\preceq$ on $q(\Delta^0_1)$, and extend it to a partial order on $\mathcal{L}/\Lambda$ by setting $\Lambda\cdot[u_1]\preceq\Lambda\cdot[u_2]$ if $\Lambda\cdot[u_1]=\Lambda\cdot[u_2]$ or $$\max(q({\downarrow_1}(u_1))\triangle q({\downarrow_1}(u_2)))\in q({\downarrow_1}(u_2)).$$ This is indeed a partial order by Lemma \ref{lem:total} (noting that $q(\Delta^0_1)$ is a finite set since $\Lambda$ acts cocompactly on $\Delta$). We also let $\preceq$ denote the partial order on $\mathcal{L}$ obtained by pulling back the partial order on $\mathcal{L}/\Lambda$. \end{defn} \begin{remk}\label{remk:extremehier} A strict inequality $u_1<u_2$ in $\Delta^0$ implies that ${\downarrow_1}(u_1)\subsetneq{\downarrow_1}(u_2)$, so we get a strict inequality $[u_1]\prec[u_2]$.
Thus the $\preceq$-maximal classes $[v]\in\mathcal{L}$ have $v$ a $\leq$-maximal vertex in $\Delta^0$, $t(v)$ a maximal spherical subset of $I$ and $[v]=\{v\}$ a singleton. (Although there may be some $\leq$-maximal vertices $v$ such that $[v]$ is not $\preceq$-maximal.) On the other hand, the unique $\preceq$-smallest class $[v]\in\mathcal{L}$ is the class of all rank-0 vertices (see Remark \ref{remk:extreme}). \end{remk} The convex subgroups of $\Lambda$ are separable by hypothesis, so in particular $\Lambda$ has separable hyperplane stabilizers. Thus, we may apply \cite[Lemma 9.14]{HaglundWise08} and replace $\Lambda$ by a finite-index subgroup that acts cleanly on $\Delta$ (Definition \ref{defn:specially}\ref{item:cleanly}). As a consequence, if edges $e_1,e_2$ form the corner of a 2-cube in $\Delta$ then no $\Lambda$-translate of $e_1$ is parallel to $e_2$. This leads to the following lemma. \begin{lem}\label{lem:distinctq} Let $v_1,v_2$ be vertices in a chamber $C$ of types $\{i\}$, $\{j\}$ respectively, with $i,j\in I$ adjacent. Then $q(v_1)\neq q(v_2)$. \end{lem} \begin{proof} Suppose not. Let $e_1,e_2$ be the edges that join the center of $C$ to $v_1,v_2$ respectively. Note that $e_1\in E^-(v_1)$ is an $i$-edge and $e_2\in E^-(v_2)$ is a $j$-edge (Definition \ref{defn:E-}). As $q(v_1)=q(v_2)$, there exists $\lambda\in\Lambda$ such that $\lambda v_1\approx v_2$, and by Lemma \ref{lem:approxhyp} this implies that some edge $f_2\in E^-(v_2)$ is parallel to $\lambda e_1$ (possibly $e_2=f_2$). Let $C'$ be the chamber containing $f_2$, and let $f_1$ be the $i$-edge incident to the center $u'$ of $C'$. The chambers $C,C'$ are either equal or $j$-adjacent, and $j\in i\dn^\perp$, so it follows from Lemma \ref{lem:parallelchamber} that $e_1$ is parallel to $f_1$. Hence $\lambda f_1$ is parallel to $f_2$. If we orient $f_1,f_2$ to have initial vertex $u'$, and denote these oriented edges by $\overrightarrow{f_1}, \overrightarrow{f_2}$, then it follows from Lemma \ref{lem:iedge}\ref{item:paror} that $\lambda \overrightarrow{f_1}$ is parallel to $\overrightarrow{f_2}$. But this contradicts the fact that $\Lambda$ acts cleanly on $\Delta$. \end{proof} We now introduce a (partial) binary operator on $\Delta^0$ called ascent, which moves upward with respect to the orderings $\leq$ and $\preceq$, and will play a key role in the inductive construction in Section \ref{sec:hierresgroup}. \begin{defn}(Ascent)\\ Given a chamber $C$ and vertices $u,v\in C$ with $t(u)=\{i\}$ and $i\in t(v)\dn^\perp$, we define the \emph{ascent of $v$ by $u$}, denoted $v\Uparrow u$, to be the unique vertex in $C$ of type \begin{equation}\label{ascenttype} t(v\Uparrow u)=\cup\{t(u')\mid u'\in{\downarrow_1}(v): q(u)\prec q(u')\}\cup\{i\}. \end{equation} Observe that $C$ is the only chamber containing both $u$ and $v$ by Lemma \ref{lem:min}\ref{item:wCcapC'}, so $v\Uparrow u$ only depends on $u$ and $v$. We note that (\ref{ascenttype}) implies \begin{equation}\label{ascenttypesandwich} \{i\}\subset t(v\Uparrow u)\subset t(v)\cup\{i\}. \end{equation} We also note that \begin{equation}\label{down1Down} {\downarrow_1}(v\Uparrow u)=\{u'\in{\downarrow_1}(v)\mid q(u)\prec q(u')\}\cup\{u\}. \end{equation} \end{defn} \begin{figure} \caption{An example of the graph $\mathcal{G}$ and a section of the right-angled building $\Delta$. In this example the groups $G_i,G_j,G_k,G_l$ have orders $2,2,3,3$ respectively. One of the chambers is shown in bold, and its vertices are labeled by their types. 
The edges are oriented to point upwards in the poset structure, so they form the Hasse diagram for $(\Delta^0,\leq)$. Three of the vertices are labeled $u,v,w$. We have ${\downarrow_1}(v)=\{v\}$, so $v\Uparrow u=u$ if $q(v)\prec q(u)$ and $v\Uparrow u=w$ if $q(u)\prec q(v)$.} \label{fig:ascent} \end{figure} In the remainder of this section we prove four lemmas about ascent. \begin{lem}\label{lem:ascentequi} Ascent is $\Lambda$-equivariant: $\lambda(v\Uparrow u)=\lambda v\Uparrow \lambda u$ for $\lambda\in\Lambda$. \end{lem} \begin{proof} Given vertices $u,v\in C$, the condition that $t(u)=\{i\}$ for some $i\in t(v)\dn^\perp$ is equivalent to $u$ being a rank-1 vertex incomparable with $v$ but with some cube in $C$ containing both $u$ and $v$. The notion of 1-downset is defined in terms of the poset structure on $\Delta^0$, so it follows from (\ref{down1Down}) and Lemma \ref{lem:downset}\ref{item:downsetdet} that ascent depends only on the poset structure on $\Delta^0$, the chamber structure on $\Delta$, the map $q$ and the order $\preceq$. All of these things are preserved by $\Lambda$, so the lemma follows. \end{proof} \begin{lem}\label{lem:ascentdown} Ascent is strictly $\preceq$-increasing: $[v]\prec[v\Uparrow u]$ \end{lem} \begin{proof} We know from Lemma \ref{lem:distinctq} that $q(u)$ is distinct from $q(u')$ for all $u'\in{\downarrow_1}(v)$, so it follows from (\ref{down1Down}) that \begin{equation*} \max(q({\downarrow_1}(v\Uparrow u))\triangle q({\downarrow_1}(v)))=q(u)\in q({\downarrow_1}(v\Uparrow u)).\qedhere \end{equation*} \end{proof} \begin{lem}\label{lem:doubleascent} Let $C$ be a chamber with distinct vertices $u_1,u_2,v$ such that $t(u_1)=\{i\}$, $t(u_2)=\{j\}$, $i,j\in t(v)\dn^\perp$ are adjacent and $q(u_1)\prec q(u_2)$. Then $(v\Uparrow u_1)\Uparrow u_2=v\Uparrow u_2$. \end{lem} \begin{proof} First we note that $(v\Uparrow u_1)\Uparrow u_2$ is well-defined since $j\in (t(v)\cup\{i\})\dn^\perp\subset t(v\Uparrow u_1)\dn^\perp$ (using (\ref{ascenttypesandwich})). Now observe that any $u'\in{\downarrow_1}(v)$ with $q(u_2)\prec q(u')$ also satisfies $q(u_1)\prec q(u')$, so $u'\in{\downarrow_1}(v\Uparrow u_1)$ by (\ref{down1Down}), and conversely we have that any $u'\in{\downarrow_1}(v\Uparrow u_1)$ with $q(u_2)\prec q(u')$ is an element of ${\downarrow_1}(v)$. We deduce that $(v\Uparrow u_1)\Uparrow u_2=v\Uparrow u_2$ since both vertices are in $C$ and are of the same type. \end{proof} \begin{lem}\label{lem:adjascent} Let $C_1,C_2$ be $i$-adjacent chambers, let $v_1,v_2$ be vertices in $C_1,C_2$ respectively with $t(v_1)=t(v_2)=J\subset i\dn^\perp$ (so $v_1,v_2$ are level-adjacent by Lemma \ref{lem:leveladj}). Then the following hold: \begin{enumerate} \item\label{item:Downu} If $u=\wedge(C_1, C_2)$ then $v_1\Uparrow u=v_2\Uparrow u$. \item\label{item:Downu1u2} If $u_1,u_2$ are vertices in $C_1,C_2$ respectively of type $\{j\}$ with $j\in(J\cup\{i\})\dn^\perp$, then $v_1\Uparrow u_1$ is level-adjacent to $v_2\Uparrow u_2$. \end{enumerate} \end{lem} \begin{proof} It follows from Lemma \ref{lem:downset}\ref{item:downclasses} that \begin{equation}\label{qdownsequal} \{[v']\mid v'\in{\downarrow_1}(v_1)\}=\{[v']\mid v'\in{\downarrow_1}(v_2)\}. \end{equation} We now prove the two parts of the lemma. \begin{enumerate} \item Combining (\ref{ascenttype}), (\ref{qdownsequal}) and the fact that level-equivalent vertices are of the same type, we deduce that $t(v_1\Uparrow u)=t(v_2\Uparrow u)$. 
As $i\in t(v_1\Uparrow u)$, we see that $v_1\Uparrow u, v_2\Uparrow u\in C_1\cap C_2$, so they must be the same vertex. \item The vertices $u_1,u_2$ are level-adjacent by Lemma \ref{lem:leveladj}, so $q(u_1)=q(u_2)$. Combining (\ref{ascenttype}) and (\ref{qdownsequal}) again, we deduce that $t(v_1\Uparrow u_1)=t(v_2\Uparrow u_2)$. As $t(v_1\Uparrow u_1)\subset J\cup\{j\}\subset i\dn^\perp$, we apply Lemma \ref{lem:leveladj} again to conclude that $v_1\Uparrow u_1$ is level-adjacent to $v_2\Uparrow u_2$.\qedhere \end{enumerate} \end{proof} \section{Hierarchy of residue-groupoids}\label{sec:hierresgroup} The goal of this section is to prove the following proposition (see Definition \ref{defn:actionresgroup} for what we mean by $\Lambda'$-invariance). \begin{prop}\label{prop:Deltagroupoid} There exists a $\Lambda'$-invariant $\mathcal{C}(\Delta)$-groupoid for some finite-index subgroup $\Lambda'<\Lambda$. \end{prop} We will prove this by constructing a hierarchy of residue-groupoids corresponding to the hierarchy of level-equivalence classes from the previous section. First we arrange one more property for the lattice $\Lambda$. This is a generalization of $\Lambda$ \emph{having no holonomy} in the terminology of Haglund \cite[definition 5.6 and Theorem 7.2]{Haglund06}. \begin{lem}\label{lem:trivfactor1} Replacing $\Lambda$ by a finite-index subgroup if necessary, we may assume that, for each chamber-residue $\mathcal{C}([v])=\mathcal{C}(J\dn^{\underline{\perp}},C)$, the action of $\Lambda_{[v]}$ on $\mathcal{C}(J\dn^{\underline{\perp}},C)\cong\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C)$ is trivial on the first factor (where $\Lambda_{[v]}$ is the $\Lambda$-stabilizer of the class $[v]$, and the product decomposition is from Lemma \ref{lem:cC[v]}). \end{lem} \begin{proof} We know that $\Lambda_{[v]}$ preserves the product decomposition for $\mathcal{C}(J\dn^{\underline{\perp}},C)$ by Proposition \ref{prop:preserved}. Thus we get a homomorphism $\Lambda_{[v]}\to\mathfrak{S}(\mathcal{C}(J,C))$, where $\mathfrak{S}(X)$ denotes the symmetric group of a set $X$. By Lemma \ref{lem:CJC} we know that $\mathcal{C}(J,C)$ bijects with a coset of the subgroup $\Gamma_J$; but $J$ is spherical, so $\mathcal{C}(J,C)$ is finite. Hence the kernel $\hat{\Lambda}_{[v]}$ of the homomorphism $\Lambda_{[v]}\to\mathfrak{S}(\mathcal{C}(J,C))$ has finite index in $\Lambda_{[v]}$. The subgroup $\hat{\Lambda}_{[v]}$ is separable in $\Lambda$ by Proposition \ref{prop:separable}, so by Lemma \ref{lem:separable}\ref{item:intH1} there exists a finite-index normal subgroup $\hat{\Lambda}\triangleleft\Lambda$ with $\Lambda_{[v]}\cap\hat{\Lambda}<\hat{\Lambda}_{[v]}$. This means that the $\hat{\Lambda}$-stabilizer of the class $[v]$ acts trivially on the first factor of $\mathcal{C}(J\dn^{\underline{\perp}},C)\cong\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C)$. It remains to achieve this property simultaneously for all classes $[v]$. The action of $\Lambda$ preserves product decompositions for all chamber-residues of the form $\mathcal{C}([v])$, so we have $$\lambda \hat{\Lambda}_{[v]}\lambda^{-1}=\hat{\Lambda}_{[\lambda v]}$$ for all $v\in\Delta^0$ and $\lambda\in\Lambda$. 
There are only finitely many $\Lambda$-orbits of level-equivalence classes since $\Lambda$ acts cocompactly on $\Delta$, so applying the argument of the preceding paragraph to a set of orbit representatives yields a finite-index normal subgroup $\hat{\Lambda}\triangleleft\Lambda$ with $\Lambda_{[v]}\cap\hat{\Lambda}<\hat{\Lambda}_{[v]}$ for all $v\in\Delta^0$, as required. \end{proof} We know from Proposition \ref{prop:preserved} that $\Lambda$ preserves the sets $\mathcal{C}(v)$ and $\mathcal{C}([v])$, so it is natural to make the following definition. \begin{defn}(Actions of $\Lambda$ on the collections of $\mathcal{C}(v)$-groupoids and $\mathcal{C}([v])$-groupoids)\\\label{defn:actionresgroup} Let $\phi$ be a $\mathcal{C}(v)$-groupoid and let $\lambda\in\Lambda$. We define the $\mathcal{C}(\lambda v)$-groupoid $\lambda\cdot\phi$ by the maps $$(\lambda\cdot\phi)_{\lambda C_1,\lambda C_2}:=\lambda\circ\phi_{C_1,C_2}\circ\lambda^{-1}$$ for $C_1,C_2\in\mathcal{C}(v)$. It is straightforward to check that $\lambda\cdot\phi$ satisfies all the properties of being a $\mathcal{C}(\lambda v)$-groupoid. It is also straightforward to check that this defines an action of $\Lambda$ on the collection of all $\mathcal{C}(v)$-groupoids with $v$ ranging over all the vertices of $\Delta$. Analogously, if $\phi$ is a $\mathcal{C}([v])$-groupoid and $\lambda\in\Lambda$, we can define a $\mathcal{C}([\lambda v])$-groupoid $\lambda\cdot\phi$, and this defines an action of $\Lambda$ on the collection of all $\mathcal{C}([v])$-groupoids for $v\in\Delta^0$. \end{defn} \begin{remk}\label{remk:lambdaadj} If we want to show that $\lambda\cdot\phi$ is equal to some other $\mathcal{C}(\lambda v)$-groupoid $\psi$, then it suffices to check that the diagram \begin{equation}\label{lambdasquare} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\lambda}\ar{r}{\phi_{C_1,C_2}}&C_2\ar{d}{\lambda}\\ \lambda C_1\ar{r}[swap]{\psi_{\lambda C_1,\lambda C_2}}&\lambda C_2 \end{tikzcd} \end{equation} commutes for all pairs of adjacent chambers $C_1,C_2$ in $\mathcal{C}(v)$. The commutative square for an arbitrary pair of chambers $C_1,C_2\in\mathcal{C}(v)$ can be obtained by composing a sequence of commutative squares for pairs of adjacent chambers corresponding to a gallery in $\mathcal{C}(v)$ that joins $C_1$ and $C_2$. Of course the same is true for $\mathcal{C}([v])$-groupoids. \end{remk} We now define what it means to have a hierarchy of residue-groupoids. \begin{defn}(Hierarchy of residue-groupoids)\\\label{defn:hierarchy} Let $\mathcal{L}'\subset\mathcal{L}$ be a union of $\Lambda$-orbits that is upward closed under $\preceq$ -- i.e. $\mathcal{L}'\ni[u_1]\preceq[u_2]$ implies $[u_2]\in\mathcal{L}'$. Let $\Lambda'<\Lambda$ be a finite-index subgroup. A \emph{$\Lambda'$-hierarchy of residue-groupoids on $\mathcal{L}'$} is a collection of residue-groupoids $(\phi^{[v]})_{[v]\in\mathcal{L}'}$, where $\phi^{[v]}$ is a $\mathcal{C}([v])$-groupoid, such that: \begin{enumerate} \item (Equivariance) $\lambda\cdot\phi^{[v]}=\phi^{[\lambda v]}$ for all $[v]\in\mathcal{L}'$ and $\lambda\in\Lambda'$. \item (Restriction) Let $C$ be a chamber with vertices $u,v\in C$ such that $t(u)=\{i\}$, $i\in t(v)\dn^\perp$ and $[v]\in\mathcal{L}'$. 
If $C'$ is another chamber that is $i$-adjacent to $C$, then $$\phi^{[v]}_{C,C'}=\phi^{[v\Uparrow u]}_{C,C'}.$$ \end{enumerate} \end{defn} \begin{remk}\label{remk:restriction} It is not hard to see that the restriction property is well-defined. Indeed $i\in t(v\Uparrow u)$ by (\ref{ascenttype}), so $v\Uparrow u\in C\cap C'$ and $C,C'\in\mathcal{C}([v\Uparrow u])$. Also, $[v]\prec[v\Uparrow u]$ by Lemma \ref{lem:ascentdown}, so $[v\Uparrow u]\in\mathcal{L}'$. \end{remk} \begin{remk}\label{remk:La'La''} Note that a $\Lambda'$-hierarchy of residue-groupoids on $\mathcal{L}'$ is also a $\Lambda''$-hierarchy of residue-groupoids on $\mathcal{L}'$ for any finite-index $\Lambda''<\Lambda'$. \end{remk} The rest of this section will be spent proving the following proposition. We observe that Proposition \ref{prop:Deltagroupoid} follows since $\mathcal{C}([v])=\mathcal{C}(\Delta)$ for any rank-0 vertex $v$ (note that such $[v]$ is at the bottom of the hierarchy by Remark \ref{remk:extremehier}). \begin{prop}\label{prop:fullhierarchy} There exists a $\Lambda'$-hierarchy of residue-groupoids on the whole of $\mathcal{L}$ for some finite-index $\Lambda'< \Lambda$. \end{prop} We prove this by working down the hierarchy of level-equivalence classes, noting that $\mathcal{L}/\Lambda$ is finite because $\Lambda$ acts cocompactly on $\Delta$. As a base case, there is vacuously a $\Lambda$-hierarchy of residue-groupoids on $\emptyset$. It remains to prove the inductive step, so assume that we are given a $\Lambda'$-hierarchy of residue-groupoids on $\mathcal{L}'$, denoted by $(\phi^{[v]})_{[v]\in\mathcal{L}'}$, and pick a $\preceq$-maximal class $[v]\in\mathcal{L}-\mathcal{L}'$. Our task is to extend this to a $\Lambda''$-hierarchy of residue-groupoids on $\mathcal{L}'\cup \Lambda\cdot[v]$ for some finite-index $\Lambda''<\Lambda'$. Suppose $t(v)=J$ and let $\mathcal{C}([v])=\mathcal{C}(J\dn^{\underline{\perp}},C)$ (Lemma \ref{lem:cC[v]}). Once again we will make use of the product decomposition from Lemma \ref{lem:cC[v]}: \begin{equation}\label{product} \mathcal{C}([v])=\mathcal{C}(J\dn^{\underline{\perp}},C)\cong\mathcal{C}(J,C)\times\mathcal{C}(J\dn^\perp,C) \end{equation} Note that $\Lambda'_{[v]}$ acts trivially on the first factor by Lemma \ref{lem:trivfactor1}. To extend the hierarchy of residue-groupoids we must construct a $\mathcal{C}([v])$-groupoid $\phi=\phi^{[v]}$. We first define $\phi$ on $J\dn^\perp$-chamber-residues within $\mathcal{C}([v])$ with the following lemma -- note that these $J\dn^\perp$-chamber-residues correspond to sections $\{C_1\}\times\mathcal{C}(J\dn^\perp,C)$ in the product decomposition (\ref{product}), so each is stabilized by $\Lambda'_{[v]}$. (We remark that this step is vacuous if $J$ is a maximal spherical subset of $I$, as then $J\dn^\perp=\emptyset$.) \begin{lem}\label{lem:phiJjperp} For a given chamber-residue $\mathcal{C}(J\dn^\perp,C')\subset\mathcal{C}([v])$, there exists a unique $\mathcal{C}(J\dn^\perp,C')$-groupoid $\phi$ such that: \begin{enumerate} \item\label{item:equi} (Equivariance) $\lambda\cdot\phi=\phi$ for all $\lambda\in\Lambda'_{[v]}$. 
\item\label{item:rest} (Restriction) If $C_1,C_2\in\mathcal{C}(J\dn^\perp,C')$ are $i$-adjacent and $u,v_1\in C_1$ are vertices of types $\{i\}$ and $J$ respectively, then $$\phi_{C_1,C_2}=\phi^{[v_1\Uparrow u]}_{C_1,C_2}.$$ \end{enumerate} \end{lem}
\begin{proof}
The maps $\phi_{C_1,C_2}$ are defined for adjacent chambers by \ref{item:rest} -- noting that $\phi^{[v_1\Uparrow u]}$ is defined: we have $[v_1]=[v]\prec[v_1\Uparrow u]$ by Lemma \ref{lem:ascentdown}, so $[v_1\Uparrow u]\in\mathcal{L}'$ by the $\preceq$-maximality of $[v]$ in $\mathcal{L}-\mathcal{L}'$. By Lemma \ref{lem:resgroup}, the collection of maps $(\phi_{C_1,C_2})$ extends to a $\mathcal{C}(J\dn^\perp,C')$-groupoid if properties \ref{item:d-'}--\ref{item:fixadj} are satisfied (and this $\mathcal{C}(J\dn^\perp,C')$-groupoid is uniquely determined), so we now check \ref{item:d-'}--\ref{item:fixadj}:
\begin{enumerate}[label=\wackyenum*]
\item Each $\phi^{[v_1\Uparrow u]}_{C_1,C_2}$ preserves lower degrees of rank-1 vertices, so the maps $\phi_{C_1,C_2}$ do too.
\item Let $C_1,C_2\in\mathcal{C}(J\dn^\perp,C')$ be $i$-adjacent, let $u=\wedge(C_1, C_2)$, and let $v_1,v_2\in[v]$ be in $C_1,C_2$ respectively. Let $w:=v_1\Uparrow u=v_2\Uparrow u\in C_1\cap C_2$ (Lemma \ref{lem:adjascent}\ref{item:Downu}). By construction we have $$\phi_{C_1,C_2}:=\phi^{[w]}_{C_1,C_2}\quad\text{and}\quad\phi_{C_2,C_1}:=\phi^{[w]}_{C_2,C_1},$$ and these maps are inverse to each other since $\phi^{[w]}$ is a residue-groupoid.
\item If $C_1,C_2,C_3$ are all in the same $\{i\}$-chamber-residue for $i\in J\dn^\perp$, then the maps $\phi_{C_1,C_2},\phi_{C_2,C_3}$ and $\phi_{C_1,C_3}$ are all defined using $\phi^{[w]}$, where $w:=v_1\Uparrow u=v_2\Uparrow u=v_3\Uparrow u$, $u$ is the unique vertex in $C_1\cap C_2\cap C_3$ of type $\{i\}$, and $v_k\in C_k\cap[v]$ for $1\leq k\leq3$. We deduce that $\phi_{C_2,C_3}\circ\phi_{C_1,C_2}=\phi_{C_1,C_3}$ because the corresponding equation holds for $\phi^{[w]}$.
\item\label{item:squarescase} Consider adjacent $i,j\in J\dn^\perp$ and pairs of $i$-adjacent chambers $C_1,C_2$ and $C'_1,C'_2$, and suppose that the pairs $C_1,C'_1$ and $C_2,C'_2$ are $j$-adjacent. Let $v_1,v_2,v'_1,v'_2\in[v]$ be vertices in $C_1,C_2,C'_1,C'_2$ respectively. Put $u:=\wedge(C_1, C_2)$, $u':=\wedge(C'_1, C'_2)$, $x_1:=\wedge(C_1, C'_1)$ and $x_2:=\wedge(C_2, C'_2)$. Using Lemma \ref{lem:adjascent}\ref{item:Downu}, put
\begin{align*}
w&:=v_1\Uparrow u=v_2\Uparrow u,\\
w'&:=v'_1\Uparrow u'=v'_2\Uparrow u',\\
y_1&:=v_1\Uparrow x_1=v'_1\Uparrow x_1,\\
y_2&:=v_2\Uparrow x_2=v'_2\Uparrow x_2.
\end{align*}
An example is shown in Figure \ref{fig:lem}. Lemma \ref{lem:adjascent}\ref{item:Downu1u2} implies that $u,w,x_1,y_1$ are level-adjacent to $u',w',x_2,y_2$ respectively, so
\begin{align*}
q(u)&=q(u'),\\
q(w)&=q(w'),\\
q(x_1)&=q(x_2),\\
q(y_1)&=q(y_2).
\end{align*}
By Lemma \ref{lem:distinctq} we have $q(u)\neq q(x_1)$. Say $q(x_1)\prec q(u)$ (the opposite case follows by a symmetric argument). We can then apply Lemma \ref{lem:doubleascent} to deduce that $y_1\Uparrow u=w=y_2\Uparrow u$ and $y_1\Uparrow u'=w'=y_2\Uparrow u'$. Applying the restriction property of hierarchies to $\phi^{[y_1]}=\phi^{[y_2]}$ then yields
\begin{equation*}
\phi^{[y_1]}_{C_1,C_2}=\phi^{[w]}_{C_1,C_2}\quad\text{and}\quad \phi^{[y_1]}_{C'_1,C'_2}=\phi^{[w']}_{C'_1,C'_2}.
\end{equation*} It follows that the diagrams \begin{equation*}\label{phisquare} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\phi_{C_1,C'_1}}\ar{r}{\phi_{C_1,C_2}}&C_2\ar{d}{\phi_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\phi_{C'_1,C'_2}}&C'_2 \end{tikzcd} \quad\text{and}\quad \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\phi^{[y_1]}_{C_1,C'_1}}\ar{r}{\phi^{[y_1]}_{C_1,C_2}}&C_2\ar{d}{\phi^{[y_1]}_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\phi^{[y_1]}_{C'_1,C'_2}}&C'_2 \end{tikzcd} \end{equation*} are identical. Our goal is to prove that the left-hand diagram commutes, so this reduces to showing that the right-hand diagram commutes, but this follows since $\phi^{[y_1]}$ is a residue-groupoid. \item Finally, each $\phi^{[v_1\Uparrow u]}_{C_1,C_2}$ fixes the intersection $C_1\cap C_2$ pointwise, so the same is true of the maps $\phi_{C_1,C_2}$. \end{enumerate} We now verify properties \ref{item:equi} and \ref{item:rest} from the lemma. To show that $\lambda\cdot\phi=\phi$ for $\lambda\in\Lambda'_{[v]}$ it suffices to check that diagram (\ref{lambdasquare}) from Remark \ref{remk:lambdaadj} commutes (with $\psi=\phi$). But we defined the maps $\phi_{C_1,C_2}$ for pairs of adjacent chambers using \ref{item:rest}, and these will satisfy the commutative diagram because our existing hierarchy of residue-groupoids satisfies equivariance, and because ascent is $\Lambda'$-equivariant (Lemma \ref{lem:ascentequi}). Lastly, property \ref{item:rest} holds by construction. \end{proof} \begin{figure} \caption{The region of $\Delta$ relevant to part \ref{item:squarescase} in the proof of Lemma \ref{lem:phiJjperp}, shown for the case where $q(x_1)\prec q(u)\prec q(v_1)$ and $\operatorname{rk}(v_1)=1$. The picture shows four 3-cubes glued together, one in each of the chambers $C_1,C_2,C'_1,C'_2$. The centers of these chambers are shown in red.} \label{fig:lem} \end{figure} We apply Lemma \ref{lem:phiJjperp} to all $J\dn^\perp$-chamber-residues in $\mathcal{C}([v])$ and write $(\phi_{C_1,C_2})$ for the collection of all maps obtained. If $J=\emptyset$ then $J\dn^\perp=I$ and $\mathcal{C}([v])=\mathcal{C}(\Delta)$, so the maps $(\phi_{C_1,C_2})$ would define a $\mathcal{C}(\Delta)$-groupoid $\phi$ that is invariant under $\Lambda'$. In this case $[v]$ would be the unique $\preceq$-smallest class in $\mathcal{L}$ (Remark \ref{remk:extremehier}), so the proof of Proposition \ref{prop:fullhierarchy} would be finished. If $J\neq\emptyset$ however, then we are not yet finished with the inductive step in Proposition \ref{prop:fullhierarchy}, so assume $J\neq\emptyset$ for the rest of this section. Next, we describe a procedure that extends the maps $(\phi_{C_1,C_2})$ to a $\mathcal{C}([v])$-groupoid. The idea here is to take a $\mathcal{C}(v)$-groupoid $\psi$ -- noting that $\mathcal{C}(v)$ corresponds to a section $\mathcal{C}(J,C)\times\{C_2\}$ in the product decomposition (\ref{product}) -- and extend it using the maps $(\phi_{C_1,C_2})$. Remember that $\mathcal{C}(v)$-groupoids do exist by Lemma \ref{lem:Gammaresgroup}. 
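\begin{remk}
As a guide to the next lemma (this heuristic description is ours and is not used in the proofs), suppose $C_1,C_2\in\mathcal{C}([v])$ and write $D_k$ for the unique chamber in $\mathcal{C}(v)\cap\mathcal{C}(J\dn^\perp,C_k)$ provided by Lemma \ref{lem:approxjperp}. Assuming that the maps of a residue-groupoid compose along arbitrary galleries, as in Remark \ref{remk:lambdaadj}, the extension $\bar{\psi}$ of a $\mathcal{C}(v)$-groupoid $\psi$ constructed in Lemma \ref{lem:barpsi} should be given by
\begin{equation*}
\bar{\psi}_{C_1,C_2}=\phi_{D_2,C_2}\circ\psi_{D_1,D_2}\circ\phi_{C_1,D_1},
\end{equation*}
that is, one first moves within the $J\dn^\perp$-chamber-residue of $C_1$ using $\phi$, then within $\mathcal{C}(v)$ using $\psi$, and finally within the $J\dn^\perp$-chamber-residue of $C_2$ using $\phi$ again.
\end{remk}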
\begin{lem}\label{lem:barpsi} Given a $\mathcal{C}(v)$-groupoid $\psi$, there is a unique extension to a $\mathcal{C}([v])$-groupoid $\bar{\psi}$, such that $\bar{\psi}_{C_1,C_2}=\phi_{C_1,C_2}$ whenever $C_1,C_2$ are in the same $J\dn^\perp$-chamber-residue. \end{lem}
\begin{proof}
Let's first define the $\mathcal{C}([v])$-groupoid $\bar{\psi}$ for a given $\mathcal{C}(v)$-groupoid $\psi$. By Lemma \ref{lem:resgroup} it suffices to define the maps $\bar{\psi}_{C_1,C_2}$ for pairs of adjacent chambers, provided these maps satisfy properties \ref{item:d-'}--\ref{item:fixadj} from Lemma \ref{lem:resgroup}. We define these maps as follows. Let $C_1,C_2\in\mathcal{C}([v])$ be $i$-adjacent. If $i\in J$, then let $C'_1\in\mathcal{C}(v)\cap\mathcal{C}(J\dn^\perp,C_1)$ and $C'_2\in\mathcal{C}(v)\cap\mathcal{C}(J\dn^\perp,C_2)$ (these chambers exist and are unique by Lemma \ref{lem:approxjperp}), and define $\bar{\psi}_{C_1,C_2}$ by the following commutative diagram:
\begin{equation}\label{barpsiJ} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\phi_{C_1,C'_1}}\ar{r}{\bar{\psi}_{C_1,C_2}}&C_2\ar{d}{\phi_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\psi_{C'_1,C'_2}}&C'_2 \end{tikzcd} \end{equation}
Otherwise $i\in J\dn^\perp$, and we define
\begin{equation}\label{barpsiJp} \bar{\psi}_{C_1,C_2}:=\phi_{C_1,C_2}. \end{equation}
We note that (\ref{barpsiJp}) is forced by the assumption in the lemma, while (\ref{barpsiJ}) is forced by (\ref{barpsiJp}) and the commutativity property of residue-groupoids. We now verify properties \ref{item:d-'}--\ref{item:fixadj} from Lemma \ref{lem:resgroup}:
\begin{enumerate}[label=\wackyenum*]
\item The maps $\bar{\psi}_{C_1,C_2}$ preserve lower degrees of rank-1 vertices because all of the maps in (\ref{barpsiJ}) and (\ref{barpsiJp}) do.
\item The equation $\bar{\psi}_{C_1,C_2}=\bar{\psi}_{C_2,C_1}^{-1}$ follows because all of the maps in (\ref{barpsiJ}) and (\ref{barpsiJp}) come from residue-groupoids, and residue-groupoids satisfy \ref{item:inverse}.
\item Let $C_1,C_2,C_3$ be chambers in an $\{i\}$-chamber-residue in $\mathcal{C}([v])$. We want $\bar{\psi}_{C_2,C_3}\circ\bar{\psi}_{C_1,C_2}=\bar{\psi}_{C_1,C_3}$. If $i\in J$ then this follows from (\ref{barpsiJ}) and the fact that \ref{item:coi} holds for $\psi$. If $i\in J\dn^\perp$ then this follows from (\ref{barpsiJp}) and the fact that \ref{item:coi} holds for the maps $(\phi_{C_1,C_2})$.
\item Let $i,j\in J\dn^{\underline{\perp}}$ be adjacent and let $C_1,C_2,C'_1,C'_2\in\mathcal{C}([v])$ be such that $C_1,C_2$ and $C'_1,C'_2$ are $i$-adjacent and $C_1,C'_1$ and $C_2, C'_2$ are $j$-adjacent. We want the following diagram to commute:
\begin{equation}\label{barsquare} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\bar{\psi}_{C_1,C'_1}}\ar{r}{\bar{\psi}_{C_1,C_2}}&C_2\ar{d}{\bar{\psi}_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\bar{\psi}_{C'_1,C'_2}}&C'_2 \end{tikzcd} \end{equation}
If $i,j\in J$ then this follows from (\ref{barpsiJ}) and the fact that \ref{item:square} holds for $\psi$. If $i,j\in J\dn^\perp$ then this follows from (\ref{barpsiJp}) and the fact that \ref{item:square} holds for the maps $(\phi_{C_1,C_2})$.
If $i\in J$ and $j\in J\dn^\perp$ then (\ref{barsquare}) is identical to (\ref{barpsiJ}) after making substitutions. Lastly, the case $i\in J\dn^\perp$ and $j\in J$ follows from the previous case by symmetry.
\item Let $C_1,C_2\in\mathcal{C}([v])$ be $i$-adjacent. We want $\bar{\psi}_{C_1,C_2}$ to fix $C_1\cap C_2$ pointwise. If $i\in J\dn^\perp$ then this follows from (\ref{barpsiJp}) and the fact that $\phi_{C_1,C_2}$ fixes $C_1\cap C_2$ pointwise. Now suppose $i\in J$. We induct on the length of a shortest gallery joining $C_1$ to $\mathcal{C}(v)$. For the base case, if $C_1\in\mathcal{C}(v)$ then (\ref{barpsiJ}) implies that $\bar{\psi}_{C_1,C_2}=\psi_{C_1,C_2}$, and we are done because $\psi$ satisfies the intersection property of residue-groupoids. For the general case, take a pair of $i$-adjacent chambers $C'_1,C'_2$ such that $C_1,C'_1$ and $C_2,C'_2$ are $j$-adjacent for some $j\in J\dn^\perp$, and such that $C'_1$ is joined to $\mathcal{C}(v)$ by a shorter gallery than $C_1$ (such chambers exist by the product structure on $\mathcal{C}([v])$). The map $\bar{\psi}_{C'_1,C'_2}$ fixes $C'_1\cap C'_2$ pointwise by our induction hypothesis. Let $w=\wedge(C_1, C_2)$, which is a vertex of type $\{i\}$ (Lemma \ref{lem:min}\ref{item:iadjwedge}). We know from Lemma \ref{lem:stayj} that $w':=\phi_{C_1,C'_1}(w)$ is also of type $\{i\}$, hence $w'\in C'_1\cap C'_2$. A second application of Lemma \ref{lem:stayj} implies that $\phi_{C_2,C'_2}(w)$ is of type $\{i\}$ as well, and also lies in $C'_1\cap C'_2$, so $w'=\phi_{C_2,C'_2}(w)=\wedge(C'_1,C'_2)$. The maps $\phi_{C_1,C'_1},\phi_{C_2,C'_2}$ preserve poset structure (Remark \ref{remk:rankgroup}), so by Lemma \ref{lem:min}\ref{item:wCcapC'} they both map $C_1\cap C_2$ to $C'_1\cap C'_2$. Now consider the vertices $u_1:=\wedge(C_1,C'_1)$ and $u_2:=\wedge(C_2, C'_2)$, which are of type $\{j\}$, and the vertex $x\in C_1\cap C_2$ of type $J$. Lemma \ref{lem:adjascent}\ref{item:Downu1u2} tells us that $y_1:=x\Uparrow u_1$ is level-adjacent to $y_2:= x\Uparrow u_2$, so $[y_1]=[y_2]$. Then Lemma \ref{lem:phiJjperp}\ref{item:rest} and (\ref{barpsiJp}) imply that
\begin{equation}\label{barpsiphix} \bar{\psi}_{C_1,C'_1}=\phi^{[y_1]}_{C_1,C'_1}\quad\text{and}\quad\bar{\psi}_{C_2,C'_2}=\phi^{[y_1]}_{C_2,C'_2}. \end{equation}
We have $\{j\}\subset t(y_1)\subset J\cup\{j\}$ by (\ref{ascenttypesandwich}), so $i,j\in t(y_1)\dn^{\underline{\perp}}$ and the residue-groupoid $\phi^{[y_1]}$ contains the following commutative square of maps:
\begin{equation*} \begin{tikzcd}[ ar symbol/.style = {draw=none,"#1" description,sloped}, isomorphic/.style = {ar symbol={\cong}}, equals/.style = {ar symbol={=}}, subset/.style = {ar symbol={\subset}} ] C_1\ar{d}[swap]{\phi^{[y_1]}_{C_1,C'_1}}\ar{r}{\phi^{[y_1]}_{C_1,C_2}}&C_2\ar{d}{\phi^{[y_1]}_{C_2,C'_2}}\\ C'_1\ar{r}[swap]{\phi^{[y_1]}_{C'_1,C'_2}}&C'_2 \end{tikzcd} \end{equation*}
The horizontal maps $\phi^{[y_1]}_{C_1,C_2},\phi^{[y_1]}_{C'_1,C'_2}$ restrict to identity maps on $C_1\cap C_2,C'_1\cap C'_2$ respectively by the intersection property of residue-groupoids, so we deduce that the vertical maps $\phi^{[y_1]}_{C_1,C'_1}$ and $\phi^{[y_1]}_{C_2,C'_2}$ restrict to the same map $C_1\cap C_2\to C'_1\cap C'_2$. Applying (\ref{barpsiphix}) tells us that $\bar{\psi}_{C_1,C'_1}$ and $\bar{\psi}_{C_2,C'_2}$ also restrict to the same map $C_1\cap C_2\to C'_1\cap C'_2$.
Finally, since $\bar{\psi}_{C'_1,C'_2}$ restricts to the identity on $C'_1\cap C'_2$, it follows from (\ref{barsquare}) that $\bar{\psi}_{C_1,C_2}$ restricts to the identity on $C_1\cap C_2$, as required.\qedhere
\end{enumerate}
\end{proof}
Lemma \ref{lem:barpsi} provides us with a candidate $\mathcal{C}([v])$-groupoid to extend our $\Lambda'$-hierarchy of residue-groupoids. However, to complete the inductive step of Proposition \ref{prop:fullhierarchy} we need to extend the hierarchy to the entire orbit $\Lambda\cdot[v]$, and it needs to satisfy the equivariance property from Definition \ref{defn:hierarchy} for some finite-index $\Lambda''<\Lambda'$. We address this with the following definition and lemmas.
\begin{defn}(Groupoid holonomy at $\mathcal{C}(v)$)\\ Given a $\mathcal{C}([v])$-groupoid $\psi$, we let $\psi|_{\mathcal{C}(v)}$ denote the $\mathcal{C}(v)$-groupoid obtained by restricting to $\mathcal{C}(v)$. Let $\mathcal{R}\mathcal{G}(v)$ denote the collection of all $\mathcal{C}(v)$-groupoids, which is finite since $\mathcal{C}(v)$ is finite and each chamber is finite. Let $\mathfrak{S}(\mathcal{R}\mathcal{G}(v))$ denote the symmetric group on $\mathcal{R}\mathcal{G}(v)$. We define the \emph{groupoid holonomy of $\Lambda'$ at $\mathcal{C}(v)$} to be the map
\begin{align*} \Upsilon:\Lambda'_{[v]}&\to\mathfrak{S}(\mathcal{R}\mathcal{G}(v))\\ \lambda&\mapsto(\psi\mapsto(\lambda\cdot\bar{\psi})|_{\mathcal{C}(v)}). \end{align*}
\end{defn}
\begin{lem}\label{lem:resbar} For $\lambda\in\Lambda'_{[v]}$ and $\psi\in\mathcal{R}\mathcal{G}(v)$, we have $\lambda\cdot\bar{\psi}=\overline{((\lambda\cdot\bar{\psi})|_{\mathcal{C}(v)})}$. \end{lem}
\begin{proof} Lemma \ref{lem:phiJjperp}\ref{item:equi} implies that $(\lambda\cdot\bar{\psi})_{C_1,C_2}=\phi_{C_1,C_2}$ for any chambers $C_1,C_2$ in the same $J\dn^\perp$-chamber-residue, so the result follows from the uniqueness in Lemma \ref{lem:barpsi}. \end{proof}
\begin{lem} The groupoid holonomy $\Upsilon$ is a homomorphism. \end{lem}
\begin{proof} It is clear that $\Upsilon(1)$ is trivial. Let $\lambda_1,\lambda_2\in\Lambda'_{[v]}$ and $\psi\in\mathcal{R}\mathcal{G}(v)$. By Lemma \ref{lem:resbar} we have
\begin{align*} \Upsilon(\lambda_1)\Upsilon(\lambda_2)(\psi)&=(\lambda_1\cdot(\lambda_2\cdot\bar{\psi}))|_{\mathcal{C}(v)}\\ &=((\lambda_1\lambda_2)\cdot\bar{\psi})|_{\mathcal{C}(v)}\\ &=\Upsilon(\lambda_1\lambda_2)(\psi).\qedhere \end{align*}
\end{proof}
We complete the inductive step of Proposition \ref{prop:fullhierarchy} with the following lemma.
\begin{lem} The $\Lambda'$-hierarchy of residue-groupoids $(\phi^{[u]})_{[u]\in\mathcal{L}'}$ extends to a $\Lambda''$-hierarchy of residue-groupoids on $\mathcal{L}'\cup \Lambda\cdot[v]$ for some finite-index $\Lambda''<\Lambda'$. \end{lem}
\begin{proof} We know that $\ker(\Upsilon)$ has finite index in $\Lambda'_{[v]}$, which in turn has finite index in $\Lambda_{[v]}$, so $\ker(\Upsilon)$ is separable in $\Lambda$ by Proposition \ref{prop:separable}. By Lemma \ref{lem:separable}\ref{item:intH1} there exists a finite-index normal subgroup $\Lambda''\triangleleft\Lambda$ such that $\Lambda''_{[v]}<\ker(\Upsilon)$. Intersecting with $\Lambda'$ if necessary, we may assume that $\Lambda''<\Lambda'$. As $\Lambda''_{[v]}<\ker(\Upsilon)$, it follows from Lemma \ref{lem:resbar} that
\begin{equation}\label{La''fix} \lambda\cdot\bar{\psi}=\bar{\psi} \end{equation}
for any $\psi\in\mathcal{R}\mathcal{G}(v)$ and $\lambda\in\Lambda''_{[v]}$. Put $\phi=\bar{\psi}$ for some $\psi\in\mathcal{R}\mathcal{G}(v)$.
It remains to define residue-groupoids for the entire orbit $\Lambda\cdot[v]$. It suffices to define residue-groupoids for the orbit $\Lambda'\cdot[v]$ that satisfy the equivariance and restriction properties for a $\Lambda''$-hierarchy of residue groupoids. Indeed one could run the same argument for each $\Lambda'$-orbit in $\Lambda\cdot[v]$ and then take the intersection of the corresponding groups $\Lambda''$ (a finite intersection, so the resulting group would still have finite index in $\Lambda$). Let $(\lambda_k)$ be a set of left coset representatives for $\Lambda''\Lambda'_{[v]}$ in $\Lambda'$. For each $\lambda_k$ and each $\lambda''\in\Lambda''$ define the residue-groupoid \begin{equation}\label{philamlam} \phi^{[\lambda_k\lambda'' v]}:=(\lambda_k\lambda'')\cdot\phi. \end{equation} We must check that (\ref{philamlam}) is consistent, so suppose $[\lambda_k\lambda''_1v]=[\lambda_l\lambda''_2v]$ for two classes as above. Then $$\lambda_k\lambda''_1\in\lambda_l\lambda''_2\Lambda'_{[v]},$$ so $\lambda_k=\lambda_l$ and $\lambda''_1\in\lambda''_2\Lambda''_{[v]}$. Then $\lambda''_1\cdot\phi=\lambda''_2\cdot\phi$ by (\ref{La''fix}), so $(\lambda_k\lambda''_1)\cdot\phi=(\lambda_l\lambda''_2)\cdot\phi$ as required. It remains to check that these residue-groupoids satisfy the equivariance and restriction properties for a $\Lambda''$-hierarchy of residue groupoids: \begin{enumerate} \item (Equivariance) Given $\lambda\in\Lambda''$ and a class $[\lambda_k\lambda''v]$ as in (\ref{philamlam}), we have $\lambda\lambda_k\lambda''=\lambda_k(\lambda_k^{-1}\lambda\lambda_k)\lambda''$ with $(\lambda_k^{-1}\lambda\lambda_k)\lambda''\in\Lambda''$ since $\Lambda''$ is normal in $\Lambda'$. Therefore (\ref{philamlam}) tells us that \begin{align*} \phi^{[\lambda\lambda_k\lambda'' v]}&=(\lambda\lambda_k\lambda'')\cdot\phi\\ &=\lambda\cdot((\lambda_k\lambda'')\cdot\phi)\\ &=\lambda\cdot\phi^{[\lambda_k\lambda''v]}. \end{align*} \item (Restriction) Let $C_1$ be a chamber with vertices $u,v_1\in C_1$ such that $v_1\in[v]$, $t(u)=\{i\}$ and $i\in t(v_1)\dn^\perp=J\dn^\perp$. If $C_2$ is $i$-adjacent to $C_1$, then it follows from Lemma \ref{lem:phiJjperp}\ref{item:rest} and Lemma \ref{lem:barpsi} that $$\phi^{[v]}_{C_1,C_2}=\phi_{C_1,C_2}=\phi^{[v_1\Uparrow u]}_{C_1,C_2}.$$ This verifies the restriction property for $\phi^{[v]}$. Ascent is $\Lambda$-equivariant (Lemma \ref{lem:ascentequi}), and the residue groupoids $\phi^{[v_1\Uparrow u]}$ are from higher up the hierarchy (Lemma \ref{lem:ascentdown}) so satisfy $\Lambda'$-equivariance, thus we deduce that the restriction property holds for all residue-groupoids from (\ref{philamlam}).\qedhere \end{enumerate} \end{proof} \section{Typed atlases}\label{sec:atlas} In this section we complete the proof of Theorem \ref{thm:Delta}, using the $\Lambda'$-invariant $\mathcal{C}(\Delta)$-groupoid from Proposition \ref{prop:Deltagroupoid} to show that $\Lambda$ and $\Gamma$ are weakly commensurable. Key to this argument will be the notion of typed atlases, which in turn depends on a more general notion of typing map; roughly this will be a map $\Delta^0\to\bar{N}$ that behaves locally in the same way as the standard typing map $t$ from Definition \ref{defn:building}. From now on we will denote this standard typing map by $t_\Gamma$ (as it was defined using the action of $\Gamma$ on $\Delta$) and a general typing map by $t$, which is defined precisely as follows. 
Other notions that were defined earlier -- such as rank, lower degree and chamber-residues -- remain the same, so these are defined in terms of the standard typing map $t_\Gamma$.
\begin{defn}(Typing map)\\\label{defn:typingmap} A \emph{typing map} is a map $t:\Delta^0\to\bar{N}$ such that
\begin{enumerate}
\item\label{item:toC*} For each chamber $C$ there is a cubical isomorphism $\phi:C\to C_*$ to the base chamber $C_*$ that preserves centers and satisfies $t(v)=t_\Gamma(\phi v)$ for all vertices $v\in C$.
\item\label{item:d-prod} For each $i\in I$ and each vertex $v\in\Delta^0$ with $t(v)=\{i\}$, the lower degree $d^-(v)$ is equal to $|G_i|$.
\end{enumerate}
\end{defn}
\begin{lem}\label{lem:tGamma} The standard typing map $t_\Gamma$ satisfies Definition \ref{defn:typingmap}. \end{lem}
\begin{proof} Each chamber is of the form $C_\gamma$ for some $\gamma\in\Gamma$, and $\gamma:C_*\to C_\gamma$ is a cubical isomorphism preserving centers and $t_\Gamma$. For a vertex $v$ with $t_\Gamma(v)=\{i\}$ we know that $\operatorname{rk}(v)=1$, so $E^-(v)$ consists of the edges that join $v$ to a rank-0 vertex. Hence $d^-(v)=|E^-(v)|$ is equal to the number of chambers containing $v$, and this equals $|G_i|$ by Lemmas \ref{lem:CJC} and \ref{lem:cCv}. \end{proof}
\begin{lem}\label{lem:fttot'} Let $t$ and $t'$ be typing maps and let $C,C'\in\mathcal{C}(\Delta)$. Then there is a unique cubical isomorphism $f:C\to C'$ such that $t(v)=t'(fv)$ for all vertices $v\in C$. Moreover, $f$ preserves centers, rank and poset structure. \end{lem}
\begin{proof} Such a map $f$ can be obtained by composing the map $C\to C_*$ given by Definition \ref{defn:typingmap}\ref{item:toC*} for $t$ with the inverse of the map $C'\to C_*$ given for $t'$. Uniqueness follows since $t,t'$ are injective on the vertex sets of $C,C'$ respectively. The maps $C\to C_*$ and $C'\to C_*$ preserve centers by definition, so $f$ does as well. The edges in $C$ correspond exactly to the pairs of vertices $\{u,v\}$ such that $t_\Gamma(u)=t_\Gamma(v)\cup\{i\}$ for some $i\in I$, and the center is of type $\emptyset$, so the rank $\operatorname{rk}(v):=|t_\Gamma(v)|$ of a vertex $v\in C$ is equal to the length of a shortest edge path joining it to the center of $C$. One can also deduce that $u\leq v$ for $u\in C$ if and only if $u$ belongs to one of the shortest edge paths joining $v$ to the center of $C$. As $f$ preserves centers, it follows that it also preserves rank and poset structure. \end{proof}
\begin{cor}\label{cor:ranktype} $|t(v)|=\operatorname{rk}(v)$ for any $v\in\Delta^0$ and any typing map $t$. In particular, Definition \ref{defn:typingmap}\ref{item:d-prod} is a statement about rank-1 vertices. \end{cor}
\begin{proof} Apply Lemma \ref{lem:fttot'} with $t'=t_\Gamma$. \end{proof}
\begin{lem}\label{lem:typinglevel} If $t$ is a typing map and $v_1\approx v_2$ then $t(v_1)=t(v_2)$. \end{lem}
\begin{proof} It suffices to consider the case where $v_1,v_2$ are level-adjacent. Suppose $v_1,v_2$ are contained in adjacent chambers $C_1,C_2$ respectively, with $v_1,v_2\notin C_1\cap C_2$, and with some vertex $v_1,v_2\leq u\in C_1\cap C_2$ such that $\operatorname{rk}(v_1)=\operatorname{rk}(v_2)=\operatorname{rk}(u)-1$. We know from Lemma \ref{lem:leveladj} that $t_\Gamma(v_1)=t_\Gamma(v_2)=J$ say, and that $t_\Gamma(u)=J\cup\{i\}$ for some $i\in J\dn^\perp$. Since $v_1\notin C_1\cap C_2$, we deduce from Lemma \ref{lem:min}\ref{item:wCcapC'} and \ref{item:iadjwedge} that $C_1,C_2$ are $i$-adjacent.
Let $$\Delta^-(u):=\{v\in \Delta^0\mid v\leq u\text{ and }\operatorname{rk}(v)=\operatorname{rk}(u)-1\}.$$ We know that $\Delta^-(u)\cap C_1$ consists of one vertex $v$ with $t_\Gamma(v)=t_\Gamma(u)-\{j\}$ for each $j\in t_\Gamma(u)$. Furthermore, by Lemma \ref{lem:min}\ref{item:wCcapC'} and \ref{item:iadjwedge}, $v_1$ is the only element of $\Delta^-(u)\cap C_1$ that is not in $C_1\cap C_2$. By property \ref{item:toC*} of typing maps, we also know that $\Delta^-(u)\cap C_1$ consists of one vertex $v$ with $t(v)=t(u)-\{j\}$ for each $j\in t(u)$, hence $$t(v_1)=t(u)-\bigcap_{v\in\Delta^-(u)\cap C_1\cap C_2}t(v).$$ The same is true for $t(v_2)$, so $t(v_1)=t(v_2)$ as required. \end{proof} \begin{lem}\label{lem:ijadjacent} Let $t$ be a typing map and let $u,v$ be rank-1 vertices in a chamber $C$ with $t(u)=\{i\}$ and $t(v)=\{j\}$. Then the property of $i,j$ being adjacent is independent of the choice of $t$. \end{lem} \begin{proof} By property \ref{item:toC*} of typing maps, we see that $i,j$ are adjacent if and only if there is a rank-2 vertex $w\in C$ with $u,v\leq w$ -- and in this case $t(w)=\{i,j\}$. \end{proof} We now define a notion of typed atlas, which generalizes Haglund's notion of atlas \cite[Definition 6.1]{Haglund06}. (In fact Haglund's atlases coincide with typed atlases for which the typing map is the standard one.) Roughly speaking, a typed atlas builds on the data given by a typing map by prescribing a family of local actions of the groups $G_i$ on $\Delta$, and it does this in a way that mimics the actions of the conjugates of the $G_i$ when viewed as subgroups of $\Gamma$. \begin{defn}(Typed atlas)\\\label{defn:typedatlas} A \emph{typed atlas} is a pair $(t,\mathcal{A})$ where $t$ is a typing map and $\mathcal{A}$ is a collection of homomorphisms $(\mathcal{A}_{[v]})$, where $[v]$ ranges over level-equivalence classes of rank-1 vertices in $\Delta$, such that: \begin{itemize} \item $\mathcal{A}_{[v]}$ is a homomorphism $$\mathcal{A}_{[v]}:G_i\to\mathfrak{S}(\mathcal{C}([v]))$$ for $v\in\Delta^0$ with $t(v)=\{i\}$ ($i$ only depends on the class $[v]$ by Lemma \ref{lem:typinglevel}). \item $G_i$ preserves the product decomposition \begin{equation}\label{product2} \mathcal{C}([v])=\mathcal{C}(j\dn^{\underline{\perp}},C)\cong\mathcal{C}(\{j\},C)\times\mathcal{C}(j\dn^\perp,C) \end{equation} from Lemma \ref{lem:cC[v]}, where $t_\Gamma(v)=\{j\}$. \item $G_i$ acts simply transitively on the first factor and trivially on the second factor in (\ref{product2}). \end{itemize} We call $\mathcal{A}$ an \emph{atlas} for $t$. \end{defn} \begin{remk}\label{remk:=Gi} The first factor $\mathcal{C}(\{j\},C)$ from (\ref{product2}) has size $|G_i|$ because $|G_i|=d^-(v)=|G_j|$ (applying the second property of typing maps to $t$ and $t_\Gamma$) and $|\mathcal{C}(\{j\},C)|=|G_j|$ by Lemma \ref{lem:CJC}. Thus it is possible for $G_i$ to act simply transitively on $\mathcal{C}(\{j\},C)$. \end{remk} \begin{remk} In the case where $\Gamma$ is a right-angled Coxeter group and $\Delta$ the associated Davis complex, we know that the groups $G_i$ all have order two, so in this case there is only one atlas for each typing map. \end{remk} The reason for defining typed atlases is so that we can establish an action of $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ on the set of typed atlases. 
We will see later that this action is closely related to the conjugation action of $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ on the set of its uniform lattices -- which is in turn connected to weak commensurability of uniform lattices.
\begin{defn}(Action on the set of typed atlases)\\ The structures used in the definitions of typing map and typed atlas depend on the standard typing map $t_\Gamma$; however, these structures are also preserved by $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ (Proposition \ref{prop:preserved}), so we get an action of $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ on the set of typed atlases. More precisely, this action is defined as follows: given $f\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ and a typed atlas $(t,\mathcal{A})$, we define $f_*(t,\mathcal{A})=(t',\mathcal{A}')$ by $t'(fv):=t(v)$ for $v\in\Delta^0$ and $\mathcal{A}'_{[fv]}(g)(fC):=f\mathcal{A}_{[v]}(g)(C)$ for $v\in\Delta^0$ with $t(v)=\{i\}$, $g\in G_i$ and $C\in\mathcal{C}([v])$. One can easily check that $f_*(t,\mathcal{A})$ is a typed atlas using Proposition \ref{prop:preserved}.
\end{defn}
The next step is to construct typed atlases that are invariant under (finite-index subgroups of) $\Lambda$ and $\Gamma$. This is where we make use of the $\Lambda'$-invariant $\mathcal{C}(\Delta)$-groupoid from Proposition \ref{prop:Deltagroupoid}.
\begin{lem}\label{lem:Gammatypedatlas} There is a typed atlas $(t_\Gamma,\mathcal{A}_\Gamma)$ preserved by $\Gamma$. \end{lem}
\begin{proof} The typing map $t_\Gamma$ is the standard typing map already discussed (Lemma \ref{lem:tGamma}). This is preserved by $\Gamma$ by construction (Definition \ref{defn:building}). The atlas $\mathcal{A}_\Gamma$ is essentially constructed using the actions of conjugates of the subgroups $G_i<\Gamma$ on $\Delta$, or equivalently if each chamber is labeled by an element of $\Gamma$ (as in Definition \ref{defn:building}) then we consider the right multiplication action of $G_i$ on these labels (twisted by inversion so that we get left actions). Precisely, $\mathcal{A}_\Gamma$ is defined as follows. Given a rank-1 vertex $v\in\Delta^0$ with $t_\Gamma(v)=\{i\}$, a chamber $C_\gamma\in\mathcal{C}([v])$, and an element $g\in G_i<\Gamma$, we define $$\mathcal{A}_{\Gamma,[v]}(g)(C_\gamma):=C_{\gamma g^{-1}}.$$ This is in the same $\{i\}$-chamber-residue as $C_\gamma$ by Lemma \ref{lem:CJC}, and $\mathcal{A}_{\Gamma,[v]}$ clearly defines a (left) action of $G_i$ on $\mathcal{C}([v])$. If we fix a chamber $C_\gamma\in\mathcal{C}([v])$, then Lemma \ref{lem:CJC} and Lemma \ref{lem:cC[v]} tell us that $\mathcal{C}([v])$ consists precisely of the chambers $C_{\gamma \gamma_1\gamma_2}$ where $\gamma_1\in \Gamma_i=G_i$ and $\gamma_2\in \Gamma_{i\dn^\perp}$. Moreover, the elements $(\gamma_1,\gamma_2)$ correspond to the coordinates in the product decomposition (\ref{product2}), and $\Gamma_i$ commutes with $\Gamma_{i\dn^\perp}$, so the action $\mathcal{A}_{\Gamma,[v]}$ preserves the product decomposition, acts simply transitively on the first factor and trivially on the second factor. The collection of homomorphisms $(\mathcal{A}_{\Gamma,[v]})$ thus defines an atlas $\mathcal{A}_\Gamma$ for $t_\Gamma$.
Each homomorphism $\mathcal{A}_{\Gamma,[v]}$ is defined using right multiplication, while the action of $\Gamma$ on $\mathcal{C}(\Delta)$ is defined using left multiplication ($\gamma'C_\gamma=C_{\gamma'\gamma}$); these operations commute with each other, so one easily verifies that $\Gamma$ preserves the atlas $\mathcal{A}_\Gamma$.
\end{proof}
\begin{lem}\label{lem:Latypedatlas} There is a typed atlas $(t,\mathcal{A})$ preserved by a finite-index subgroup $\hat{\Lambda}<\Lambda$. \end{lem}
\begin{proof} Proposition \ref{prop:Deltagroupoid} gives us a finite-index subgroup $\Lambda'<\Lambda$ and a $\Lambda'$-invariant $\mathcal{C}(\Delta)$-groupoid $\phi$. We define $t:\Delta^0\to\bar{N}$ so that
\begin{equation}\label{t} t(v)=t_\Gamma(\phi_{C,C_*}(v)) \end{equation}
for each chamber $C$ and all vertices $v\in C$. By the commutativity and intersection properties of residue-groupoids (Definition \ref{defn:resgroup}) we know that $\phi_{C_1,C_*}(v)=\phi_{C_2,C_*}(v)$ if $C_1,C_2\in\mathcal{C}(v)$, so (\ref{t}) provides a consistent definition for a map $t:\Delta^0\to\bar{N}$. The map $t$ satisfies property \ref{item:toC*} of typing maps because of (\ref{t}), and it satisfies property \ref{item:d-prod} because residue-groupoids preserve lower degrees of rank-1 vertices. Thus $t$ is a valid typing map. The group $\Lambda'$ may not preserve $t$, so we will pass to a finite-index subgroup of $\Lambda'$ that does, by using the following holonomy map:
\begin{align*} \Upsilon:\Lambda'&\to\operatorname{Aut}(C_*)\\ \lambda&\mapsto\phi_{\lambda C_*,C_*}\circ\lambda \end{align*}
Here $\operatorname{Aut}(C_*)$ is the group of cubical automorphisms of the chamber $C_*$. Let's check that $\Upsilon$ is a homomorphism. Recall that, since $\phi$ is $\Lambda'$-invariant, we have
\begin{equation}\label{La'invariant} \phi_{\lambda C_1,\lambda C_2}=\lambda\circ\phi_{C_1,C_2}\circ\lambda^{-1} \end{equation}
for all chambers $C_1,C_2$ and $\lambda\in\Lambda'$. Suppose $\lambda_1,\lambda_2\in\Lambda'$. We then have
\begin{align*} \Upsilon(\lambda_1)\circ\Upsilon(\lambda_2)&=\phi_{\lambda_1C_*,C_*}\circ\lambda_1\circ\phi_{\lambda_2C_*,C_*}\circ\lambda_2,\\ &=\phi_{\lambda_1C_*,C_*}\circ\phi_{\lambda_1\lambda_2 C_*,\lambda_1 C_*}\circ\lambda_1\circ\lambda_2,&\text{by (\ref{La'invariant}),}\\ &=\phi_{\lambda_1\lambda_2 C_*, C_*}\circ (\lambda_1\lambda_2),\\ &=\Upsilon(\lambda_1\lambda_2). \end{align*}
Let $\hat{\Lambda}=\ker\Upsilon$, which has finite index in $\Lambda'$ since $C_*$ is finite. Given a vertex $v$ in a chamber $C$ and $\lambda\in\hat{\Lambda}$, we can then compute
\begin{align*} \phi_{\lambda C,C_*}(\lambda v)&=\phi_{\lambda C_*,C_*}\circ\phi_{\lambda C,\lambda C_*}(\lambda v),\\ &=\phi_{\lambda C_*,C_*}\circ\lambda\circ\phi_{C,C_*}\circ\lambda^{-1}(\lambda v),&\text{by (\ref{La'invariant}),}\\ &=\phi_{C,C_*}(v), &\text{as }\lambda\in\ker\Upsilon. \end{align*}
It follows from (\ref{t}) that $t(v)=t(\lambda v)$, hence $t$ is $\hat{\Lambda}$-invariant. We now turn to constructing a $\hat{\Lambda}$-invariant atlas $\mathcal{A}$ for $t$. Let $v\in\Delta^0$ be a rank-1 vertex with $t(v)=\{i\}$. We know from Remark \ref{remk:=Gi} that the first factor in the product decomposition (\ref{product2}) for $\mathcal{C}([v])$ has size equal to $|G_i|$. Construct $\mathcal{A}_{[v]}$ by arbitrarily choosing an action of $G_i$ on $\mathcal{C}([v])$ that preserves this product decomposition and such that $G_i$ acts simply transitively on the first factor and trivially on the second factor.
For $\lambda\in\hat{\Lambda}$ we define $\mathcal{A}_{[\lambda v]}$ by
\begin{equation}\label{Alav} \mathcal{A}_{[\lambda v]}(g)(\lambda C)=\lambda \mathcal{A}_{[v]}(g)(C), \end{equation}
for $g\in G_i$ and $C\in\mathcal{C}([v])$. Any $\lambda\in\hat{\Lambda}_{[v]}$ acts trivially on the first factor of the product decomposition (\ref{product2}) by Lemma \ref{lem:trivfactor1}, so commutes with $\mathcal{A}_{[v]}$, hence $\mathcal{A}_{[\lambda v]}=\mathcal{A}_{[v]}$. Thus the definitions of the actions $\mathcal{A}_{[\lambda v]}$ given by (\ref{Alav}) are consistent. The $\hat{\Lambda}$-invariant atlas $\mathcal{A}$ is constructed by repeating this argument for each $\hat{\Lambda}$-orbit of level-equivalence classes of rank-1 vertices.
\end{proof}
Given typed atlases $(t,\mathcal{A})$ and $(t',\mathcal{A}')$, and an automorphism $f\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ with $f_*(t,\mathcal{A})=(t',\mathcal{A}')$, we would like to interpret $f$ in terms of these atlases. This motivates the following definition and subsequent three lemmas.
\begin{defn}($(t,\mathcal{A})$-word of a gallery)\\\label{defn:tAword} Let $(t,\mathcal{A})$ be a typed atlas. We describe a way of associating a letter $s$ in the alphabet $$\mathfrak{A}:=\sqcup_{i\in I}(\{i\}\times (G_i-\{1\}))$$ to each ordered pair of adjacent chambers. Indeed, if $C_1,C_2$ are adjacent chambers, and $v=\wedge(C_1, C_2)$ with $t(v)=\{i\}$, and $g\in G_i$ with $\mathcal{A}_{[v]}(g)(C_1)=C_2$, then we associate the letter $(i,g)\in\mathfrak{A}$ to $(C_1,C_2)$. Note that the element $g\in G_i$ exists and is unique because $\mathcal{A}_{[v]}$ defines an action of $G_i$ on $\mathcal{C}([v])$ that restricts to a simply transitive action on $\mathcal{C}(v)$ (recall that $\mathcal{C}(v)$ corresponds to one of the sections $\mathcal{C}(\{j\},C)\times\{C_2\}$ in the product decomposition (\ref{product2}) by Lemma \ref{lem:cC[v]}). The \emph{$(t,\mathcal{A})$-word} of a gallery $(C_0,C_1,...,C_n)$ is the word $(s_1,...,s_n)$ on $\mathfrak{A}$ such that each $s_k$ is the letter associated to the pair $(C_{k-1},C_k)$.
\end{defn}
\begin{lem}\label{lem:wordtogallery} Let $(s_1,...,s_n)$ be a word on $\mathfrak{A}$, let $C\in\mathcal{C}(\Delta)$ and let $(t,\mathcal{A})$ be a typed atlas. Then there exists a unique gallery $(C_0,C_1,...,C_n)$ with $C_0=C$ whose $(t,\mathcal{A})$-word is $(s_1,...,s_n)$. \end{lem}
\begin{proof} If $s_k=(i_k,g_k)$, then the gallery is determined recursively by $C_0:=C$ and $C_k:=\mathcal{A}_{[v_k]}(g_k)(C_{k-1})$, where $v_k$ is the unique vertex in $C_{k-1}$ with $t(v_k)=\{i_k\}$. \end{proof}
\begin{lem}\label{lem:gallerytransfer} Let $(t,\mathcal{A})$ and $(t',\mathcal{A}')$ be typed atlases. Let $G=(C_0,C_1,...,C_n)$ and $G'=(C'_0,C'_1,...,C'_n)$ be galleries, and suppose that the $(t,\mathcal{A})$-word of $G$ is the same as the $(t',\mathcal{A}')$-word of $G'$. Then the end-chamber $C'_n$ depends only on the end-chambers $C_0,C_n,C'_0$, not on the choice of galleries $G,G'$. \end{lem}
\begin{proof} The moves \ref{M1}--\ref{M3} from Lemma \ref{lem:moves} do not change the end-chambers of galleries, and any two galleries with the same end-chambers differ by a sequence of such moves. So it suffices to show that if we modify the gallery $G$ by one of the moves \ref{M1}--\ref{M3}, then we can modify the gallery $G'$ by a corresponding move such that the $(t,\mathcal{A})$-word of $G$ remains the same as the $(t',\mathcal{A}')$-word of $G'$.
We split into three cases according to which of the moves \ref{M1}--\ref{M3} we apply to $G$:
\begin{enumerate}[label={(M\arabic*)}]
\item This move replaces a subgallery $(C_k)$ by $(C_k,\hat{C},C_k)$, where $C_k,\hat{C}$ are adjacent. It follows readily from Definition \ref{defn:tAword} that the $(t,\mathcal{A})$-word of $(C_k,\hat{C},C_k)$ is of the form $((i,g),(i,g^{-1}))$, and that there is a chamber $\hat{C}'$ such that the $(t',\mathcal{A}')$-word of $(C'_k,\hat{C}',C'_k)$ is also $((i,g),(i,g^{-1}))$. Hence replacing $(C'_k)$ by $(C'_k,\hat{C}',C'_k)$ in $G'$ is an \ref{M1} move which ensures that the $(t,\mathcal{A})$-word of $G$ remains the same as the $(t',\mathcal{A}')$-word of $G'$.
\item This move replaces a subgallery $(C_k,C_{k+1})$ by $(C_k,\hat{C},C_{k+1})$, where $C_k,\hat{C},C_{k+1}$ are pairwise $j$-adjacent for some $j\in I$. In this case there exists a unique rank-1 vertex $v\in C_k\cap \hat{C}\cap C_{k+1}$, and $t_\Gamma(v)=\{j\}$. Observe that $C_k,\hat{C},C_{k+1}\in\mathcal{C}(v)\subset\mathcal{C}([v])$. If $t(v)=\{i\}$ then $s_{k+1}=(i,g)$ for some $g\in G_i$ with
\begin{equation}\label{AgCk} \mathcal{A}_{[v]}(g)(C_k)=C_{k+1}. \end{equation}
Similarly, the $(t,\mathcal{A})$-word of $(C_k,\hat{C},C_{k+1})$ takes the form $((i,g_1),(i,g_2))$ for $g_1,g_2\in G_i$ with
\begin{equation}\label{Ag1Ck} \mathcal{A}_{[v]}(g_1)(C_k)=\hat{C}\quad\text{and}\quad\mathcal{A}_{[v]}(g_2)(\hat{C})=C_{k+1}. \end{equation}
Let $v'\in C'_k$ be the vertex with $t'(v')=\{i\}$ and let $\hat{C}':=\mathcal{A}'_{[v']}(g_1)(C'_k)$. The action of $G_i$ on $\mathcal{C}(v)$ is simply transitive, so it follows from (\ref{AgCk}) and (\ref{Ag1Ck}) that $g=g_2g_1$, hence
\begin{align*} \mathcal{A}'_{[v']}(g_2)(\hat{C}')=\mathcal{A}'_{[v']}(g)(C'_k)=C'_{k+1}. \end{align*}
Thus $(C'_k,\hat{C}',C'_{k+1})$ has $(t',\mathcal{A}')$-word $((i,g_1),(i,g_2))$, and replacing $(C'_k,C'_{k+1})$ by $(C'_k,\hat{C}',C'_{k+1})$ in $G'$ is an \ref{M2} move which ensures that the $(t,\mathcal{A})$-word of $G$ remains the same as the $(t',\mathcal{A}')$-word of $G'$.
\item This move replaces a subgallery $(C_k,C_{k+1},C_{k+2})$ by $(C_k,\hat{C},C_{k+2})$, where $C_k,C_{k+1}$ and $\hat{C},C_{k+2}$ are $j_1$-adjacent, and $C_k,\hat{C}$ and $C_{k+1},C_{k+2}$ are $j_2$-adjacent, for adjacent $j_1,j_2\in I$. Let $v_1:=\wedge(C_k, C_{k+1})$ and $v_2:=\wedge(C_{k+1}, C_{k+2})$. So $t_\Gamma(v_1)=\{j_1\}$ and $t_\Gamma(v_2)=\{j_2\}$. Let $t(v_1)=\{i_1\}$ and $t(v_2)=\{i_2\}$. Lemma \ref{lem:ijadjacent} implies that $i_1,i_2$ are adjacent. The $(t,\mathcal{A})$-word for $(C_k,C_{k+1},C_{k+2})$ takes the form $((i_1,g_1),(i_2,g_2))$ for some $g_1\in G_{i_1}$ and $g_2\in G_{i_2}$. Now consider the corresponding subgallery $(C'_k,C'_{k+1},C'_{k+2})$ in $G'$, which has $((i_1,g_1),(i_2,g_2))$ as its $(t',\mathcal{A}')$-word. This means that $t'(v'_1)=\{i_1\}$ and $t'(v'_2)=\{i_2\}$ for $v'_1:=\wedge(C'_k, C'_{k+1})$ and $v'_2:=\wedge(C'_{k+1}, C'_{k+2})$. Let $t_\Gamma(v'_1)=\{j'_1\}$ and $t_\Gamma(v'_2)=\{j'_2\}$. Lemma \ref{lem:ijadjacent} implies that $j'_1,j'_2$ are adjacent. By the product structure of $\{j'_1,j'_2\}$-chamber-residues (Lemma \ref{lem:product}), we deduce that there is a chamber $\hat{C}'$ that is $j'_2$-adjacent to $C'_k$ and $j'_1$-adjacent to $C'_{k+2}$. So the setup of the chambers $C'_k,C'_{k+1},C'_{k+2},\hat{C}'$ mimics the setup of the chambers $C_k,C_{k+1},C_{k+2},\hat{C}$, but with respect to $j'_1,j'_2$ instead of $j_1,j_2$.
Recall from Lemma \ref{lem:cC[v]} that we have a product decomposition
\begin{equation}\label{product3} \mathcal{C}([v_1])\cong \mathcal{C}(\{j_1\},C_k)\times\mathcal{C}(j_1\dn^\perp,C_k). \end{equation}
If we start at a chamber in $\mathcal{C}([v_1])$ then moving to a $j_1$-adjacent chamber only changes the projection to the first factor of (\ref{product3}) while moving to a $j_2$-adjacent chamber only changes the projection to the second factor (as $j_2\in j_1\dn^\perp$). So the chambers $C_k,C_{k+1},C_{k+2},\hat{C}$ are all in $\mathcal{C}([v_1])$, the chambers $\hat{C},C_{k+2}$ have the same projection to the first factor as $C_k,C_{k+1}$ respectively, and $\hat{C}$ has the same projection to the second factor as $C_{k+2}$. It follows that $\hat{C},C_{k+2}\in\mathcal{C}(u_1)$ for some $u_1\in[v_1]$. Note that $\operatorname{rk}(u_1)=1$, so $u_1=\wedge(\hat{C},C_{k+2})$ by Lemma \ref{lem:min}, and $t(u_1)=t(v_1)=\{i_1\}$ by Lemma \ref{lem:typinglevel}. The $(t,\mathcal{A})$-word for $(C_k,C_{k+1})$ is $((i_1,g_1))$, so $\mathcal{A}_{[v_1]}(g_1)(C_k)=C_{k+1}$. The action of $G_{i_1}$ on $\mathcal{C}([v_1])$ defined by $\mathcal{A}_{[v_1]}$ preserves the product decomposition (\ref{product3}) and acts trivially on the second factor, so $\mathcal{A}_{[v_1]}(g_1)(\hat{C})=C_{k+2}$. Therefore $(\hat{C},C_{k+2})$ also has $(t,\mathcal{A})$-word $((i_1,g_1))$. Running the same argument with the roles of $i_1$ and $i_2$ reversed, we deduce that $(C_k,\hat{C})$ has $(t,\mathcal{A})$-word $((i_2,g_2))$, so $(C_k,\hat{C},C_{k+2})$ has $(t,\mathcal{A})$-word $((i_2,g_2),(i_1,g_1))$. Running the same argument for the chambers $C'_k,C'_{k+1},C'_{k+2},\hat{C}'$, we discover that the $(t',\mathcal{A}')$-word for $(C'_k,\hat{C}',C'_{k+2})$ is also $((i_2,g_2),(i_1,g_1))$. So replacing the subgallery $(C'_k,C'_{k+1},C'_{k+2})$ with $(C'_k,\hat{C}',C'_{k+2})$ in $G'$ is an \ref{M3} move which ensures that the $(t,\mathcal{A})$-word of $G$ remains the same as the $(t',\mathcal{A}')$-word of $G'$.\qedhere
\end{enumerate}
\end{proof}
\begin{lem}\label{lem:fzips} Let $(t,\mathcal{A})$ and $(t',\mathcal{A}')$ be typed atlases, and let $f\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ with $f_*(t,\mathcal{A})=(t',\mathcal{A}')$. Let $G=(C_0,C_1,...,C_n)$ and $G'=(C'_0,C'_1,...,C'_n)$ be galleries, and suppose that the $(t,\mathcal{A})$-word of $G$ is the same as the $(t',\mathcal{A}')$-word of $G'$. If $fC_0=C'_0$ then $fC_k=C'_k$ for all $0\leq k\leq n$. \end{lem}
\begin{proof} Let $(s_1,...,s_n)$ be the $(t,\mathcal{A})$-word of $G$. We prove the lemma by induction on $k$. The case $k=0$ is immediate, so suppose $k\geq 1$, and assume by the inductive hypothesis that $fC_{k-1}=C'_{k-1}$. Let $s_k=(i,g)$, $v:=\wedge(C_{k-1}, C_k)$ and $v':=\wedge(C'_{k-1}, C'_k)$. Then $t(v)=t'(v')=\{i\}$ and
\begin{equation}\label{AA'} \mathcal{A}_{[v]}(g)(C_{k-1})=C_k\quad\text{and}\quad\mathcal{A}'_{[v']}(g)(C'_{k-1})=C'_k. \end{equation}
We know that $\{i\}=t(v)=t'(fv)$ since $f_*(t,\mathcal{A})=(t',\mathcal{A}')$, and $fv\in C'_{k-1}$ because $fC_{k-1}=C'_{k-1}$, so we deduce that $fv=v'$. Hence it follows from $f_*(t,\mathcal{A})=(t',\mathcal{A}')$ and (\ref{AA'}) that
\begin{align*} fC_k&=f\mathcal{A}_{[v]}(g)(C_{k-1})\\ &=\mathcal{A}'_{[fv]}(g)(fC_{k-1})\\ &=\mathcal{A}'_{[v']}(g)(C'_{k-1})\\ &=C'_k.\qedhere \end{align*}
\end{proof}
The final ingredient in the proof of Theorem \ref{thm:Delta} is the following proposition. This proposition, along with its proof, is analogous to \cite[Proposition 6.5]{Haglund06}.
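\begin{remk}
Before stating it, we record a brief illustration of Definition \ref{defn:tAword} for the $\Gamma$-invariant typed atlas $(t_\Gamma,\mathcal{A}_\Gamma)$ of Lemma \ref{lem:Gammatypedatlas}; this is only a sanity check and is not needed in what follows. Consider a gallery of the form
\begin{equation*}
(C_{\gamma},\,C_{\gamma h_1},\,C_{\gamma h_1h_2},\,\ldots,\,C_{\gamma h_1\cdots h_n}),
\end{equation*}
where $h_k\in G_{i_k}-\{1\}$ and consecutive chambers are $i_k$-adjacent (cf. Lemma \ref{lem:CJC}). Since $\mathcal{A}_{\Gamma,[v]}(h_k^{-1})(C_{\gamma h_1\cdots h_{k-1}})=C_{\gamma h_1\cdots h_k}$, the letter associated to the $k$-th pair of consecutive chambers is $(i_k,h_k^{-1})$, so the $(t_\Gamma,\mathcal{A}_\Gamma)$-word of this gallery is $((i_1,h_1^{-1}),\ldots,(i_n,h_n^{-1}))$. Conversely, Lemma \ref{lem:wordtogallery} applied to this word and the origin $C_\gamma$ recovers the same gallery.
\end{remk}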
\begin{prop}\label{prop:typedatlas} Let $(t,\mathcal{A})$ and $(t',\mathcal{A}')$ be typed atlases. Let $C$ and $C'$ be chambers in $\Delta$ and let $f:C\to C'$ be the unique isomorphism such that $t'(fv)=t(v)$ for all vertices $v\in C$. Then $f$ admits a unique extension $\bar{f}\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ such that $\bar{f}_*(t,\mathcal{A})=(t',\mathcal{A}')$. \end{prop}
\begin{proof}
Given a gallery $G=(C_0,C_1,...,C_n)$ with origin $C_0=C$, let $G'=(C'_0,C'_1,...,C'_n)$ be the gallery with origin $C'_0=C'$ whose $(t',\mathcal{A}')$-word is the same as the $(t,\mathcal{A})$-word of $G$ -- such a gallery exists and is unique by Lemma \ref{lem:wordtogallery}. For each $k$, the $(t,\mathcal{A})$-word of the subgallery $(C_0,C_1,...,C_k)$ is equal to the $(t',\mathcal{A}')$-word of the subgallery $(C'_0,C'_1,...,C'_k)$, so it follows from Lemma \ref{lem:gallerytransfer} that the chamber $C'_k$ depends only on the chamber $C_k$, not on the choice of gallery $G$ containing $C_k$. Every chamber in $\Delta$ is joined to $C$ by some gallery, hence we get a well-defined map $\varepsilon:\mathcal{C}(\Delta)\to\mathcal{C}(\Delta)$ by sending $C_k\mapsto C'_k$ for each pair of galleries $G,G'$ as above. By Lemma \ref{lem:fttot'}, for each chamber $C_k$ there is a unique cubical isomorphism $h_k:C_k\to C'_k$ such that
\begin{equation}\label{thk} t(v)=t'(h_k v) \end{equation}
for all vertices $v\in C_k$. Note that $h_0=f$. We want these maps to fit together to define a cubical map $\bar{f}:\Delta\to\Delta$, so we must show that these maps agree on intersections of chambers. Let $v$ be a vertex in the intersection of chambers $\hat{C}_1$ and $\hat{C}_2$. Let $G$ be a gallery as above with $\hat{C}_1=C_{m_1}$ and $\hat{C}_2=C_{m_2}$ for some $0\leq m_1\leq m_2\leq n$. Our task is to show that $h_{m_1}v=h_{m_2} v$. By Lemma \ref{lem:cCv}, we may choose $G$ so that $v\in C_k$ for $m_1\leq k\leq m_2$. Therefore, it is enough to show that $h_{k-1}v=h_kv$ for each $m_1< k\leq m_2$. Let $u:=\wedge(C_{k-1}, C_k)$ and $u':=\wedge(C'_{k-1}, C'_k)$, and let $s_k=(i,g)$. So $t(u)=\{i\}=t'(u')$, and $h_{k-1}u=u'=h_ku$ by (\ref{thk}). By Lemma \ref{lem:min}\ref{item:wCcapC'} we know that $u\leq v$, and $h_{k-1},h_k$ both preserve poset structure by Lemma \ref{lem:fttot'} so $u'\leq h_{k-1}v,h_kv\in C'_{k-1}\cap C'_k$. As $t'(h_{k-1}v)=t(v)=t'(h_kv)$, we conclude that $h_{k-1}v=h_kv$. We now have a cubical map $\bar{f}:\Delta\to\Delta$, and it preserves rank because each of the maps $h_k$ do (Lemma \ref{lem:fttot'}). We note that the inverse maps $h^{-1}_k:C'_k\to C_k$ for pairs of galleries $G,G'$ as above define an inverse to $\bar{f}$ (which is well-defined by the same argument we used for $\bar{f}$), so $\bar{f}\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$. Furthermore, $\bar{f}_*(t,\mathcal{A})=(t',\mathcal{A}')$ follows from the way we constructed $\bar{f}$ with the pairs of galleries $G,G'$. It remains to prove that such $\bar{f}$ is unique. Indeed if $\bar{f}\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ is an extension of $f$ with $\bar{f}_*(t,\mathcal{A})=(t',\mathcal{A}')$, and if $G,G'$ are galleries as above, then it follows from Lemma \ref{lem:fzips} that $\bar{f}C_k=C'_k$ for all $0\leq k\leq n$. Hence the action of $\bar{f}$ on $\mathcal{C}(\Delta)$ induces the map $\varepsilon:\mathcal{C}(\Delta)\to\mathcal{C}(\Delta)$ defined above.
Finally, the restriction of $\bar{f}$ to a chamber $C_k$ (again for some choice of $G$ as above) must agree with $h_k$ because of (\ref{thk}) and the fact that $\bar{f}_*(t,\mathcal{A})=(t',\mathcal{A}')$, so $\bar{f}$ is uniquely determined. \end{proof} We can now prove Theorem \ref{thm:Delta}. \begin{proof}[Proof of Theorem \ref{thm:Delta}] The group $\Gamma$ acts transitively on the chambers of $\Delta$ and preserves the typed atlas $(t_\Gamma,\mathcal{A}_\Gamma)$ from Lemma \ref{lem:Gammatypedatlas}, so it follows from Proposition \ref{prop:typedatlas} that the $\operatorname{Aut}_{\operatorname{rk}}(\Delta)$-stabilizer of $(t_\Gamma,\mathcal{A}_\Gamma)$ is $\Gamma$. By Lemma \ref{lem:Latypedatlas} there is a typed atlas $(t,\mathcal{A})$ preserved by a finite-index subgroup $\hat{\Lambda}<\Lambda$. Applying Proposition \ref{prop:typedatlas} again produces an automorphism $g\in\operatorname{Aut}_{\operatorname{rk}}(\Delta)$ with $g_*(t,\mathcal{A})=(t_\Gamma,\mathcal{A}_\Gamma)$, so $g\hat{\Lambda}g^{-1}$ preserves the typed atlas $(t_\Gamma,\mathcal{A}_\Gamma)$. It follows that $g\hat{\Lambda}g^{-1}$ is a subgroup of $\Gamma$, and it has finite index since it acts cocompactly on $\Delta$. Thus $\Lambda$ and $\Gamma$ are weakly commensurable in $\operatorname{Aut}(\Delta)$. \end{proof} \section{Quasi-isometric rigidity}\label{sec:QI} In this section we prove the following theorem about the quasi-isometric rigidity of the graph product $\Gamma$. \theoremstyle{plain} \newtheorem*{thm:QI}{Theorem \ref{thm:QI}} \begin{thm:QI} Let $\Gamma=\Gamma(\mathcal{G},(G_i)_{i\in I})$ be a graph product. Suppose that $\mathcal{G}$ is a finite generalized $m$-gon, with $m\geq3$, which is bipartite with respect to the partition $I=I_1\sqcup I_2$. Suppose that $d_1,d_2,p_1,p_2\geq2$ are integers such that every $i\in I_k$ ($k=1,2$) has degree $d_k$ and $|G_i|=p_k$. Then in each of the following cases \begin{enumerate}[label=(\roman*)] \item $2< d_1,d_2,p_1,p_2$, \item $p_1=p_2=2<d_1,d_2$, \item $d_1=d_2=2<p_1,p_2$, \end{enumerate} any finitely generated group quasi-isometric to $\Gamma$ is abstractly commensurable with $\Gamma$. \end{thm:QI} The idea is to show that the associated right-angled building $\Delta=\Delta(\mathcal{G},(G_i)_{i\in I})$ has the structure of a Fuchsian building in these cases (Definition \ref{defn:Fuchsian}), and then apply the following quasi-isometric rigidity theorem of Xie \cite[Corollary 1.4]{Xie06} (which built on work of Bourdon--Pajot \cite{BourdonPajot00}). \begin{thm} Let $X$ be a locally finite Fuchsian building and suppose that $\operatorname{Aut}(X)$ contains a uniform lattice. Then any finitely generated group quasi-isometric to $X$ admits a proper and cocompact action on $X$. \end{thm} As a consequence, any group $\Gamma'$ quasi-isometric to $\Gamma$ acts properly and cocompactly on $\Delta$. Since $\Delta$ is hyperbolic (as $\mathcal{G}$ has no induced 4-cycles), such $\Gamma'$ would have a finite-index subgroup which is a special uniform lattice in $\operatorname{Aut}(\Delta)$ \cite{Agol13}, so $\Gamma'$ would be abstractly commensurable with $\Gamma$ by Theorem \ref{thm:Delta}. (In fact, for cases \ref{item:ii} and \ref{item:iii} we could use \cite[Theorem 1.7 and Theorem 1.3]{Haglund06} respectively in combination with \cite{Agol13} instead of Theorem \ref{thm:Delta} -- noting that the group of type preserving automorphisms has finite index in $\operatorname{Aut}(\Delta)$ in case \ref{item:iii}.) 
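For concreteness, and only as an illustration (these examples are ours and are not used in the proofs), we note some situations in which the hypotheses of Theorem \ref{thm:QI} are easily verified. For case \ref{item:i} one can take $\mathcal{G}$ to be the incidence graph of the Fano plane (the Heawood graph), a finite thick generalized $3$-gon in which every vertex has degree $3$, with all the groups $G_i$ of order $3$, so that $d_1=d_2=p_1=p_2=3$. For case \ref{item:iii} one can take $\mathcal{G}$ to be a cycle of length $2m$ with $m\geq3$ (a non-thick generalized $m$-gon in which every vertex has degree $2$), with all the groups $G_i$ of some fixed order $p\geq3$. Case \ref{item:ii} corresponds to right-angled Coxeter groups whose defining graph is a thick generalized $m$-gon, as noted below.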
In the rest of this section we show how the conditions of Theorem \ref{thm:QI} endow $\Delta$ with the structure of a Fuchsian building. We follow a similar strategy to \cite{BoundsXie20}, who did exactly this for case \ref{item:ii} above (note that in this case $\Gamma$ is a right-angled Coxeter group and $\Delta$ the associated Davis complex). We remark that there are some other cases not covered by Theorem \ref{thm:QI} where $\Delta$ still has the structure of a Fuchsian building (which yield more examples of quasi-isometrically rigid groups); Theorem \ref{thm:QI} just deals with three of the simpler cases.
First we discuss generalized $m$-gons. As stated in the introduction, a \emph{generalized $m$-gon} is a connected, bipartite, simplicial graph with diameter $m$ and girth $2m$ ($m\geq2$). Equivalently, it is a connected, bipartite, simplicial graph with the following two properties \cite[Theorem 1.1]{VanMaldeghem02}:
\begin{enumerate}
\item Given any pair of edges there is a circuit of length $2m$ containing both.
\item For two circuits $A_1,A_2$ of length $2m$ with non-empty intersection, there is an isomorphism $f:A_1\to A_2$ that pointwise fixes $A_1\cap A_2$.
\end{enumerate}
If every vertex has degree at least 3 then we say that the generalized polygon is \emph{thick}. The above characterization of generalized polygons implies that a thick generalized polygon is simply a 1-dimensional spherical building. The classification of generalized polygons reduces to the thick case because every non-thick generalized $m$-gon is either obtained from a thick generalized $m/k$-gon by subdividing each edge into $k$ edges, or it consists of two vertices joined by a collection of (at least two) embedded paths of length $m$ such that any pair of these paths intersect only at their endpoints \cite[Theorem 3.1]{VanMaldeghem02}. Finite thick generalized $m$-gons exist only for $m\in\{2,3,4,6,8\}$ \cite{FeitHigman64}, but there are infinitely many for each $m$. For example, a generalized 2-gon is just a complete bipartite graph (with at least two vertices in each set) and a thick generalized 3-gon is precisely the incidence graph of an abstract projective plane. For any thick generalized polygon with vertex sets $I_1,I_2$, there are integers $d_1,d_2\geq3$ such that every vertex in $I_k$ has degree $d_k$ \cite[p29]{Ronan89}.
\begin{defn}(Fuchsian buildings)\\\label{defn:Fuchsian} Consider a compact convex polygon $R$ in the hyperbolic plane $\mathbb{H}^2$ whose angles are of the form $\pi/m$ for $2\leq m\in\mathbb{N}$. Let $W$ be the Coxeter group generated by the reflections about the edges of $R$. It is well known that the images of $R$ under $W$ form a tessellation of $\mathbb{H}^2$. If we label the vertices of $R$ cyclically by $1,2,...,k$, then we get a $W$-invariant labeling of the tessellation of $\mathbb{H}^2$. Let $A_R$ denote the obtained labeled 2-complex. Let $X$ be a connected 2-complex with vertices labeled by $1,2,...,k$, such that each 2-cell (called a \emph{chamber}) can be identified with $R$ via a label preserving isometry. Suppose that $X$ has a family of subcomplexes (called \emph{apartments}), each of which is isomorphic to $A_R$ as labeled 2-complexes. Then $X$ is a \emph{Fuchsian building} if it satisfies the following three properties:
\begin{enumerate}
\item\label{item:apart} Given any two chambers there is an apartment containing both.
\item\label{item:mapapart} For any two apartments $A_1, A_2$ that share a chamber there is an isomorphism of labeled 2-complexes $f:A_1\to A_2$ that pointwise fixes $A_1\cap A_2$. \item\label{item:qi} There are integers $q_i\geq2$, $i=1,2,...,k$, such that each edge of $X$ with endpoints labeled by $i,i+1$ (mod $k$) is contained in exactly $q_i+1$ chambers. \end{enumerate} \end{defn} Properties \ref{item:apart} and \ref{item:mapapart} above can be replaced with local properties by using the following theorem, which is a special case of \cite[Corollary 2.4]{GaboriauPaulin01}. \begin{thm}\label{thm:localF} Let $X$ be a simply connected 2-complex with all the assumptions from Definition \ref{defn:Fuchsian} except properties \ref{item:apart} and \ref{item:mapapart}. Then $X$ is a Fuchsian building (hence satisfies \ref{item:apart} and \ref{item:mapapart}) if the following properties hold: \begin{enumerate}[label=(\alph*)] \item\label{item:CAT1} The link of each vertex in $X$ is CAT(1). \item\label{item:embedcircle} Through any two points in the link of a vertex in $X$ there passes an isometrically embedded circle. \end{enumerate} \end{thm} As a consequence of the earlier discussion about generalized polygons, if a vertex $x\in X$ subtends angles $\pi/m$ in each of its incident 2-cells, then the link of $x$ satisfies properties \ref{item:CAT1} and \ref{item:embedcircle} above if and only if it is a generalized $m$-gon (with one edge in the link of $x$ for each 2-cell incident to $x$). We are now ready to show that $\Delta$ has the structure of a Fuchsian building in the cases of Theorem \ref{thm:QI} (in fact now we no longer require $\mathcal{G}$ to be finite). \begin{prop}\label{prop:Fuchsian} Let $\mathcal{G}$ be a generalized $m$-gon, with $m\geq3$, which is bipartite with respect to the partition $I=I_1\sqcup I_2$, and let $(G_i)_{i\in I}$ be finite groups. Suppose that $d_1,d_2,p_1,p_2\geq2$ are integers such that every $i\in I_k$ ($k=1,2$) has degree $d_k$ and $|G_i|=p_k$. Then in each of the following cases \begin{enumerate}[label=(\roman*)] \item\label{item:i'} $2< d_1,d_2,p_1,p_2$, \item\label{item:ii'} $p_1=p_2=2<d_1,d_2$, \item\label{item:iii'} $d_1=d_2=2<p_1,p_2$, \end{enumerate} the right-angled building $\Delta=\Delta(\mathcal{G},(G_i)_{i\in I})$ has the structure of a Fuchsian building. \end{prop} \begin{proof} To exhibit a Fuchsian building structure on $\Delta$ we must identify the chambers (in the sense of Definition \ref{defn:Fuchsian}). To avoid ambiguity, we will refer to chambers in the sense of Definition \ref{defn:Fuchsian} as \emph{Fuchsian chambers} and refer to chambers in the sense of Definition \ref{defn:building} as \emph{cubical chambers}. The general form of (candidate) Fuchsian chambers in each of cases \ref{item:i}--\ref{item:iii} is depicted in Figure \ref{fig:Fchambers}, along with vertex labels corresponding to the typing map $t:\Delta^0\to\bar{N}$. Each Fuchsian chamber is given a hyperbolic metric with angles as indicated and side lengths as symmetric as possible. One can easily verify in each case that $\Delta$ has the structure of a 2-complex in which the 2-cells are the Fuchsian chambers (using for instance Lemmas \ref{lem:intchamres} and \ref{lem:min}). Next we must consistently label the vertices of the Fuchsian chambers cyclically by numbers $1,2,...,k$ (where each Fuchsian chamber is a $k$-gon). This is easy in cases \ref{item:i} and \ref{item:iii} because the vertices of the Fuchsian chambers have distinct labels coming from the typing map. 
For case \ref{item:ii} we note that the four vertices of the Fuchsian chamber are centers of cubical chambers, and each cubical chamber is of the form $C_{\gamma}$ for some $\gamma\in\Gamma$, so we can label the vertices according to the homomorphism $$\eta:\Gamma\to \mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z},$$ where $G_i<\Gamma$ maps onto the $k$-th factor ($k=1,2$) if $i\in I_k$ (note that all the $G_i$ have order 2). Let us analyze how this labeling might look in a single Fuchsian chamber. If two vertices of the Fuchsian chamber are in cubical chambers $C_{\gamma_1},C_{\gamma_2}$, and if the midpoint of the side joining the vertices has type $\{i\}$ with $i\in I_1$, then $\gamma_2^{-1}\gamma_1\in G_i$, so $\eta(\gamma_1),\eta(\gamma_2)$ differ only in their first coordinates, whereas a midpoint of type $\{j\}$ with $j\in I_2$ implies that $\eta(\gamma_1),\eta(\gamma_2)$ differ only in their second coordinates. A possible labeling by $\eta$ is shown in blue in Figure \ref{fig:Fchambers}, and the above discussion implies that the other three possible labelings are obtained by reflecting the picture in the $x$- and/or $y$-axes. In all cases we get cyclic labelings of the Fuchsian chambers by composing with the map $(0,0),(1,0),(1,1),(0,1)\mapsto 1,2,3,4$.
In Figure \ref{fig:Fchambers}, each side of a Fuchsian chamber is labeled (in red) by the number of Fuchsian chambers that contain it. These numbers are all greater than 2, so this verifies property \ref{item:qi} in Definition \ref{defn:Fuchsian}.
Finally, for properties \ref{item:apart} and \ref{item:mapapart} in Definition \ref{defn:Fuchsian} we use Theorem \ref{thm:localF}. This involves examining the links of the vertices of the Fuchsian chambers. The vertices with angle $\pi/m$ are all centers of cubical chambers, so have links isomorphic to $\mathcal{G}$. Meanwhile, it is straightforward to check that the vertices with right angles all have links isomorphic to complete bipartite graphs, so these are generalized 2-gons. It follows that $\Delta$ satisfies properties \ref{item:CAT1} and \ref{item:embedcircle} from Theorem \ref{thm:localF}.
\end{proof}
\begin{figure}
\caption{The general forms of Fuchsian chambers in each of cases \ref{item:i}--\ref{item:iii} from Proposition \ref{prop:Fuchsian}. The vertices are labeled according to the typing map $t:\Delta^0\to\bar{N}$, with $i,i_l\in I_1$ and $j,j_l\in I_2$. We assume that $\mathcal{G}$ is a hexagon in case \ref{item:iii} (in general it will be a $2m$-gon). In case \ref{item:ii} the vertex of the Fuchsian chamber in the cubical chamber $C_{\gamma}$ is labeled (in blue) by $\eta(\gamma)$. Each side of a Fuchsian chamber is labeled (in red) by the number of Fuchsian chambers that contain it. Each Fuchsian chamber is given a hyperbolic metric with angles as indicated. }
\label{fig:Fchambers}
\end{figure}
\end{document}
\begin{document}
\thispagestyle{empty}
\title{Around Vogt's theorem}
\author{Alexey~Kurnosenko}
\maketitle{}
\centerline{\hbox{\it Institute for High Energy Physics, Protvino, Russia,} 142281}
\begin{abstract}
Vogt's theorem, concerning boundary angles of a convex arc with monotonic curvature ({\em spiral arc}), is taken as a starting point to establish basic properties of spirals. The theorem is expanded by removing requirements of convexity and curvature continuity; the cases of inflection and multiple windings are considered. Positional restrictions for a spiral arc with two given curvature elements at the endpoints are established, as well as the necessary and sufficient conditions for the existence of such spiral.
\\~~\\
{\bf Keywords:}\, Vogt's theorem, spiral, inversive invariant, monotonic curvature, lense, biarc, bilense.\\ ~~\\
{\bf 2000 MSC:} 53A04 \\~~\\
\end{abstract}
\noindent
{E-mails: \verb"[email protected]"}\\
{\hphantom{E-mails:} \verb"[email protected]"}
\thispagestyle{empty}
\pagenumbering{arabic}
\newcommand{\tmp}{}
\newcommand{\Mark}[1]{${}^{#1}$\marginpar{{\small#1}}}
\newcommand{\Topic}[1]{\section{#1}}
\makeatletter
\def\@begintheorem#1#2{\trivlist \item[\hskip \labelsep{\bf #1\ #2.}]\it}
\def\@opargbegintheorem#1#2#3{\trivlist \item[\hskip \labelsep{\bf #1\ #2\ (#3).}]\it}
\makeatother
\newcommand{\Equa}[2]{\begin{equation}#2\label{#1}\end{equation}}
\newcommand{\equa}[1]{\[ #1 \]}
\newcommand{\refeq}[1]{{\rm(\ref{#1})}}
\newcommand{\refeqeq}[2]{{\rm(\ref{#1},\ref{#2})}}
\newcommand{\D}[1]{#1^{\prime}}
\newcommand{\DD}[1]{#1^{\prime\prime}}
\newcommand{\Int}[4]{\displaystyle\int\limits_{#1}^{#2}{#3}\,{\mathrm{d}}#4}
\newcommand{\Dfrac}[2]{\Frac{{\rm d}#1}{{\rm d}#2}}
\newcommand{\Pd}[2]{\Frac{\partial#1}{\partial#2}}
\newcommand{\ieq}{\,{=}\,}
\newcommand{\ineq}{\,{\not=}\,}
\newcommand{\ilt}{\,{<}\,}
\newcommand{\igt}{\,{>}\,}
\newcommand{\ile}{\,{\leqslant}\,}
\newcommand{\ige}{\,{\geqslant}\,}
\newcommand{\eqref}[1]{\stackrel{\refeq{#1}}{=}}
\newcommand{\Arc}[1]{\displaystyle{\buildrel\,\,\frown\over{#1}}}
\newcommand{\Biarcab}[3]{{\cal B}(#1;#2,#3)}
\newcommand{\Aarc}[1]{{\cal A}(#1)}
\newcommand{\Lense}[1]{{\mathbf L}(#1)}
\newcommand{\Bilense}[1]{{\bf B}(#1)}
\newcommand{\GT}[1]{$#1\igt0$}
\newcommand{\GE}[1]{$#1\ige0$}
\newcommand{\LT}[1]{$#1\ilt0$}
\newcommand{\LE}[1]{$#1\ile0$}
\newcommand{\EQ}[1]{$#1\ieq0$}
\newcommand{\NE}[1]{$#1\ineq0$}
\newcommand{\Frac}[2]{\displaystyle\frac{#1}{#2}}
\newcommand{\Vec}[1]{\stackrel{\longrightarrow}{{#1}}}
\newcommand{\gr}[1]{#1\ifmmode^{\circ}\else$^{\circ}$\fi}
\newcommand{\Mat}[1]{\mathop{\rm Mat}\nolimits(#1)}
\newcommand{\Kl}[1]{{\ifmmode{\cal K}_{#1}\else${\cal K}_{#1}$\fi}}
\newcommand{\DKl}[1]{{\ifmmode{\cal K}^{\prime}_{#1} \else${\cal K}^{\prime}_{#1}$\fi}} \newcommand{\Kr}[1]{K(#1)} \newcommand{\Krn}[1]{\Kr{x_{#1},y_{#1},\tau_{#1},k_{#1}}} \newcommand{\Figref}[1]{{\rm\ref{F#1}}} \newcommand{\Reffigs}[1]{Figs.$\:$\Figref{#1}} \newcommand{\Reffig}[1]{Fig.$\:$\Figref{#1}} \newlength{\tmplength} \newcommand{\Infig}[3]{ \includegraphics[width=#1]{#2.eps}\caption{#3}\label{F#2}} \newcommand{\Lfig}[2]{ \begin{figure}\label{F#2} \end{figure} } \newcommand{\Lwfig}[3]{ \begin{figure}\label{F#3} \end{figure} } \newcommand{\Bfig}[3]{ \parbox[b]{#1}{\Infig{#1}{#2}{#3}} } \newcommand{\Ffig}[3]{ } \newtheorem{thm}{Theorem}[section] \newcommand{\qed}{\ifmmode{\qquad\mbox{\underline{q.e.d.}}} \else{{}~ \underline{q.e.d.}\\}\fi} \newtheorem{lem}[thm]{Lemma} \newtheorem{defn}[thm]{Definition} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \defFig.{Fig.} \makeatletter \long\def\@makecaption#1#2{} \newdimen\localfigtabsize \localfigtabsize=0pt \long\def\@makefigcaption#1#2{ \ifdim \localfigtabsize > 0pt \else \localfigtabsize=\hsize \fi \vskip 10pt \setbox\@tempboxa\hbox{\small#1.\quad#2} \ifdim \wd\@tempboxa > \localfigtabsize \settowidth{\@tempdima}{\small#1.\quad} \addtolength{\@tempdima}{-\localfigtabsize} {\small #1.\quad\parbox[t]{-\@tempdima}{#2}\par~\par} \else \hbox to\hsize{\hfil\box\@tempboxa\hfil} \fi } \let\@makecaption\@makefigcaption \makeatother \makeatletter \newdimen\@bls \@bls=.7\baselineskip \newenvironment{pf} {\par\addvspace{\@bls plus 0.5\@bls minus 0.1\@bls}\noindent {\bfProof.}\enspace\ignorespaces} {\par\addvspace{\@bls plus 0.5\@bls minus 0.1\@bls}} \defProof.{Proof.} \def\section{\@startsection{section}{1}{\z@}{1.5\@bls plus .4\@bls minus .1\@bls}{\@bls}{\normalsize\bf}} \makeatother \Topic{Introduction} Vogt's theorem was published in 1914 (\cite{Vogt}, Satz~12). It concerns convex arcs of planar curves with continuous monotonic curvature of constant sign. The later proofs~\cite{Tohoku1,Tohoku2,Ostrowski} did not extend the class of invoked curves. Guggenheimer (\cite{Guggen}, p.{\,}48) applies the term {\em spiral arc} to such curves, and formulates Vogt's theorem as follows: {\em ``Let $A$ and $B$ be the endpoints of a spiral arc, the curvature nondecreasing from $A$ to $B$. The angle~$\beta$ of the tangent to the arc at $B$ with the chord~$AB$ is not less than the angle~$\alpha$ of the tangent at~$A$ with~$AB$. $\alpha\,{=}\,\beta$ only if the curvature is constant''.} Below $\alpha$ and $\beta$ denote algebraic values of the boundary angles with respect to the positive direction of $X$-axis, same as the direction of the chord~$\Vec{AB}$. Signed curvatures at the endpoints~$A$ and~$B$ are denoted as~$k_1$ and~$k_2$. Vogt assumes positive values for angles and curvatures, and the theorem states that $|\alpha|\,{>}\,|\beta|$ for $|k_1|\,{>}\, |k_2|$, and vice versa. 
Five cases, depicted in \Reffig{figure01} as arcs $AB_{i}$, can be detailed as
\equa{
\begin{array}{lllcl}
{AB_1:}\quad& \hphantom{0 > {}}k_1 < k_2 < 0,\quad& { \hphantom{{-}}{}\alpha = |\alpha| > |\beta|= {-}\beta} & \quad\Longrightarrow\quad& \alpha{+}\beta > 0;\\[2pt]
{AB_2:}& 0 > k_1 > k_2, & { \hphantom{{-}}{}\alpha = |\alpha| < |\beta| = {-}\beta} & \quad\Longrightarrow\quad& \alpha{+}\beta < 0;\\[2pt]
AB_0: & \hphantom{0 > {}}k_1=k_2,& {\pm}\alpha = |\alpha| = |\beta| = {\mp}\beta & \quad\Longrightarrow\quad& \alpha{+}\beta = 0;\\[2pt]
{AB_3:}& \hphantom{0 > {}} k_1 > k_2 > 0, & { -\alpha =|\alpha| > |\beta| = \hphantom{{-}}{}\beta} & \quad\Longrightarrow\quad& \alpha{+}\beta < 0;\\[2pt]
{AB_4:}& 0 < k_1 < k_2,& { {-}\alpha =|\alpha|<|\beta| = \hphantom{{-}}{}\beta} & \quad\Longrightarrow\quad& \alpha{+}\beta > 0;
\end{array}
}
and unified to
\Equa{VogtAK}{
\mathop{\rm sign}\nolimits(\alpha{+}\beta) = \mathop{\rm sign}\nolimits(k_2{-}k_1)
}
regardless of the kind of monotonicity and of the sign of the curvature. In this notation the theorem remains valid for non-convex arcs. The proof for this case, lemma~1 in~\cite{Spiral}, required the curve to be one-to-one projectable onto its chord.
A variety of situations is illustrated by a family of arcs of Cornu spirals with fixed $\alpha$ and varying $\beta$, shown in \Reffig{figure02}. The sum $\alpha{+}\beta$ for curves~1 and~2 with decreasing curvature is negative; it vanishes for the circular arc~3, and becomes positive for curves \hbox{4--10} with increasing curvature. Vogt's theorem covers cases \hbox{1--4} (arc~5 is not convex), lemma~1 in~\cite{Spiral}~--- cases \hbox{3--8}, curves \hbox{3--4} are covered by both, and \hbox{9--10}~--- by neither. A general proof for cases \hbox{1--10}, defined below as {\em short} arcs, is the first extension of Vogt's theorem proposed herein.
The requirements of convexity or projectability both served, in effect, to keep the arc short. It turns out, however, that Vogt's theorem can also be formulated for ``long'' spirals like curve~11 in \Reffig{figure02}.
\Topic{Preliminary definitions and notation}
We describe curves by the intrinsic equation $k\,{=}\, k(s)$, $k$~being the curvature, and $s$, the arc length: $0\,{\leqslant}\, s\,{\leqslant}\, S$. The functions $Z(s)\,{=}\, x(s){+}{\mathrm{i}} y(s)$ and $\tau(s)$ represent the coordinates and the direction of the tangent.
The terms {\em ``increasing'' } and {\em ``decreasing'',} applied to any function $f(s)$, are accompanied in this article by the adverb {\em ``strictly''{} } when necessary; otherwise non-strict monotonicity with $f(s){\not\equiv}\mathrm{const}$ is assumed.
\begin{defn}
\label{DefSpiral0}
\rm A {\em spiral} is a planar curve of monotonic, piecewise continuous curvature, not containing the complete circumference of a circle. Inflection and infinite curvatures at the endpoints are admitted.
\end{defn}
\begin{defn}
\label{DefBiarc}
\rm A {\em biarc} is a spiral composed of two arcs of constant curvature (like arcs $AT_3B_3$ and $AT_4B_4$ in \Reffig{figure01}).
\end{defn}
\begin{defn}
\label{DefShort1}
\rm An arc $\Arc{AB}$ is {\em short} if its tangent never attains the direction $\displaystyle{\buildrel\,\,\to\over{BA}}$, opposite to the direction of its chord, except, possibly, at the endpoints.
\end{defn}
\begin{defn}
\label{DefShort2}
\rm An arc $\Arc{AB}$ is {\em short} if it does not intersect the complement of its chord to the infinite straight line (it may intersect the chord itself).
\end{defn} Def.~\ref{DefShort1} will be used until the equivalence of definitions~\ref{DefShort1} and~\ref{DefShort2} is proven (corollary~\ref{EquivDefCor}). The term {\em ``very short'' {}} will be sometimes used to denote an arc, one-to-one projectable onto its chord (curves 3--8 in \Reffig{figure02}). Guggenheimer uses terms {\em line element} to denote a pair $(P,{\bf t})$ of a point and a direction, and {\em curvature element} $(P,{\bf t},\rho{\bf n})$, ${\bf n}\perp{\bf t}$, adding curvature radius~$\rho$ at~$P$ (\cite{Guggen}, p.~50). We modify these definitions to $(x,y,\tau)$ and $(x,y,\tau,k)$ with tangent angle~$\tau$ and signed curvature~$k$ at the point $P\,{=}\,(x,y)$. Notation \equa{ \Kl{i}=\Krn{i} } serves to denote both the $i$-th curvature element and {\em directed curve of constant curvature} produced by \Kl{i}. Whether it be a straight line or a circular arc, it goes under general name {\em circle} [{\em of curvature}]. As in~\cite{InvInv}, we use an implicit equation of the circle $\Kl{0}\,{=}\,\Krn{0}$ in the form \Equa{Cxy}{ C(x,y;\Kl{0}) \equiv k_0\left[(x{-}x_0)^2{+}(y{-}y_0)^2\right]+ 2(x{-}x_0)\sin\tau_0 - 2(y{-}y_0)\cos\tau_0=0. } The sign of $C(x,y;\Kl{0})$ reflects the position of the point $(x,y)$ with respect to~\Kl{0}: it is from the left (\LT{C}) or from the right (\GT{C}) of the circle's boundary. Keeping in mind applications to geometric modelling, define the following: \begin{defn} \rm The {\em region of material} of the circle $\Kl{}$ is \equa{ \Mat{\Kl{}}=\{(x,y):\ C(x,y;\Kl{})\leqslant 0\}. } \end{defn} \Lwfig{t}{0pt}{figure03} To consider properties of a spiral arc in relation to its chord $\Vec{AB}$, $|AB|\,{=}\, 2c$, we choose the coordinate system such that the chord becomes the segment $[-c,c]$ of X-axis. With $\alpha\,{=}\,\tau(0)$, $k_1\,{=}\, k(0)$, and $\beta\,{=}\,\tau(S)$, $k_2\,{=}\, k(S)$, the boundary circles of curvature take form \Equa{K1K2cc}{ \Kl{1} = K(-c,0,\alpha,k_1),\qquad \Kl{2} = K(c,0,\beta,k_2). } It is often convenient to assume homothety with the coefficient $c^{-1}$, and to operate on the segment $[-1,1]$. The coordinates $x,y$ and curvatures $k$ become normalized dimensionless quantities, corresponding to \ $x/c$, \ $y/c$ and \ $kc\,{=}\,\kappa$. With such homothety applied, boundary circles~\refeq{K1K2cc} appear as \Equa{K1K2c1}{ \Kl{1} = K(-1,0,\alpha,\kappa_1),\qquad \Kl{2} = K(1,0,\beta,\kappa_2). } \begin{defn} \rm A curve whose start point is moved into position $A(-c,0)$, and the endpoint, into $B(c,0)$, is named {\em normalized arc}. The product $ck(s)\equiv \kappa(s)$, invariant under homotheties, will be referred to as {\em normalized curvature}. \end{defn} Denote $\Aarc{\xi}$ a normalized circular arc, traced from the point $A(-c,0)$ to $B(c,0)$ with the direction of tangents $\xi$ at~$A$ \ ($k_{1,2} \,{=}\, {-}\sin\xi/c$, \ $\kappa_{1,2} \,{=}\, {-}\sin\xi$). The arc $\Aarc{\pm\pi}$, passing through infinity, is coincident with the chord's complement to an infinite straight line; the arc $\Aarc{0}$ is the chord $AB$ itself. \begin{defn} \rm A {\em lense} $\Lense{\xi_1,\xi_2}$ is the region between two arcs $\Aarc{\xi_1}$ and $\Aarc{\xi_2}$, namely \equa{ \Lense{\xi_1,\xi_2}= \{\,(x,y):\; (x,y)\in\Aarc{\xi}\,\},\quad \min(\xi_1,\xi_2) < \xi < \max(\xi_1,\xi_2). } \end{defn} The arc $\Aarc{\alpha}$ shares tangent with the normalized spiral at the start point; so does $\Aarc{-\beta}$ at the endpoint. The two arcs bound the lense $\Lense{\alpha,-\beta}$, shown in gray in \Reffig{figure03}. 
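To fix ideas, here is one concrete configuration (the values are chosen purely for illustration and do not refer to any of the figures): for a normalized arc with $\alpha=\pi/2$ and $\beta=0$ the arc $\Aarc{\pi/2}$ is the upper unit semicircle over the chord $[-1,1]$ (its normalized curvature is $-\sin(\pi/2)\,{=}\,{-}1$), the arc $\Aarc{-\beta}\,{=}\,\Aarc{0}$ is the chord itself, and the lense between them is the open upper half-disc:
\equa{
\Lense{\pi/2,\,0}=\{(x,y):\ x^2{+}y^2<1,\ y>0\}.
}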
The signed half-width $\omega$ of the lense, and the direction~$\gamma$ of its {\em bisector} $\Aarc{\gamma}$ are
\Equa{OmegaGamma}{
\omega=\Frac{\alpha{+}\beta}{2},\qquad \gamma=\Frac{\alpha{-}\beta}{2}\,.
}
\begin{defn}
\rm By the {\em inflection point} of a spiral whose curvature $k(s)$ changes sign shall be meant any inner point $Z(s_0)$ with \EQ{k(s_0)}. If there is no such point, i.e. a curvature jump $k(s_0{-}0)\lessgtr 0 \lessgtr k(s_0{+}0)$ occurs, the jump point will be used as the inflection point with the assignment \EQ{k(s_0)}.
\end{defn}
\Topic{Vogt's theorem for short spirals}
The subsequent proof of the modified Vogt's theorem~\refeq{VogtAK} for short spirals is similar to the proof for ``very short'' ones from~\cite{Spiral}. Both clearly show that only the monotonicity of $k(s)$, and not the convexity of the arc, is the basis for Vogt's theorem.
\begin{thm}
\label{ShortVogtTheorem}
The boundary angles $\alpha$ and $\beta$ of a normalized short spiral or circular arc obey the following conditions:
\Equa{VogtShort}{
\begin{array}{llcc}
\mbox{if~~}k_1 < k_2:\qquad& \alpha{+}\beta>0,\quad & -\pi < \alpha \leqslant \pi,\; & -\pi < \beta \leqslant \pi; \\
\mbox{if~~}k_1 > k_2:\qquad& \alpha{+}\beta<0,\quad & -\pi \leqslant \alpha < \pi,\; & -\pi \leqslant \beta < \pi;\\
\mbox{if~~}k_1 = k_2:\qquad& \alpha{+}\beta=0,\quad & -\pi < \alpha < \pi,\; & -\pi < \beta < \pi.
\end{array}
}
\end{thm}
\begin{pf}
Consider the case of increasing curvature $k(s)$, and define a new parameter~$\xi$:
\equa{
\xi(s)=\Int{0}{s}{\cos\Frac{\tau(\sigma)}{2}}{\sigma},\quad 0\leqslant\xi\leqslant\xi_1=\xi(S).
}
By Def.~\ref{DefShort1}, $|\tau(s)|\,{<}\,\pi$ within the interval $(0,\,S)$, and $\xi(s)$ is therefore strictly increasing with~$s$. Define the function $z(\xi)$:
\equa{
z(\xi)=\sin\Frac{\tau(s(\xi))}{2}\,,\qquad \Dfrac{ z}{\xi}= \Dfrac{z}{s} \cdot \Dfrac{s}{\xi}= \left( \Frac{1}{2} \cos\Frac{\tau(s)}{2} \Dfrac{\tau}{s}\right) \cdot {\left(\cos\Frac{\tau(s)}{2}\right)}^{-1} =\Frac{1}{2}k(s(\xi)).
}
Its derivative being increasing, $z(\xi)$ is downwards convex, and its plot lies below the straight line segment $l(\xi)$ connecting the endpoints $z(0)\,{=}\,\sin(\alpha/2)$ and $z(\xi_1)\,{=}\,\sin(\beta/2)$:
\equa{
z(\xi)< l(\xi)= \Frac{\xi_1{-}\xi}{\xi_1}\sin(\alpha/2)+ \Frac{\xi}{\xi_1}\sin(\beta/2)\,, \qquad 0<\xi<\xi_1.
}
The condition $y(0)\,{=}\, y(S)$ yields
\equa{
\begin{array}{rcl}
0& = &\Int{0}{S}{\sin\tau(s)}{s}= 2\int\limits_{0}^{S}\sin\Frac{\tau(s)}{2} \overbrace{\left(\cos \Frac{\tau(s)}{2}{\mathrm{d}}s\right)}^{d\xi}= 2\Int{0}{\xi_1}{z(\xi)}{\xi} < \\
& < & 2\Int{0}{\xi_1}{l(\xi)}{\xi} = \xi_1(\sin(\alpha/2)+\sin(\beta/2)) =2\xi_1\sin\Frac{\alpha{+}\beta}{4}\cos\Frac{\alpha{-}\beta}{4}\,.
\end{array}
}
So, the inequality $|{\alpha{+}\beta}|\,{\leqslant}\, 2\pi$, resulting from $\alpha,\beta\,{\in}\,[-\pi,\pi]$, can be refined to \ $0\,{<}\,\alpha{+}\beta\,{\leqslant}\, 2\pi$. The condition \GT{\alpha{+}\beta} excludes the value $-\pi$ for $\alpha$ or $\beta$, providing the inequalities~\refeq{VogtShort} for the case of increasing curvature.
If the curvature of the arc $Z(s)$ is decreasing, i.e. $k_2\,{<}\, k_1$, consider the arc $\bar{Z}(s)$, symmetric to $Z(s)$ about the $X$-axis. Its boundary angles are $\D{\alpha}\,{=}\,{-\alpha}$, \ $\D{\beta}\,{=}\,{-\beta}$, and the curvature increases: $\D{k_1}\,{=}\,{-}k_1 < {-}k_2=\D{k_2}$. So, \GT{\D{\alpha}{+}\D{\beta}}, and \LT{\alpha{+}\beta} for the original curve.
The case of constant curvature is evident.\qed{} \end{pf} \Topic{Basic inequality of the theory of spirals} In article~\cite{InvInv} we have introduced {\em an inversive invariant of a pair of circles} $\Kl{1}\,{=}\,\Krn{1}$ and $\Kl{2}\,{=}\,\Krn{2}$: \Equa{DefQ}{ \begin{array}{rcl} Q(\Kl{1},\Kl{2}) &=&\Frac{1}{4}{ k_1 k_2}[(\Delta x)^2{+}(\Delta y)^2]+ \sin^2\Frac{\Delta\tau}{2}+{}\\[6pt] &+&\Frac{1}{2}{ k_2}(\Delta x\sin\tau_1{-}\Delta y\cos\tau_1) -\Frac{1}{2}{ k_1}(\Delta x\sin\tau_2{-}\Delta y\cos\tau_2)\\[6pt] &\hphantom{{-}}{}& (\Delta x = x_2{-}x_1,\quad \Delta y = y_2{-}y_1,\quad \Delta \tau = \tau_2{-}\tau_1). \end{array} } Its value is independent of arbitrarily chosen line elements ($x_i,y_i,\tau_i$) on each circle, and is invariant under motions, homothety and inversions; and $Q(\Kl{1},\Kl{2})=Q(\Kl{2},\Kl{1})$. In particular cases, \begin{enumerate} \item[a)] \NE{k_{1,2}}, $D$ is the distance between two centres; \item[b)] \EQ{k_1}, \NE{k_2}, $L$ is (signed) distance from the centre of~\Kl{2} to the straight line~\Kl{1}; \item[c)] any $k_1, k_2$, and $\psi$ is intersection angle of two circles; \end{enumerate} the invariant $Q$ can be represented as follows: \Equa{Qabc}{ Q^{(a)}= \Frac{(k_1k_2D)^2 - (k_2{-}k_1)^2}{4k_1k_2},\qquad Q^{(b)}= \Frac{1{+}k_2L}{2},\qquad Q^{(c)}= \sin^2(\psi/2). } If two circles have no real common point, $\psi$ is complex, but $Q$ remains real. In this case\linebreak \hbox{${\mathrm{Im}}(\psi)\,{=}\, {\mathrm{cosh}}^{-1}|1{-}2Q|$} is Coxeter's inversive distance of two circles~\cite{InvDist}. \EQ{Q} if and only if two circles are tangent, or are two equally directed straight lines. Situation $Q\,{=}\, 1$ can be considered as ``antitangency'' ($\tau_1\,{=}\,\tau_2\pm\pi$ at the common point). Using this invariant is the alternative to cumbersome enumeration of variants with different mutual position (and curvature sign) of the two circles, commonly occurring in describing problems of this sort. \begin{prop} \label{PropDeviate} A curve with increasing curvature intersects its every circle of curvature from right to left (and from left to right if curvature decreases). \end{prop} Proving here this familiar statement helps us to have the subsequent proof of our basic theorem~\ref{MainTheorem} self-contained. The second needed assumption, invariance of ~$Q$, is easy to prove by deducing formulae~\refeq{Qabc}. \begin{pf} To consider behavior of a spiral at some point $P_1\,{=}\, Z(s_1)$ choose the coordinate system with the origin at $P_1$ and the axis of~X aligned with $\tau(s_1)$. The curvature element at $P_1$ becomes then $\Kl{1}\,{=}\,\Kr{0,0,0,k_1}$. Obtain function $C(s)$ by substituting $x\,{=}\, x(s)$ and $y\,{=}\, y(s)$ into implicit equation~\refeq{Cxy} of~\Kl{1}: \Equa{Cs}{ C(s)= C(x(s),y(s);\,\Kl{1})=k_1[x(s)^2+y(s)^2]-2y(s). } Differentiating $C(s)$ yields ($x$, $y$, $\tau$ and $k$ are abbreviated functions of $s$): \equa{ \begin{array}{lcl} C^{\prime}(s) &=& 2k_1(x\cos\tau+y\sin\tau)-2\sin\tau,\\ C^{\prime\prime}(s) &=& 2k_1 + 2k_1 k(y\cos\tau - x\sin\tau) - 2k\cos\tau,\\ C^{\prime\prime\prime}(s) &=& 2k^2[\sin\tau-k_1(x\cos\tau{+}y\sin\tau)] - 2\D{k}(s)[\cos\tau+k_1(x\sin\tau{-}y\cos\tau)];\\ C(s_1) &=& 0,\qquad C^{\prime}(s_1) = 0,\qquad C^{\prime\prime}(s_1) = 0,\qquad C^{\prime\prime\prime}(s_1) = {-}2k^\prime(s_1)\lessgtr 0. 
\end{array}
}
Whether $k(s)$ varies continuously or with a jump at $s_1$, the function $C(s)$ undergoes a variation of the opposite sign; $C(s)$ going negative (or positive, if the curvature decreases), the curve locally deviates to the left (or to the right) of~\Kl{1}.\,\qed
\end{pf}
\begin{thm}
\label{MainTheorem}
Let \Kl{1} and \Kl{2} be two circles of curvature of a spiral curve. Then
\Equa{Qle0}{
Q(\Kl{1},\Kl{2})\leqslant 0,
}
and equality holds if and only if both circles belong to a circular subarc or to a biarc.
\end{thm}
\begin{pf}
Denote the circle of curvature at the start point as \Kl{1}, $\Kl{}(s)$ being any other circle of curvature:
\equa{
\Kl{}(s)= K(x(s),y(s),\tau(s),k(s)\,),\qquad \Kl{1}= \Kl{}(0).
}
Two examples in \Reffig{figure04} illustrate the proof for the case of increasing curvature with negative and positive start values $k_1\,{=}\, k(0)$. The regions $\Mat{\Kl{1}}$ are shown in gray. The points $P_i$ subdivide the spiral into subarcs $0\,{\leqslant}\, s_1\,{\leqslant}\, s_2 \,{\leqslant}\, s_3 \,{\leqslant}\, s_4 \,{\leqslant}\, S$, some of them possibly absent: $P_0P_1$ is the initial subarc of constant curvature (if any), coincident with \Kl{1}. As soon as the curvature increases at $P_1$, with or without a jump, the spiral deviates to the left from the circle~\Kl{1} (Prop.~\ref{PropDeviate}). The arc $P_0P_1$ may be supplemented to a biarc by another circular arc $P_1P_2$; point $P_3$ represents any curvature jump, where there are two circles of curvature $\Kl{}(s_3{\pm}0)$, left and right. The point $P_4\,{=}\, Z(s_4)$, if it exists, is the point where the local property~\ref{PropDeviate} is no longer valid, i.e. the spiral returns to the boundary of~\Kl{1}. Thus, with the expression~\refeq{Cs} for $C(s)$ involved,
\equa{
\begin{array}{l}
C(s) = 0\mbox{~~~~if~~~} 0 \leqslant s\leqslant s_1, \mbox{~~~or~~} s=s_4,\\
C(s) < 0\mbox{~~~~if~~~} s_1 < s < s_4.
\end{array}
}
Associate the coordinate system with the line element $(x(s_1),y(s_1),\tau(s_1)\,)$ such that\linebreak $\Kl{1}\,{=}\,\Kl{}(0)\,{=}\,\Kl{}(s_1{-}0)\,{=}\,\Kr{0,0,0,k_1}$. Define the function $Q(s)= Q(\Kl{1},\Kl{}(s))$ from~\refeq{DefQ}:
\Equa{Qs}{
Q(s)= \Frac{1}{4}k(s)C(s) -\Frac{k_1}{2}[x(s)\sin\tau(s){-}y(s)\cos\tau(s)] +\sin^2\Frac{\tau(s)}{2}\qquad [Q(0)=0]
}
Let us show that $Q(s)$ is monotonically decreasing in $[0,s_4]$. Differentiating $Q(s)$ yields
\Equa{dQs}{
\D{Q}(s) = \Frac{1}{4}\D{k}(s)\,C(s)\quad\Longrightarrow\quad \D{Q}(s) \leqslant 0
}
for increasing $k(s)$. Hence, $Q(s)$ is decreasing on every segment of its continuity. Let us make sure that jumps of $Q(s)$ at some point $s_3$, such that $ k(s_3{-}0) < k(s_3{+}0)$, conform to its decreasing behavior. Because the functions $x(s)$, $y(s)$, $\tau(s)$ and $C(s)$ are continuous, and $C(s_3)$ is still negative, we deduce from~\refeq{Qs}:
\Equa{Qjump}{
Q(s_3{+}0)-Q(s_3{-}0) = \Frac{1}{4}[k(s_3{+}0) - k(s_3{-}0)]\,C(s_3) < 0,
}
and $Q(s)$ is decreasing on the entire segment $[0,\,s_4]$.
In the case of a biarc in $[0,s_1] \cup [s_1,s_2]$, $k(s)$ is piecewise constant. The function $Q(s)$ is continuous and zero up to $s_1$, remains so at $s_1$ (despite the curvature jump, due to \EQ{C(s_1)}), and until $s_2$, due to \EQ{\D{k}(s)} in~\refeq{dQs}. If the entire curve is a biarc, the initial circle of curvature \Kl{1} is never again reached by the second subarc until it makes a complete $2\pi$-turn, which contradicts Def.~\ref{DefSpiral0}. For $s\,{>}\, s_2$, $Q(s)$ either decreases continuously or undergoes negative jumps like~\refeq{Qjump}. The theorem holds in $[0,s_4]$.
It remains to prove that the point $Z(s_4)$ does not exist. Under conditions \EQ{C(s_4)} and \LT{Q_4\,{=}\, Q(s_4)} at such point, an attempt to determine $Z(s_4)\,{=}\, x_4{+}{\mathrm{i}} y_4$ from two equations~\refeqeq{Cs}{Qs} fails: excluding $x$ yields \equa{ \begin{array}{l} k_1^2y_4^2 - 2k_1y_4(1{+}2Q_4\cos\tau_4 {-}\cos\tau_4) + (2Q_4{-}1{+}\cos\tau_4)^2=0,\\ y_4=\Frac{1}{k_1} \left[ 1-\cos\tau_4(1{-}2Q_4) \pm \sin\tau_4\sqrt{Q_4(1{-}Q_4)}\, \right], \end{array} } i.e. unsolvability with \LT{Q_4} (or immediate contradiction \LT{\sin^2(\tau_4/2)}, if \EQ{k_1}). So, spiral never returns to its initial circle of curvature \Kl{1}. If the curvature decreases, the curve deviates to the right of \Kl{1}, and $C(s)$ changes sign. So do derivative $\D{k}(s)$ and curvature jumps, thus preserving inequalities~\refeq{Qjump},~\refeq{dQs}, and~\refeq{Qle0}.\qed{} \end{pf} The corollary to this theorem, due to W.~Vogt (Satz~1 in~\cite{Vogt}), is the absence of double points on a spiral. Kneser's theorem (see \cite{Guggen}, theorem~3--12), stating that {\em``Any circle of curvature of a spiral arc contains every smaller circle of curvature of the arc in its interior and in its turn is contained in the interior of every circle of curvature of greater radius''} concerns spirals without inflection and can be generalized as follows: \begin{cor} \label{MaterialCor} Let $\Kl{}(s)=\Kr{x(s),y(s),\tau(s),k(s{\pm}0)}$ be a family of circles of curvature of a spiral curve. Then the region of material of any circle includes the region of material of any other circle with greater curvature: \equa{ k(s_2)> k(s_1) \quad\Longrightarrow\quad \Mat{\Kl{}(s_2)}\subset \Mat{\Kl{}(s_1)}. } \end{cor} \Reffig{figure05} illustrates the statement. The region $\Mat{\Kl{M}}$ of the initial circle of curvature is the whole plane except the interior of \Kl{M}. As the curvature increases, the regions of material become smaller, each next being within the previous one. They remain unbounded up to the inflection point, whose region of material is the right half-plane. \Reffigs{figure06}a,b,c illustrate theorem~\ref{MainTheorem} for normalized spiral. With boundary circles of curvature~\refeq{K1K2c1} and angles $\omega$ and $\gamma$, defined by~\refeq{OmegaGamma}, inequality~\refeq{Qle0} takes form \Equa{Qdef1}{ \begin{array}{rcll} Q(\kappa_1, \kappa_2,\alpha,\beta) &=& \kappa_1 \kappa_2+ \kappa_2\sin\alpha- \kappa_1\sin\beta+\sin^2\gamma &{}={}\\ &=&( \kappa_1+\sin\alpha)\,( \kappa_2-\sin\beta)+\sin^2\omega &{}\leqslant 0, \end{array} } \Lwfig{p}{0.9\textwidth}{figure06}{} Having fixed~$\alpha$ and~$\beta$, consider the region of permissible values for normalized boundary curvatures in the plane $(\kappa_1,\kappa_2)$. This region consists of two subregions, each bounded by one of the two branches of the hyperbola \EQ{Q(\kappa_1, \kappa_2)}, traced at the left side of \Reffig{figure06}. Its centre is located in the point $C(\kappa_1,\kappa_2)\,{=}\,(-\sin\alpha,\sin\beta)$; these two curvatures correspond to those of lense's boundaries. Biarcs marked as $h_i$ in the right side have boundary curvatures $(\kappa_1,\kappa_2)$ corresponding to the points $H_i$ of the hyperbola. By theorem~\ref{MainTheorem}, every biarc represents the {\em unique spiral}, matching end conditions of this kind. Non-biarc curves are presented by some point~$K$ in the plane $(\kappa_1,\kappa_2)$, and the arc~$k$ of Cornu spiral in the plane $(x,y)$. 
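A simple special case may help to illustrate inequality~\refeq{Qdef1} (the values are chosen purely for illustration). Let $\alpha\,{=}\,\beta\,{=}\,0$, i.e. both boundary tangents are directed along the chord; then $\omega\,{=}\,\gamma\,{=}\,0$, and \refeq{Qdef1} reduces to
\equa{
Q(\kappa_1,\kappa_2,0,0)=\kappa_1\kappa_2\leqslant 0,
}
so the normalized boundary curvatures cannot both be strictly positive or both strictly negative. This agrees with the strict inequalities $\kappa_1<0<\kappa_2$, given for increasing curvature by Corollary~\ref{k1k2Cor} below.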
\begin{cor} \label{k1k2Cor} End conditions of a normalized spiral arc obey the following inequalities: \Equa{k1k2}{ \begin{array}{rcl} \kappa_1< {-}\sin\alpha,\quad \kappa_2>\sin\beta, &\quad &\mbox{if~~}\kappa_1< \kappa_2;\\ \kappa_1> {-}\sin\alpha,\quad \kappa_2<\sin\beta, &\quad &\mbox{if~~}\kappa_1> \kappa_2. \end{array} } \end{cor} \begin{pf} Inequalities~\refeq{k1k2} merely reflect the position of two regions \LE{Q} with respect to the asymptotes of the hyperbola, vertical ($\kappa_1\,{=}\,{-}\sin\alpha$) and horizontal one ($\kappa_2\,{=}\,\sin\beta$). It remains to show that the line $\kappa_2\,{=}\, \kappa_1$ separates the two branches of hyperbola, thus connecting inequalities~\refeq{k1k2} to specified conditions, increasing or decreasing curvature. Substituting $\kappa_1\,{=}\, \kappa_2\,{=}\, \kappa$ into the equation \EQ{Q(\kappa_1,\kappa_2,\alpha,\beta)} yields \Equa{k1eqk2}{ \kappa^2+\kappa(\sin\alpha{-}\sin\beta)+\sin^2\gamma = (\kappa{+}\sin\gamma\cos\omega)^2 +\sin^2\gamma\sin^2\omega = 0. } Hence, except special cases \EQ{\sin\gamma} or \EQ{\sin\omega}, statement~\refeq{k1k2} is valid: two convex regions, bounded by the upper left and the lower right branches of hyperbola, are the\linebreak regions of possible boundary curvatures for $\kappa_1\,{<}\, \kappa_2$ and $\kappa_1\,{>}\, \kappa_2$ respectively. The first exception, \EQ{\sin\gamma}, occurs if $\alpha\,{=}\,\beta$ and provides the unique common point \EQ{\kappa_1\,{=}\, \kappa_2} without intersecting the hyperbola (point $H_0$ and degenerate biarc $h_0$ in \Reffig{figure06}b). The statement, assuming $\kappa_1\,{\not=}\, \kappa_2$, remains valid. The second exception, \EQ{\sin\omega}, i.e. $\alpha\,{=}\,{-}\beta$, is illustrated by \Reffig{figure06}d. The hyperbola degenerates into a pair of straight lines with the centre $C$ on the line $\kappa_1\,{=}\, \kappa_2$, still separating two regions in question. However, inequalities~\refeq{k1k2} should be considered as non-strict, like $\kappa_1\,{\leqslant}\,{-}\sin\alpha$, because the hyperbola is coincident with its asymptotes. By theorem~\ref{MainTheorem}, a~spiral, corresponding to equality, may be only biarc. Attempt to construct it gives the only possibility: the first subarc with $\kappa_1\,{=}\,{-}\sin\alpha$, going from~$A$ to~$B$, and the second subarc being circumference of a circle of any curvature $\kappa_2$ from~$B$ to~$B$ (examples $h_1$ and $h_2$). Similar constructions $h_{3,4}$ arise with $\kappa_2\,{=}\,\sin\beta$ and arbitrary $\kappa_1$. Since Def.~\ref{DefSpiral0} excludes this construction, the points of such degenerated hyperbola do not produce a spiral, and should be excluded. Inequalities~\refeq{k1k2} remain strict.\qed{} \end{pf} Let us apply inequality~\refeq{Qle0} to another Vogt's statement, namely, that {\em spiral has no double tangent} (Satz~7 in~\cite{Vogt}). Its refined form is illustrated by the double tangent $\Vec{BA}$ in \Reffig{figure05}, and sounds like \begin{cor} Spiral curve may have double tangent only if this tangent joins points with opposite curvature sign, or is coincident with the inflection segment of the spiral. \end{cor} \begin{pf} Two curvature elements with common tangent can be denoted as \equa{ K(x_1,y_1,\tau_1,k_1) \mbox{~~~and~~~} K(x_1{+}t\cos\tau_1,\,y_1{+}t\sin\tau_1,\,\tau_1,k_2),\quad t\neq 0. } Inequality~\refeq{Qle0} takes form \LE{4Q\,{=}\, k_1 k_2 t^2}, supplying the proof for the general case, \LT{Q}, \NE{k_{1,2}}. Consider exceptions. 
If, say, \EQ{k_1}, the spiral deviates from its tangent as soon as $k(s)$ becomes non-zero. By Cor.~\ref{MaterialCor}, the spiral has no more common points with this tangent. The biarc case, \EQ{Q}, is trivial: two tangent circles may have common tangent only in their unique common point.\qed{} \end{pf} \begin{cor} \label{InflectionCor} The tangent at the inflection point of a normalized spiral arc cuts the interior of the chord and is directed downwards $($i.e. \LT{\sin\tau(s_0)}$)$ if curvature increases, or upwards $($\GT{\sin\tau(s_0)}$)$ if curvature decreases. \end{cor} \begin{pf} The tangent at the inflection point $Z(s_0)$ is at the same time the circle of curvature~$\Kl{0}$ (\Reffigs{figure05} and \Figref{figure11}a). By Cor.~\ref{MaterialCor}, the endpoints $A$ and $B$ of the arc are disposed bilaterally along \Kl{0}. That's why \Kl{0} cuts the chord in the interior. For increasing $k(s)$, point~$A$ is located from the right of the line \Kl{0}, and $B$ from the left of it. For normalized curve, when $\Vec{AB}$ is brought horizontal, this is equivalent to downwards directed tangent~\Kl{0}. \qed{} \end{pf} Two following propositions are our previous results from~\cite{InvInv}. They can be easily derived from parametric equation of curve, inverse to given one, by calculating and differentiating its curvature. \begin{prop} \label{InversionProp} Inversion, applied to a spiral curve, preserves the monotonicity of the curvature, interchanging its decreasing/increasing character. \end{prop} \begin{prop} \label{InvCurvatureProp} If a curvature element~$\Kl{1}$ is inverted with respect to a circle of inversion~$\Kl{0}$, the curvature of the image~$\Kl{2}$ is given by \equa{ k_2 = 2k_0(1-2Q_{01}) - k_1,\qquad Q_{01} = Q(\Kl{0},\Kl{1}). } \end{prop} The direction, artificially assigned to the circle of inversion, does not affect the inverse curve. If $\Kl{0}$ is reversed, both $k_0$ and $(1{-}2Q_{01})$ change sign. \Topic{Vogt's theorem for long spirals} On the spiral $Z(s)\,{=}\, x(s){+}{\mathrm{i}} y(s)$, $s\,{\in}\,[0,S]$, consider subarc $s\,{\in}\,[u,v]$ and define functions \Equa{Defmu}{ h(u,v)=|Z(v)-Z(u)|,\qquad \mu(u,v)=\arg[Z(v)-Z(u)] } for the length and direction of the chord. For any subarc and the entire curve the cumulative boundary angles $\widetilde\alpha(u,v)$ and $\widetilde\beta(u,v)$ with respect to varying chord $AB(u,v)$ can be expressed as \Equa{Defabuv}{ \begin{array}{l} \widetilde\alpha(u,v) = \tau(u)-[\mu(u,v)+2\pi m],\\ \widetilde\beta(u,v) = \tau(v)-[\mu(u,v)+2\pi m], \end{array} } satisfying the natural condition for the {\em winding angle}~$\rho$ of the arc: \equa{ \widetilde\beta(u,v) - \widetilde\alpha(u,v) = \tau(v)-\tau(u) = \Int{u}{v}{k(s)}{s} = \rho(u,v). } This still allows to assign any value $\alpha{+}2\pi n$ to $\widetilde\alpha$. To fix it, we note that the angles $\widetilde\alpha$ and $\widetilde\beta$ can be unambiguously determined within the range $(-\pi,\pi)$ for rather short subarc $[u_0,v_0]$. In particular, \ \EQ{\widetilde\alpha(u,u)\,{=}\,\widetilde\beta(u,u)}. Define cumulative angular functions for any arc $[u,v]$ as \Equa{ABcum}{ \widetilde\alpha,\widetilde\beta(u,v) = \lim\limits_{{\displaystyle {}^{u_0\to u}_{ v_0\to v} }} \widetilde\alpha,\widetilde\beta(u_0,v_0)\,,\quad u\leqslant u_0 = v_0 \leqslant v, } preserving continuity at $\widetilde\alpha,\widetilde\beta=\pm\pi,\,\pm3\pi,\,\ldots\,,$ while the limits are being reached. 
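The need for cumulative values can be seen already at the arithmetic level (the number below is chosen purely as an illustration): for ordinary boundary angles confined to $(-\pi,\pi]$ the difference $\beta{-}\alpha$ never reaches $2\pi$ in absolute value, whereas the winding angle of a long spiral may be arbitrarily large. An arc with $\rho(0,S)\,{=}\,5\pi/2$, say, can satisfy $\widetilde\beta{-}\widetilde\alpha\,{=}\,\rho(0,S)$ only if at least one of the cumulative angles lies outside $(-\pi,\pi]$.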
\begin{lem} \label{CumLemma} Cumulative boundary angles $\widetilde\alpha,\,\widetilde\beta$, defined by Eq.~\refeq{ABcum}, do not depend on the start point $u_0\,{=}\, v_0$ and the way the limits are reached. \end{lem} \begin{pf} To calculate $\widetilde\alpha(u,v)$ and $\widetilde\beta(u,v)$ or, for symmetry, function \Equa{OmegaCum}{ \widetilde\omega(u,v) = \Frac{1}{2}[\widetilde\alpha(u,v)+\widetilde\beta(u,v)] = \Frac{1}{2}[\tau(u){+}\tau(v)]-[\mu(u,v)+2\pi m], } let us restore it from derivatives, which are free from $2\pi m$-uncertainty: \equa{ \begin{array}{lcl} \Pd{\mu(u,v)}{u}&=& \Pd{}{u} \arctan\Frac{y(v)-y(u)}{x(v)-x(u)}=\\[8pt] &=&\Frac{-\sin\tau(u)\,[\overbrace{x(v){-}x(u)}^{h\cos\mu}] +\cos\tau(u)\,[\overbrace{y(v){-}y(u)}^{h\sin\mu}]} {[x(v){-}x(u)]^2 + [y(v){-}y(u)]^2} = -\Frac{\sin[\overbrace{\tau(u){-}\mu(u,v)}^{\strut\alpha(u,v){+}2\pi m}]}{h(u,v)}. \end{array} } So, \equa{ \Pd{\mu(u,v)}{u}=-\Frac{\sin\alpha(u,v)}{h(u,v)},\qquad \Pd{\mu(u,v)}{v}= \Frac{\sin\beta(u,v)}{h(u,v)}, } and \Equa{domega}{ \begin{array}{l} \Pd{\widetilde\omega(u,v)}{u} = \Frac{1}{2}k(u) + \Frac{\sin\alpha(u,v)}{h(u,v)} = G(u,v),\\[10pt] \Pd{\widetilde\omega(u,v)}{v} = \Frac{1}{2}k(v) - \Frac{\sin\beta(u,v)}{h(u,v)} = H(u,v). \end{array} } Continuous function $\widetilde\omega(u,v)$ can be now calculated as \Equa{IntGH}{ \Int{W_i W}{}{G(u,v)}{u}+H(u,v)\,{\rm d}v } along any path $W_i W$ (shown in \Reffig{figure07}) in the closed triangular region $0\,{\leqslant}\, u\,{\leqslant}\, v\,{\leqslant}\, S$ with $W_i$ taken on the boundary $u\,{=}\, v$. From the expansions \equa{ \begin{array}{l} \tau(u{+}s) = \tau(u) + s k(u) +O(s^2),\\[3pt] Z(u{+}s) = Z(u) + s{\mathrm{e}}^{{\mathrm{i}}\tau(u)} + \frac{{\mathrm{i}}}{2}s^2k(u){\mathrm{e}}^{{\mathrm{i}}\tau(u)} + O(s^3),\\[2pt] h(u,u{+}s) = |Z(u{+}s){-}Z(u)| = s+O(s^2),\\[3pt] \mu(u,u{+}s) = \arg[Z(u{+}s)-Z(u)] = \arg[{\mathrm{e}}^{{\mathrm{i}}\tau(u)}(s + \frac{{\mathrm{i}}}{2}s^2k(u) + O(s^3))] =\\[2pt] \hphantom{\mu(u,u{+}s)} = \arg[{\mathrm{e}}^{{\mathrm{i}}\tau(u)}(1 + \frac{{\mathrm{i}}}{2}sk(u) + O(s^2))] = \tau(u) + \frac{1}{2}sk(u) + O(s^2),\\[3pt] \alpha(u,u{+}s) = \tau(u)-\mu(u,u{+}s)=-\frac{1}{2}sk(u) +O(s^2),\\[3pt] \beta(u,u{+}s) = \tau(u{+}s)-\mu(u,u{+}s) = \frac{1}{2}sk(u) +O(s^2), \end{array} } it follows that derivatives \refeq{domega} can be continuously defined on the line $u\,{=}\, v$ as zeros: \equa{ G(u,u)=\lim\limits_{s\to 0}G(u,u{+}s) =\lim\limits_{s\to 0} \left[\Frac{1}{2}k(u)-\Frac{\sin[s k(u)/2]}{s}\right] = 0,\quad H(u,u)\,{=}\,\ldots\,{=}\, 0 } This yields \EQ{\int_{W_1W_2}} along the line $u\,{=}\, v$; and, together with \equa{ \Pd{G(u,v)}{v} = \Pd{H(u,v)}{u}\; \left[= -\Frac{\sin(\alpha{+}\beta)}{h}\right], } ensures the independence of the integral~\refeq{IntGH} on a start point $W_i$ and integration path. In terms of definition~\refeq{ABcum}, the limits do not depend on the start point and the way they are reached. Finally, $\widetilde\alpha\,{=}\,\widetilde\omega{-}\rho/2$, $\widetilde\beta\,{=}\,\widetilde\omega{+}\rho/2$.\qed \end{pf} \begin{defn} \rm The angle $\widetilde\omega$, defined above, will be referred to as {\it Vogt's angle} of a spiral arc. For short spiral Vogt's angle is the signed half-width of the lense~$\Lense{\alpha,-\beta}$. \end{defn} \newcommand{s_{0}}{s_{0}} \begin{defn} \rm By the {\em reference point\/} of the spiral shall be meant the point $Z(s_{0})$, corresponding to the minimal absolute value of curvature (points $A_1$, $B_2$ and $O$ in \Reffig{figure08}). 
If the spiral has inflection, $Z(s_{0})$ is the inflection point; otherwise it is either start point ($s_{0}\ieq0$) or the end point ($s_{0}\,{=}\, S$). \end{defn} If curvature increases (decreases), function $\tau(s)$ is downwards (upwards) convex and attains minimal (maximal) value at the reference point. \begin{lem} \label{TauPiLemma} Function $\widetilde\tau(s)$ for normalized spiral, taking cumulative angles $\widetilde\tau(0)\,{=}\,\widetilde\alpha$ and $\widetilde\tau(S)\,{=}\,\widetilde\beta$ as the boundary values, can be distinguished from its other versions, $\widetilde\tau(s)\pm 2\pi m$, by the value at the reference point: \end{lem} \Equa{TauCum}{ 0< |\widetilde\tau(s_{0})|<\pi,\mbox{~~~~or~~~~} \begin{array}{ll} -\pi<\widetilde\tau(s_{0})<0 &\mbox{~~~if~~}k_1< k_2,\\ \hphantom{{-}}{}0<\widetilde\tau(s_{0})<\pi &\mbox{~~~if~~}k_1> k_2. \end{array} } \begin{pf} Based on lemma~\ref{CumLemma}, calculate angles $\widetilde\alpha(0,S)$ and $\widetilde\beta(0,S)$ as follows. Choose the coordinate system with the origin at the reference point $Z(s_{0})$ with X-axis directed along $\tau(s_{0})$ (\Reffig{figure08}). In this system function $\tau(s)$ can be defined as \equa{ \tau(s)=\Int{s_{0}}{s}{k(\xi)}{\xi},\qquad \tau(s_{0})=0. } Calculate limits~\refeq{ABcum} starting from $u_0\,{=}\, v_0 \,{=}\, s_{0}$, $u$~non\-inc\-rea\-sing, $v$~non\-de\-crea\-sing. Consider the case of increasing curvature. If curvature is nonnegative (spiral $A_1B_1$), then, by Cor.~\ref{MaterialCor}, the curve is located in the upper half-plane, except initial subarc of zero curvature, if any. We keep $u$ constant (\EQ{u\,{=}\,s_{0}}) and increase $v$. Because \equa{ \mu(s_{0},s_{0})= \lim\limits_{\varepsilon\to {+}0}\mu(s_{0},s_{0}{+}\varepsilon) =\tau(s_{0})=0, } the angle $\mu(u,v)$ never attains the values $\pm\pi$, becoming strictly positive as soon as the point $Z(v)$ deviates from the axis of $X$: \Equa{MuPi}{ 0<\mu(u,v)<\pi\qquad \mbox{if~~}u\,{\in}\,[0,s_{0}],\quad v\,{\in}\,[s_{0},S], \mbox{~~and~~} k(u)< k(v). } The case of increasing nonpositive curvature (spiral $A_2B_2$) is similar: we start with $u_0\,{=}\, v_0\,{=}\,s_{0}\,{=}\, S$ (point $B_2$), $u$ decreasing, $v\,{=}\, S$ kept constant. Curve being located in the lower half-plane, \refeq{MuPi} remains valid. So it does in the inflection case ($A_3B_3$): points $Z(u)$, $u\,{\in}\,[0,s_{0}]$, are located in the lower half-plane or its boundary, points $Z(v)$, $v\,{\in}\,[s_{0},S]$~--- in the boundary or upper half-plane. The angle $\mu(u,v)$ becomes strictly positive as soon as the point $Z(u)$ or $Z(v)$ deviates from the axis of $X$, and never attains the value $\pi$. For the three cases $2\pi m$-uncertainty in~\refeq{Defabuv} disappears, \EQ{m}, and \equa{ \widetilde\alpha = \tau(0)-\mu(0,S),\qquad \widetilde\beta = \tau(S)-\mu(0,S). } To normalize the curve, chords $A_iB_i$ should be brought to horizontal position, i.e. rotation through the angle \ $-\mu(0,S)\,{\in}\,(-\pi,0)$ should be applied. This means replacing $\tau(s)$ by \equa{ \widetilde\tau(s) = -\mu(0,S)+\tau(s), } whose values at $s=0$,~$S$ and~$s_{0}$ are equal to \equa{ \widetilde\tau(0)=\tau(0)-\mu(0,S)=\widetilde\alpha,\quad \widetilde\tau(S)=\widetilde\beta,\qquad \widetilde\tau(s_{0})= 0-\mu(0,S)\in(-\pi,0).\quad\qed{} } \end{pf} The angle $\widetilde\mu(u,v)$ can be defined in the cumulative sense similarly to $\widetilde\omega(u,v)$; limits like~\refeq{ABcum} for~$\widetilde\mu$ can be calculated starting from $\widetilde\mu(s,s)\,{=}\,\widetilde\tau(s)$. 
In an equivalent manner $\widetilde\mu(u,v)$ can be derived from~\refeq{OmegaCum}, where it is bracketed as $[\mu(u,v)+2\pi m]$: \Equa{MuCum}{ \widetilde\mu(u,v) = \Frac{\widetilde\tau(u){+}\widetilde\tau(v)}{2}-\widetilde\omega(u,v). } For future use we notice, as an immediate corollary of~\refeq{MuPi}, the following inequality: \Equa{DeltaPi}{ -\pi < \widetilde\mu(s_{0},S)-\widetilde\mu(0,s_{0}) < \pi. } \begin{thm} \label{LongVogtTheorem} With boundary angles, defined in cumulative sense, Vogt's theorem remains valid for a spiral of any length: \equa{ \mathop{\rm sign}\nolimits\widetilde\omega(u,v) = \mathop{\rm sign}\nolimits[k(v){-}k(u)],\quad\mbox{or}\quad \mathop{\rm sign}\nolimits(\widetilde\alpha{+}\widetilde\beta) = \mathop{\rm sign}\nolimits(k_2{-}k_1). } Except the circular subarc, wherein Vogt's angle $\widetilde\omega(u,v)$ is constant and zero, it is strictly monotonic function of the arc boundaries: \equa{ |\widetilde\omega(u_1,v_1)| < |\widetilde\omega(u,v)| \mbox{~~~~if~~~} [u_1,v_1] \subset [u,v], \mbox{~~~~and~~~} k(u)\neq k(v). } \end{thm} \begin{pf} Rewrite derivatives~\refeq{domega} involving normalized curvatures $\kappa_1\,{=}\, \frac{1}{2}h(u,v)k(u)$, $\kappa_2\,{=}\, \frac{1}{2}h(u,v)k(v)$, and, assuming increasing curvature, the first row of~\refeq{k1k2}: \Equa{dOmgdudv}{ \Pd{\widetilde\omega(u,v)}{u} = \Frac{\kappa_1+\sin\alpha}{h}\leqslant 0,\qquad \Pd{\widetilde\omega(u,v)}{v} = \Frac{\kappa_2-\sin\beta}{h} \geqslant 0. } Equalities are added to account for the possible occurrence of a circular subarc within the spiral. If it is not the case, or as soon as $k(u)\,{\not=}\, k(v)$, function $\widetilde\omega(u,v)$ grows up when $u$ decreases and/or $v$ increases. Because \EQ{\widetilde\omega(s,s)}, \ \GT{\widetilde\omega(u,v)} follows.\qed{} \end{pf} Recall that Vogt's angle is in fact the intersection angle of two circles. Taking into account continuity of $\widetilde\omega(u,v)$, and Prop.~\ref{InversionProp}, we conclude the following: \begin{cor} Inversion changes sign of Vogt's angle, preserving its absolute value. \end{cor} \Lwfig{t}{0pt}{figure09} Now consider the point $P\,{=}\, Z(s)$, moving along the normalized spiral from $A$ to $B$, and two subarcs of the spiral, $AP$ and $PB$ (\Reffig{figure09}). Denote $h_1(s)\,{=}\, h(0,s)\,{=}\,|AP|$ and $h_2(s)\,{=}\, h(s,S)\,{=}\,|BP|$, and apply similar notation to introduce functions \equa{ \begin{array}{llll} \alpha_1(s)\,{=}\, \widetilde\alpha(0,s),\quad& \beta_1(s) \,{=}\, \widetilde\beta(0,s),\quad& \omega_1(s)\,{=}\, \widetilde\omega(0,s),\quad& \mu_1(s) \,{=}\, \widetilde\mu(0,s),\\ \alpha_2(s)\,{=}\, \widetilde\alpha(s,S),\quad& \beta_2(s) \,{=}\, \widetilde\beta(s,S),\quad& \omega_2(s)\,{=}\, \widetilde\omega(s,S),\quad& \mu_2(s) \,{=}\, \widetilde\mu(s,S). \end{array} } From equations \equa{ \begin{array}{lclclc} x(s)&{}={}& h_1(s)\cos\mu_1(s)-c&{}={}&-h_2(s)\cos\mu_2(s)+c,\\ y(s)&{}={}& h_1(s)\sin\mu_1(s) &{}={}&-h_2(s)\sin\mu_2(s) \end{array} } it follows that \Equa{BiPolar}{ h_1(s) = \Frac{2c\sin\mu_2(s)}{\sin\delta(s)},\quad h_2(s) = \Frac{-2c\sin\mu_1(s)}{\sin\delta(s)},\quad \mbox{where~~~}\delta(s)=\mu_2(s)-\mu_1(s). } Function $\delta(s)$ inherits continuity and cumulative treatment from $\mu_{1,2}(s)$. It is the turning of the chord's direction at~$P$, or signed external angle of the triangle $APB$ at~$P$. 
If $P$ is in the upper half-plane, $\sin\delta(s)$ is negative; it is positive in the lower half-plane; $\delta(s)\,{=}\, 2\pi n$ if $P$ is within the chord, and $\delta(s)\,{=}\,\pi(2n{-}1)$ if $P$ belongs to chord's complement. The locus of points, where $\delta$ is constant, is the arc $\Aarc{-\delta}$. \begin{lem} \label{DeltaLemma} Function $\delta(s)$, defined on a spiral with increasing $(decreasing)$ curvature, is strictly increasing $(decreasing)$ from $\delta(0)\,{=}\, {-}\widetilde\alpha$ to $\delta(S)\,{=}\,\widetilde\beta$, taking value $-\pi\,{<}\,\delta(s_{0})\,{<}\,\pi$ at the reference point; its derivative is continuous and does not vanish in $[0,S]$. \end{lem} \begin{pf} Using \refeq{MuCum}, rewrite $\delta(s)=\mu_2(s){-}\mu_1(s)$ as \Equa{DeltaCum}{ \delta(s) = \left[\Frac{\widetilde\tau(s){+}\widetilde\beta}{2}-\widetilde\omega(s,S)\right] - \left[\Frac{\widetilde\alpha{+}\widetilde\tau(s)}{2}-\widetilde\omega(0,s)\right] = \Frac{1}{2}\rho(0,S)+\omega_1(s)- \omega_2(s). } Apply~\refeq{domega} to calculate derivative: \begin{eqnarray} \D{\delta}(s) &=& \D{\widetilde\omega}_1(s)-\D{\widetilde\omega}_2(s) = \Pd{\widetilde\omega(u,v)}{v}\Big|^{v=s}_{u=0} - \Pd{\widetilde\omega(u,v)}{u}\Big|^{v=S}_{u=s} =\nonumber\\[8pt] &=& \left[\Frac{1}{2}k(s)-\Frac{\sin\beta_1(s)}{h_1(s)}\right]- \left[\Frac{1}{2}k(s)+\Frac{\sin\alpha_2(s)}{h_2(s)}\right] = -\Frac{\sin\beta_1(s)}{h_1(s)}- \Frac{\sin\alpha_2(s)}{h_2(s)}.\label{DeltaInc} \end{eqnarray} Because $k(s)$ disappears in~\refeq{DeltaInc}, $\D{\delta}(s)$ is continuous even if the curvature jump occurs (smooth plot $\omega_1{-}\omega_2$ in \Reffig{figure09}, compared to $\omega_1$ and $\omega_2$, illustrates it). As the point $P(s)$ moves along the spiral, the arc $AP$ is lengthening, and the arc $PB$ shortening. From theorem~\ref{LongVogtTheorem} it follows that $\omega_1(s)$ is increasing, and $\omega_2(s)$ decreasing (the case of increasing curvature is being considered). The difference $\omega_1(s){- }\omega_2(s)$ in~\refeq{DeltaCum} is therefore an increasing function. So is $\delta(s)$: \ \GE{\D{\delta}(s)}. The only possibility for equality is that $\D{\widetilde\omega}_{1,2}(s)$ are simultaneously zeros, i.e. subarcs $AP$ and $PB$ are of constant curvature. This means that the spiral $APB$ is biarc, and $P$ is the point of tangency of its two circular arcs, as depicted in \Reffig{figure09}. If it is the case, then the last two fractions in~\refeq{DeltaInc} are equal to $k_1$ and $-k_2$ respectively, and \GT{\D{\delta}(s_P)\,{=}\,\frac{1}{2}(k_2{-}k_1)}. The derivative is thus strictly positive everywhere in $(0,S)$, $\delta(s)$ is strictly increasing. To calculate derivatives at the endpoints we have to resolve uncertanties $0/0$ in~\refeq{DeltaInc}. For $\D{\delta}(0)$ approximate the spiral near the startpoint by its circle of curvature: \equa{ \begin{array}{lcl} Z(s)& =& -c + \Frac{{\mathrm{i}}}{k_1} {\mathrm{e}}^{{\mathrm{i}}\alpha}(1- {\mathrm{e}}^{{\mathrm{i}} k_1 s}) = -c +{\mathrm{e}}^{{\mathrm{i}}\alpha}s + \Frac{{\mathrm{i}}}{2}{\mathrm{e}}^{{\mathrm{i}}\alpha} k_1s^2+O(s^3),\\[4pt] \delta(s)&=&\arg[c-Z(s)] - \arg[Z(s)+c]=\\[6pt] &=&\left[-\Frac{\sin\alpha}{2c}s+O(s^2)\right] - \left[\alpha+\Frac{1}{2}k_1 s+O(s^2)\right]= -\alpha-\Frac{k_1c{+}\sin\alpha}{2c} s+O(s^2). \end{array} } The coefficient at $s$ is the derivative $\D{\delta}(0)$. Similarly $\D{\delta}(S)$ at the endpoint~$B$ can be found. 
Both are positive due to~\refeq{k1k2}: \equa{ \D{\delta}(0)= -\Frac{k_1c{+}\sin\alpha}{2c}>0,\qquad \D{\delta}(S)= \Frac{k_2c{+}\sin\beta}{2c}>0. } The value of $\delta(s_{0})$ at the reference point is already estimated in~\refeq{DeltaPi}. Boundary values can be calculated from~\refeq{DeltaCum}: \equa{ \delta(0) = \Frac{\widetilde\beta{-}\widetilde\alpha}{2} + 0 - \Frac{\widetilde\alpha{+}\widetilde\beta}{2} = {-}\widetilde\alpha,\qquad \delta(S) = \Frac{\widetilde\beta{-}\widetilde\alpha}{2} + \Frac{\widetilde\alpha{+}\widetilde\beta}{2} - 0 = \widetilde\beta. \;\qed } \end{pf} \Lwfig{t}{0pt}{figure10} The plots $\omega_1{+}\omega_2$ in \Reffigs{figure09} and~\Figref{figure10} illustrate the following property of Vogt's angles: however large be the range $[0,\widetilde\omega]$ of the monotonic functions $\omega_1(s)$ and $\omega_2(s)$, their sum is enclosed in the interval of the width $\pi$: \begin{lem} Let \ $\Omega(s) = |\omega_1(s)|+|\omega_2(s)|-|\widetilde\omega|$. Then \Equa{o1o2}{ -\pi < \Omega(s) < 0 \mbox{~~~for~~~} s\in(0,S). } \end{lem} \begin{pf} For the case of increasing curvature all $\widetilde\omega$'s are nonnegative, and \equa{ \begin{array}{rcl} \Omega(s) &=& \omega_1(s)+\omega_2(s)-\widetilde\omega \eqref{OmegaCum} \left[\Frac{\widetilde\alpha{+}\widetilde\tau(s)}{2}-\mu_1(s)\right] + \left[\Frac{\widetilde\tau(s){+}\widetilde\beta}{2}-\mu_2(s)\right] - \Frac{\widetilde\alpha{+}\widetilde\beta}{2} =\\ &=& \widetilde\tau(s)-\mu_1(s)-\mu_2(s). \end{array} } Continue calculation of the derivative~\refeq{DeltaInc}, which is strictly positive: \equa{ \begin{array}{rcl} \D{\delta}(s) &=& -\Frac{h_2\sin\beta_1+h_1\sin\alpha_2}{h_1h_2} \,\eqref{BiPolar}\, 2c\Frac{\sin\mu_1\sin\beta_1-\sin\mu_2\sin\alpha_2}{h_1h_2\sin\delta}=\\[8pt] &=& 2c\Frac{\sin\mu_1\sin(\widetilde\tau{-}\mu_1)-\sin\mu_2\sin(\widetilde\tau{-}\mu_2)} {h_1h_2\sin\delta} = \Frac{-2c}{h_1h_2}\sin\Omega(s) > 0. \end{array} } The function $\Omega(s)$ takes zero values at the endpoints, and $\sin\Omega(s)$ is strictly negative in $(0,S)$. Invoking continuity of $\Omega(s)$, we conclude that it never reaches values 0 or $-\pi$ within $(0,S)$. \qed \end{pf} Now we recall Def.~\ref{DefShort1} to introduce some quantitative measure to the notion of a long spiral. The inflection point, if present, subdivides spiral into two branches, left and right. Points $s^{-}_i$, $i \,{=}\, 1,\ldots, M_1$ on the left branch, and $s^{+}_i$, $i \,{=}\, 1,\ldots, M_2$ on the right branch are those, where Def.~\ref{DefShort1} of short arc is violated, i.e. $0\,{<}\, s^{\pm}_i\,{<}\, S$, $\cos\tau(s^{\pm}_i)\,{=}\,{-}1$. Because \EQ{\sin\tau(s^{\pm}_i)}, this cannot happen at the inflection (Cor.~\ref{InflectionCor}); so, $s^{\pm}_i$ are distinct points, not continuous segments, as the inflection could be. If there are no such points or one branch is absent, the corresponding counter~$M_{1,2}$ is zero. Two other counters, $N_{1,2}$, are introduced in the context of Def.~\ref{DefShort2}. They count internal points where the spiral meets the left ($N_1$) and the right ($N_2$) complements of the chord. \begin{thm} \label{MN_Theorem} Counters $M_{1,2}$ and $N_{1,2}$ are pairwise equal. 
Cumulative boundary angles $\widetilde\alpha$ and $\widetilde\beta$ for a normalized spiral arc of increasing/decreasing curvature are \equa{ \begin{array}{lllll} k_1< k_2:\quad& \widetilde\alpha\,{=}\,\alpha{+}2\pi N_1,\quad & \widetilde\beta\,{=}\,\beta{+} 2\pi N_2 \quad & \mbox{\rm~with~~~} & \alpha,\beta\in(-\pi,\pi],\\ k_1\,{>}\, k_2:\quad& \widetilde\alpha\,{=}\,\alpha{-}2\pi N_1,\quad & \widetilde\beta\,{=}\,\beta{-} 2\pi N_2 \quad & \mbox{\rm~with~~~} & \alpha,\beta\in[-\pi,\pi), \end{array} } or, rewritten for the case of increasing curvature in more detailed form, \settowidth{\tmplength}{$\displaystyle{\widetilde\alpha\,{=}\,\alpha\;(N_1\,{=}\, 0),W}$} \renewcommand{\tmp}[1]{\makebox[\tmplength][l]{\mbox{$\displaystyle#1$}}} \begin{eqnarray} 0\leqslant k_1< k_2: \quad & -\pi<\alpha< 0,\; -\pi<\beta\leqslant\pi,\quad & \tmp{\widetilde\alpha\,{=}\,\alpha\;(N_1\ieq0),} \tmp{\widetilde\beta\,{=}\,\beta{+}2\pi N_2;} \label{CaseInc1}\\ k_1< 0<\,k_2: \quad & -\pi<\alpha\leqslant\pi,\; -\pi<\beta \leqslant\pi,\quad & \tmp{\widetilde\alpha\,{=}\,\alpha{+}2\pi N_1,} \tmp{\widetilde\beta\,{=}\,\beta{+}2\pi N_2;} \label{CaseInc2}\\ k_1< k_2\leqslant 0: \quad & -\pi<\alpha\leqslant\pi,\; -\pi<\beta < 0,\quad & \tmp{\widetilde\alpha\,{=}\,\alpha{+}2\pi N_1,} \tmp{\widetilde\beta\,{=}\,\beta\;(N_2\,{=}\, 0).} \label{CaseInc3} \end{eqnarray} \end{thm} \Lwfig{t}{0pt}{figure11} \begin{pf} Two cases, \refeq{CaseInc2} and~\refeq{CaseInc1}, are illustrated by \Reffig{figure11}. Right plots show functions $\widetilde\tau(s)$ and $\delta(s)$. Consider the inflection case~\refeq{CaseInc2}, \Reffig{figure11}a. Function $\widetilde\tau(s)$ is decreasing from $\widetilde\alpha$ to its minimal value $\widetilde\tau(s_{0})\,{\in}\,(-\pi,0)$ at the reference (inflection) point, and then increases to $\widetilde\beta$. In doing so, it meets $M_1{+}M_2$ times the levels $\pi(2m{-}1)$ (points $T_i$). The following sequence of its values can be derived according to definition of counters $M_{1,2}$: \equa{ \widetilde\alpha,\, \underbrace{\pi(2M_1{-}1),\, \pi(2M_1{-}3),\,\ldots,\,\pi}_{M_1},\, \widetilde\tau(s_{0}),\, \underbrace{\pi,\,3\pi,\,\ldots,\,\pi(2M_2{-}1)}_{M_2},\, \widetilde\beta. } By lemma~\ref{DeltaLemma}, function $\delta(s)$ is monotonic increasing from $-\widetilde\alpha$ to $\widetilde\beta$. Spiral cuts the complement of the chord at points $C_i$, when $\delta(s)$ meets levels $\pi(2n{-}1)$. The sequence of its values, similar to that of $\widetilde\tau(s)$, looks like \equa{ -\widetilde\alpha,\, \underbrace{-\pi(2M_1{-}1),\, -\pi(2M_1{-}3),\,\ldots,\,-\pi}_{N_1},\, \delta(s_{0}),\, \underbrace{\pi,\,3\pi,\,\ldots,\,\pi(2M_2{-}1)}_{N_2},\, \widetilde\beta. } The number of underbraced points is $N_1{+}N_2$ by definition of counters $N_{1,2}$. Separating them into two groups is justified as follows. First, the tangent at the inflection point separates two branches of the spiral and, by Cor.~\ref{InflectionCor}, two complements of the chord; hence, all the $N_1$ intersections with the left complement of the chord belong to the left branch and are followed by the set of the right-sided ones. Second, by lemma~\ref{DeltaLemma}, the value $\delta(s)\,{=}\, {-}\pi$ terminates the first group of points, whose number is $N_1$ by definition and $M_1$ by calculation. Similarly, $M_2\,{=}\, N_2$. 
From the above sequences it also follows that if the directions of the tangents are $\alpha$ and~$\beta$, with $|\alpha|,\,|\beta|<\pi$, then the values $\alpha{+}2\pi N_1$ and $\beta{+}2\pi N_2$ are to be assigned to the cumulative angles $\widetilde\alpha,\,\widetilde\beta$. If $\alpha$ or $\beta$ is equal to $\pm\pi$, the correspondence is kept by resolving the alternative in favour of $+\pi$ (and $-\pi$ in the case of decreasing curvature). In the case \refeq{CaseInc1} of increasing nonnegative curvature (\Reffig{figure11}b), the curve has no left branch, and \EQ{M_1} by definition. Point~$A$ is the reference point, so, by lemma~\ref{TauPiLemma}, $-\pi\,{<}\,\widetilde\alpha\,{<}\, 0$, \ $\widetilde\alpha\,{=}\,\alpha$; the tangent at $A$ is directed downwards. The region $\Mat{\Kl{1}}$ is either the half-plane to the left of \Kl{1} (if \EQ{k_1}), or the interior of the circle \Kl{1} located in this half-plane. It covers the entire curve (Cor.~\ref{MaterialCor}), and cannot include any point of the left complement of the chord. Therefore $N_1\,{=}\, 0\,{=}\, M_1$. The rest of the proof is similar to that of the inflection case. The equality $N_2\,{=}\, M_2$ results from the monotonic increasing behavior of the functions $\widetilde\tau(s)$ (from $\widetilde\alpha$ to $\widetilde\beta$) and $\delta(s)$ (from $-\widetilde\alpha$ to $\widetilde\beta$), and from counting the points $T_i$ and $C_i$. The proofs for the case~\refeq{CaseInc3} of increasing nonpositive curvature can be obtained by applying symmetry about the Y-axis. Symmetry about the X-axis provides the proof for the three similar cases of decreasing curvature.\qed{} \end{pf} \begin{cor} \label{EquivDefCor} Definitions {\,\rm\ref{DefShort1}\,} and {\,\rm\ref{DefShort2}\,} of a short spiral are equivalent. \end{cor} The polygonal line $ACDF$ in \Reffig{figure12} bounds the open region of possible values of $\widetilde\alpha,\widetilde\beta$ for spirals with increasing curvature. The boundary $CD$ results from Vogt's theorem, $AC$ and $DF$~--- from~\refeq{CaseInc1}--\refeq{CaseInc3}. The triangle $GDC$, including the half-open segments $(CG]$ and $[GD)$, is the boundary for short spirals. Biarc curves can be constructed with $\widetilde\alpha,\widetilde\beta$ in the interior of the tra\-pe\-zium $BCDEB$ (see the discussion in the next section). Similar regions for decreasing curvature are symmetric about the line $CD$ (\EQ{\widetilde\alpha{+}\widetilde\beta}). In the coordinate system $(\rho\,{=}\,\widetilde\alpha{-}\widetilde\beta,\,2\widetilde\omega\,{=}\,\widetilde\alpha{+}\widetilde\beta)$ these regions are described as follows: \begin{cor} The winding angle~$\rho$ of a spiral is limited by \equa{ \hphantom{0 < 2|\widetilde\omega| <{}} |\rho| < 2|\widetilde\omega|+ 2\pi. } If the spiral has no inflection {\rm(}cases \refeq{CaseInc1} and \refeq{CaseInc3}{\rm)}, then \equa{ 0 < 2|\widetilde\omega| < |\rho| < 2|\widetilde\omega|+ 2\pi. } \end{cor} If a spiral undergoes inversion, $|\widetilde\omega|$ remains constant, and $\rho$ can vary within these limits. The latter inequality is similar to the fact that a small non-closed circular arc (\EQ{|\widetilde\omega|}) can be transformed by inversion into an almost $2\pi$-circle, and vice versa ($0\,{\leqslant}\, |\rho| \,{<}\, 2|\widetilde\omega| {+} 2\pi\,{=}\, 2\pi$). \Topic{Biarc curves} Biarc curves, considered hitherto as a flexible tool for curve interpolation, play an important role in the theory of spiral curves.
However much has been written about biarcs (see~\cite{Biarc92} and references therein, \cite{LongBiarcs}~for long biarcs), the present description seems to have several advantages. The normalized position can be considered as canonical for these curves, and allows one to separate the parameters of shape from the positional ones. The proposed parametrization yields a set of simple and symmetric reference formulae. No separate treatment of ``C-shaped'' and ``S-shaped'' biarcs is needed. The specific cases of $\alpha\,{=}\,{\pm}\pi$ or $\beta\,{=}\,{\pm}\pi$, usually omitted, are taken into consideration. The condition of tangency of the two arcs forming a biarc is the equation \equa{ Q(\kappa_1, \kappa_2, \alpha, \beta)= ( \kappa_1+\sin\alpha)\,( \kappa_2-\sin\beta)+\sin^2\omega = 0 } (recall the hyperbolas in \Reffig{figure06}). This condition allows the two arcs to be a pair of equally directed straight lines (``biarc'' $h_0$ in \Reffig{figure06}b). The hyperbola can be parametrized as follows: \Equa{BFamily}{ \left\{ \begin{array}{l} \kappa_1(b)= -\sin\alpha - b^{-1}\sin\omega,\\ \kappa_2(b)=\hphantom{{-}}{}\sin\beta + b\sin\omega. \end{array} \right. } Note that $\omega$ is the half-width of the lense, equal to Vogt's angle $\widetilde\omega$ only if the biarc is short; otherwise $\widetilde\omega = \omega \pm \pi$. Parametrization~\refeq{BFamily} supplies the parameter $b$ for the one-parameter family of biarcs with the fixed chord $[-1,1]$ and fixed tangent directions $\alpha$ and~$\beta$. As established in the proof of Cor.~\ref{k1k2Cor} (\Reffig{figure06}d), biarcs with \EQ{\sin\omega} do not exist. Every value of~$b$ produces a unique point on the hyperbola, a unique pair of circles \Kl{1} and \Kl{2}, tangent at the point~$T$, a unique path $ATB$, and a unique biarc, denoted below as $\Biarcab{b}{\alpha}{\beta}$ or simply ${\cal B}(b)$. Solving the system of two equations \EQ{C(x,y;\Kl{1,2})}~\refeq{Cxy} yields the coordinates of the point $T\,{=}\,(x_0,y_0)$ of contact of the two arcs: \Equa{PointT}{ x_0 = \Frac{b^2-1}{\Delta},\quad y_0 = \Frac{2b\sin\gamma}{\Delta},\qquad \Delta = b^2+2b\cos\gamma+1\,. } The direction $\tau_0$ of the common tangent at this point is given by \Equa{Tau0}{ \begin{array}{l} \sin\tau_0 \,{=}\, {-}(b^2\sin\alpha + 2b\sin\omega + \sin\beta)/\Delta,\\ \cos\tau_0 \,{=}\,\hphantom{{-}}{}(b^2\cos\alpha + 2b\cos\omega + \cos\beta)/\Delta, \end{array} \quad \tan\Frac{\tau_0}{2}={-}\Frac {b\sin(\alpha/2)+\sin(\beta/2)} {b\cos(\alpha/2)+\cos(\beta/2)}\,. } The circular arc from $B$ to $A$, complementary to the arc~$\Aarc{\gamma}$, will be referred to as the {\em complement of the bisector} of the lense. Both the bisector and its complement form the circle~$\Gamma$, shown dashed in \Reffig{figure13}: \equa{ \Gamma = K(-1,0,\gamma,{-}\sin\gamma),\qquad C(x,y;\Gamma)\eqref{Cxy}{-}\sin\gamma(x^2+y^2-1)-2y\cos\gamma. } Except for property~\refeq{PropMonoLength}, the following properties of biarcs are either known or easy to prove by means of elementary geometry: \def\labelenumi{(\roman{enumi})} \def\theenumi{\roman{enumi}} \begin{enumerate} \item \label{PropCircle} {\em The locus of contact points $T(b)\,{=}\,(x_0(b),\,y_0(b))$~\refeq{PointT} is the circle~$\Gamma$. Points $T(b)$ with \GT{b} are located on the bisector, those with \LT{b}, on its complement. } \item \label{PropOmega} {\em All biarcs meet the circle~$\Gamma$ at the constant angle~$\omega$.
} \item {\em Definitions $\Biarcab{\infty}{\alpha}{\beta}\,{=}\,\Aarc{\alpha}$ and $\Biarcab{0}{\alpha}{\beta}\,{=}\,\Aarc{-\beta}$, are illustrated by \Reffig{figure13} and justified as follows\,}: \Equa{BiarcLim}{ \begin{array}{lllll} b\to \infty: & \kappa_1\to{-}\sin\alpha, \;& \kappa_2\to\infty, &\; T\to B,& \Biarcab{b}{\alpha}{\beta}\to \Aarc{\alpha};\\ b\to 0:& \kappa_1\to\infty, & \kappa_2\to\sin\beta,\; &\; T\to A,\:& \Biarcab{b}{\alpha}{\beta}\to \Aarc{-\beta}. \end{array} } \item \label{PropCum} {\em The possible values for the pair of counters $(N_1,N_2)$ are $(0,0)$, $(0,1)$ or $(1,0)$. For given tangents $\alpha$, $\beta$, the cumulative angles $\widetilde\alpha$, $\widetilde\beta$ can take values {\rm(}arrows mark the cases of increasing $\,({}^\Uparrow)$ or decreasing $\,({}^\Downarrow)$ curvature{\rm):}} \Equa{BiarcCum}{ \begin{array}{ll} \mbox{if~~~}\alpha{+}\beta>0,\quad& (\widetilde\alpha,\widetilde\beta) = (\alpha,\:\beta)^\Uparrow,\quad (\alpha{-}2\pi,\:\beta)^\Downarrow,\quad (\alpha,\:\beta{-}2\pi)^\Downarrow;\\ \mbox{if~~~}\alpha{+}\beta<0,\quad& (\widetilde\alpha,\widetilde\beta) = (\alpha,\:\beta)^\Downarrow,\quad (\alpha{+}2\pi,\:\beta)^\Uparrow,\quad (\alpha,\:\beta{+}2\pi)^\Uparrow . \end{array} } {\em The first group, biarcs with $(\widetilde\alpha,\widetilde\beta)\,{=}\,(\alpha,\beta)$, corresponds to \GT{b}; they are short and enclosed within the lense. Biarcs with \LT{b} are located outside the lense. They are long, unless \ $\alpha\,{=}\,{\pm\pi}$ or $\beta\,{=}\,{\pm\pi}$ {\rm(see~\ref{PropPi})}. } \item \label{PropBinf} {\em Discontinuous biarcs {\rm\,(such as shown in \Reffigs{figure06}a,b by dotted lines)\,} correspond to nonpositive parameter value~$b^{\star}$, defined by \equa{ b^{\star}\,{=}\, \left\{ \begin{array}{lll} -\Frac{\sin\omega}{\sin\alpha}\quad& \mbox{if~~~} |\alpha| \,{\geqslant}\, |\beta|\quad & [\,\kappa_1(b^{\star})\,{=}\, 0\,], \\[8pt] -\Frac{\sin\beta}{\sin\omega}\quad& \mbox{if~~~} |\alpha| \,{\leqslant}\, |\beta|\quad & [\,\kappa_2(b^{\star})\ieq0\,], \end{array} \right. . } Three biarcs with $b\,{\in}\, \{\infty;b^{\star};0\}$ subdivide the XY-plane into three regions, one of them being the lense. Every region encloses one of the three subfamilies~\refeq{BiarcCum}. } \item \label{PropPi} {\em If $\alpha\,{=}\,{\pm\pi}$ or $\beta\,{=}\,{\pm\pi}$ {\rm\,(\Reffig{figure06}c)}, one of these three regions, as well as one of three subfamilies~\refeq{BiarcCum}, disappear. The degenerate biarc is at the same time lense's boundary ($b^{\star}\,{=}\, 0$ or $b^{\star}\,{=}\,\infty$). All such biarcs are short; and \ $\Biarcab{b}{\pi}{\beta}\,{=}\,\Biarcab{-b}{-\pi}{\beta}$, \ $\Biarcab{b}{\alpha}{\pi}\,{=}\,\Biarcab{-b}{\alpha}{-\pi}$. } \item \label{PropXY} {\em Taking into account biarcs ${\cal B}(\infty)$, ${\cal B}(b^{\star})$ and ${\cal B}(0)$, there is a unique biarc ${\cal B}(b(x,y)\,)$, passing through every point $(x,y)$ in the plane, excluding poles $A$ and $B$. Namely, \Equa{Bxy}{ b(x,y) = \left\{ \begin{array}{ll} \Frac{\sin\omega[(x{+}1)^2+y^2]} {(1{-}x^2{-}y^2)\sin\alpha-2y\cos\alpha},\quad& \mbox{if~~} C(x,y;\Gamma)\sin\omega \leqslant 0,\\[12pt] \Frac{(1{-}x^2{-}y^2)\sin\beta+2y\cos\beta} {\sin\omega[(x{-}1)^2+y^2]},\quad& \mbox{if~~} C(x,y;\Gamma)\sin\omega\geqslant 0. \end{array} \right. 
} } \item \label{PropMonoLength} {\em The length $L(b)$ of a short biarc \equa{ L(b)=\Frac{\tau_0(b){-}\alpha}{k_1(b)}+ \Frac{\beta{-}\tau_0(b)}{k_2(b)}\qquad \left[ L(0)=\Frac{2c\beta}{\sin\beta},\quad L(\infty)=\Frac{2c\alpha}{\sin\alpha}\right], } is a strictly monotonic function of $b$, or constant if $\alpha\,{=}\,\beta$ {\rm(}the uncertainties $0/0$, wherever they occur, are easily resolved{\rm)}. } \end{enumerate} To prove~\refeq{PropCircle}, note that~\refeq{PointT} is the parametric equation of the circle~$\Gamma$. The ordinates of points belonging to the bisector must have the same sign as the angle $\gamma$, and this is achieved for \GT{b}: \equa{ \gamma \neq 0,\quad b > 0 \quad\Longrightarrow\quad \mathop{\rm sign}\nolimits y_0 = \mathop{\rm sign}\nolimits \gamma \qquad\mbox{(\Reffig{figure06}a)}. } If \EQ{\gamma}, the bisector coincides with the chord; this corresponds to \GT{b} as well: \equa{ \gamma = 0, \quad b > 0 \quad\Longrightarrow\quad y_0 = 0,\quad |x_0|=\Big|\Frac{b^2-1}{b^2+2b+1}\Big|= \Big|\Frac{b{-}1}{b{+}1}\Big| < 1 \qquad\mbox{(\Reffig{figure06}b)}. } In both cases the points $T(b)$ for biarcs with \LT{b} fill the complement of the bisector. Given that the locus of $T(b)$ is the circle~$\Gamma$, \refeq{PropOmega} is evident. Property \refeq{PropCum} can be derived from the fact that one of the two subarcs of a biarc is located inside the circle~$\Gamma$, and the other outside it. Only the outside one can meet the chord's complement, and this may happen only once. The points of contact $T(b)$, \GT{b}, are inside the lense. So are the subarcs $AT$, $TB$, and the entire biarc curve. Being inside the lense, {\em biarcs with \GT{b} are short}; their cumulative boundary angles are $(\widetilde\alpha,\widetilde\beta)\,{=}\,(\alpha,\beta)$. The points $T(b)$, \LT{b}, filling the complement of the bisector, are outside the lense together with the associated biarcs. The discontinuous biarc ${\cal B}(b^{\star})$ may arise only if one of the two curvatures is zero. Let \EQ{\kappa_1(b^{\star})} (\Reffig{figure06}a), i.e. $b^{\star}\,{=}\,{-}\sin\omega/\sin\alpha$. The parametric equation of the subarc $AT$ is \equa{ x(s)=-1+s\cos\alpha,\qquad y(s)=s\sin\alpha. } This ray reaches the point of contact $T$ \ \refeq{PointT} when $y(s_T)\,{=}\, s_T\sin\alpha\,{=}\, y_0$, i.e. \equa{ s_T = \Frac{2b^{\star}\sin\gamma}{({b^{\star}}^2{+}2b^{\star}\cos\gamma{+}1)\sin\alpha} = \Frac{-2\sin\gamma\sin\omega} {\sin^2\alpha{-}2\cos\gamma\sin\alpha\sin\omega{+}\sin^2\omega} = {-}2\Frac{\sin\omega}{\sin\gamma} } ($\omega{+}\gamma$ was substituted for $\alpha$). If \GT{s_T}, then tangency occurs before the ray goes off to infinity, and the biarc is normally continued by the second arc $TB$ with \NE{\kappa_2}. Discontinuity occurs under the condition $-\infty\,{\leqslant}\, s_T \,{<}\, 0$, equivalent to $\cos\alpha\,{\leqslant}\,\cos\beta$, and rewritten in~\refeq{PropBinf} as $|\alpha|\,{\geqslant}\,|\beta|$. To derive~\refeq{Bxy}, one should first decide to which subarc of the sought-for biarc $\Biarcab{b}{\alpha}{\beta}$ the point $(x,y)$ belongs. The circle~$\Gamma$ separates the two subarcs, and the decision depends on the sign of $C(x,y;\Gamma)$. The arc $AT$ goes to the left of~$\Gamma$ ($AT\,{\in}\,\Mat{\Gamma}$, \ \LE{C(x,y;\Gamma)}) if the vector defined by $\alpha$ points to the left of the vector defined by $\gamma$: $\gamma<\alpha\,{=}\,\omega{+}\gamma$, i.e. \GT{\omega}, \ \GT{\sin\omega}. And $AT$ goes to the right of~$\Gamma$ if \LT{\sin\omega}.
So, \equa{ \begin{array}{rcl} (x,y)\,{\in}\, \Arc{AT}\,{\in}\,\Kl{1} &\Longleftrightarrow& [C(x,y;\Gamma)\ile0\:\wedge\:\sin\omega\igt0] \;\vee\; [C(x,y;\Gamma)\ige0\:\wedge\:\sin\omega\ilt0]\\ &\Longleftrightarrow& C(x,y;\Gamma){\cdot}\sin\omega\ile0. \end{array} } Under this condition define $b$ from the implicit equation~\refeq{Cxy} of circle~\Kl{1}: \equa{ (\underbrace{-\sin\alpha{-} b^{-1}\sin\omega}_{\kappa_1}) \left[(x{+}1)^2{+}y^2\right]+ 2(x{+}1)\sin\alpha - 2y\cos\alpha = 0. } Below we prove property \refeq{PropMonoLength}. \begin{pf} For the case of symmetric lense, i.e. $\alpha\,{=}\,\beta\,{=}\,\omega$, \EQ{\gamma}, $\tau_0(b)\,{=}\, {-}\omega\,{=}\, const$, and \equa{ L(b)=\Frac{-2\omega}{-\sin\omega- b^{-1}\sin\omega}+ \Frac{2\omega}{\sin\omega+b\sin\omega} =\Frac{2\omega}{\sin\omega}\, } ($c\,{=}\, 1$ assumed). For general case, \NE{\gamma}, we replace the parameter $b$ by $\theta\,{=}\, \tau_0(b)$. As $b$ varies from 0 to $\infty$, $\theta$ varies (monotonously) from $\beta$ to $-\alpha$. Solving~\refeq{Tau0} for $b$ yields \newcommand{\ffrac}[1]{\frac{#1}{2}} \newcommand{\Sam}[1]{\sin^{#1}\ffrac{\alpha{-}\theta}} \newcommand{\Sap}[1]{\sin^{#1}\ffrac{\alpha{+}\theta}} \newcommand{\Sbm}[1]{\sin^{#1}\ffrac{\beta{-}\theta}} \newcommand{\Sbp}[1]{\sin^{#1}\ffrac{\beta{+}\theta}} \equa{ b=-\Frac{\tan\ffrac{\theta}\cos\ffrac{\beta} +\sin\ffrac{\beta}} {\tan\ffrac{\theta}\cos\ffrac{\alpha}+\sin\ffrac{\alpha}} =-\Frac{\Sbp{}}{\Sap{}}\:, } and \equa{ \begin{array}{lll} \kappa_1(\theta) =\sin\gamma\, \Frac{\Sam{}}{\Sbp{}},\;& \kappa_2(\theta) =\sin\gamma\, \Frac{\Sbm{}}{\Sap{}},\;& \kappa_2{-}\kappa_1=-\Frac{\sin^2\gamma\sin\omega}{\Sap{}\Sbp{}},\\[9pt] \D{\kappa_1}(\theta)=\Frac{-\sin\gamma\sin\omega}{2\Sbp{2}},& \D{\kappa_2}(\theta)=\Frac{-\sin\gamma\sin\omega}{2\Sap{2}}.& \end{array} } The length of biarc as a function of $\theta$, and its derivative appear as \equa{ \begin{array}{lcl} L(\theta)&=&\Frac{\theta{-}\alpha}{\kappa_1(\theta)}+ \Frac{\beta{-}\theta}{\kappa_2(\theta)}\,; \\[9pt] \D{L}(\theta)&=&\Frac{ \kappa_1-(\theta{-}\alpha)\D{\kappa_1}}{\kappa_1^2}- \Frac{\kappa_2+(\beta{-}\theta)\D{\kappa_2}}{\kappa_2^2} = \Frac{\kappa_1\kappa_2(\kappa_2{-}\kappa_1)- \D{\kappa_1}\kappa_2^2(\theta{-}\alpha)- \D{\kappa_2}\kappa_1^2(\beta{-}\theta)}{\kappa_1^2\kappa_2^2}=\\[9pt] &=&\Frac{\sin\omega \left[-2\sin\gamma\Sam{}\Sbm{}+ (\theta{-}\alpha)\Sbm{2}+ (\beta{-}\theta)\Sam{2} \right] } {2\sin\gamma\Sam{2}\Sbm{2}} . \end{array} } We have to prove that the above bracketed expression is of constant sign. Applying one more substitution, \equa{ \theta\,{=}\,\omega-x, \mbox{~~~i.e.~~} \alpha-\theta=x+\gamma,\qquad \beta-\theta=x-\gamma, } denote this expression as $F(x;\gamma)$: \equa{ \begin{array}{lcl} F(x;\gamma)&=& -2\sin\gamma\sin\ffrac{x{+}\gamma}\sin\ffrac{x{-}\gamma} -(x{+}\gamma)\sin^2\ffrac{x{-}\gamma} +(x{-}\gamma)\sin^2\ffrac{x{+}\gamma}\\[8pt] &=& \cos x(\gamma\cos\gamma{+}\sin\gamma) + x\sin x\sin\gamma - \sin\gamma\cos\gamma - \gamma . \end{array} } Since $\theta\,{\in}\,[-\alpha,\beta]\,{\in}\,[-\pi,\pi]$ and $|\omega|\,{<}\,\pi$, it is sufficient to explore the interval $x\,{\in}\,[-2\pi,2\pi]$. Because $F(x;\gamma)$ is even with respect to~$x$ \ $[F(-x;\gamma)\,{=}\, F(x;\gamma)\,]$, and odd with respect to the parameter \ $[F(x;-\gamma)\,{=}\, {-}F(x;\gamma)\,]$, we explore its behavior only for \GT{\gamma} and \GE{x}. The plot of $F(x;\frac{2}{3}\pi)$ is shown on the left side of \Reffig{figure14}. 
To find the extrema of $F(x;\gamma)$, solve the equation \EQ{\D{F_x}(x;\gamma)}: \equa{ x\cos x\sin\gamma - \gamma\sin x\cos\gamma = 0. } Its non-negative roots \ $x_0,x_1,x_2,\ldots$ \ are: \EQ{x_0}, and those of the equation $x\cot x\,{=}\, \gamma\cot\gamma$; in particular, $x_1\,{=}\,\gamma$. The roots are shown as dots on the right side of \Reffig{figure14}, where the function $x\to x\cot x$ is plotted. From the piecewise monotonicity of this function it is clear that $x_1\,{\in}\,(0,\pi)$, $x_2\,{\in}\,(\pi,2\pi)$, etc. We can now describe the behavior of $F(x;\gamma)$ in $x\,{\in}\,[0,2\pi]$ as follows. At \EQ{x} the function has a negative local minimum \equa{ F(0;\gamma)=2\sin^2\ffrac{\gamma}(\sin\gamma{-}\gamma)<0,\quad F^{\prime\prime}_{xx}(0;\gamma)=\sin\gamma{-}\gamma\cos\gamma>0. } It increases to the maximum \EQ{F(x_1;\gamma)\,{=}\, F(\gamma;\gamma)}, and then decreases to the subsequent minimum at $x\,{=}\, x_2$. While increasing from $x_2$ to $x_3\,{\in}\,(2\pi,3\pi)$, the function passes through the boundary $x\ieq2\pi$ of the interval under investigation, still remaining negative at this point: \LT{F(2\pi;\gamma)\,{=}\, F(0;\gamma)}. It is therefore negative in $[-2\pi,2\pi]$, except for two zeros at $x\,{=}\,{\pm}\gamma$. The derivative $\D{L}(\theta)$ does not change sign, and $L(\theta)$ is strictly monotonic. \qed{} \end{pf} \Lwfig{t}{0pt}{figure14} \Topic{Positional inequalities for short spirals} The following theorem generalizes the earlier results for ``very short'' spirals (theorem~3 in~\cite{Spiral}) and for convex ones (theorem~5 in~\cite{Theorem5}). \begin{thm} \label{LenseTheorem} A short spiral arc is located within its lense. Except for the endpoints, the arc has no common points with the lense's boundary. \end{thm} \begin{pf} As the point $P(s)$ moves along the curve, the circular arcs $APB\,{=}\,\Aarc{-\delta(s)}$, containing $P$, continuously fill the lense (\Reffig{figure15}). Because \ $-\delta(0)\,{=}\, \alpha$, and $\delta(s)$ is strictly monotonic (lemma~\ref{DeltaLemma}), the curve at the very beginning deviates immediately from the boundary arc $\Aarc{\alpha}$ into the interior of the lense. Near the end point the behavior is similar.\,\qed \end{pf} \begin{cor} A short spiral may cut its chord only once; and this occurs if and only if the tangent angles $\alpha$ and $\beta$ are nonzero and of the same sign. \end{cor} A more severe limitation can be derived if the boundary curvatures are known, as shown in \Reffig{figure16}. For every inner point of a spiral the unique biarc $\Biarcab{b}{\alpha}{\beta}$ passing through it can be constructed. The subfamily of biarcs thus generated fills the {\em bilense\,}, i.e. the region bounded by two biarcs, $AT_1B$ and $AT_2B$. The arcs $AT_1$ and $T_2B$ belong to the boundary circles of curvature of the enclosed spiral. Returning to \Reffigs{figure06}a,b,c, this corresponds to the projection of the point $K\,{=}\,(\kappa_1,\kappa_2)$ onto the hyperbola \EQ{Q}, yielding two points, $H_1\,{=}\,(\kappa_1,g_2)$ and $H_2\,{=}\,(g_1,\kappa_2)$. They provide in turn two biarcs, marked as~$h_1$ and~$h_2$, and the bilense. We are going to prove that any short spiral, whose boundary parameters $(\alpha,\beta,\kappa_1,\kappa_2)$ belong to the closed region $KH_1H_2K$, is covered by the corresponding bilense.
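As a small numerical illustration (a sketch only: the boundary data below are chosen arbitrarily, and the normalized position $c=1$ with $\omega=(\alpha+\beta)/2$ is assumed), the projected points $H_1$, $H_2$ and the parameters $b$ of the biarcs $h_1$, $h_2$ can be computed directly from the tangency condition \EQ{Q} and parametrization~\refeq{BFamily}:
\begin{verbatim}
# Illustrative sketch (arbitrary data): project K = (kappa1, kappa2) onto the
# tangency hyperbola Q = 0 and recover the parameters b of the biarcs h1, h2.
# Normalized position c = 1 is assumed, with omega = (alpha + beta)/2.
import math

alpha, beta = 0.9, 0.4            # boundary tangent angles (radians)
kappa1, kappa2 = -1.5, 1.8        # boundary curvatures, kappa1 < kappa2
omega = 0.5 * (alpha + beta)

def Q(k1, k2):
    return (k1 + math.sin(alpha)) * (k2 - math.sin(beta)) + math.sin(omega)**2

assert Q(kappa1, kappa2) < 0      # data of a non-biarc spiral

# Projections of K onto the hyperbola Q = 0:
g2 = math.sin(beta) - math.sin(omega)**2 / (kappa1 + math.sin(alpha))   # H1 = (kappa1, g2)
g1 = -math.sin(alpha) - math.sin(omega)**2 / (kappa2 - math.sin(beta))  # H2 = (g1, kappa2)
print(Q(kappa1, g2), Q(g1, kappa2))          # both vanish up to rounding

# Biarc parameters obtained by inverting kappa1(b), kappa2(b):
b1 = -math.sin(omega) / (kappa1 + math.sin(alpha))
b2 = (kappa2 - math.sin(beta)) / math.sin(omega)
print(b1, b2)                                # b1 < b2 for a bilense of positive width
\end{verbatim}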
If the point $K$ is moved backwards and upwards to infinity (or, in the case of decreasing curvature, forwards and downwards within the lower right branch of the hyperbola), the biarcs $h_1$ and $h_2$ approach the boundaries of the lense~\refeq{BiarcLim}, covering all short spirals with given tangents $\alpha,\beta$ and $-\infty\,{\leqslant}\, \kappa_1\,{<}\, \kappa_2\,{\leqslant}\, \infty$. This was the subject of theorem~\ref{LenseTheorem}. \begin{defn} \label{DefBilense} \rm A {\em bilense} $\Bilense{\alpha,\beta,b_1,b_2}$, $0\,{\leqslant}\, b_1 \,{<}\, b_2 \,{\leqslant}\,\infty$, generated by a short non-biarc spiral with end conditions \equa{ \alpha,\;\beta,\quad \mbox{such~that}\quad |\omega|\neq\pi, \quad\mbox{and}\quad \kappa_1= {-}\sin\alpha - b_1^{-1}\sin\omega,\quad \kappa_2=\sin\beta + b_2\sin\omega, } is the region bounded by the two biarcs $\Biarcab{b_1}{\alpha}{\beta}$ and $\Biarcab{b_2}{\alpha}{\beta}$, namely, \equa{ \Bilense{\alpha,\beta,b_1,b_2} = \{ (x,y):\; (x,y)\in\Biarcab{b}{\alpha}{\beta} \mbox{~for some~} b\in[b_1,b_2]\,\}. } \end{defn} Choosing the parent spiral to be non-biarc, we force $Q$ to be strictly negative, and avoid a bilense of zero width. The condition $b_1\,{<}\, b_2$ in Def.~\ref{DefBilense}, for both increasing and decreasing curvature, results from \equa{ Q(\kappa_1, \kappa_2,\alpha,\beta) = \left(1-\Frac{b_2}{b_1}\right)\sin^2\omega < 0. } \begin{thm} \label{BilenseTheorem} All normalized short spirals with fixed boundary tangents $\alpha,\,\beta$, such that $|\omega|\neq\pi$, and curvature $\kappa(s)$, such that $\kappa_1\,{\leqslant}\, \kappa(s) \,{\leqslant}\, \kappa_2$, or $\kappa_1\,{\geqslant}\, \kappa(s) \,{\geqslant}\, \kappa_2$, are covered by the corresponding bilense $\Bilense{\alpha,\beta,b_1,b_2}$, \equa{ b_1=\Frac{-\sin\omega}{\kappa_1+\sin\alpha},\quad b_2=\Frac{\kappa_2-\sin\beta}{\sin\omega}\,. } \end{thm} \begin{pf} \Reffig{figure16} clarifies the proof, which is based on the monotonicity of the map \equa{ \mbox{point on the curve}\;\rightarrow\; \mbox{biarc through this point,} } i.e. the monotonicity of the function $b(s)\,{=}\, b(x(s),y(s))$ defined by~\refeq{Bxy}. Consider the case of increasing curvature. Denote by $C\,{=}\, Z(\bar{s})$ the point where the spiral meets the bisector of the lense. From the proof of theorem~\ref{LenseTheorem} it is clear that such a point exists, is unique, and that the subarc $AC$ of the spiral is located in the upper half of the lense. From $\delta(\bar{s})=-\gamma$ we derive the equality $\omega_1(\bar{s})\,{=}\,\omega_2(\bar{s})$: \equa{ \begin{array}{rcl} 0&=&2[\delta(\bar{s}){+}\gamma] = 2\mu_2(\bar{s}){-}2\mu_1(\bar{s})+\alpha{-}\beta =\\[6pt] &=&[\underbrace{\alpha{-}\mu_1(\bar{s})}_{\alpha_1} + \underbrace{\tau(\bar{s}){-}\mu_1(\bar{s})}_{\beta_1}] - [\underbrace{\tau(\bar{s}){-}\mu_2(\bar{s})}_{\alpha_2} + \underbrace{\beta{-}\mu_2(\bar{s})}_{\beta_2}] = 2[\omega_1(\bar{s})-\omega_2(\bar{s})]. \end{array} } Denote $\omega_0 = \omega_1(\bar{s})\,{=}\,\omega_2(\bar{s})$ and apply inequality~\refeq{o1o2}, taking into account that $0\,{<}\,\omega\,{<}\,\pi$: \equa{ \Omega(\bar{s}) = 2\omega_0{-}\omega < 0 \quad\Longrightarrow\quad \omega_0 < \Frac{\omega}{2} < \Frac{\pi}{2}\,. } The map $b(s)\,{=}\, b(x(s),y(s)\,)$ for the points of $AC$ is determined by the first expression of~\refeq{Bxy}.
Applying the substitutions \equa{ \begin{array}{l} h_1\,{=}\,\sqrt{(x{+}1)^2+y^2},\qquad D\,{=}\,(1{-}x^2{-}y^2)\sin\alpha-2y\cos\alpha \,{=}\, \Frac{h_1^2\sin\omega}{b},\\ \D{x}=\cos\tau, \quad \D{y}=\sin\tau, \qquad x{+}1 = h_1\cos\mu_1,\quad y = h_1\sin\mu_1 \end{array} } ($x$, $y$, $h_1$, $\mu_1$, $\tau$, $D$, $b$ are functions of $s$), we calculate the derivative $\D{b}(s)$: \equa{ \begin{array}{rcl} \Dfrac{b}{s} &=& \Dfrac{}{s}\:\Frac{h_1^2\sin\omega }{D} =2\sin\omega \Frac{[(x{+}1)\D{x}{+}y\D{y}]D+ h_1^2[(x\D{x}{+}y\D{y})\sin\alpha+\D{y}\cos\alpha]} {D^2} =\\[9pt] &=& 2\sin\omega \Frac{[(x{+}1)^2-y^2]\sin(\alpha{+}\tau) - 2y(x{+}1)\cos(\alpha{+}\tau)} {D^2} =\\[9pt] &=& 2\sin\omega \Frac{h_1^2 [\cos2\mu_1\sin(\alpha{+}\tau) - \sin2\mu_1\cos(\alpha{+}\tau)]} {[h_1^2\,b^{-1}\sin\omega]^2} = \Frac{2 b^2 \sin 2\omega_1}{h_1^2\,\sin\omega} \geqslant 0. \end{array} } This expression is non-negative in $(0,\bar{s}]$ because $\omega_1(s)$ is monotonic increasing with $s$ up to the value $\omega_1(\bar{s})\,{=}\, \omega_0\,{<}\,\frac{\pi}{2}$. It may be zero while \EQ{\omega_1(s)}, i.e. in the case of an initial circular subarc, partially coincident with the initial curvature element of the spiral (arc $AT_1$). The value $b(0)$ can be calculated from the expansions \equa{ \begin{array}{l} x(s)=-1+s\cos\alpha-\Frac{\kappa_1}{2}s^2\sin\alpha+O(s^3),\quad y(s)=s\sin\alpha+\Frac{\kappa_1}{2}s^2\cos\alpha+O(s^3):\\ b(0)=\lim\limits_{s\to0}b(s)= \lim\limits_{s\to0}\Frac{(4+\kappa_1^2s^2)\sin\omega} {-4(\kappa_1{+}\sin\alpha)-\kappa_1^2s^2\sin\alpha} = \Frac{-\sin\omega}{\kappa_1+\sin\alpha} = b_1. \end{array} } Similarly, from the second expression of~\refeq{Bxy}, $b(S)\,{=}\, b_2$, and, due to $2\omega_2(s)\,{<}\,\pi$ in $[\bar{s},S)$, the derivative remains non-negative in $[\bar{s},S)$: \equa{ \D{b}(s) =\Dfrac{}{s}\: \Frac{(1{-}x^2{-}y^2)\sin\beta+2y\cos\beta} {\sin\omega[(x{-}1)^2+y^2]} = \ldots = \Frac{2\,\sin 2\omega_2}{h_2^2\,\sin\omega}\geqslant 0. } For decreasing curvature the functions $\omega_{1,2}(s)$ as well as the constant $\omega$ change sign; $b(s)$ remains monotonic increasing. \qed{} \end{pf} \Reffig{figure17} illustrates an application of these results to a spiral as a whole. A spiral is presented in \Reffig{figure17}a by a set of points $P_1,\ldots,P_n$, $n\,{=}\, 11$, and tangents $\tau_1,\,\tau_n$ at the endpoints. The constraint was imposed that the subarcs $\Arc{P_iP}_{i{+}1}$ be one-to-one projectable onto the corresponding chord (i.e. $|\widetilde\alpha_i|,\,|\widetilde\beta_i|\,{<}\,\pi/2$). For the practice of curve interpolation this is quite a weak limitation. \Lwfig{t}{0pt}{figure17} In \Reffig{figure17}b circular arcs $A_1,\ldots,A_n$ are constructed as follows: the arc $A_1$ passes through the points $P_1$ and $P_2$, matching the given tangent $\tau_1$ at $P_1$; the arcs $A_i$, $i\,{=}\, 2,\ldots,n{-}1$, pass through three consecutive points $P_{i{-}1},P_i,P_{i{+}1}$; the arc $A_n$ passes through the two points $P_{n{-}1},P_n$, matching the given tangent $\tau_n$ at the endpoint. Thus on each chord $P_iP_{i+1}$ we get a lense, bounded by the arcs $A_i$ and $A_{i{+}1}$. The following was proven in \cite{Spiral}, theorem~5: \begin{itemize} \item[$\bullet$] {\em The sequence $k_1,\ldots,k_n$ of curvatures of the arcs $A_i$ is monotonic.} \item[$\bullet$] {\em The union of such lenses covers all spirals matching the given interpolation data.} \end{itemize} The width of this region is of the order $O(h_i^3)$, \ $h_i\,{=}\, |P_iP_{i{+}1}|$. \Reffig{figure17}c shows this construction for 15 points.
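A minimal numerical sketch of this three-point construction is given below (it is an illustration only: the sample points are generated from a synthetic curve with linearly increasing curvature, not from the data of \Reffig{figure17}, and the endpoint arcs $A_1$, $A_n$ matching the tangents are omitted):
\begin{verbatim}
# Sketch: signed curvature of the circle through three consecutive sample
# points, and a check that the resulting sequence is monotonic.  The sample
# curve (curvature k(s) = s) is synthetic and serves only as an example.
import numpy as np

def three_point_curvature(P1, P2, P3):
    # signed curvature of the circumcircle of P1, P2, P3 (counterclockwise > 0)
    a, b = P2 - P1, P3 - P1
    cross = a[0] * b[1] - a[1] * b[0]          # twice the signed triangle area
    return 2.0 * cross / (np.linalg.norm(a) * np.linalg.norm(b)
                          * np.linalg.norm(P3 - P2))

# Synthetic spiral: x' = cos(theta), y' = sin(theta), theta(s) = s^2/2.
s = np.linspace(0.0, 2.0, 2001)
theta = 0.5 * s**2
x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * np.diff(s))))
y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * np.diff(s))))
P = np.column_stack([x, y])[::200]             # eleven sample points

k = [three_point_curvature(P[i-1], P[i], P[i+1]) for i in range(1, len(P) - 1)]
print(np.round(k, 3))
print(all(k1 <= k2 for k1, k2 in zip(k, k[1:])))   # monotonic, as stated above
\end{verbatim}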
The influence of discretization is also seen by comparing the left and right branches of the spiral. Thus a measure of the determinacy of a spiral by an inscribed polygonal line is provided, without invoking any empirics of particular interpolation algorithms. \Topic {Existence theorems} The converse of Vogt's theorem, namely, the problem of joining two line or curvature elements by a spiral arc, was considered by A.\,Ostrowski~\cite{Ostrowski}. His solution concerns only $C^2$-continuous convex spirals. It states that for two given line elements Vogt's theorem is a sufficient condition for the existence of such a spiral. If the curvatures $R_1^{-1}$ and $R_2^{-1}$ at the endpoints are involved, the additional condition $D\,{<}\,|R_2{-}R_1|$ is required, $D$~being the distance between the centres of the two boundary circles of curvature. Rewritten in terms of this article, this condition is a particular case of the inequality \LT{Q(\Kl{1},\Kl{2})}. Theorem~2 in~\cite{Spiral} establishes this condition for ``very short'' spirals, regardless of convexity, and includes the biarc case as the unique solution if \EQ{Q(\Kl{1},\Kl{2})}. \begin{thm} \label{ExistenceShort} The necessary and sufficient conditions for the existence of a short spiral curve, matching at the endpoints two given curvature elements~\refeq{K1K2c1}, are: the modified Vogt's theorem~\refeq{VogtShort} and the inequality \LE{Q(\Kl{1},\Kl{2})}; if \EQ{Q}, the biarc is the unique solution. \end{thm} \begin{pf} Theorems \ref{ShortVogtTheorem} and \ref{MainTheorem} prove the necessity of these conditions. To prove sufficiency, we construct a smooth three-arc spiral curve whose boundary curvature elements are \Kl{1} and \Kl{2}, shown as $AC$ and $DB$ in \Reffig{figure18}. Apply inversion about the circle \equa{ \Kl{}^{\star}=\Aarc{\gamma^{\star}}= \Kr{-1,0,\gamma^{\star},\kappa^{\star}},\qquad \gamma^{\star}=\gamma/2,\quad \quad\kappa^{\star}=-\sin(\gamma^{\star}), } shown dotted-dashed; its centre is the point $O\,{=}\,(0,y^\star)\,{=}\,(0,-\cot\gamma^\star)$. The inversion is chosen to make the lense symmetric: its former bisector $\Aarc{\gamma}$ is transformed into the chord $\Aarc{0}$. For \EQ{\gamma} this is just a symmetry about the X-axis. If $\gamma\neq0$ (which means $|\omega|\,{<}\,\pi$), we have to verify that all the points of the interior of the lense remain in the interior of its image, i.e. that $O$ is located outside the lense. To do so, we write the lense's boundary ordinates as $y_1\,{=}\,\tan\frac{\alpha}{2}$ and $y_2\,{=}\,{-\tan\frac{\beta}{2}}$, and check the sign of the product \equa{ (y^\star-y_1)(y^\star-y_2)= \left(-\cot\Frac{\gamma}{2} - \tan\Frac{\alpha}{2}\right) \left(-\cot\Frac{\gamma}{2} + \tan\Frac{\beta}{2}\right) =\Frac{\cos^2\frac{\omega}{2}} {\sin^2\frac{\gamma}{2} \cos\frac{\alpha}{2} \cos\frac{\beta}{2}}>0. } \Lwfig{t}{0pt}{figure18} \noindent The new boundary angles $\D{\alpha},\:\D{\beta}$ can be calculated from the conditions $ \gamma^{\star}\,{=}\,\frac{1}{2}(\alpha{+}\D{\alpha})$, $-\gamma^{\star}\,{=}\,\frac{1}{2}(\beta{+}\D{\beta})$; and the new curvatures $\D{\kappa_1},\:\D{\kappa_2}$, from Prop.~\ref{InvCurvatureProp}: \equa{ \begin{array}{lcll} \D{\kappa_1}&=& 2\kappa^{\star}(1{-}2Q_{01})-\kappa_1,\quad& Q_{01}=Q(\Kl{}^{\star},\Kl{1}) =\sin^2[(\alpha{-}\gamma^{\star})/2], \\[3pt] \D{\kappa_2}&=& 2\kappa^{\star}(1{-}2Q_{02})-\kappa_2, & Q_{02}=Q(\Kl{}^{\star},\Kl{2}) =\sin^2[(\gamma^{\star}{+}\beta)/2] \end{array} } (to calculate the $Q$'s, the last equation of~\refeq{Qabc} was used).
This yields \equa{ \D{\alpha}=\D{\beta}=\D{\omega}= -\omega, \qquad \D{\kappa_1}= [-\kappa_1{-}\sin\alpha]+\sin\omega,\quad \D{\kappa_2}=-[\kappa_2{-}\sin\beta] - \sin\omega. } (e.g., $\D{\kappa_1}\,{=}\, {-}2\sin\gamma^\star \cos(\alpha{-}\gamma^\star){-}\kappa_1 \,{=}\,{-}\sin\alpha{-}\sin(2\gamma^\star{-}\alpha) {-}\kappa_1 =-\kappa_1{-}\sin\alpha{+}\sin\omega$). For the initial conditions, corresponding to increasing curvature (i.e. $0\,{<}\,\omega\,{\leqslant}\,\pi$, $\kappa_1\,{<}\,\kappa_2$), $\sin\omega$ is non-negative, and the bracketed terms are positive. The latter is direct consequence of inequality \LE{Q(\Kl{1},\Kl{2})} (Cor.~\ref{k1k2Cor}). New end conditions correspond to decreasing curvature with boundary curvatures opposite in sign for whichever signs of~$\kappa_{1,2}$: $\D{\kappa_1}\igt0\,{>}\,\D{\kappa_2}$. Inversed curvature elements are shown as $AC_1$ and $D_1B$. Show that point $C_1\,{=}\,(x_1,0)$ is located to the left of $D_1\,{=}\,(x_2,0)$: \equa{ \begin{array}{ll} AC_1=-\Frac{2\sin\D{\alpha}}{\D{\kappa_1}}= \Frac{2\sin\omega}{\sin\omega-\kappa_1-\sin\alpha},\quad& x_1=-1+AC_1=\Frac{\sin\omega+\kappa_1+\sin\alpha}{\sin\omega-\kappa_1-\sin\alpha},\\ & \\ D_1B=\hphantom{{-}}{}\Frac{2\sin\D{\beta}}{\D{\kappa_2}}= \Frac{2\sin\omega}{\sin\omega+\kappa_2-\sin\beta},\quad& x_2=\hphantom{{-}}{}1-D_1B=\Frac{\kappa_2-\sin\beta-\sin\omega}{\sin\omega+\kappa_2-\sin\beta}. \end{array} } The denominators in the expressions for $x_{1,2}$ being positive, it is easy to check that the condition $x_1\,{\leqslant}\, x_2$ is equivalent to \LE{Q}. The equalities, if occur, are simultaneous, and two arcs form a unique biarc solution. Otherwise the existence of straight line $L\,{=}\,\Kr{x_0,0,\lambda,0}$, smoothly joining two given arcs, is evident; its parameters $x_0$ and $\lambda$ ($x_1\,{<}\, x_0\,{<}\, x_2$, \GT{\lambda}) can be calculated from two equations \EQ{Q(L,\Kl{1,2})}. The backward inversion resets the increasing curvature and yields the sought for solution~--- intermediate arc $L_1$, image of $L$. \qed \end{pf} \begin{thm} \label{ExistenceTheorem} The necessary and sufficient condition for the existence of a non-biarc spiral curve, matching at the endpoints two given curvature elements~\refeq{K1K2cc}, is \equa{ Q(\Kl{1},\Kl{2}) =(k_1c+\sin\alpha)(k_2c-\sin\beta) +\sin^2\Frac{\alpha{+}\beta}{2} < 0. } \end{thm} \begin{pf} The necessity results from theorem~\ref{MainTheorem}. To prove sufficiency, a sought for spiral can be constructed as a three-arc curve. This problem was explored in~\cite{InvInv}, and the existence of solutions, all of them being spirals iff \LT{Q}, was established. A simple proof of sufficiency, alternative to that of~\cite{InvInv}, can be proposed. Apply inversion, bringing two given circles into concentric position (\Reffig{figure19}). Condition \LT{Q} means that the circles \Kl{1} and \Kl{2} do not intersect, and are not tangent. Therefore such inversion exists. Denote the images of \Kl{1} and \Kl{2} as \equa{ \begin{array}{lll} \DKl{1}=\Kr{x_1,y_1,\D{\alpha},\D{k}_1},\quad& x_1=a+\sin\D{\alpha}/\D{k}_1, \quad& y_1=b-\cos\D{\alpha}/\D{k}_1, \\ \DKl{2}=\Kr{x_2,y_2,\D{\beta},\,\D{k}_2},& x_2=a+\sin\D{\beta}/\D{k}_2, & y_2=b-\cos\D{\beta}/\D{k}_2. \end{array} } The expressions for $x_{1,2}$ and $y_{1,2}$ assure concentricity with the common centre $(a,b)$. Recalculate~\refeq{DefQ}, which remains invariant under inversion: \equa{ Q(\DKl{1},\DKl{2})=\Frac{-(\D{k}_2-\D{k}_1)^2}{4\D{k}_1\D{k}_2}= Q(\Kl{1},\Kl{2})<0. 
} A negative value of the invariant means that the two curvatures $\D{k}_1$ and $\D{k}_2$ are of the same sign, and the two circles \DKl{1} and \DKl{2} are parallel. All possible intermediate arcs $T_1T_2$ have the same curvature $k_0$, and the sequence $\D{k}_1,k_0,\D{k_2}$ is monotonic: \equa{ \Frac{2}{k_0}=\Frac{1}{\D{k_1}}+\Frac{1}{\D{k_2}},\qquad k_0=\Frac{2\D{k}_1\D{k}_2}{\D{k}_1+\D{k}_2} \quad\Longrightarrow\quad \D{k}_1\lessgtr k_0 \lessgtr \D{k_2}. } An intermediate arc can be constructed for any images $A$ and $B$ of the two given endpoints. The backward inversion restores the initial type of monotonicity of curvature.\qed{} \end{pf} \Lwfig{t}{.8\textwidth}{figure19} Note that the strict form of the inequality in theorem~\ref{ExistenceTheorem} is strong enough to exclude both a chord of zero length and the equality $k_1\,{=}\, k_2$: with \EQ{c} we get $Q\,{=}\,\sin^2\gamma$, and with $k_1c\,{=}\, k_2c\,{=}\,\kappa$ the invariant $Q$ takes the non-negative form~\refeq{k1eqk2}. We need not invoke Vogt's theorem: if the sum $\alpha{+}\beta$ does not suit the condition $k_1 \lessgtr k_2$~\refeq{VogtAK}, the cumulative angles $\widetilde\alpha\,{=}\,\alpha{\pm}2\pi$ or $\widetilde\beta\,{=}\,\beta{\pm}2\pi$ resolve the contradiction, resulting in a long spiral as a solution. \Topic{Conclusions} The author's interest in the theory of spirals and in revisiting Vogt's theorem was initially tied to the hypothesis that, under certain constraints, a curve can be fairly well reproduced from the interpolation data or an inscribed polygon. Contrary to the traditional treatment of this problem, it was interesting to impose more fundamental constraints than, say, artificial limits on derivatives. Such a constraint was suggested by the Four-Vertex theorem, and the problem could be stated as follows: {\em Amongst all curves matching given interpolation data, select those having a minimum of vertices; estimate the region covered by them. } The solution for spiral curves, i.e. with the minimum number of vertices being zero, was cited here as \Reffig{figure17}. A preliminary extension to non-spiral curves was demonstrated in~\cite{Spiral}. Constraints for non-spirals, similar to \refeq{k1k2}, were discussed in~\cite{Vertex}. The tendency to minimize the number of vertices is very close to the notion of a {\em fair curve}, widely discussed in Computer-Aided Design (CAD) applications~\cite{Fairness}. In recent years much attention has been given to curves with monotonic curvature in CAD-related publications. Refs.~\cite{Theorem5,SpiralSeg} provide just starting points for a bibliographical search. A lot of research is aimed at extracting spiral subarcs from B\'{e}zier or NURBS curves, whose polynomial nature falls far short of spirality. We have chosen a somewhat different approach, focusing on the general properties induced by monotonicity of curvature. Most of the earlier studies of spiral curves seem to have been aimed at obtaining results similar to the Four-Vertex theorem, and were limited to this objective. They required continuity of the evolute as the basis for the proofs, and were therefore restricted to curves with continuous curvature of constant sign. Similarly, a lot of CAD applications propose separate treatment of ``C-shaped'' and ``S-shaped'' spirals, which is usually unnecessary. As a final note, we give some attention to the term {\,\em spiral\,} itself. Treatments in the literature are not rigorous and vary considerably.
By a spiral one often means a curve with a monotonic polar equation $r(\varphi)$~(\cite{EncDictMath}, p.~325). If spirality is considered as a property of shape, the reliance on a specific coordinate system is a drawback of this definition. Guggenheimer's treatment of spirality as monotonicity of curvature is purely shape-related. Three examples illustrate some ambiguities: \begin{itemize} \item[$\bullet$] the Fermat spiral, $r\,{=}\, a\sqrt{\varphi}$: the curvature is not monotonic; \item[$\bullet$] the Cornu spiral, $k(s)\,{=}\, s/a^2$: there is no polar equation for the curve as a whole; \item[$\bullet$] C\^otes' spiral, $r\,{=}\, a/\cos(k\varphi)$: neither definition is applicable. \end{itemize} Curves with monotonic curvature comprise an important subset of planar curves. The list of their properties, compiled from~\cite{Vogt}, \cite{Guggen} (pp.~48--54), and this article, is far from complete. This class of curves is worthy of a definite name. To designate them, the term {\em ``true spirals''} is proposed. \paragraph{Acknowledgement.} The author is grateful to Prof. Victor Zalgaller, who explored his collection of references and called the author's attention to the research of \hbox{Wolfgang} Vogt. \newcommand{\Jnum}[1]{$N$#1} \newcommand{\Jitem}[6]{ {\it#1}, #2. #3, {\bf#4}(#5), #6.} \end{document}
arXiv
Journal of Industrial & Management Optimization, January 2018, 14(1): 325-347. doi: 10.3934/jimo.2017049

Is social responsibility for firms competing on quantity evolutionary stable?

Caichun Chai (1,2), Tiaojun Xiao (1,2) and Eilin Francis (3)
1. School of Management Science and Engineering, Nanjing University, Nanjing 210093, China
2. Institute of Game Behavior and Operations Management, Nanjing University of Finance and Economics, Nanjing 210046, China
3. Economics Department, University of California, Santa Cruz, CA 95064, USA
* Corresponding author: T. Xiao

Received March 2016. Revised October 2016. Published June 2017.
Fund Project: This study is funded by: (i) China National Funds for Distinguished Young Scientists under Grant 71425001; (ii) National Natural Science Foundation of China under Grants 71371093 and 11301001.

Abstract: This paper studies the evolutionary stable strategies and preferences regarding corporate social responsibility of competing firms. Firms randomly compete with each other in pairs. Shareholder-oriented firms have no social responsibility concern, whereas a firm that is concerned with social responsibility is stakeholder-oriented. Each firm first picks one of two production strategies, shareholder-oriented or stakeholder-oriented, and then decides its production quantity. We find that socially responsible firms have lower retail prices. The evolutionary stability of a strategy depends on product substitutability and the degree to which firms care about social responsibility. When product substitutability is relatively high, the stakeholder-oriented strategy is the evolutionary stable strategy; if product substitutability is lower than a threshold, the shareholder-oriented strategy is evolutionary stable; and with moderate product substitutability, both strategies are evolutionary stable. Furthermore, we consider how the degree of social responsibility preference evolves according to adaptive dynamics towards a continuously stable preference. We find that behavior with no social responsibility concern is not an evolutionary stable preference; there is a unique continuously stable degree of social responsibility preference. Finally, we find that the evolutionary stability of the shareholder-oriented and stakeholder-oriented strategies depends on the initial distribution of firms' strategies under the continuously stable social responsibility preference.

Keywords: Evolutionary stable strategy, social responsibility, production strategy, evolutionary game theory.
Mathematics Subject Classification: Primary: 91A22; Secondary: 93D20.
Citation: Caichun Chai, Tiaojun Xiao, Eilin Francis. Is social responsibility for firms competing on quantity evolutionary stable?. Journal of Industrial & Management Optimization, 2018, 14 (1) : 325-347. doi: 10.3934/jimo.2017049
Figure 1. Evolutionary dynamics when $2(1-\sqrt{1-\alpha})\leq d^2$
Figure 2. Evolutionary dynamics when $(2-\alpha)(1-\sqrt{1-\alpha})\geq d^2$
Figure 3. Evolutionary dynamics when $(2-\alpha)(1-\sqrt{1-\alpha}) < d^2 < 2(1-\sqrt{1-\alpha})$
Figure 4. Evolution of firms' behavior
Figure 5. SR preference and the equilibrium behavior
Figure 6. The profit of a firm with others' SR factor $\beta$
Figure 7. Profit of a firm versus $\alpha$ ($\beta = 0.6$)
Figure 8. Profit of a firm versus $\alpha$ and $\beta$
Figure 9. The dynamics of firm's SR preference

Table 1. The equilibrium expressions under a given strategy pair for the one-shot game (firm 1 listed first, firm 2 second):
Strategy $TT$: quantities $q_1=q_2=\frac{a-c}{2-\alpha+d}$; prices $p_1=p_2=\frac{c(1+d)+a(1-\alpha)}{2+d-\alpha}$; profits $\pi_1=\pi_2=\frac{(a-c)^2(1-\alpha)}{(2+d-\alpha)^2}$.
Strategy $HH$: quantities $q_1=q_2=\frac{a-c}{2+d}$; prices $p_1=p_2=\frac{a+c(1+d)}{2+d}$; profits $\pi_1=\pi_2=\frac{(a-c)^2}{(2+d)^2}$.
Strategy $TH$: quantities $q_1=\frac{(a-c)(2-d)}{4-d^2-2\alpha}$, $q_2=\frac{(a-c)(2-d-\alpha)}{4-d^2-2\alpha}$; prices $p_1=\frac{c[2-d^2+d(1-\alpha)]+a(2-d)(1-\alpha)}{4-d^2-2\alpha}$, $p_2=\frac{a(2-d-\alpha)+c(2+d-d^2-\alpha)}{4-d^2-2\alpha}$; profits $\pi_1=\frac{(a-c)^2(2-d)^2(1-\alpha)}{(4-d^2-2\alpha)^2}$, $\pi_2=\frac{(a-c)^2(2-d-\alpha)^2}{(4-d^2-2\alpha)^2}$.
Strategy $HT$: quantities $q_1=\frac{(a-c)(2-d-\alpha)}{4-d^2-2\alpha}$, $q_2=\frac{(a-c)(2-d)}{4-d^2-2\alpha}$; prices $p_1=\frac{a(2-d-\alpha)+c(2+d-d^2-\alpha)}{4-d^2-2\alpha}$, $p_2=\frac{c[2-d^2+d(1-\alpha)]+a(2-d)(1-\alpha)}{4-d^2-2\alpha}$; profits $\pi_1=\frac{(a-c)^2(2-d-\alpha)^2}{(4-d^2-2\alpha)^2}$, $\pi_2=\frac{(a-c)^2(2-d)^2(1-\alpha)}{(4-d^2-2\alpha)^2}$.
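The abstract and Table 1 suggest a simple numerical check of pure-strategy evolutionary stability. The following sketch is illustrative only: it assumes (as read from Table 1) that $T$ denotes the stakeholder-oriented strategy and $H$ the shareholder-oriented one, and the parameter values are arbitrary.

# Sketch (not from the paper): pairwise-contest profits from Table 1 and a
# standard ESS check for the pure strategies T (assumed: stakeholder-oriented)
# and H (assumed: shareholder-oriented).  Parameter values are illustrative.
a, c, d, alpha = 10.0, 2.0, 0.8, 0.3   # intercept, cost, substitutability, SR degree

def profit(own, opp):
    D = 4 - d**2 - 2 * alpha
    if own == "T" and opp == "T":
        return (a - c)**2 * (1 - alpha) / (2 + d - alpha)**2
    if own == "H" and opp == "H":
        return (a - c)**2 / (2 + d)**2
    if own == "T" and opp == "H":
        return (a - c)**2 * (2 - d)**2 * (1 - alpha) / D**2
    return (a - c)**2 * (2 - d - alpha)**2 / D**2      # own == "H", opp == "T"

def is_ess(x, y):
    # Maynard Smith's conditions for x to resist invasion by the mutant y
    return profit(x, x) > profit(y, x) or (
        profit(x, x) == profit(y, x) and profit(x, y) > profit(y, y))

print("T is an ESS:", is_ess("T", "H"))
print("H is an ESS:", is_ess("H", "T"))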
\begin{document}
\title*{No Multiple Collisions for Mutually Repelling Brownian Particles }
\author{Emmanuel C\'{e}pa and Dominique L\'{e}pingle}
\institute{{\small MAPMO, Universit\'{e} d'Orl\'{e}ans, \\ B.P.6759, 45067 Orl\'{e}ans Cedex 2, France} \\ {\tt e-mail: [email protected], [email protected]}}
\maketitle
\let\leq=\leqslant\let\le=\leqslant
\let\geq=\geqslant\let\ge=\geqslant
\def\qed{\hfill$\square$}
\newtheorem{Th}{Theorem}
\newtheorem{Def}[Th]{Definition}
\newtheorem{Lemme}[Th]{Lemma}
\newtheorem{Cor}[Th]{Corollary}
\newtheorem{Rmq}[Th]{Remark}
\newtheorem{Prop}[Th]{Proposition}
\newtheorem{Not}[Th]{Notation}
\newcommand\Section{\setcounter{equation}{0} \section}
\def\theequation{\thesection.\arabic{equation}}
\noindent {\small{\bf Summary.} Although Brownian particles with small mutual electrostatic repulsion may collide, multiple collisions at positive time are always forbidden.}
\section{Introduction}
A three-dimensional Brownian motion $B_t= (B^{1}_t , B^{2}_t , B^{3}_t )$ does not hit the axis $\{x_1 = x_2 = x_3 \}$ except possibly at time $0$. An easy proof is obtained by applying Ito's formula to $R_t \, = \, [ (B^{1}_t -B^{2}_t )^2 + (B^{1}_t -B^{3}_t )^2 + (B^{2}_t -B^{3}_t )^2 ]$ and remarking that up to the multiplicative constant $3$ the process $R$ is the square of a two-dimensional Bessel process for which $\{0 \}$ is a polar state. This remark will be our guiding line in the sequel. We consider a filtered probability space $(\Omega , {\cal F} ,( {\cal F} _t)_{t \geq 0} , \mathbb{P})$ and for $N \geq 3$ the following system of stochastic differential equations
\begin{displaymath}
dX_t^{i} \; = \; dB_t^{i} \; + \; \displaystyle \lambda \sum_{1 \leq j \neq i \leq N} \frac {dt}{X_t^{i}-X_t^{j}} \, , \; i = 1 , 2 , \ldots , N
\end{displaymath}
with boundary conditions
\begin{displaymath}
X_t ^{1}\, \le \, X_t ^{2}\, \le \cdots \le \, X_t ^{N} \, , \; \quad 0 \le t < \infty \, ,
\end{displaymath}
and a random, ${\cal F} _0$-measurable, initial value satisfying
\begin{displaymath}
X_0 ^{1}\, \le \, X_0 ^{2}\, \le \cdots \le \, X_0 ^{N} \, .
\end{displaymath}
Here $B_t= (B^{1}_t , B^{2}_t , \ldots ,B^{N}_t )$ denotes a standard $N$-dimensional $( {\cal F} _t)$-Brownian motion and $\lambda$ is a positive constant. This system has been extensively studied in \cite{Ch}, \cite{RS}, \cite{CL1}, \cite{BBCL}, \cite{CL2}, \cite{Fon}. For comments on the relationship between this system and the spectral analysis of Brownian matrices, and also conditioning of Brownian particles, we refer to the introduction and the bibliography in \cite{CL2}. When $\lambda \ge \displaystyle \frac{1}{2}$, establishing strong existence and uniqueness is not difficult, because particles never collide, as proved in~\cite{RS}. The general case with arbitrary coupling strength is investigated in \cite{CL1} and it is proved in \cite{CL2} that collisions occur a.s. if and only if $ 0< \lambda < \displaystyle \frac{1}{2}$.
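Although no simulation is used anywhere in this note, the following crude Euler sketch (in Python) may help to visualise the system; the truncation of the singular drift, the parameter values and the re-ordering step are ad hoc choices and play no role in the arguments below.
\begin{verbatim}
# Heuristic Euler scheme for the mutually repelling particle system.
# The singular drift 1/(X^i - X^j) is truncated, so this is only an
# illustration, not a faithful discretisation near collisions.
import numpy as np

rng = np.random.default_rng(0)
N, lam, T, n_steps = 5, 1.0, 1.0, 20000
dt = T / n_steps
X = np.sort(rng.normal(size=N))              # ordered initial positions

for _ in range(n_steps):
    diff = X[:, None] - X[None, :]           # X^i - X^j
    np.fill_diagonal(diff, np.inf)           # ignore the i = j terms
    drift = lam * np.sum(np.sign(diff) / np.clip(np.abs(diff), 1e-3, None),
                         axis=1)
    X = X + drift * dt + np.sqrt(dt) * rng.normal(size=N)
    X = np.sort(X)                           # keep the ordering convention

S = np.sum((X[:, None] - X[None, :]) ** 2)   # the functional S_T of Section 2
print(S)
\end{verbatim}
Averaging the terminal value of $S$ over many independent runs should roughly reproduce the mean growth rate $2N(N-1)(\lambda N +1)$ obtained in Section 2.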
As for multiple collisions (three or more particles at the same location), it has been stated without proof in \cite{Sp} and \cite{CL3} that they are impossible. The proof we give below, with a Bessel process unexpectedly coming in, is just an exercise on Ito's formula. \section{A key Bessel process} We consider for any $t \ge 0$ $$S_t \, = \, \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N (X ^{j}_t -X ^{k}_t )^2 \; .$$ \begin{Th} For any $\lambda > 0$, the process $S$ divided by the constant $2N$ is the square of a Bessel process with dimension $(N-1)(\lambda N +1)$. \end{Th} \noindent { Proof{}.} It is purely computational. Ito's formula provides for any $j \neq k$ $$ \begin{array}{lll} (X ^{j}_t -X ^{k}_t )^2 & = & (X ^{j}_0 -X ^{k}_0 )^2 \, + 2 \displaystyle \int_0 ^t (X ^{j}_s -X ^{k}_s ) d (B ^{j}_s -B ^{k}_s ) \\ & & + 2 \lambda \displaystyle \sum_{1 \leq l \neq j \leq N} \displaystyle \int_0^t \frac {X_s^{j}-X_s^{k}} {X_s^{j}-X_s^{l}} \, ds \, + 2 \lambda \displaystyle \sum_{1 \leq m \neq k \leq N} \displaystyle \int_0^t \frac {X_s^{k}-X_s^{j}} {X_s^{k}-X_s^{m}} \, ds \\ && + 2\,t \, . \end{array} $$ Adding the $N(N-1)$ equalities we get $$ \begin{array}{lll} S_t & = & S_0 \, + \, 2 \displaystyle \displaystyle \sum_{j=1} ^N \sum_{k=1} ^N \int _0 ^t (X ^{j}_s -X ^{k}_s ) d (B ^{j}_s -B ^{k}_s ) \\ & & + \, 4 \lambda \displaystyle \sum_{j=1} ^N \sum _{k=1} ^N \sum _{1 \leq l \neq j \leq N} \displaystyle \int _0 ^t \frac {X_s^{j}-X_s^{k}} {X_s^{j}-X_s^{l}} \, ds \, + \, 2N(N-1)t \, . \end{array} $$ But $$ \begin{array}{l} \displaystyle \sum_{j=1} ^N \sum_{k=1} ^N \sum_{1 \leq l \neq j \leq N} \displaystyle \int_0 ^t \frac {X_s^{j}-X_s^{k}} {X_s^{j}-X_s^{l}} \, ds \\ = \, \displaystyle \sum_{j=1} ^N \sum_{k=1} ^N \displaystyle \sum_{1 \leq l \neq j \leq N} \bigg [ \displaystyle \int _0 ^t \frac {X_s^{j}-X_s^{l}} {X_s^{j}-X_s^{l}} \, ds \, + \, \displaystyle \int_0 ^t \frac {X_s^{l}-X_s^{k}} {X_s^{j}-X_s^{l}} \, ds \bigg ] \\ = \; N^2 (N-1)t \, - \, \displaystyle \sum_{l=1} ^N \sum_{k=1} ^N \sum _{1 \leq j \neq l \leq N} \displaystyle \int _0 ^t \frac {X_s^{l}-X_s^{k}} {X_s^{l}-X_s^{j}} \, ds \\ = \; \displaystyle \frac {1}{2} N^2 (N-1)t \, . \end{array} $$ For the martingale term, we compute $$ \begin{array}{l} \displaystyle \sum _{j=1} ^N ( \sum _{k=1} ^N (X ^{j}_s -X ^{k}_s) )^2 \\ = \; \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N \sum _{l=1} ^N (X ^{j}_s -X ^{k}_s) (X ^{j}_s -X ^{l}_s)\\ = \; \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N \sum _{l=1} ^N (X ^{j}_s -X ^{k}_s) ^2 \, + \, \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N \sum _{l=1} ^N (X ^{j}_s -X ^{k}_s) (X ^{k}_s -X ^{l}_s)\\ = \; \displaystyle \frac {N}{2} S_s \, . \end{array} $$ Let $B'$ be a linear Brownian motion independent of $B$. The process $C$ defined by : $$C_t \, = \, \displaystyle \int _0 ^t 1\hspace{-1.2mm}\mbox{{\normalsize I}} _ {\{S_s > 0\}} \; \displaystyle \frac { \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N (X ^{j}_s -X ^{k}_s) d B ^{j}_s } { \sqrt { \frac{N}{2} S_s} } \; + \; \displaystyle \int _0 ^t 1\hspace{-1.2mm}\mbox{{\normalsize I}} _ {\{S_s = 0\}} dB'_s $$ is a linear Brownian motion and we have $$ S_t \; = \; S_0 \, + \, 2 \displaystyle \int _0 ^t \sqrt { 2 N S_s} dC_s \, + \, 2N(N-1)( \lambda N +1) t \, ,$$ which completes the proof. \qed \section{Multiple collisions are not allowed} Since multiple collisions do not occur for Brownian particles without interaction, we can guess they do not either in case of mutual repulsion. Here is the proof. 
\begin{Th}
For any $\lambda > 0$, multiple collisions cannot occur after time $0$.
\end{Th}
\noindent { Proof{}.}
i) For $3 \leq r \leq N$ and $1 \leq q \leq N-r+1$, let
\[ \begin{array}{lll}
I \, = \, \{ q, q+1, \ldots, q+r-1 \} \\
S_t ^I\, = \, \displaystyle \sum _{j \in I} \sum _{k \in I} (X ^{j}_t -X ^{k}_t )^2 \\
\tau ^I \, = \, \inf \{ t >0 : S_t ^I = 0 \}\;.
\end{array} \]
ii) We first consider the initial condition $X_0$. From \cite{CL1}, Lemma 3.5, we know that for any $1 \le i < j \le N$ and any $t < \infty$, we have a.s.
$$ \displaystyle \int _0 ^t \frac {du} {X_u^{j}-X_u^{i}} \; < \; \infty \, . $$
Therefore for any $u >0$ there exists $0 < v < u$ such that $ X_v ^{1}\, < \, X_v^{2}\, < \cdots < \, X_v ^{N} \; \mbox {a.s.} $ In order to prove $\mathbb{P} ( \tau ^I = \infty) =1$, we may thus assume $ X_0 ^{1}\, < \, X_0^{2}\, < \cdots < \, X_0 ^{N} \; \mbox {a.s.} $, which implies for any $I$ that $S_0^I > 0$ and so $\tau ^I > 0$ a.s. \\
iii) We know (\cite{RY}, XI, section 1) that $\{0\}$ is polar for the Bessel process $\sqrt { S_t} / \sqrt {2N}$, which means that $\tau ^I = \infty$ a.s. for $I \, = \, \{ 1, 2, \ldots, N \}$. We will prove the same result for any $I$ by backward induction on $r = \operatorname{card} (I)$. Assume there are no $s$-multiple collisions for any $s>r$. Then
$$ \begin{array}{lll}
S_t ^I & = & S_0 ^I \, + \, 4 \displaystyle \sum _{j\in I} \sum _{k \in I} \int _0 ^t (X ^{j}_s -X ^{k}_s ) d B ^{j}_s \\
& & + \, 4 \lambda \displaystyle \sum _{j \in I} \sum _{k \in I} \sum _{l \notin I} \displaystyle \int _0 ^t \frac {X_s^{j}-X_s^{k}} {X_s^{j}-X_s^{l}} \, ds \, + \, 2r(r-1) ( \lambda r + 1) t \, .
\end{array} $$
We set for $n \in \mathbb{N} ^*$, $\tau ^I _n \, = \, \inf \{ t >0 : S_t ^I \le \displaystyle \frac{1}{n} \}$. For any $t \ge 0$,
$$ \begin{array}{lll}
\log S_{t \wedge \tau ^I _n} ^I & = & \log S_0 ^I \, + \, 4 \displaystyle \sum _{j\in I} \sum _{k \in I} \int _0 ^ {t \wedge \tau ^I _n} \displaystyle \frac {X ^{j}_s -X ^{k}_s }{S_s^I} d B ^{j}_s \\
& & + \, 2 \lambda \displaystyle \sum _{j \in I} \sum _{k \in I} \sum _{l \notin I} \displaystyle \int _0 ^ {t \wedge \tau ^I _n} \displaystyle \frac {(X ^{j}_s -X ^{k}_s) }{S_s^I} \bigg [ \frac {1}{X_s^{j}-X_s^{l}} - \frac {1}{X_s^{k}-X_s^{l}} \bigg ] \, ds \\
&& \, + \, 2r[(r-1) ( \lambda r + 1) - 2] \, \displaystyle \int _0 ^ {t \wedge \tau ^I _n} \displaystyle \frac {ds }{S_s^I}\\
& > & - \infty \;.
\end{array} $$
From the induction hypothesis we deduce that for $j,k \in I$ and $l \notin I$, a.s. on $\{ \tau ^I < \infty \}$, $ (X ^j _ { \tau ^I } - X ^ l _ { \tau ^I }) (X ^k _ { \tau ^I } - X ^ l _ { \tau ^I }) \, > \, 0$ and so
$$ \begin{array}{l}
\displaystyle \int _0 ^ {t \wedge \tau ^I } \displaystyle \frac {(X ^{j}_s -X ^{k}_s) }{S_s} \bigg [ \frac {1}{X_s^{j}-X_s^{l}} - \frac {1}{X_s^{k}-X_s^{l}} \bigg ] \, ds \\
= \; - \displaystyle \int _0 ^ {t \wedge \tau ^I } \displaystyle \frac {(X ^{j}_s -X ^{k}_s)^ 2 }{S_s} \frac {ds}{(X_s^{j}-X_s^{l} ) (X_s^{k}-X_s^{l})} \\
> \; - \infty \, .
\end{array} $$
The martingale $(M_n , {\cal F} _ {t \wedge \tau ^I _n} )_ {n \geq 1}$ defined by
$$M_n \; = \; 4 \displaystyle \sum _{j\in I} \sum _{k \in I} \int _0 ^ {t \wedge \tau ^I _n} \displaystyle \frac {X ^{j}_s -X ^{k}_s }{S_s^I} d B ^{j}_s $$
has associated increasing process $A_n \, = \, 8 r \displaystyle \int _0 ^ {t \wedge \tau ^I _n} \displaystyle \frac {ds }{S_s^I} $.
It follows that $M_n \, + \, \displaystyle \frac {1}{4} [(r-1) ( \lambda r + 1) - 2] A_n$ either tends to a finite limit or to $+ \infty$ as $n$ tends to $+ \infty$. Then for any $t \ge 0$, $\log S_{t \wedge \tau ^I } ^I \, > \, - \infty$ and so $\mathbb{P} ( \tau ^I = \infty) =1$, which completes the proof. \qed \section{ Brownian particles on the circle} We now turn to the popular model of interacting Brownian particles on the circle (\cite{Sp}, \cite{CL2}). Consider the system of stochastic differential equations \begin{displaymath} dX_t^{i} \; = \; dB_t^{i} \; + \; \displaystyle \frac{\lambda}{2} \sum _{1 \leq j \neq i \leq N} \cot(\frac{X_t^i-X_t^j}{2}) dt \, , \; i = 1 , 2 , \ldots , N \end{displaymath} with the boundary conditions \begin{displaymath} X_t ^{1}\, \le \, X_t ^{2}\, \le \cdots \le \, X_t ^{N} \le X_t^1\; +\; 2\pi \, , \; \quad 0 \le t < \infty \, . \end{displaymath} As expected we can prove there are no multiple collisions for the particles $Z_t^j\;=\; e^{i\,X_t^j} $ that live on the unit circle. The proof is more involved and will be deduced by approximation from the previous one. \begin{Th} Multiple collisions for the particles on the circle do not occur after time $0$ for any $\lambda > 0$. \end{Th} \noindent Sketch of the proof. For the sake of simplicity, we only deal with the $N$-collisions. Let \[ \begin{array}{lll} R_t & = & \displaystyle \sum _{j=1} ^N \sum _{k=1} ^N \sin^2(\frac{X ^{j}_t -X ^{k}_t}{2} ) \\ \sigma_n & = & \inf\{t>0\,:\,R_t\leq \frac{1}{n}\} \;. \end{array} \] We apply Ito's formula to $\log\,R_t$ and get \[ \log \,R_{t\wedge \sigma_n} \, = \, \log\,R_0 \, +\sum_{j=1}^N\,\int_0^{t\wedge \sigma_n}\,H_s^j\,dB_s^j \, + \, \int_0^{t\wedge \sigma_n}L_s\,ds \] for some continuous processes $H^{j}$ and $L$. We divide each integral into an integral over $\{R_s\geq \frac{1}{2}\}$ and an integral over $\{R_s< \frac{1}{2}\}$. The first type integrals do not pose any problem. When $R_s<\frac{1}{2}$, we replace $X^j_s$ with \[ Y^j_s\,=\, X^j_s \quad \mbox{ or } Y^j_s\,=\,X^j_s\,-\,2\pi \] in such a way that for any $j,k$ we have $\mid Y_s^j-Y_s^k \mid<\pi/3$. The processes $H^{j}$ and $L$ have the same expressions in terms of $X$ or $Y$. With this change of variables we may approximate $\sin x$ by $x$, $\cos x$ by 1 and replace the trigonometric functions by approximations of the linear ones which we have met in the previous sections. We obtain that \[ \log \,R_{t\wedge \sigma_n} \, = \, \log\,R_0 \,+\,M_n \, + \, \displaystyle \frac {1}{4} [(N-1) ( \lambda N + 1) - 2] A_n \,+ \,\int_0^{t\wedge \sigma_n}\,D_s\,ds \] where $M_n$ is a martingale with associated increasing process $A_n$ and $D$ is a.s. a locally integrable process. Details are left to the reader as well as the case of an arbitrary subset $I$ like those in Section 3. \qed \end{document}
\begin{document} \date{9 October 2018} \title{Singular control and optimal stopping of memory mean-field processes} \author{Nacira Agram$^{1,2}$, Achref Bachouch$^{1}$, Bernt \O ksendal$^{1}$ and Frank Proske$^{1}$ } \maketitle \paragraph{MSC(2010):}60H10, 60HXX, 93E20, 93EXX, 46E27, 60BXX. \paragraph{Keywords:}Memory mean-field stochastic differential equation; reflected advanced mean-field backward stochastic differential equation; singular control; optimal stopping. \footnotetext[1]{Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N--0316 Oslo, Norway. Emails: \texttt{[email protected], [email protected], [email protected], [email protected]}.\newline This research was carried out with support of the Norwegian Research Council, within the research project Challenges in Stochastic Control, Information and Applications (STOCONINF), project number 250768/F20.} \footnotetext[2]{ University Mohamed Khider, Biskra, Algeria. } \begin{abstract} The purpose of this paper is to study the following topics and the relation between them:\\ (i) Optimal singular control of mean-field stochastic differential equations with memory,\\ (ii) reflected advanced mean-field backward stochastic differential equations, and\\ (iii) optimal stopping of mean-field stochastic differential equations.\\ More specifically, we do the following: \begin{itemize} \item We prove the existence and uniqueness of the solutions of some reflected advanced memory backward stochastic differential equations (AMBSDEs), \item we give sufficient and necessary conditions for an optimal singular control of a memory mean-field stochastic differential equation (MMSDE) with partial information, and \item we deduce a relation between the optimal singular control of a MMSDE, and the optimal stopping of such processes. \end{itemize} \end{abstract} \section{Introduction} \noindent Let $( \Omega ,\mathcal{F},\mathbb{P})$ be a given probability space with filtration $\mathbb{F}=(\mathcal{F}_{t})_{t\geq0}$ generated by a 1-dimensional Brownian motion $B=B(t,\omega); (t,\omega) \in[0,T] \times\Omega.$ Let $\mathbb{G} =\{\mathcal{G}_{t}\}_{t\geq0}$ be a given subfiltration of $\mathbb{F} =(\mathcal{F}_{t})_{t\geq0}$ , in the sense that $\mathcal{G}_{t} \subset\mathcal{F}_{t}$ for all $t.$ \noindent The purpose of this paper is to study the following concepts and problems, and the relation between them. For simplicity of notation we deal only with the $1$-dimensional case.\newline \begin{itemize} \item \emph{Topic 1: Optimal singular control of memory mean-field stochastic differential equations:}\newline Consider the following \emph{mean-field memory singular controlled system}, with a state process $X(t)=X^{\xi}(t)$ and a singular control process $\xi(t),$ of the form \begin{equation} \left\{ \begin{array} [c]{l} dX(t)=b(t,X(t),X_{t},M(t),M_{t},\xi(t),\omega)dt+\sigma(t,X(t),X_{t} ,M(t),M_{t},\xi(t),\omega)dB(t)\\ \quad\quad\quad+\lambda(t,\omega)d\xi(t);\quad t\in\lbrack0,T],\\ X(t)=\alpha(t);\quad t\in\lbrack-\delta,0], \end{array} \right. \label{eq6.1a} \end{equation} where \begin{align*} & X_{t}=\{X(t-s)\}_{0\leq s\leq\delta},\quad\text{(the \emph{memory segment of}}X(t)),\\ & M(t)=\mathcal{L}(X(t))\quad\text{(the \emph{law of}}X(t)),\\ & M_{t}=\{M(t-s)\}_{0\leq s\leq\delta},\quad\text{(the \emph{memory segment of}}M(t)). 
\end{align*} We assume that our control process $\xi(t)$ is ${\mathbb{R}}$-valued right-continuous $\mathbb{G}$-adapted process, and $t\mapsto\xi(t)$ is increasing (non-decreasing) with $\xi(0^{-})=0$, and such that the corresponding state equation has a unique solution $X$ with $\omega\mapsto X(t,\omega)\in L^{2}(\mathbb{P})$ for all $t$. The set of such processes $\xi$ is denoted by $\Xi$. \newline The \emph{performance functional} is assumed to be of the form \begin{align*} J(\xi) & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} f(t,X(t),X_{t},M(t),M_{t},\xi(t),\omega)dt+g(X(T),M(T),\omega)\\ & \qquad+ {\textstyle\int_{0}^{T}} h(t,X(t),\omega)d\xi(t)];\quad\xi\in\Xi\,. \end{align*} For simplicity we will in the following suppress the $\omega$ in the notation. \newline We may interpret these terms as follows:\newline The state $X(t)$ may be regarded as the value at time $t$ of, e.g. a fish population. The control process $\xi(t)$ models the amount harvested up to time $t$, the coefficient $\lambda(t)$ is the unit price of the amount harvested, $f$ is a profit rate, $g$ is a bequest or salvage value function, and $h$ is a cost rate for the use of the singular control $\xi$. The $\sigma$-algebra $\mathcal{G}_{t}$ represents the amount of information available to the controller at time $t.$ The problem we consider, is the following: \begin{problem} Find an optimal control $\hat{\xi}\in\Xi$ such that \begin{equation} J(\hat{\xi})=\sup_{\xi\in\Xi}J(\xi)\,.\label{eq6.4} \end{equation} \end{problem} This problem turns out to be closely related to the following topic: \end{itemize} \begin{itemize} \item \emph{Topic 2: Reflected mean-field backward stochastic differential equations} \end{itemize} We study reflected AMBSDEs where at any time $t$ the driver $F$ may depend on future information of the solution processes. More precisely, for a given driver $F$, a given threshold process $S(t)$ and a given terminal value $R$ we consider the following type of reflected AMBSDEs in the unknown processes $Y,Z,K$: \begin{equation} \left\{ \begin{array} [c]{l} (i)Y(t)=R+ {\textstyle\int_{t}^{T}} F(s,Y(s),Z(s),\mathbb{E}[Y^{s}|\mathcal{F}_{s}],\mathbb{E}[Z^{s} |\mathcal{F}_{s}],\mathcal{L}(Y^{s},Z^{s}))ds\\ \quad\quad\quad\quad\quad\quad+K(T)-K(t)- {\textstyle\int_{t}^{T}} Z(s)dB(s); \quad0\leq t\leq T,\\ (ii)Y(t)\geq S(t); \quad0\leq t\leq T,\\ (iii){ {\textstyle\int_{0}^{T}} }(Y(t)-S(t))dK^{c}(t)=0\text{ a.s. and }\triangle K^{d}(t)=-\triangle Y(t)\mathbf{1}_{\{Y(t^{-})=S(t^{-})\}}\text{ a.s.},\\ (iv)Y(t)=R; \quad t\geq T,\\ (v)Z(t)=0; \quad t>T. \end{array} \right. \end{equation} \vskip 0.3cm \noindent Here $\mathcal{L}(Y^{s},Z^{s})$ is the joint law of paths $(Y^{s},Z^{s})$, and for a given positive constant $\delta$ we have put \[ \quad Y^{t}:=\{Y(t+s)\}_{s\in\lbrack0,\delta]}\text{ and }Z^{t}:=\{Z(t+s)\}_{s\in\lbrack0,\delta]}\text{ (the (time)-advanced segment). } \] \vskip0.2cm This problem is connected to the following: \begin{itemize} \item \emph{Topic 3: Optimal stopping and its relation to the problems above.} \end{itemize} For $t \in [0,T]$ let $\mathcal{T}_{[t,T]}$ denote the set of all $\mathbb{F}$-stopping times $\tau$ with values in $[t,T].$\\ Suppose $\left( Y,Z,K\right) $ is a solution of the reflected AMBSDE in Topic 2 above. 
\begin{description} \item[(i)] Then, for $t\in\left[ 0,T\right]$, the process $Y(t)$ is the solution of the optimal stopping problem \begin{align} Y(t)=\underset{\tau\in\mathcal{T}_{[t,T]}}{ess\sup}\ \Big\{ \mathbb{E} [ {\textstyle\int_{t}^{\tau}} &F(s,Y(s), Z(s), \mathbb{E}[Y^{s}|\mathcal{F}_s],\mathbb{E}[Z^{s}|\mathcal{F}_s],\mathcal{L}(Y^{s},Z^{s}))ds\nonumber\\ &+S(\tau)\mathbf{1}_{\tau<T} +R\mathbf{1}_{\tau=T}|\mathcal{F}_{t}]\Big\}. \end{align} \item[(ii)] Moreover, for $t \in [0,T]$ the solution process $K(t)$ is given by \begin{align} &K(T)-K(T-t)\nonumber\\ &=\underset{s\leq t}{\max}\Big\{R+ \int_{T-s}^{T} F(r,Y(r), Z(r), \mathbb{E}[Y^{r}|\mathcal{F}_r],\mathbb{E}[Z^{r}|\mathcal{F}_r],\mathcal{L}(Y^{r},Z^{r}))dr\nonumber\\ &-\int _{T-s}^{T} Z(r)dB(r)-S(T-s)\Big\}^{-}, \end{align} where $x^{-}=\max(-x,0),$ and an optimal stopping time $\hat{\tau}_{t}$ is given by \begin{align*} \hat{\tau}_{t}:&=\inf\{s\in\lbrack t,T],Y(s)\leq S(s)\}\wedge T\\ & =\inf\{s\in\lbrack t,T],K(s) > K(t)\}\wedge T. \end{align*} \item[(iii)] In particular, if we choose $t=0$, we get that \begin{align*} \hat{\tau}_{0}:&=\inf\{s\in\lbrack 0,T],Y(s)\leq S(s)\}\wedge T\\ & =\inf\{s\in\lbrack 0,T],K(s) > 0\}\wedge T \end{align*} solves the optimal stopping problem \begin{align} Y(0)&=\sup_{\tau\in\mathcal{T}_{[0,T]}}\mathbb{E}[{\textstyle\int_{0}^{\tau}} F(s,Y(s), Z(s), \mathbb{E}[Y^{s}|\mathcal{F}_s],\mathbb{E}[Z^{s}|\mathcal{F}_s],\mathcal{L}(Y^{s},Z^{s}))ds\nonumber\\ &+S(\tau)\mathbf{1}_{\tau<T} +R\mathbf{1}_{\tau=T}],t\in\left[ 0,T\right] . \end{align} \end{description} \noindent More specifically, the content of the paper is the following:\newline \noindent In Section 2, we define the spaces of measures and spaces of path segments with their associated norms, and we give the necessary background results for our methods.\newline \noindent In Section 3, we prove existence and uniqueness of the solution for a class of reflected advanced mean-field backward stochastic differential equations. \newline \noindent In Section 4, we recall a fundamental connection between a class of reflected AMBSDEs and optimal stopping under partial information. equations. \newline \noindent Then in Section 5, we study the problem of optimal singular control of memory mean-field stochastic differential equations. We give sufficient and necessary conditions for optimality in terms of variational inequalities. \newline \noindent Finally, in Section 6, we deduce a relation between the following quantities:\\ (i) The solution of a singular control problem for a mean-field SDE with memory. \\ (ii) The solution of a coupled system of forward memory \& backward advanced mean-field SDEs.\\ (iii) The solution of an optimal stopping problem involving these quantities. \newline \section{A Hilbert space of random measures} \noindent In this section, we proceed as in Agram and \O ksendal \cite{AO3}, \cite{AO2} and construct a Hilbert space $\mathcal{M}$ of random measures on $\mathbb{R}$. It is simpler to work with than the Wasserstein metric space that has been used by many authors previously. See e.g. Carmona \textit{et al} \cite{carmona1}, \cite{carmona}, Buckdahn \textit{et al} \cite{BLP} and the references therein. \noindent Following Agram and \O ksendal \cite{AO3}, \cite{AO2}, we now introduce the following Hilbert spaces: \begin{definition} \- \begin{itemize} \item Let $n$ be a given natural number. 
Then we define $\tilde{\mathcal{M} }=\tilde{\mathcal{M}}^{n}$ to be the pre-Hilbert space of random measures $\mu$ on $\mathbb{R}^{n}$ equipped with the norm \[ \begin{array} [c]{lll} \left\Vert \mu\right\Vert _{\tilde{\mathcal{M}}^{n}}^{2} & := & \mathbb{E[} {\textstyle\int_{\mathbb{R}^n}} |\hat{\mu}(y)|^{2}(1+|y|)^{-2}dy]\text{,} \end{array} \] with $y=(y_{1},y_{2}, ... ,y_{n})\in\mathbb{R}^{n}$, and $\hat{\mu}$ is the Fourier transform of the measure $\mu$, i.e. \[ \begin{array} [c]{lll} \hat{\mu}(y) & := & { {\textstyle\int_{\mathbb{R}^n}} }e^{-ixy}d\mu(x);\quad y\in\mathbb{R}^{n}, \end{array} \] where $xy =x \cdot y = x_{1} y_{1} + x_{2} y_{2} + ... + x_{n} y_{n}$ is the scalar product in $\mathbb{R}^{n}$. \item $\tilde{\mathcal{M}}_{\delta}$ is the pre-Hilbert space of all path segments $\overline{\mu}=\{\mu(s)\}_{s\in[0,\delta]}$ of processes $\mu (\cdot)$ with $\mu(s)\in\tilde{\mathcal{M}}$ for each $s\in[0,\delta]$, equipped with the norm \begin{equation} \left\Vert \overline{\mu}\right\Vert ^{2}_{\tilde{\mathcal{M}}_{\delta}}:={ {\textstyle\int_0^{\delta}} }\left\Vert \mu(s)\right\Vert ^{2}_{\tilde{\mathcal{M}}}ds. \end{equation} \item We let $\mathcal{M}$ and $\mathcal{M}_{\delta}$ denote the completion of $\tilde{\mathcal{M}}$ and $\tilde{\mathcal{M}}_{\delta}$ and we let $\mathcal{M}_{0}$ and $\mathcal{M}_{0,\delta}$ denote the set of deterministic elements of $\mathcal{M}$ and $\mathcal{M}_{0,\delta}$, respectively. \end{itemize} \end{definition} \noindent There are several advantages with working with this Hilbert space $\mathcal{M}$, compared to the Wasserstein metric space: \begin{itemize} \item A Hilbert space has a useful stronger structure than a metric space. \item Our space $\mathcal{M}$ is easier to work with. \item The Wasserstein metric space $\mathcal{P}_{2}$ deals only with probability measures with finite second moment, while our Hilbert space deals with any (possibly random) measure $\mu\in\mathcal{M}$. \end{itemize} \noindent Let us give some examples for $n=1$: \begin{example} [Measures]\- \begin{enumerate} \item Suppose that $\mu=\delta_{x_{0}}$, the unit point mass at $x_{0} \in\mathbb{R}$. Then $\delta_{x_{0}}\in\mathcal{M}_{0}$ and \[ { {\textstyle\int_{\mathbb{R}}} }e^{ixy}d\mu(x)=e^{ix_{0}y}, \] and hence \[ \begin{array} [c]{lll} \left\Vert \mu\right\Vert _{\mathcal{M}_{0}}^{2} & = {\textstyle\int_{\mathbb{R}}} |e^{ix_{0}y}|^{2}(1+|y|)^{-2}dy & <\infty\text{.} \end{array} \] \item Suppose $d\mu(x)=f(x)dx$, where $f\in L^{1}(\mathbb{R})$. Then $\mu \in\mathcal{M}_{0}$ and by Riemann-Lebesque lemma, $\hat{\mu}(y)\in C_{0}(\mathbb{R})$, i.e. $\hat{\mu}$ is continuous and $\hat{\mu }(y)\rightarrow0$ when $|y|\rightarrow\infty$. In particular, $|\hat{\mu}|$ is bounded on $\mathbb{R}$ and hence \[ \begin{array} [c]{lll} \left\Vert \mu\right\Vert _{\mathcal{M}_{0}}^{2} & = {\textstyle\int_{\mathbb{R}}} |\hat{\mu}(y)|^{2}(1+|y[)^{-2}dy & <\infty\text{.} \end{array} \] \item Suppose that $\mu$ is any finite positive measure on $\mathbb{R}$. Then $\mu\in\mathcal{M}_{0}$ and \[ \begin{array} [c]{lll} |\hat{\mu}(y)| & \leq {\textstyle\int_{\mathbb{R}}} d\mu(y)=\mu(\mathbb{R}) & <\infty\text{ for all }y\text{,} \end{array} \] and hence \[ \begin{array} [c]{lll} \left\Vert \mu\right\Vert _{\mathcal{M}_{0}}^{2} & = {\textstyle\int_{\mathbb{R}}} |\hat{\mu}(y)|^{2}(1+|y|)^{-2}dy & <\infty\text{.} \end{array} \] \item Next, suppose $x_{0}=x_{0}(\omega)$ is random. Then $\delta _{x_{0}(\omega)}$ is a random measure in $\mathcal{M}$. 
Similarly, if $f(x)=f(x,\omega)$ is random, then $d\mu(x,\omega)=f(x,\omega)dx$ is a random measure in $\mathcal{M}$.\newline
\end{enumerate} \end{example}
\begin{definition} [Law process]From now on we use the notation
\[ M_{t}:=M(t):=\mathcal{L}(X(t));\quad0\leq t\leq T, \]
for the law process $\mathcal{L}(X(t))$ of $X(t)$ with respect to the probability $\mathbb{P}$.
\end{definition}
We recall the following results from Agram \& \O ksendal \cite{AO3}:
\begin{lemma} \label{m'}The map $t\mapsto M(t):[0,T]\rightarrow\mathcal{M}_{0}$ is absolutely continuous, and the derivative
\[ M^{\prime}(t):=\frac{d}{dt}M(t) \]
exists for all $t$.
\end{lemma}
\begin{lemma}
If $X(t)$ is an It\^{o}-L\'{e}vy process as in \eqref{eq6.1a}, then the derivative $M^{\prime}(s):=\frac{d}{ds}M(s)$ exists in $\mathcal{M}_{0}$ for a.a. $s$, and we have
\[ M(t)=M(0)+ {\textstyle\int_{0}^{t}} M^{\prime}(s)ds;\quad t\geq0. \]
\end{lemma}
The following result, based on Agram \& \O ksendal \cite{AO2}, is essential for our approach:
\begin{lemma} \label{Lemma 4} \-
\begin{description}
\item[(i)] Let $X^{(1)}$ and $X^{(2)}$ be two $2$-dimensional random variables in $L^{2}(\mathbb{P})$. Then there exists a constant $C_{0}$ not depending on $X^{(1)}$ and $X^{(2)}$, such that
\[ \begin{array}[c]{lll}
\left\Vert \mathcal{L}(X^{(1)})-\mathcal{L}(X^{(2)})\right\Vert _{\mathcal{M}_{0}^{2}}^{2} & \leq & C_{0}\ \mathbb{E}[(X^{(1)}-X^{(2)})^{2}]\text{.}
\end{array} \]
\item[(ii)] Let $\{X^{(1)}(t)\}_{t\in [0,T]},$ $\{X^{(2)}(t)\}_{t\in [0,T]}$ be two paths, such that
\[ \mathbb{E}[ {\textstyle\int_{0}^{T}} X^{(i)2}(s)ds]<\infty \text{ for }i=1,2\text{.} \]
Then, for all $t$,
\[ \begin{array}[c]{lll}
||\mathcal{L}(X_{t}^{(1)})-\mathcal{L}(X_{t}^{(2)})||_{\mathcal{M}_{0,\delta}^{2}}^{2} & \leq & C_{0}\ \mathbb{E}[ {\textstyle\int_{-\delta}^{0}} (X^{(1)}(t-s)-X^{(2)}(t-s))^{2}ds]\text{.}
\end{array} \]
\end{description}
\end{lemma}
\noindent{Proof.} \quad By definition of the norms and standard properties of the complex exponential function, we have
\begin{align*}
& ||\mathcal{L}(X^{(1)},X^{(2)})-\mathcal{L}(\widetilde{X}^{(1)},\widetilde{X}^{(2)})||_{\mathcal{M}_{0}^{2}}^{2}\\
& :={ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }|\widehat{\mathcal{L}}(X^{(1)},X^{(2)})(y_{1},y_{2})-\widehat{\mathcal{L}}(\widetilde{X}^{(1)},\widetilde{X}^{(2)})(y_{1},y_{2})|^{2}e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\
& ={ {\textstyle\int_{\mathbb{R}^{2}}} }|{ {\textstyle\int_{\mathbb{R}^{2}}} }e^{-i(x^{(1)}y_{1}+x^{(2)}y_{2})}d\mathcal{L}(X^{(1)},X^{(2)})(x^{(1)},x^{(2)})\\
& - {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} e^{-i(\widetilde{x}^{(1)}y_{1}+\widetilde{x}^{(2)}y_{2})}d\mathcal{L}(\widetilde{X}^{(1)},\widetilde{X}^{(2)})(\widetilde{x}^{(1)},\widetilde{x}^{(2)})|^{2}e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\
& ={ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }|\mathbb{E}[e^{-i(X^{(1)}y_{1}+X^{(2)}y_{2})}-e^{-i(\widetilde{X}^{(1)}y_{1}+\widetilde{X}^{(2)}y_{2})}]|^{2}e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\
& \leq{ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }\mathbb{E}[|e^{-i(X^{(1)}y_{1}+X^{(2)}y_{2})}-e^{-i(\widetilde{X}^{(1)}y_{1}+\widetilde{X}^{(2)}y_{2})}|^{2}]e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\
& ={ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }\mathbb{E}[(\cos(X^{(1)}y_{1}+X^{(2)}y_{2})-\cos(\widetilde{X}^{(1)}y_{1}+\widetilde{X}^{(2)}y_{2}))^{2}\\
& +(\sin(X^{(1)}y_{1}+X^{(2)}y_{2})-\sin(\widetilde{X}^{(1)}y_{1}+\widetilde{X}^{(2)}y_{2}))^{2}]e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\
& \leq{ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}}
}(\mathbb{E}[|(X^{(1)}-\widetilde{X}^{(1)})y_{1}+(X^{(2)})-\widetilde{X} ^{(2)})y_{2}|^{2}]\\ & +\mathbb{E}[(X^{(1)}-\widetilde{X}^{(1)})y_{1}+(X^{(2)})-\widetilde {X}^{(2)})y_{2}|^{2})]e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\ & =2{ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }(\mathbb{E}[|(X^{(1)}-\widetilde{X}^{(1)})y_{1}+(X^{(2)})-\widetilde{X} ^{(2)})y_{2}|]^{2})e^{-y_{1}^{2}-y_{2}^{2}}dy_{1}dy_{2}\\ & \leq4{ {\textstyle\int_{\mathcal{\mathbb{R}}^{2}}} }(\mathbb{E}[(X^{(1)}-\widetilde{X}^{(1)})^{2}]y_{1}^{2}+\mathbb{E} [(X^{(2)}-\widetilde{X}^{(2)})^{2}]y_{2}^{2})e^{-y_{1}^{2}-y_{2}^{2}} dy_{1}dy_{2}\\ & \leq C_{0}\mathbb{E}[(X^{(1)}-\widetilde{X}^{(1)})^{2}+(X^{(2)} )-\widetilde{X}^{(2)})^{2}]. \end{align*} \noindent Similarly, we get that \[ \begin{array} [c]{lll} ||\mathcal{L}(X_{t}^{(1)})-\mathcal{L}(X_{t}^{(2)})||_{\mathcal{M}_{0,\delta }^{2}}^{2} & \leq & {\textstyle\int_{\mathbb{-\delta}}^{0}} \left\Vert \mathcal{L}(X^{(1)}(t-s))-\mathcal{L}(X^{(2)}(t-s))\right\Vert _{\mathcal{M}_{0}^{2}}^{2}ds\\ & \leq & C_{0}\ \mathbb{E}[ {\textstyle\int_{\mathbb{-\delta}}^{0}} (X^{(1)}(t-s)-X^{(2)}(t-s))^{2}ds]. \end{array} \] $\square$\newline \subsection{Spaces} \noindent Throughout this work, we will use the following spaces: \begin{itemize} \item $\mathbb{L}^{2}$ is the space of measurable functions{ }$\sigma :[0,\delta]\rightarrow\mathbb{R}$, such that \[ \parallel\sigma\parallel_{\mathbb{L}^{2}}^{2}:= {\textstyle\int_{0}^{\delta}} |\sigma(r)|^{2}dr<\infty. \] \item $\mathcal{S}^{2}$ is the set of ${\mathbb{R}}$-valued $\mathbb{F} $-adapted c\`{a}dl\`{a}g processes $(X(t))_{t\in\lbrack0,T]}$, such that \[ {\Vert X\Vert}_{\mathcal{S}^{2}}^{2}:={\mathbb{E}}[\sup_{t\in\lbrack 0,T]}|X(t)|^{2}]~<~\infty\;. \] \item $L^{2}$ is the set of ${\mathbb{R}}$-valued $\mathbb{F}$-predictable processes $(Q(t))_{t\in\lbrack0,T]}$, such that \[ \Vert Q\Vert_{L^{2}}^{2}:={\mathbb{E}}[ {\textstyle\int_{0}^{T}} |Q(t)|^{2}dt]<~\infty\;. \] \item $\Xi$ is the set of $\mathbb{G}$-adapted, nondecreasing right-continuous processes $\xi$ with $\xi(0^{-})=0$ (the set of admissible singular controls). \item $L^{2}(\Omega,\mathcal{F}_{t})$ is the set of ${\mathbb{R}}$-valued square integrable $\mathcal{F}_{t}$-measurable random variables. \item $\mathcal{R}$ is the set of functions $r:\mathbb{R} _{0}\rightarrow\mathbb{R}.$ \item $C_{a}([0,T],\mathcal{M}_{0})$ denotes the set of absolutely continuous functions $m:[0,T]\rightarrow\mathcal{M}_{0}.$ \end{itemize} \section{Existence and Uniqueness of Solutions of Reflected AMBSDEs} In this section, we will prove existence and uniqueness of solutions of reflected mean-field BSDEs with a generator which is \emph{(time-) advanced}, in the sense that at any time $t$, the generator may depend on future values up to a positive constant $\delta$ as follows:\newline For a given driver $F$, terminal value $R$ and barrier (or obstacle) process $S$, we say that an $\mathbb{F}$-adapted process $(Y,Z,K)\in\mathcal{S}^{2}\times L^{2}\times\Xi$ is a solution of the corresponding reflected AMBSDEs if the following holds: \begin{equation} \left\{ \begin{array} [c]{l} (i)Y(t)=R+ {\textstyle\int_{t}^{T}} F(s,Y(s),Z(s),\mathbb{E}[Y^{s}|\mathcal{F}_{s}],\mathbb{E}[Z^{s} |\mathcal{F}_{s}],\mathcal{L}(Y^{s},Z^{s}))ds\\ \quad\quad\quad\quad\quad\quad+K(T)-K(t)- {\textstyle\int_{t}^{T}} Z(s)dB(s);\quad0\leq t\leq T,\\ (ii)Y(t)\geq S(t);\quad0\leq t\leq T,\\ (iii){ {\textstyle\int_{0}^{T}} }(Y(t)-S(t))dK^{c}(t)=0\text{ a.s. 
and }\triangle K^{d}(t)=-\triangle Y(t)\mathbf{1}_{\{Y(t^{-})=S(t^{-})\}}\text{ a.s.},\\ (iv)Y(t)=R;\quad t\geq T,\\ (v)Z(t)=0;\quad t>T, \end{array} \right. \label{eq3.1} \end{equation} where $Y^{s}=\left( Y(s+r)\right) _{r\in\left[ 0,\delta\right] } ,Z^{s}=\left( Z(s+r)\right) _{r\in\left[ 0,\delta\right] },$ the terminal condition $R\in L^{2}(\Omega,\mathcal{F}_{T})$, the driver $F:[0,T]\times \Omega\times\mathbb{R}^{2}\mathbb{\times L}^{2}\times\mathbb{L}^{2} \times\mathcal{M}_{0,\delta}\longrightarrow\mathbb{R}$ is $\mathcal{F}_{t} $-progressively measurable and we have denoted by $K^{c}$ and $K^{d}$ the continuous and discontinuous parts of $K$ respectively. \noindent We may remark here that in order to guarantee adaptedness, the time-advanced terms are given under conditional expectation with respect to $\mathcal{F}_{s}$. \noindent Our result can be regarded as an extension of the existing results on advanced BSDEs of Peng \& Yang \cite{peng2009}, \O ksendal \textit{et al} \cite{Oksendal2011}, Jeanblanc \textit{et al} \cite{JLA} and we refer here to the paper by Quenez and Sulem \cite{QS} on reflected BSDEs for c\`{a}dl\`{a}g obstacle. \noindent To obtain the existence and the uniqueness of a solution, we make the following set of assumptions: \begin{itemize} \item For the driver $F,$ we assume \item[ (i)] There exists a constant $c\in \mathbb{R} $ such that \[ |F(\cdot,0,0,0,0,\mathcal{L}(0,0))|\leq c, \] where $\mathcal{L}(0,0)$ is the Dirac measure with mass at zero. \item[ (ii)] There exists a constant $C_{Lip}^{F}\in \mathbb{R} $ such that, for $t\in\lbrack0,T],$ \begin{align*} & |F(t,y_{1},z_{1},y_{2},z_{2},\mathcal{L}(y_{2},z_{2}))-F(t,y_{1}^{\prime },z_{1}^{\prime},y_{2}^{\prime},z_{2}^{\prime},\mathcal{L}(y_{2}^{\prime },z_{2}^{\prime}))|^{2}\\ & \leq C_{Lip}^{F}\{|y_{1}-y_{1}^{\prime}|^{2}+|z_{1}-z_{1}^{\prime} |^{2}+||y_{2}-y_{2}^{\prime}||_{\mathbb{L}^{2}}^{2}+||z_{2}-z_{2}^{\prime }||_{\mathbb{L}^{2}}^{2}\\ & +||\mathcal{L}(y_{2},z_{2})-\mathcal{L}(y_{2}^{\prime},z_{2}^{\prime })||_{\mathcal{M}_{0,\delta}}^{2})\}, \end{align*} for all $y_{1},z_{1},y_{1}^{\prime},z_{1}^{\prime}\in{\mathbb{R}},$ $y_{2},z_{2},y_{2}^{\prime},z_{2}^{\prime}\in\mathbb{L}^{2},$ $\mathcal{L} (y_{2},z_{2}),\mathcal{L}(y_{2}^{\prime},z_{2}^{\prime})\in\mathcal{M} _{0,\delta}.$ \item For the barrier $S,$ we assume: \item[(iii)] The barrier $S$ is nondecreasing, $\mathbb{F}$-adapted, c\`{a}dl\`{a}g process satisfying \[ \mathbb{E}[\underset{t\in\lbrack0,T]}{\sup}|S(t)|^{2}]<\infty. \] \item[(iv)] $Y(t)\geq S(t); 0\leq t\leq T$. \item For the local time $K,$ we assume: \item[(v)] $K$ is a nondecreasing ${\mathbb{F}}$-adapted c\`{a}dl\`{a}g process with $K(0^{-})=0,$ such that ${ {\textstyle\int_{0}^{T}} }(Y(t)-S(t))dK^{c}(t)=0$ a.s. and $\triangle K^{d}(t)=-\triangle Y(t)\mathbf{1}_{\{Y(t^{-})=S(t^{-})\}}$ a.s. \end{itemize} \begin{theorem} [Existence and Uniqueness]Under the above assumptions (i)-(v), the reflected AMBSDEs (\ref{eq3.1}) has a unique solution $(Y,Z,K) \in\mathcal{S}^{2} \times L^{2} \times\Xi$. \end{theorem} \noindent{Proof.} \quad For $t\in\lbrack0,T]$ and for all {$\beta>0$, we define the Hilbert space }$\mathbb{H}_{{\beta}}^{2}$ to be the set of all $(Y,Z)\in\mathcal{S}^{2}\times L^{2}$, equipped with the norm \[ ||(Y,Z)||_{\mathbb{H}_{{\beta}}^{2}}^{2}:={\mathbb{E}}[ {\textstyle\int_{0}^{T+\delta}} e^{\beta t}(Y^{2}(t)+Z^{2}(t))dt]\;. 
\] \noindent Define the {mapping $\Phi:\mathbb{H}_{{\beta}}^{2}\rightarrow$ }$\mathbb{H}_{{\beta}}^{2}$ by $\Phi(y,z)=(Y,Z)$ where $(Y,Z)\in ${{$\mathcal{S}^{2}$}$\times${$L^{2}(\subset{{L^{2}}\times{L^{2}}})$}} is defined by \[ \left\{ \begin{array} [c]{ll} Y(t) & =R+{ {\textstyle\int_{t}^{T}} }F(s,{y(s),z(s)},\mathbb{E}[y^{s}|\mathcal{F}_{s}],\mathbb{E}[z^{s} |\mathcal{F}_{s}],\mathcal{L}(y^{s},z^{s}))ds\\ & +K(T)-K(t)-{ {\textstyle\int_{t}^{T}} }Z(s)dB(s); \quad0\leq t\leq T,\\ Y(t) & =R; \quad t\geq T,\\ Z(t) & =0; \quad t>T. \end{array} \right. \] To prove the theorem, it suffices to prove that {$\Phi$ is a contraction mapping in }$\mathbb{H}_{{\beta}}^{2}${ under the norm $||\cdot||_{\mathbb{H} _{{\beta}}^{2}}$ for large enough }${\beta}${. For two arbitrary elements $(y_{1},z_{1},k_{1})$ and $(y_{2},z_{2},k_{2})$, we denote their difference by} \[ (\widetilde{y},\widetilde{z},\widetilde{k})=(y_{1}-y_{2},z_{1}-z_{2} ,k_{1}-,k_{2})\;. \] Applying It\^{o} formula for semimartingale, we get \begin{align*} & \mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}(\beta\widetilde{Y}^{2}(t)+\widetilde{Z}^{2}(t))dt]\\ & =2\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}\widetilde{Y}(t)\{F(t,{y_{1}(t),z_{1}(t)},\mathbb{E}[y_{1} ^{t}|\mathcal{F}_{t}],\mathbb{E}[z_{1}^{t}|\mathcal{F}_{t}],\mathcal{L} (y_{1}^{t},z_{1}^{t}))\\ & -F(t,{y_{2}(t),z_{2}(t)},\mathbb{E}[y_{2}^{t}|\mathcal{F}_{t} ],\mathbb{E}[z_{2}^{t}|\mathcal{F}_{t}],\mathcal{L}(y_{2}^{t},z_{2} ^{t}))\}dt]\;\\ & +2\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}\widetilde{Y}(t)dK^{1}(t)]-2\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}\widetilde{Y}(t)dK^{2}(t)]. \end{align*} We have that \begin{align*} \widetilde{Y}(t)dK^{1,c}(t) & =(Y^{1}(t)-S(t))dK^{1,c}(t)-(Y^{2} (t)-S(t))dK^{1,c}(t)\\ & =-(Y^{2}(t)-S(t))dK^{1,c}(t)\leq0\text{ a.s.,} \end{align*} and by symmetry, we have also $\widetilde{Y}(t)dK^{2,c}(t)\geq0$ a.s. For the discontinuous case, we have as well \begin{align*} \widetilde{Y}(t)dK^{1,d}(t) & =(Y^{1}(t)-S(t))dK^{1,d}(t)-(Y^{2} (t)-S(t))dK^{1,d}(t)\\ & =-(Y^{2}(t)-S(t))dK^{1,d}(t)\leq0\text{ a.s.,} \end{align*} and by symmetry, we have also $\widetilde{Y}(t)dK^{2,d}(t)\geq0$ a.s. \noindent{By Lipschitz assumption and standard estimates, it follows that } \begin{align*} &\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}(\beta\widetilde{Y}^{2}(t)+\widetilde{Z}^{2}(t))dt]\\ & \leq 8\rho C^{2}\text{ }\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}\widetilde{Y}^{2}(t)dt]\\ &+\tfrac{1}{2\rho}\mathbb{E}[{\textstyle\int_{0}^{T}} e^{\beta t}(\widetilde{y}^{2}(t)+\widetilde{z}^{2}(t)+{\textstyle\int_{0}^{\delta}} (\widetilde{y}^{2}(t+r)+\widetilde{z}^{2}(t+r))dr)dt]\;. \end{align*} By change of variable $s=t+r$, we get \begin{align*} & \mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t} {\textstyle\int_{0}^{\delta}} (\widetilde{y}^{2}(t+r)+\widetilde{z}^{2}(t+r))dr)dt]\\ & \leq\mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t} {\textstyle\int_{t}^{t+\delta}} (\widetilde{y}^{2}(s)+\widetilde{z}^{2}(s))ds)dt]. \end{align*} Fubini's theorem gives that \begin{align*} & \mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t} {\textstyle\int_{0}^{\delta}} (\widetilde{y}^{2}(t+r)+\widetilde{z}^{2}(t+r))dr)dt]\\ & \leq\mathbb{E}[ {\textstyle\int_{0}^{T+\delta}} ( {\textstyle\int_{s-\delta}^{s}} e^{\beta t}dt)(\widetilde{y}^{2}(s)+\widetilde{z}^{2}(s)))ds]\\ & \leq\mathbb{E}[ {\textstyle\int_{0}^{T+\delta}} e^{\beta s}(\widetilde{y}^{2}(s)+\widetilde{z}^{2}(s)))ds]. 
\end{align*} Consequently, by choosing $\beta=1+8\rho C^{2},$ we have \[ \mathbb{E}[ {\textstyle\int_{0}^{T}} e^{\beta t}(\widetilde{Y}^{2}(t)+\widetilde{Z}^{2}(t))dt]\leq\tfrac{1}{\rho }\;\mathbb{E}[ {\textstyle\int_{0}^{T+\delta}} e^{\beta t}(\widetilde{y}^{2}(t)+\widetilde{z}^{2}(t))dt]\;. \] Since $\widetilde{Y}(t)=\widetilde{Z}(t)=0$ for $t>T$, we get \[ ||(\widetilde{Y},\widetilde{Z})||_{\mathbb{H}_{{\beta}}^{2}}^{2}\leq\tfrac {1}{\rho}\;||(\widetilde{y},\widetilde{z})||_{\mathbb{H}_{{\beta}}^{2}}^{2}\;. \] { }For{ }$\rho${$>1$, we get that }$\Phi$ is a contraction on $\mathbb{H} _{{\beta}}^{2}.$ $\square$ \section{Reflected AMBSDEs and optimal stopping under partial information} In this section we recall a connection between reflected AMBSDEs and optimal stopping problems under partial information. \begin{definition} Let $F:\Omega\times\lbrack0,T]\times\mathbb{R}^{2}\times \mathbb{L}^{2}\times\mathbb{L}^{2}\times\mathcal{M}_{0,\delta}\rightarrow\mathbb{R}$ be a given function. \\Assume that: $\bullet$ $F$ is $\mathbb{G}$-adapted and $|F(t,0,0,0,0,\mathcal{L}(0,0))|<c$, for all $t$; for some constant $c$. $\bullet$ $S(t)$ is a given $\mathbb{F}$-adapted c\`{a}dl\`{a}g nondecreasing process, such that \newline \[ \mathbb{E[}\underset{t\in\lbrack0,T]}{\sup}(S(t))^{2}]<\infty. \] $\bullet$ The terminal value $R \in L^{2}\left( \Omega,\mathcal{F} _{T}\right) $ is such that $R \geq S(T)$ a.s. We say that a $\mathbb{G}$-adapted triplet $\left( Y,\mathcal{Z},K\right) $ is a solution of the reflected AMBSDE with driver $F$, terminal value $R$ and the reflecting barrier $S(t)$ under the filtration $\mathbb{G}$, if the following hold: \begin{enumerate} \item \[ \mathbb{E[} {\textstyle\int_{0}^{T}} |F(s,Y(s),Z(s),\mathbb{E[}Y^{s}|\mathcal{F}_{s}],\mathbb{E[}\mathcal{Z}^{s} |\mathcal{F}_{s}],\mathcal{L}(Y^{s},\mathcal{Z}^{{s}}))|^{2}ds]<\infty, \] \item \[ \mathcal{Z}(t) \text{ is a } \mathbb{G}-martingale, \] \item \[ \begin{array} [c]{c} Y(t)=R+ {\textstyle\int_{t}^{T}} F(s,Y(s),Z(s),\mathbb{E[}Y^{s}|\mathcal{F}_{s}],\mathbb{E[}\mathcal{Z}^{s} |\mathcal{F}_{s}],\mathcal{L}(Y^{s},\mathcal{Z}^{{s}}))ds\\ - {\textstyle\int_{t}^{T}} dK(s)- {\textstyle\int_{t}^{T}} d\mathcal{Z}(s); \quad t\in\left[ 0,T\right] , \end{array} \newline \] or, equivalently,\newline \[ \begin{array} [c]{c} Y(t)=\mathbb{E}[R+ {\textstyle\int_{t}^{T}} F(s,Y(s),Z(s),\mathbb{E[}Y^{s}|\mathcal{F}_{s}],\mathbb{E[}\mathcal{Z}^{s} |\mathcal{F}_{s}],\mathcal{L}(Y^{s},\mathcal{Z}^{{s}}))ds\\ - {\textstyle\int_{t}^{T}} dK(s)|\mathcal{G}_{t}]; t\in\left[ 0,T\right] , \end{array} \] \item $K(t)$ is nondecreasing, $\mathbb{G}$-adapted, c\`{a}dl\`{a}g and $K(0^{-})=0,$ \item $Y(t)\geq S(t)$ a.s.; $t\in\lbrack0,T],$ \item $ {\textstyle\int_{0}^{T}} (Y(t)-S(t))dK(t)=0$ a.s. \end{enumerate} \end{definition} \vskip0.3cm The following result is essentially due to El Karoui \textit{et al} \cite{EKPPQ}. See also \O ksendal \& Sulem \cite{OS3} and \O ksendal \& Zhang \cite{OZ}. \begin{theorem} For $t\in\lbrack0,T]$, let $\mathcal{T}_{[t,T]}$ denote the set of all $\mathbb{G}$-stopping times $\tau:\Omega\mapsto\lbrack t,T].$\newline Suppose $\left( Y,\mathcal{Z},K\right) $ is a solution of the reflected AMBSDE above. 
\end{theorem} \begin{description} \item[(i)] Then $Y(t)$ is the solution of the optimal stopping problem \[ \begin{array} [c]{c} Y(t)=\underset{\tau\in\mathcal{T}_{[t,T]}}{ess\sup}\quad\{\mathbb{E}[ {\int_{t}^{\tau}} F(s,Y(s),\mathcal{Z}(s),Y^{s},\mathcal{Z}^{s},\mathcal{L}(Y^{s},\mathcal{Z}^{s}))ds\\ +S(\tau)\mathbf{1}_{\tau<T}+R\mathbf{1}_{\tau=T}|\mathcal{G}_{t}]\}; \quad t\in\left[ 0,T\right] . \end{array} \] \item[(ii)] Moreover the solution process $K(t)$ is given by \begin{align} K(T)-K(T-t)&=\underset{s\leq t}{\max}\Big\{R+ \int_{T-s}^{T} F(r,Y(r),\mathcal{Z}(r),\mathbb{E[}Y^{r}|\mathcal{F}_{r}],\mathbb{E[}\mathcal{Z}^{r}|\mathcal{F}_{r}],\mathcal{L}(Y^{r},\mathcal{Z}^{r}))dr\nonumber\\ &-\int _{T-s}^{T} d\mathcal{Z}(r)-S(T-s)\Big\}^{-}; \quad t\in\left[ 0,T\right] , \end{align} where $x^{-}=\max(-x,0),$ and an optimal stopping time $\hat{\tau}_{t}$ is given by \begin{align*} \hat{\tau}_{t}:&=\inf\{s\in\lbrack t,T],Y(s)\leq S(s)\}\wedge T\\ & =\inf\{s\in\lbrack t,T],K(s) > K(t)\}\wedge T. \end{align*} \item[(iii)] In particular, if we choose $t=0$, we get that \begin{align*} \hat{\tau}_{0}:&=\inf\{s\in\lbrack 0,T],Y(s)\leq S(s)\}\wedge T\\ & =\inf\{s\in\lbrack 0,T],K(s) > 0\}\wedge T, \end{align*} solves the optimal stopping problem \begin{align*} Y(0)&=\sup_{\tau\in\mathcal{T}_{[0,T]}}\mathbb{E}[{\textstyle\int_{0}^{\tau}} F(s,Y(s), Z(s), \mathbb{E}[Y^{s}|\mathcal{F}_s],\mathbb{E}[Z^{s}|\mathcal{F}_s],\mathcal{L}(Y^{s},Z^{s}))ds\\ &+S(\tau)\mathbf{1}_{\tau<T} +R\mathbf{1}_{\tau=T}]; t\in\left[ 0,T\right] . \end{align*} \end{description} \section{Optimal singular control of memory mean-field SDEs} We now return to the singular control problem stated in the Introduction:\newline \subsection{Problem statement} Consider the following \emph{mean-field memory singular controlled system}, with a state process $X(t)=X^{\xi}(t)$ and a singular control process $\xi(t),$ of the form \begin{equation} \left\{ \begin{array} [c]{l} dX(t)=b(t,X(t),X_{t},M(t),M_{t},\xi(t))dt+\sigma(t,X(t),X_{t},M(t),M_{t} ,\xi(t))dB(t)\\ \quad\quad\quad+\lambda(t)d\xi(t); \quad t\in\lbrack0,T],\\ X(t)=\alpha(t); \quad t\in\lbrack-\delta,0], \end{array} \right. \label{eq6.1} \end{equation} where $X_{t}=\{X(t-s)\}_{0\leq s\leq\delta},$ $M(t)=\mathcal{L}(X(t)),$ $M_{t}=\{M(t-s)\}_{0\leq s\leq\delta},$ $b,\sigma:\Omega\times\lbrack 0,T]\times \mathbb{R} \times\mathbb{L}^{2}\times\mathcal{M}_{0}\times\mathcal{M}_{0,\delta} \times{\mathbb{R}\times\Xi\rightarrow\mathbb{R}},$ $\lambda:[0,T]\rightarrow \mathbb{R} .$\newline We assume that our control process $\xi(t)$ is ${\mathbb{R}} $-valued right-continuous $\mathbb{G}$-adapted processes, and $t\mapsto\xi(t)$ is increasing (nondecreasing) with $\xi(0^{-})=0$, and such that the corresponding state equation has a unique solution $X$ with $\omega\mapsto X(t,\omega)\in L^{2}(\mathbb{P})$ for all $t$. The set of such processes $\xi$ is denoted by $\Xi$. 
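The objects introduced above (the memory segment $X_{t}$, the law $M(t)$ and the singular control $\xi$) can be made concrete through a crude particle discretisation. The following Python sketch is only an illustration: the particular coefficients (mean reversion towards the delayed empirical mean, a constant diffusion, $\lambda(t)\equiv -1$) and the fixed reflection barrier used as a candidate control are ad hoc choices, and nothing in the analysis below relies on them.
\begin{verbatim}
# Toy particle approximation of a controlled memory mean-field dynamics.
# All concrete coefficients below are ad hoc illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n_part, T, delta, dt = 2000, 1.0, 0.1, 0.001
n_steps, n_lag = int(T / dt), int(delta / dt)
kappa, sigma, barrier = 1.0, 0.3, 1.5

X = np.ones(n_part)                          # alpha(t) = 1 on [-delta, 0]
hist = [X.copy() for _ in range(n_lag + 1)]  # stored path segment X_t
xi = np.zeros(n_part)                        # cumulative singular control

for _ in range(n_steps):
    lagged_mean = hist[0].mean()             # empirical proxy for E[X(t-delta)]
    drift = kappa * (lagged_mean - X)        # toy choice of b
    X = X + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_part)
    push = np.maximum(X - barrier, 0.0)      # reflect the state at the barrier
    xi += push                               # d(xi) > 0 only while X > barrier
    X -= push                                # corresponds to lambda(t) = -1
    hist.append(X.copy())
    hist.pop(0)

print(X.mean(), xi.mean())                   # terminal state and harvest
\end{verbatim}
Here the empirical mean over the particles stands in for the law $M(t)$, the stored segment stands in for $X_{t}$, and the cumulated reflection push is one admissible nondecreasing control $\xi\in\Xi$; it is not claimed to be optimal.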
\newline \noindent The \emph{performance functional} is assumed to be of the form \begin{equation} \begin{array} [c]{c} J(\xi)={\mathbb{E}}[ {\textstyle\int_{0}^{T}} f(t,X(t),X_{t},M(t),M_{t},\xi(t))dt+g(X(T),M(T))\\ \qquad+ {\textstyle\int_{0}^{T}} h(t,X(t))d\xi(t)];\quad\xi\in\Xi, \end{array} \label{eq6.3} \end{equation} where $f:\Omega\times\lbrack0,T]\times \mathbb{R} \times\mathbb{L}^{2}\times\mathcal{M}_{0}\times\mathcal{M}_{0,\delta} \times{\mathbb{R}\times\Xi\rightarrow\mathbb{R}},$ $h:\Omega\times \lbrack0,T]\times \mathbb{R} \rightarrow \mathbb{R} ,$ $g:\Omega\times \mathbb{R} \times\mathcal{M}_{0}\rightarrow \mathbb{R} .$\newline The problem we consider, is the following: \begin{problem} \label{prob} Find an optimal control $\hat{\xi}\in\Xi$, such that \begin{equation} J(\hat{\xi})=\sup_{\xi\in\Xi}J(\xi)\,. \label{eq6.4} \end{equation} \end{problem} \noindent First we explain some notation and introduce some useful dual operators. \newline Let $L_{0}^{2}$ denote the set of measurable stochastic processes $Y(t)$ on $\mathbb{R}$ such that $Y(t)=0$ for $t<0$ and for $t>T$ and \[ {\mathbb{ E}}[{\textstyle\int_{0}^{T}} Y^{2}(t)dt]<\infty\quad a.s. \] \begin{itemize} \item Let $G(t,\bar{x})=G_{\bar{x}}(t,\cdot):[0,T]\times\mathbb{L}^{2} \mapsto\mathbb{R}$ be a bounded linear functional on $\mathbb{L}^{2}$ for each $t$, uniformly bounded in $t$.Then the map \[ Y\mapsto\mathbb{E}[ {\textstyle\int_{0}^{T}} \left\langle G_{\overline{x}}(t),Y_{t}\right\rangle dt];\quad Y\in L_{0}^{2} \] is a bounded linear functional on $L_{0}^{2}$. Therefore, by the Riesz representation theorem there exists a unique process denoted by $G_{\bar{x} }^{\ast}(t)\in L_{0}^{2}$ such that \begin{equation} {\mathbb{E}}[{ {\textstyle\int_{0}^{T}} }\left\langle G_{\overline{x}}(t),Y_{t}\right\rangle dt]={\mathbb{E}}[{ {\textstyle\int_{0}^{T}} }G_{\bar{x}}^{\ast}(t)Y(t)dt], \label{eq6.7a} \end{equation} for all $Y\in L_{0}^{2}$. \end{itemize} We illustrate these operators by some auxiliary results. \begin{lemma} Consider the case when \[ G_{\bar{x}}(t,\cdot)=\left\langle F,\cdot\right\rangle p(t), \text{ with } p \in L_0^2. \] Then \begin{equation} G_{\bar{x}}^{\ast}(t):=\left\langle F,p^{t}\right\rangle \label{eq4.8} \end{equation} satisfies \eqref{eq6.7a}, where $p^{t}:=\{p(t+r)\}_{r\in\lbrack0,\delta]}$. \end{lemma} \noindent{Proof.} \quad\quad We must verify that if we define $G^{*}_{\bar{x} }(t)$ by \eqref{eq4.8}, then \eqref{eq6.7a} holds. 
To this end, choose $Y\in L_{0}^{2}$ and consider \begin{align} & {\textstyle\int_{0}^{T}} \left\langle F,p^{t}\right\rangle Y(t)dt={ {\textstyle\int_{0}^{T}} }\left\langle F,\{p(t+r)\}_{r\in\lbrack0,\delta]}\right\rangle Y(t)dt\nonumber\\ & = {\textstyle\int_{0}^{T}} \left\langle F,\{Y(t)p(t+r)\}_{r\in\lbrack0,\delta]}\right\rangle dt=\left\langle F,\{ {\textstyle\int_{r}^{T+r}} Y(u-r)p(u)du\}_{r\in\lbrack0,\delta]}\right\rangle \nonumber\\ & =\left\langle F,\{ {\textstyle\int_{0}^{T}} Y(u-r)p(u)du\}_{r\in\lbrack0,\delta]}\right\rangle ={ {\textstyle\int_{0}^{T}} }\left\langle F,Y_{u}\right\rangle p(u)du\nonumber\\ & ={ {\textstyle\int_{0}^{T}} }\left\langle \nabla_{\bar{x}}G(u),Y_{u}\right\rangle du.\nonumber \end{align} $\square$ \begin{example} (i) For example, if $a\in\mathbb{R}^{[0,\delta]}$ is a bounded function and $F(\bar{x})$ is the averaging operator defined by \[ F(\bar{x})=\left\langle F,\bar{x}\right\rangle = {\textstyle\int_{-\delta}^{0}} a(s)x(s)ds \] when $\bar{x}=\{x(s)\}_{s\in\lbrack0,\delta]}$, then \[ \left\langle F,p^{t}\right\rangle ={ {\textstyle\int_{0}^{\delta}} }a(r)p(t+r)dr. \] (ii) Similarly, if $t_{0}\in\lbrack0,\delta]$ and $G$ is evaluation at $t_{0} $, i.e. \[ G(\bar{x})=x(t_{0})\text{ when }\bar{x}=\{x(s)\}_{s\in\lbrack0,\delta]}, \] then \[ \left\langle G,p^{t}\right\rangle =p(t+t_{0}). \] \end{example} We now have the machinery to start working on Problem (\ref{prob} )\textbf{.}\newline Let $\widehat{\mathcal{M}}$ be the set of all random measures on $[0,T]$. Define the \textit{(singular) Hamiltonian} \[ H:[0,T]\times\mathbb{R}\times\mathbb{L}^{2}\times\mathcal{M}_{0} \times\mathcal{M}_{0,\delta}\times\Xi\times\mathbb{R}\times\mathbb{R}\times C_{a}([0,T],\mathcal{M}_{0})\mapsto\widehat{\mathcal{M}} \] as the following random measure: \begin{align} dH(t) & =dH(t,x,\bar{x},m,\bar{m},\xi,p^{0},q^{0},p^{1})\label{eq5.2a}\\ & =H_{0}(t,x,\bar{x},m,\bar{m},\xi,p^{0},q^{0},p^{1})dt+\{\lambda (t)p^{0}+h(t,x)\}d\xi(t)\,,\nonumber \end{align} where \begin{align} & H_{0}(t,x,\bar{x},m,\bar{m},\xi,p^{0},q^{0},p^{1})\label{eq5.3a}\\ & :=f(t,x,\bar{x},m,\bar{m},\xi)+b(t,x,\bar{x},m,\bar{m},\xi)p^{0} +\sigma(t,x,\bar{x},m,\bar{m},\xi)q^{0}+\left\langle p^{1},\beta (m)\right\rangle ,\nonumber \end{align} where $\beta(m)$ is defined below. Here $m$ denotes a generic value of the measure $M(t)$. We assume that f, b, $\sigma,\gamma,h$ and $g$ are Fr\'{e}chet differentiable $(C^{1})$ in the variables $x,\bar{x},m,\bar{m},\xi$. Then the same holds for $H_{0}$ and $H$.\newline\noindent We define the adjoint processes $(p^{0},q^{0}),(p^{1},q^{1})$ as the solutions of the following BSDEs, respectively: \begin{equation} \left\{ \begin{array} [c]{ll} dp^{0}(t) & =-\Big\{\tfrac{\partial H_{0}}{\partial x}(t)+\mathbb{E[} \nabla_{\bar{x}}^{\ast}H_{0}(t)|\mathcal{F}_{t}]\Big\}dt-\frac{\partial h}{\partial x}(t)d\xi(t)+q^{0}(t)dB(t);\quad t\in\lbrack0,T],\\ p^{0}(t) & =\tfrac{\partial g}{\partial x}(T);\quad t\geq T,\\ q^{0}(t) & =0;\quad t>T, \end{array} \right. \label{eqp0} \end{equation} and \begin{equation} \left\{ \begin{array} [c]{ll} dp^{1}(t) & =-\{\nabla_{m}H_{0}(t)+\mathbb{E}\left[ \nabla_{\bar{m}}^{\ast }H_{0}(t)|\mathcal{F}_{t}\right] \}dt+q^{1}(t)dB(t);\quad t\in\lbrack0,T],\\ p^{1}(t) & =\nabla_{m}g(T);\quad t\geq T,\\ q^{1}(t) & =0;\quad t>T, \end{array} \right. 
\label{eqp1} \end{equation} where $g(T)=g(X(T),M(T))$ and \[ H_{0}(t)=H_{0}(t,x,\bar{x},m,\bar{m},\xi,p^{0},q^{0},p^{1})_{x=X(t),\bar {x}=X_{t},m=M(t),\bar{m}=M_{t},\xi=\xi(t),p^{0}=p^{0}(t),q^{0}=q^{0} (t),p^{1}=p^{1}(t)}. \] Here $\nabla_{m}H_{0}$ is the Frech\'{e}t derivative of $H_{0}$ with respect to $m$, and $\nabla_{\bar{m}}^{\ast}H_{0}$ is defined similarly to $\nabla_{\bar{x}}^{\ast}H_{0}$.\newline \subsection{A sufficient maximum principle for singular mean field control with partial information} \label{sec6.2} We proceed to state a sufficient maximum principle (a verification theorem) for the singular mean-field control problem described by \eqref{eq6.1} - \eqref{eq6.4}. Because of the mean-field terms, it is natural to consider the two-dimensional system $(X(t),M(t))$, where the dynamics for $M(t)$ is the following: \begin{equation} \begin{cases} dM(t)=\beta(M(t)dt,\nonumber\\ M(0)\in\mathcal{M}_{0}, \end{cases} \end{equation} where we have put $\beta(M(t))=M^{\prime}(t)$. See Lemma \ref{m'}.\newline \begin{theorem} [\emph{Sufficient maximum principle for mean-field singular control} ]\label{th5.1a} Let $\hat{\xi}\in\Xi$ be such that the system of \eqref{eq6.1} and \eqref{eqp0} - \eqref{eqp1} has a solution $\hat{X}(t),\hat{p}^{0} (t),\hat{q}^{0}(t),\hat{p}^{1}(t),\hat{q}^{1}(t)$ and set $\hat{M} (t)=\mathcal{L}(\hat{X}(t))$. Suppose the following conditions hold: \begin{itemize} \item (The concavity assumptions) The functions \begin{align} & \mathbb{R}\times\mathbb{L}^{2}\times\mathcal{M}_{0}\times\mathcal{M} _{0,\delta}\times\Xi\ni(x,\bar{x},m,\bar{m},\xi)\mapsto dH(t,x,\bar{x} ,m,\bar{m},\xi,\hat{p}^{0}(t),\hat{q}^{0}(t),\hat{p}^{1}(t),\hat{q} ^{1}(t))\nonumber\\ & \text{ and }\nonumber\\ & \mathbb{R}\times\mathcal{M}_{0}\ni(x,m)\mapsto g(x,m)\nonumber\\ & \text{ are concave for all }t\in\lbrack0,T]\text{ and almost all }\omega \in\Omega. \label{eq3.10a} \end{align} \item (Conditional variational inequality) For all $\xi\in\Xi$ we have \[ \mathbb{E}[dH(t)|\mathcal{G}_{t}]\leq\mathbb{E}[d\hat{H}(t)|\mathcal{G} _{t}],\, \] i.e. \begin{equation} \begin{array} [c]{c} \mathbb{E}[H_{0}(t)|\mathcal{G}_{t}]dt+\mathbb{E}[\lambda(t)\hat{p} ^{0}(t)+\hat{h}(t)|\mathcal{G}_{t}]d\xi(t)\\ \leq\mathbb{E}[\hat{H}_{0}(t)|\mathcal{G}_{t}]dt+\mathbb{E}[\lambda(t)\hat {p}^{0}(t)+\hat{h}(t)|\mathcal{G}_{t}]d\hat{\xi}(t), \end{array} \label{eq5.21} \end{equation} where the inequality is interpreted in the sense of inequality between random measures in $\mathcal{M}$. \end{itemize} Then $\hat{\xi}(t)$ is an optimal control for $J(\xi)$. \end{theorem} \noindent{Proof.} \quad Choose $\xi\in\Xi$ and consider \[ J(\xi)-J(\hat{\xi})=I_{1}+I_{2}+I_{3}, \] where \begin{align} & I_{1}={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \{f(t)-\hat{f}(t)\}dt],\nonumber\\ & I_{2}={\mathbb{E}}[g(T)-\hat{g}(T)],\nonumber\\ & I_{3}={\mathbb{E}}[ {\textstyle\int_{0}^{T}} h(t)d\xi(t)-\hat{h}(t)d\hat{\xi}(t)].\label{eq6.16} \end{align} By the definition of the Hamiltonian \eqref{eq5.3a} we have \begin{equation} \begin{array} [c]{ll} I_{1} & =\mathbb{E[} {\textstyle\int_{0}^{T}} \{H_{0}(t)-\hat{H}_{0}(t)-\hat{p}^{0}(t)\tilde{b}(t)-\hat{q}^{0} (t)\tilde{\sigma}(t)-\langle\hat{p}^{1}(t),\tilde{M}^{\prime}(t)\rangle\}dt], \end{array} \label{i1} \end{equation} where $\tilde{b}(t)=\check{b}(t)-\hat{b}(t)$ etc. 
By the concavity of $g$ and the terminal values of the BSDEs \eqref{eqp0}, \eqref{eqp1}, we have \[ \begin{array} [c]{lll} I_{2} & \leq\mathbb{E}[\tfrac{\partial g}{\partial x}(T)\tilde{X} (T)+\langle\nabla_{m}g(T),\tilde{M}(T)\rangle] & =\mathbb{E}[\hat{p} ^{0}(T)\tilde{X}(T)+\langle\hat{p}^{1}(T),\tilde{M}(T)\rangle]. \end{array} \] Applying the It\^{o} formula to $\hat{p}^{0}(t)\tilde{X}(t)$ and $\langle \hat{p}^{1}(t),\tilde{M}(t)\rangle$, we get \begin{align} I_{2} & \leq\mathbb{E}[\hat{p}^{0}(T)\tilde{X}(T)+\langle\hat{p} ^{1}(T),\tilde{M}(T)\rangle]\nonumber\\ & =\mathbb{E}[ {\textstyle\int_{0}^{T}} \hat{p}^{0}(t)d\tilde{X}(t)+ {\textstyle\int_{0}^{T}} \tilde{X}(t)d\hat{p}^{0}(t)+ {\textstyle\int_{0}^{T}} \hat{q}^{0}(t)\tilde{\sigma}(t)dt\nonumber\\ & +\mathbb{E}[ {\textstyle\int_{0}^{T}} \langle\hat{p}^{{1}}(t),d\tilde{M}(t)\rangle+ {\textstyle\int_{0}^{T}} \tilde{M}(t)d\hat{p}^{1}(t)]\nonumber\\ & =\mathbb{E}[{\textstyle\int_{0}^{T}}\hat{p}^{0}(t)\tilde{b} (t)dt-{\textstyle\int_{0}^{T}}\tfrac{\partial\hat{H}_{0}}{\partial x} (t)\tilde{X}(t)dt-{\textstyle\int_{0}^{T}}\mathbb{E}[\nabla_{\bar{x}}^{\ast }\hat{H}_{0}(t)|\mathcal{F}_{t}]\widetilde{X}(t)dt\nonumber\\ & -{\int_{0}^{T}}\tfrac{\partial\hat{h}}{\partial x}(t)\widetilde{X} (t)d\hat{\xi}(t)+{\textstyle\int_{0}^{T}}\hat{q}^{0}(t)\tilde{\sigma }(t)dt+{\textstyle\int_{0}^{T}}\langle\hat{p}^{1}(t),\tilde{M}^{\prime }(t)\rangle dt\nonumber\\ & -{\textstyle\int_{0}^{T}}\langle\nabla_{m}\hat{H}_{0}(t),\tilde {M}(t)\rangle dt-{\textstyle\int_{0}^{T}}\mathbb{E}[\nabla_{\bar{m}}^{\ast }\hat{H}_{0}(t)|\mathcal{F}_{t}]\tilde{M}(t)dt],\label{I2} \end{align} where we have used that the $dB(t)$ and $\tilde{N}(dt,d\zeta)$ integrals with the necessary integrability property are martingales and then have mean zero. Substituting $\left( \ref{i1}\right) $ and $\left( \ref{I2}\right) $ in \eqref{eq6.16}, yields \begin{align*} & J(\xi)-J(\hat{\xi})\\ & \leq\mathbb{E}[ {\textstyle\int_{0}^{T}} \{H_{0}(t)-\hat{H}_{0}(t)-\tfrac{\partial\hat{H}_{0}}{\partial x}(t)\tilde {X}(t)-\langle\nabla_{\bar{x}}\hat{H}_{0}(t),\tilde{X}_{t}\rangle\\ & -\langle\nabla_{m}\hat{H}_{0}(t),\tilde{M}(t)\rangle-\langle\nabla_{\bar {m}}\hat{H}_{0}(t),\tilde{M}_{t}\rangle\}dt+{\textstyle\int_{0}^{T}} h(t)d\xi(t)\\ & -{\textstyle\int_{0}^{T}}\hat{h}(t)d\hat{\xi}(t)-{\textstyle\int_{0}^{T} }\tfrac{\partial\hat{h}}{\partial x}(t)\widetilde{X}(t)d\hat{\xi}(t)\\ & +{\textstyle\int_{0}^{T}}(\lambda(t)\hat{p}^{0}(t)+h(t))d\xi (t)-{\textstyle\int_{0}^{T}}(\lambda(t)\hat{p}^{0}(t)+\hat{h}(t))d\hat{\xi }(t)\\ & -{\textstyle\int_{0}^{T}}(\lambda(t)\hat{p}^{0}(t)+h(t))d\xi (t)+{\textstyle\int_{0}^{T}}(\lambda(t)\hat{p}^{0}(t)+\hat{h}(t))d\hat{\xi }(t)]. 
\end{align*} By the concavity of $dH$ and the fact that the process $\xi$ is $\mathbb{G} $-adapted, we obtain \begin{align} J(\xi)-J(\hat{\xi}) & \leq\mathbb{E}[ {\textstyle\int_{0}^{T}} \tfrac{\partial\hat{H}_{0}}{\partial\xi}(t)(\xi(t)-\hat{\xi} (t))dt+{\textstyle\int_{0}^{T}}(\lambda(t)\hat{p}^{0}(t)+h(t)(d\xi (t)-d\hat{\xi}(t))]\nonumber\\ & =\mathbb{E}[ {\textstyle\int_{0}^{T}} \mathbb{E}(\tfrac{\partial\hat{H}_{0}}{\partial\xi}(t)(\xi(t)-\hat{\xi }(t))+\hat{h}(t)(d\xi(t)-d\hat{\xi}(t))|\mathcal{G}_{t})dt]\nonumber\\ & =\mathbb{E}[{\textstyle\int_{0}^{T}}\langle\mathbb{E}(\nabla_{\xi}\hat {H}(t)|\mathcal{G}_{t}),\xi(t)-\hat{\xi}(t)\rangle dt]\leq0,\nonumber \end{align} where $\frac{\partial\hat{H}_{0}}{\partial\xi}=\nabla_{\xi}\hat{H}_{0}.$ The last equality holds because $\xi=\hat{\xi}$ maximizes the random measure $dH(t,\hat{X}(t),\hat{X}_{t},\hat{M}(t),\hat{M}_{t},\xi,\hat{p}^{0}(t),\hat {q}^{0}(t),\hat{p}^{1}(t))$ at $\xi=\hat{\xi}$. $\square$ \noindent From the above result, we can deduce the following \emph{sufficient variational inequalities}. \begin{theorem} [\emph{Sufficient variational inequalities}]Suppose that $H_{0}$ does not depend on $\xi$,i.e. that \[ \tfrac{\partial H_{0}}{\partial\xi}=0, \] and that the following variational inequalities hold: \begin{align} & (i)\quad{\mathbb{ E}}[\lambda(t) \hat{p}^{0}(t)+h(t,\hat{X}(t)) | \mathcal{G}_t]\leq 0,\label{eq5.28}\\ & (ii)\quad {\mathbb{ E}}[\lambda(t) \hat{p}^{0}(t)+h(t,\hat{X}(t)) | \mathcal{G}_t]d\hat{\xi}(t)=0. \label{eq5.29} \end{align} Then $\hat{\xi}$ is an optimal singular control.\newline \end{theorem} \noindent {Proof.} \quad Suppose \eqref{eq5.28} - \eqref{eq5.29} hold. Then for $\xi\in\Xi$ we have \[ {\mathbb{ E}}[\lambda(t) \hat{p}^{0}(t)+h(t,\hat{X}(t)) | \mathcal{G}_t]d\xi (t)\leq0={\mathbb{ E}}[\lambda(t) \hat{p}^{0}(t)+h(t,\hat{X}(t)) | \mathcal{G}_t]d\hat {\xi}(t). \] Since $H_{0}$ does not depend on $\xi$, it follows that \eqref{eq5.21} hold. $\square$ \newline \subsection{A necessary maximum principle for singular mean-field control} \label{sec5} In the previous section we gave a verification theorem, stating that if a given control $\hat{\xi}$ satisfies \eqref{eq3.10a}-\eqref{eq5.21}, then it is indeed optimal for the singular mean-field control problem. We now establish a partial converse, implying that if a control $\hat{\xi}$ is optimal for the singular mean-field control problem, then it is a conditional critical point for the Hamiltonian. \newline For $\xi\in\Xi$, let $\mathcal{V}(\xi)$ denote the set of $\mathbb{G}$-adapted processes $\eta$ of finite variation such that there exists $\varepsilon=\varepsilon(\xi)>0$ satisfying \begin{equation} \xi+a\eta\in\Xi\text{ for all }a\in\lbrack0,\varepsilon].\label{eq5.1a} \end{equation} Note that the following processes $\eta_{i}(s),i=1,2,3$ belong to $\mathcal{V}(\xi)$: \begin{align*} \eta_{1}(s) & :=\alpha(\omega)\chi_{\lbrack t,T]}(s),\text{ where } t\in\lbrack0,T],\alpha>0\text{ is }\mathcal{G}_{t}\text{-measurable },\\ \eta_{2}(s) & :=\xi(s),\\ \eta_{3}(s) & :=-\xi(s),s\in\lbrack0,T]. 
\end{align*} Then for $\xi\in\Xi$ and $\eta\in\mathcal{V}(\xi)$ we have, by our smoothness assumptions on the coefficients, \begin{align} & \lim_{a\rightarrow0^{+}}\tfrac{1}{a}(J(\xi+a\eta)-J(\xi))\label{eq5.2}\\ & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \{\tfrac{\partial f}{\partial x}(t)Z(t)+\left\langle \nabla_{\bar{x} }f(t),Z_{t}\right\rangle +\left\langle \nabla_{m}f(t),DM(t)\right\rangle \nonumber\\ & +\left\langle \nabla_{\bar{m}}f(t),DM_{t}\right\rangle \}dt+\tfrac{\partial f}{\partial\xi}(t)\eta(t)+\tfrac{\partial g}{\partial x}(T)Z(T)\nonumber\\ & +\left\langle \nabla_{m}g(T),DM(T)\right\rangle + {\textstyle\int_{0}^{T}} \tfrac{\partial h}{\partial x}(t)Z(t)d\xi(t)+ {\textstyle\int_{0}^{T}} h(t)d\eta(t)],\nonumber \end{align} where \begin{equation} \begin{array} [c]{l} Z(t):=Z_{\eta}(t):=\lim_{a\rightarrow0^{+}}\tfrac{1}{a}(X^{(\xi+a\eta )}(t)-X^{(\xi)}(t))\\ Z_{t}:=Z_{t,\eta}:=\lim_{a\rightarrow0^{+}}\tfrac{1}{a}(X_{t}^{(\xi+a\eta )}-X_{t}^{(\xi)}) \end{array} \label{eq5.3} \end{equation} and \begin{equation} \begin{array} [c]{l} DM(t):=D_{\eta}M(t):=\lim_{a\rightarrow0^{+}}\tfrac{1}{a}(M^{(\xi+a\eta )}(t)-M^{(\xi)}(t)),\\ DM_{t}:=D_{\eta}M_{t}:=\lim_{a\rightarrow0^{+}}\tfrac{1}{a}(M_{t}^{(\xi +a\eta)}-M_{t}^{(\xi)}). \end{array} \label{eq5.4a} \end{equation} Then \[ \left\{ \begin{array} [c]{ll} dZ(t) & =[\tfrac{\partial b}{\partial x}(t)Z(t)+\left\langle \nabla_{\bar{x} }b(t),Z_{t}\right\rangle +\left\langle \nabla_{m}b(t),DM(t)\right\rangle +\left\langle \nabla_{\bar{m}}b(t),DM_{t}\right\rangle \\ & +\tfrac{\partial b}{\partial\xi}(t)\eta(t)]dt+[\tfrac{\partial\sigma }{\partial x}(t)Z(t)+\left\langle \nabla_{\bar{x}}\sigma(t),Z_{t}\right\rangle +\left\langle \nabla_{m}\sigma(t),DM(t)\right\rangle \\ & +\left\langle \nabla_{\bar{m}}\sigma(t),DM_{t}\right\rangle +\tfrac{\partial b}{\partial\xi}(t)\eta(t)]dB(t)+\lambda(t)d\eta(t)\;;\\ Z(0) & =0\,, \end{array} \right. \] and similarly with $dZ_{t},dDM(t)$ and $dDM_{t}$. \noindent We first state and prove a basic step towards a necessary maximum principle. \begin{proposition} \label{prop}\ Let $\xi\in\Xi$ and choose $\eta\in\mathcal{V}(\xi)$.Then \begin{equation} \tfrac{d}{da}J(\xi+a\eta)|_{a=0}={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \tfrac{\partial H_{0}}{\partial\xi}(t)\eta(t)dt+ {\textstyle\int_{0}^{T}} \{\lambda(t)p^{0}(t)+h(t)\}d\eta(t)]. \label{eq6.44} \end{equation} \end{proposition} \noindent{Proof.} \quad Let $\xi\in\Xi$ and $\eta\in\mathcal{V}(\xi)$. Then we can write \begin{equation} \tfrac{d}{da}J(\xi+a\eta)|_{a=0}=A_{1}+A_{2}+A_{3}+A_{4}, \label{eq6.45} \end{equation} where \begin{align*} A_{1} & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \{\tfrac{\partial f}{\partial x}(t)Z(t)+\left\langle \nabla_{\bar{x} }f(t),Z_{t}\right\rangle +\left\langle \nabla_{m}f(t),DM(t)\right\rangle +\left\langle \nabla_{\bar{m}}f(t),DM_{t}\right\rangle \}dt],\\ A_{2} & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \tfrac{\partial f}{\partial\xi}(t)\eta(t)dt],\\ A_{3} & ={\mathbb{E}}[\tfrac{\partial g}{\partial x}(T)Z(T)+\left\langle \nabla_{m}g(T),DM(T)\right\rangle ]\\ A_{4} & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \tfrac{\partial h}{\partial x}(t)Z(t)d\xi(t)+h(t)d\eta(t)]. 
\end{align*} By the definition of $H_{0}$ we have \begin{align} A_{1} & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} Z(t)\{\tfrac{\partial H_{0}}{\partial x}(t)-\tfrac{\partial b}{\partial x}(t)p^{0}(t)-\tfrac{\partial\sigma}{\partial x}(t)q^{0}(t)\}dt \label{eq5.20} \\ & + {\textstyle\int_{0}^{T}} \left\langle \nabla_{\bar{x}}H_{0}(t)-\nabla_{\bar{x}}b(t)p^{0}(t)-\nabla _{\bar{x}}\sigma(t)q^{0}(t),Z_{t}\right\rangle dt\nonumber\\ & + {\textstyle\int_{0}^{T}} \left\langle \nabla_{m}H_{0}(t)-\nabla_{m}b(t)p^{0}(t)-\nabla_{m} \sigma(t)q^{0}(t),DM(t)\right\rangle dt\nonumber\\ & + {\textstyle\int_{0}^{T}} \left\langle \nabla_{\bar{m}}H_{0}(t)-\nabla_{\bar{m}}b(t)p^{0}(t)-\nabla _{\bar{m}}\sigma(t)q^{0}(t),DM_{t}\right\rangle \}dt],\nonumber \end{align} and \[ A_{2}={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \{\tfrac{\partial H_{0}}{\partial\xi}(t)-\tfrac{\partial b}{\partial\xi }(t)p^{0}(t)-\tfrac{\partial\sigma}{\partial\xi}(t)q^{0}(t)\}\eta(t)dt]. \] By the terminal conditions of $p^{0}(T)$, $p^{1}(T)$ (see \eqref{eqp0}-\eqref{eqp1}) and the It\^{o} formula, we have \begin{align} A_{3} & ={\mathbb{E}}[p^{0}(T)Z(T)+\left\langle p^{1}(T),DM(T)\right\rangle ]\label{eq6.49}\\ \text{ } & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} p^{0}(t)dZ(t)+ {\textstyle\int_{0}^{T}} Z(t)dp^{0}(t)\nonumber\\ & + {\textstyle\int_{0}^{T}} q^{0}(t)\{\tfrac{\partial\sigma}{\partial x}(t)Z(t)+\left\langle \nabla _{\bar{x}}\sigma(t),Z(t)\right\rangle +\left\langle \nabla_{m}\sigma (t),DM(t)\right\rangle \nonumber\\ & +\left\langle \nabla_{\bar{m}}\sigma(t),DM(t)\right\rangle +\tfrac {\partial\sigma}{\partial\xi}(t)\eta(t)\}dt\nonumber\\ & +\left\langle p^{1}(t),dDM(t)\right\rangle +\left\langle DM(t),dp^{1} (t)\right\rangle \nonumber\\ & ={\mathbb{E}}[ {\textstyle\int_{0}^{T}} p^{0}(t)\{\tfrac{\partial b}{\partial x}(t)Z(t)+\left\langle \nabla_{\bar{x} }b(t),Z_{t}\right\rangle +\left\langle \nabla_{m}b(t),DM(t)\right\rangle \nonumber\\ & \text{ }+\left\langle \nabla_{\bar{m}}b(t),DM_{t}\right\rangle +\tfrac{\partial b}{\partial\xi}(t)\eta(t)\}dt\nonumber\\ & + {\textstyle\int_{0}^{T}} q^{0}(t)\{\tfrac{\partial\sigma}{\partial x}(t)Z(t)+\left\langle \nabla _{\bar{x}}\sigma(t),Z_{t}\right\rangle +\left\langle \nabla_{m}\sigma (t),DM(t)\right\rangle \nonumber\\ & \text{ }+\left\langle \nabla_{\bar{m}}\sigma(t),DM_{t}\right\rangle +\tfrac{\partial\sigma}{\partial\xi}(t)\eta(t)\}dt\nonumber\\ & +{\textstyle\int_{0}^{T}}p^{0}(t)\lambda(t)d\eta(t)+{\textstyle\int_{0} ^{T}}\big\{Z(t)(-\{\tfrac{\partial H_{0}}{\partial x}(t)+{\mathbb{E}} (\nabla_{\bar{x}}^{\ast}H_{0}(t)|\mathcal{F}_{t})\})\nonumber\\ & -\left\langle \nabla_{m}H_{0}(t)+\mathbb{E}[\nabla_{\bar{m}}^{\ast} H_{0}(t)|\mathcal{F}_{t}],DM(t)\right\rangle \big\}dt-{\textstyle\int_{0}^{T} }\tfrac{\partial h}{\partial x}(t)Z(t)d\xi(t)].\nonumber \end{align} Combining \eqref{eq6.45}-\eqref{eq6.49} and using \eqref{eq6.7a}, we get \eqref{eq6.44}. $\square$ \begin{theorem} [Necessary maximum principle for mean-field singular control]\label{th5.1b} Suppose $\hat{\xi}\in\Xi$ is optimal, i.e. satisfies \eqref{eq6.4}. Suppose that \[ \tfrac{\partial H_{0}}{\partial\xi}=0. \] Then the following variational inequalities hold: \begin{align} & (i)\quad{\mathbb{ E}}[\lambda(t) \hat{p}^0(t) + h(t)|\mathcal{G}_t ]\leq0\text{ for all }t\in\lbrack0,T]\text{ a.s. }\quad\text{and}\label{eq1.17a}\\ & (ii)\quad{\mathbb{ E}}[\lambda(t)\hat{p}^0(t)+\hat{h}(t)|\mathcal{G}_{t}]d\hat{\xi }(t)=0\text{ for all }t\in\lbrack0,T]\,\text{ a.s. 
} \label{eq1.17b} \end{align} \end{theorem} \noindent{Proof.} \quad From Proposition (\ref{prop}) we have, since $\hat {\xi}$ is optimal, \begin{equation} 0\geq\tfrac{d}{da}J(\hat{\xi}+a\eta)|_{a=0}={\mathbb{E}}[ {\textstyle\int_{0}^{T}} \{\lambda(t)\hat{p}^{0}(t)+\hat{h}(t)\}d\eta(t)], \label{eq5.17a} \end{equation} for all $\eta\in\mathcal{V}(\hat{\xi})$.\newline If we choose $\eta$ to be a pure jump process of the form \[ \eta(s)= {\textstyle\sum_{0<t_{i}\leq s}} \alpha(t_{i}), \] where $\alpha(s)>0$ is $\mathcal{G}_{s}$-measurable for all $s$, then $\eta \in\mathcal{V}(\hat{\xi})$ and \eqref{eq5.17a} gives \[ {\mathbb{E}}[\{\lambda(t)\hat{p}^{0}(t)+\hat{h}(t)\}\alpha(t_{i})]\leq0\text{ for all }t_{i}\text{ a.s. } \] Since this holds for all such $\eta$ with arbitrary $t_{i}$, we conclude that \begin{equation} {\mathbb{E}}[\lambda(t)\hat{p}^{0}(t)+\hat{h}(t)|\mathcal{G}_{t}]\leq0\text{ for all }t\in\lbrack0,T]\text{ a.s. } \label{eq5.23a} \end{equation} Finally, applying \eqref{eq5.17a} to $\eta_{1}:=\hat{\xi}\in\mathcal{V} (\hat{\xi})$ and then to $\eta_{2}:=\hat{\xi}\in\mathcal{V}(\hat{\xi})$ we get, for all $t\in\lbrack0,T]$, \begin{equation} {\mathbb{E}}[\lambda(t)\hat{p}^{0}(t)+\hat{h}(t)|\mathcal{G}_{t}]d\hat{\xi }(t)=0\text{ for all }t\in\lbrack0,T]\text{ a.s. } \label{eq5.24a} \end{equation} With \eqref{eq5.23a} and \eqref{eq5.24a} the proof is complete. $\square$ \section{Application to optimal stopping} \label{sec6.1} From now on, let us assume, in addition to $$\frac{\partial H_0}{\partial \xi}=0,$$ that \begin{align} \lambda(t)&=-\lambda_{0} \text{ where } \lambda_{0} > 0, \text { and } \\ \mathbb{G}&=\mathbb{F}. \end{align} \noindent Then, dividing by $\lambda_0$ in \eqref{eq1.17a} - \eqref{eq1.17b} we get \begin{align} \label{eq6.2} & (i) \quad \hat{p}^0(t) \geq \frac{1}{\lambda_{0}} \hat {h}(t)) \text{ for all } t\in[0, T] \text{ a.s. } \quad\text{and}\\ & (ii) \quad\Big\{\hat{p}^0(t) - \frac{1}{\lambda_{0}} \hat{h}(t)\Big\} d\hat{\xi}(t) = 0 \text{ for all } t\in[0, T]\, \text{ a.s. } \label{eq6.3} \end{align} \noindent Comparing with \eqref{eq3.1}, we see that \eqref{eq6.2}-\eqref{eq6.3}, together with the singular BSDE \eqref{eqp0} for ${p}^0=\hat{p}^0,{q}^0=\hat{q}^0,\xi=\hat{\xi},$ constitutes an AMBSDEs related to the type discussed in Section 3 above, with \newline \begin{equation} \label{eq5.48a} S(t)=\frac{1}{\lambda_{0}} \hat{h}(t), \end{equation} and \begin{align} Y(t)&:= \hat{p}^0(t)\label{eq5.48} ,\\ Z(t)&:=\hat{q}^0(t),\\ dK(t)&:= \frac{\partial\hat{h}}{\partial x}(t) d\hat{\xi}(t). \label{eq5.49} \end{align} \noindent We summarize what we have proved as follows: \begin{theorem} Suppose $\hat{\xi}$ is an optimal control for the singular control problem \eqref{eq6.1} - \eqref{eq6.4}, with corresponding optimal processes $\hat {X}(t),\hat{X}_{t},\hat{M}(t),\hat{M}_{t}$. Define $S, Y,Z,K$ as in \eqref{eq5.48a}, \eqref{eq5.48}, \eqref{eq5.49}. \ Then $\hat{X}$ together with $(Y,Z,K)$ solve the following \emph{forward-backward memory-advanced mean-field singular reflected system}: \end{theorem} \begin{itemize} \item (i) Forward mean-field \emph{memory} singular SDE in $\hat{X}$: \begin{equation} \left\{ \begin{array} [c]{l} d\hat{X}(t)=b(t,\hat{X}(t),\hat{X}_{t},\hat{M}(t),\hat{M}_{t})dt \\ +\sigma(t,\hat{X}(t),\hat{X}_{t},\hat{M}(t),\hat{M}_{t})dB(t) -\lambda_{0}d\hat{\xi }(t);\quad t\in\lbrack0,T]\\ X(t)=\alpha(t),\quad t\in\lbrack-\delta,0], \end{array} \right. 
\end{equation}

\item (ii) \emph{Advanced} reflected BSDE in $(Y,Z,K)$ (for given $\hat{X}(t)$):
\[
\begin{cases}
& dY(t)=-\big\{\frac{\partial\hat{H}_{0}}{\partial x}(t)+{\mathbb{E}}[\nabla^{*}_{\bar{x}}\hat{H}_{0}(t)|\mathcal{F}_{t}]\big\}dt\\
& -dK(t)+Z(t)dB(t);\quad t\in\lbrack0,T],\\
& Y(t)\geq S(t);\quad t\in\lbrack0,T],\\
& [Y(t)-S(t)]dK(t)=0;\quad t\in\lbrack0,T],\\
& Y(T)=\frac{\partial g}{\partial x}(T).
\end{cases}
\]
\end{itemize}

\subsection{Connection to optimal stopping of memory mean-field SDE}
\vskip 0.3cm
If we combine the results above, we get

\begin{theorem}
Suppose $\hat{\xi}$ is an optimal control for the singular control problem
\eqref{eq6.1} - \eqref{eq6.4}, with corresponding optimal processes
$\hat{X}(t),\hat{X}_{t},\hat{M}(t),\hat{M}_{t}$ and adjoint processes
$\hat{p}^{0}(t),\hat{q}^{0}(t)$. Put
\begin{equation}
R=\frac{\partial g}{\partial x}(T).
\end{equation}
Let $$S(t), (Y(t),Z(t),K(t))$$ be as above and define
\begin{align}
F(t)&:=F(t,\hat{X}(t),\hat{M}(t),\hat{X}_t,\hat{M}_t, Y(t),Z(t),Y^{t},\mathcal{Z}^{t})\nonumber\\
&:=\frac{\partial\hat{H}_{0}}{\partial x}(t)+{\mathbb{E}}[\nabla^{*}_{\bar{x}}\hat{H}_{0}(t)|\mathcal{F}_{t}].
\end{align}

\begin{description}
\item[(i)] Then, for each $t\in\left[ 0,T\right], Y(t)$ is the solution of the
optimal stopping problem
\begin{align}
Y(t)=\underset{\tau\in\mathcal{T}_{[t,T]}}{ess\sup}\Big\{ \mathbb{E}[
{\textstyle\int_{t}^{\tau}}
F(s)ds+S(\tau)\mathbf{1}_{\tau<T} +R\mathbf{1}_{\tau=T}|\mathcal{F}_{t}]\Big\}.
\end{align}

\item[(ii)] Moreover, for $t \in [0,T]$ the solution process $K(t)$ is given by
\begin{align}
&K(T)-K(T-t)\nonumber\\
&=\underset{s\leq t}{\max}\Big\{R+ \int_{T-s}^{T} F(r)dr-\int_{T-s}^{T} Z(r)dB(r)-S(T-s)\Big\}^{-},
\end{align}
where $x^{-}=\max(-x,0),$ and an optimal stopping time $\hat{\tau}_{t}$ is given by
\begin{align*}
\hat{\tau}_{t}:&=\inf\{s\in\lbrack t,T],Y(s)\leq S(s)\}\wedge T\\
&=\inf\{s\in\lbrack t,T],K(s) > K(t)\}\wedge T.
\end{align*}

\item[(iii)] In particular, if we choose $t=0$ we get that
\begin{align*}
\hat{\tau}_{0}:&=\inf\{s\in\lbrack 0,T],Y(s)\leq S(s)\}\wedge T \\
&=\inf\{s\in\lbrack 0,T],K(s) > 0\}\wedge T
\end{align*}
solves the optimal stopping problem
\[
Y(0)=\sup_{\tau\in\mathcal{T}_{[0,T]}}\mathbb{E}[{\textstyle\int_{0}^{\tau}}
F(s)ds+S(\tau)\mathbf{1}_{\tau<T} +R\mathbf{1}_{\tau=T}] .
\]
\end{description}
\end{theorem}

\end{document}
\begin{document}
\title{Backdoors to Tractable Valued CSP}
\author{Robert Ganian \and M. S. Ramanujan \and Stefan Szeider}
\institute{Algorithms and Complexity Group, TU Wien, Vienna, Austria}
\maketitle

\begin{abstract}
We extend the notion of a strong backdoor from the CSP setting to the Valued
CSP setting (VCSP, for short). This provides a means for augmenting a class of
tractable VCSP instances to instances that are outside the class but of small
distance to the class, where the distance is measured in terms of the size of a
smallest backdoor. We establish that VCSP is fixed-parameter tractable when
parameterized by the size of a smallest backdoor into every tractable class of
VCSP instances characterized by a (possibly infinite) tractable valued
constraint language of finite arity and finite domain. We further extend this
fixed-parameter tractability result to so-called ``scattered classes'' of VCSP
instances where each connected component may belong to a different tractable
class.
\end{abstract}

\section{Introduction}
\label{sec:intro}

Valued CSP (or VCSP for short) is a powerful framework that entails, among
others,\blfootnote{The authors acknowledge support by the Austrian Science Fund
(FWF, project P26696). Robert Ganian is also affiliated with FI MU, Brno, Czech
Republic.} the problems CSP and Max-CSP as special cases \cite{Zivny12}. A VCSP
instance consists of a finite set of cost functions over a finite set of
variables which range over a domain $D$, and the task is to find an
instantiation of these variables that minimizes the sum of the cost functions.
The VCSP framework is robust and has been studied in different contexts in
computer science. In its full generality, VCSP considers cost functions that
can take as values the rational numbers and positive infinity. CSP
(feasibility) and Max-CSP (optimisation) arise as special cases by limiting the
values of cost functions to $\{0,\infty\}$ and $\{0,1\}$, respectively.

Clearly VCSP is in general intractable. Over the last decades, much research
has been devoted to the identification of tractable VCSP subproblems. An
important line of this research (see, e.g.,
\cite{JeavonsKrokhinZivny14,KolmogorovZivny,ThapperZivny15}) is the
characterization of tractable VCSPs in terms of restrictions on the underlying
\emph{valued constraint language} $\Gamma$, i.e., a set~$\Gamma$ of cost
functions that guarantees polynomial-time solvability of all VCSP instances
that use only cost functions from $\Gamma$. The VCSP restricted to instances
with cost functions from $\Gamma$ is denoted by $\textsc{VCSP}[\Gamma]$.

In this paper we provide algorithmic results which allow us to gradually
augment a tractable VCSP based on the notion of a (strong) \emph{backdoor} into
a tractable class of instances, called the \emph{base class}. Backdoors were
introduced by Williams \emph{et
al.}~\cite{WilliamsGomesSelman03,WilliamsGomesSelman03a} for SAT and CSP and
generalize in a natural way to VCSP{}. Let $\mathcal{C}$ denote a tractable
class of VCSP instances over a finite domain $D$. A backdoor of a VCSP instance
$\mathcal{P}$ into~$\mathcal{C}$ is a (small) subset $B$ of the variables of
$\mathcal{P}$ such that for all partial assignments $\alpha$ that instantiate
$B$, the restricted instance $\mathcal{P}|_\alpha$ belongs to the tractable
class $\mathcal{C}$. Once we know such a backdoor~$B$ of size $k$ we can solve
$\mathcal{P}$ by solving at most $|D|^k$ tractable instances.
In other words, VCSP is then \emph{fixed parameter tractable} parameterized by backdoor size. This is highly desirable as it allows us to scale the tractability for $\mathcal{C}$ to instances outside the class, paying for an increased ``distance'' from $\mathcal{C}$ only by a larger constant factor. In order to apply this backdoor approach to solving a \textsc{VCSP}{} instance, we first need to \emph{find} a small backdoor. This turns out to be an algorithmically challenging task. The fixed-parameter tractability of backdoor detection has been subject of intensive research in the context of SAT (see, e.g., \cite{GaspersSzeider12}) and CSP (see, e.g., \cite{CarbonnelCooper16}). In this paper we extend this line of research to \textsc{VCSP}. First we obtain some basic and fundamental results on backdoor detection when the base class is defined by a valued constraint language $\Gamma$. We obtain fixed-parameter tractability for the detection of backdoors into $\textsc{VCSP}[\Gamma]$ where $\Gamma$ is a valued constraint language with cost functions of bounded arity. In fact, we show the stronger result: fixed-parameter tractability also holds with respect to \emph{heterogeneous} base classes of the form $\textsc{VCSP}[\Gamma_1] \cup \dots \cup \textsc{VCSP}[\Gamma_\ell]$ where different assignments to the backdoor variables may result in instances that belong to different base classes $\textsc{VCSP}[\Gamma_i]$. A similar result holds for \textsc{CSP}{}, but the \textsc{VCSP}{} setting is slightly more complicated as a valued constraint language of finite arity over a finite domain is not necessarily finite. Secondly, we extend the basic fixed-parameter tractability result to so-called \emph{scattered} base classes of the form $\textsc{VCSP}[\Gamma_1]\uplus \dots \uplus \textsc{VCSP}[\Gamma_\ell]$ which contain \textsc{VCSP}{} instances where each connected component belongs to a tractable class $\textsc{VCSP}[\Gamma_i]$ for some $1\leq i \leq \ell$---again in the heterogeneous sense that for different assignments to the backdoor variables a single component of the reduced instance may belong to different classes $\textsc{VCSP}[\Gamma_i]$. Backdoors into a scattered base class can be much smaller than backdoors into each single class it is composed of, hence the gain is huge if we can handle scattered classes. This boost in scalability does not come for free. Indeed, already the ``crisp'' case of CSP, which was the topic of a recent SODA paper \cite{GanianRamanujanSzeider16}, requires a sophisticated algorithm which makes use of advanced techniques from parameterized algorithm design. This algorithm works under the requirement that the constraint languages contain all unary constraints (i.e., is conservative); this is a reasonable requirement as one needs these unary cost functions to express partial assignments (see also Section~\ref{sec:prelim} for further discussion). Here we lift the crisp case to general \textsc{VCSP}, and this also represents our main technical contribution. To achieve this, we proceed in two phases. First we transform the backdoor detection problem from a general scattered class $\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell)$ to a scattered class $\textsc{VCSP}(\Gamma_1')\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell')$ over \emph{finite} valued constraint languages~$\Gamma_i'$. 
In the subsequent second phase we transform the problem to a backdoor detection
problem into a scattered class $\textsc{VCSP}(\Gamma_1'')\uplus \dots \uplus
\textsc{VCSP}(\Gamma_\ell'')$ where each $\Gamma_i''$ is a finite crisp
language; i.e., we reduce from the \textsc{VCSP}{} setting to the \textsc{CSP}{}
setting. We believe that this sheds light on an interesting link between
backdoors in the \textsc{VCSP}{} and \textsc{CSP}{} settings. The latter
problem can now be solved using the known
algorithm~\cite{GanianRamanujanSzeider16}.

\subsection*{Related Work}
Williams \emph{et al.}~\cite{WilliamsGomesSelman03,WilliamsGomesSelman03a}
introduced backdoors for CSP or SAT as a theoretical tool to capture the
overall combinatorics of instances. The purpose was to analyze the empirical
behaviour of backtrack search algorithms. Nishimura \emph{et
al.}~\cite{NishimuraRagdeSzeider04-informal} started the investigation of the
parameterized complexity of \emph{finding} a small SAT backdoor and using it to
solve the instance. This led to a number of follow-up works (see
\cite{GaspersSzeider12}). Parameterized complexity provides an appealing
framework here: given a CSP instance with $n$ variables, one can trivially find
a backdoor of size $\leq k$ into a fixed tractable class of instances by trying
all subsets of the variable set containing $\leq k$ variables; but there are
$\Theta(n^k)$ such sets, and therefore the running time of this brute-force
algorithm scales very poorly in~$k$. Fixed-parameter tractability removes $k$
from the exponent, providing running times of the form $f(k)n^{c}$, which
yields significantly better scalability in the backdoor size.

Extensions to the basic notion of a backdoor have been proposed, including
backdoors with empty clause detection \cite{DilkinaGomesSabharwal07}, backdoors
in the context of learning \cite{DilkinaGomesSabharwal09}, heterogeneous
backdoors where different instantiations of the backdoor variables may result
in instances that belong to different base classes~\cite{GaspersMisraOSZ14},
and backdoors into scattered classes where each connected component of an
instance may belong to a different tractable
class~\cite{GanianRamanujanSzeider16}. Le Bras \emph{et
al.}~\cite{LenbrasBernsteinGomesSelmanDover13} used backdoors to speed up the
solution of hard problems in materials discovery, using a crowdsourcing
approach to find small backdoors. The research on the parameterized complexity
of backdoor detection was also successfully extended to other problem areas,
including disjunctive answer set
programming~\cite{FichteSzeider15,FichteSzeider15b}, abstract
argumentation~\cite{DvorakOrdyniakSzeider12}, and integer linear programming
\cite{GanianOrdyniak16}.

There are also several papers that investigate the parameterized complexity of
backdoor detection for CSP{}. Bessi{\`e}re \emph{et
al.}~\cite{BessiereCarbonnelHebrardKatsirelosWalsh13} considered ``partition
backdoors'', which are sets of variables whose deletion partitions the CSP
instance into two parts: one falls into a tractable class defined by a
conservative polymorphism, and the other part is a collection of independent
constraints. They also performed an empirical evaluation of the backdoor
approach, which yielded promising results. Gaspers \emph{et
al.}~\cite{GaspersMisraOSZ14} considered heterogeneous backdoors into tractable
CSP classes that are characterized by polymorphisms.
A similar approach was also undertaken by Carbonnel \emph{et
al.}~\cite{CarbonnelCooperHebrard14} who considered base classes that are
``$h$-Helly'' for a fixed integer $h$ under the additional assumption that the
domain is a finite subset of the natural numbers and comes with a fixed
ordering.

\section{Preliminaries}
\label{sec:prelim}

\subsection{Valued Constraint Satisfaction}
For a tuple $t$, we shall denote by $t[i]$ its $i$-th component. We shall
denote by $\cal Q$ the set of all rational numbers, by ${\cal Q}_{\geq 0}$ the
set of all nonnegative rational numbers, and by ${\cal \ol Q}_{\geq 0}$ the set
of all nonnegative rational numbers together with positive infinity, $\infty$.
We define $\alpha+\infty=\infty+\alpha=\infty$ for all $\alpha\in {\cal \ol Q}_{\geq 0}$,
and $\alpha \cdot \infty = \infty$ for all $\alpha\in {\cal Q}_{\geq 0}$. The
elements of ${\cal \ol Q}_{\geq 0}$ are called \emph{costs}. For every fixed
set $D$ and $m\geq 0$, a function $\phi$ from $D^m$ to ${\cal \ol Q}_{\geq 0}$
will be called a \emph{cost function} on $D$ of arity $m$. $D$ is called the
\emph{domain}, and here we will only deal with finite domains. If the range of
$\phi$ is $\{0,\infty\}$, then $\phi$ is called a \emph{crisp} cost function.
With every relation $R$ on $D$, we can associate a crisp cost function $\phi_R$
on $D$ which maps tuples in $R$ to $0$ and tuples not in $R$ to $\infty$. On
the other hand, with every $m$-ary cost function $\phi$ we can associate a
relation $R_\phi$ defined by
$(x_1,\dots,x_m)\in R_\phi \Leftrightarrow \phi(x_1,\dots,x_m)< \infty$. In
view of the close correspondence between crisp cost functions and relations we
shall use these terms interchangeably in the rest of the paper.

A \textsc{VCSP} instance consists of a set of variables, a set of possible
values, and a multiset of valued constraints. Each valued constraint has an
associated cost function which assigns a cost to every possible tuple of values
for the variables in the scope of the valued constraint. The goal is to find an
assignment of values to all of the variables that has the minimum total cost. A
formal definition is provided below.

\begin{definition}[\textsc{VCSP}]
An instance $\cal P$ of the \textsc{Valued Constraint Satisfaction Problem}, or
\textsc{VCSP}, is a triple $(V,D,\cal C)$ where $V$ is a finite set of
\emph{variables}, which are to be assigned values from the set $D$, and
$\cal C$ is a multiset of \emph{valued constraints}. Each $c\in \cal C$ is a
pair $c=(\vec{x},\phi)$, where $\vec{x}$ is a tuple of variables of length $m$
called the \emph{scope} of $c$, and
$\phi: D^m\rightarrow {\cal \ol Q}_{\geq 0}$ is an $m$-ary cost function.

An \emph{assignment} for the instance $\cal P$ is a mapping $\tau$ from $V$ to
$D$. We extend $\tau$ to a mapping from $V^k$ to $D^k$ on tuples of variables
by applying $\tau$ componentwise. The \emph{cost} of an assignment $\tau$ is
defined as follows:
\[\text{Cost}_{\cal P}(\tau)=\sum_{(\vec{x},\phi)\in \cal C}\phi(\tau(\vec{x})).\]
The task for VCSP is the computation of an assignment with minimum cost, called
a \emph{solution} to $\cal P$.
\end{definition}

For a constraint $c$, we will use $\text{\normalfont{\bfseries var}}(c)$ to
denote the set of variables which occur in the scope of $c$. We will later also
deal with the \emph{constraint satisfaction problem}, or \textsc{CSP}{}.
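To make the definition concrete, the following minimal Python sketch evaluates
$\text{Cost}_{\cal P}(\tau)$ for a toy instance over $D=\{0,1\}$ and finds a
solution by exhaustive search; the encoding of valued constraints as
(scope, cost function) pairs is our own illustrative choice and not part of the
formal framework.
\begin{verbatim}
from itertools import product

# A toy VCSP instance over D = {0, 1} with variables x and y:
#   phi1(x)    = 0 if x == 0, and 2 otherwise   (unary cost function)
#   phi2(x, y) = 0 if x == y, and 1 otherwise   (binary cost function)
# Each valued constraint is encoded as a (scope, cost function) pair.
V = ["x", "y"]
D = [0, 1]
C = [
    (("x",), lambda a: 0 if a == 0 else 2),
    (("x", "y"), lambda a, b: 0 if a == b else 1),
]

def cost(tau):
    """Cost_P(tau): sum every cost function over its scope under tau."""
    return sum(phi(*(tau[v] for v in scope)) for scope, phi in C)

# Brute-force minimisation over all |D|^|V| assignments.
best = min((dict(zip(V, vals)) for vals in product(D, repeat=len(V))),
           key=cost)
print(best, cost(best))   # {'x': 0, 'y': 0} 0
\end{verbatim}
Such brute force of course takes time $|D|^{|V|}$; the backdoor approach
developed below restricts the exhaustive part to a small set of variables.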
Having already defined {\textsc{VCSP}}, it is advantageous to simply define
{\textsc{CSP}} as the special case of {\textsc{VCSP}} where each valued
constraint has a crisp cost function.

The following representation of a cost function will sometimes be useful for
our purposes. A \emph{cost table} for an $m$-ary cost function $\phi$ is a
table with $|D|^m$ rows and $m+1$ columns with the following property: each row
corresponds to a unique tuple $\vec{a}=(a_1,\dots,a_m)\in D^m$; for each
$i\in [m]$ the position $i$ of this row contains $a_i$, and position $m+1$ of
this row contains $\phi(a_1,\dots,a_m)$.

A \emph{partial assignment} is a mapping from $V'\subseteq V$ to $D$. Given a
partial assignment $\tau$, the \emph{application} of $\tau$ on a valued
constraint $c=(\vec{x},\phi)$ results in a new valued constraint
$c|_{\tau}=(\vec{x}',\phi')$ defined as follows. Let
$\vec{x}'=\vec{x}\setminus V'$ (i.e., $\vec{x}'$ is obtained by removing all
elements in $V'\cap \vec{x}$ from $\vec{x}$) and $m'=|\vec{x'}|$. Then for each
$\vec{a'}\in D^{m'}$, we set $\phi'(\vec{a'})=\phi(\vec{a})$ where for each
$i\in [m]$
\[\vec{a}[i]= \left\{
\begin{array}{ll}
\tau(\vec{x}[i])& \mbox{\ if } \vec{x}[i]\in V' \\
\vec{a'}[i-j] & \mbox{\ otherwise, where } j=|\SB \vec{x}[p] \SM p\in [i] \SE \cap V'|.
\end{array}
\right. \]
Intuitively, the tuple $\vec{a}$ defined above is obtained by taking the
original tuple $\vec{a}'$ and enriching it by the values of the assignment
$\tau$ applied to the ``missing'' variables from $\vec{x}$. In the special case
when $\vec{x}'$ is empty, the valued constraint $c|_{\tau}$ becomes a nullary
constraint whose cost function $\phi'$ will effectively be a constant. The
application of $\tau$ on a \textsc{VCSP} instance $\cal P$ then results in a
new \textsc{VCSP} instance ${\cal P}|_{\tau}=(V\setminus V', D, \cal C')$ where
${\cal C'}=\SB c|_{\tau} \SM c\in \cal C\SE$. It will be useful to observe that
applying a partial assignment $\tau$ can be done in time linear in $|\cal P|$
(each valued constraint can be processed independently, and the processing of
each such valued constraint consists of merely pruning the cost table).

\subsection{Valued Constraint Languages}
A \emph{valued constraint language} (or \emph{language} for short) is a set of
cost functions. The arity of a language $\Gamma$ is the maximum arity of a cost
function in $\Gamma$, or $\infty$ if $\Gamma$ contains cost functions of
arbitrarily large arities. Each language $\Gamma$ defines a set
$\textsc{VCSP}[\Gamma]$ of \textsc{VCSP} instances which only use cost
functions from $\Gamma$; formally, $(V, D, {\cal C})\in\textsc{VCSP}[\Gamma]$
iff each $(\vec{x},\phi)\in \cal C$ satisfies $\phi\in \Gamma$. A language is
\emph{crisp} if it contains only crisp cost functions.

A language $\Gamma$ is \emph{globally tractable} if there exists a
polynomial-time algorithm which solves
$\textsc{VCSP}[\Gamma]$.\footnote{The literature also defines the notion of
\emph{tractability}~\cite{JeavonsKrokhinZivny14,KrokhinBulatovJeavons05}, which
we do not consider here. We remark that, to the best of our knowledge, all
known tractable constraint languages are also globally
tractable~\cite{JeavonsKrokhinZivny14,KrokhinBulatovJeavons05}.} Similarly, a
class $\cal H$ of \textsc{VCSP}{} instances is called tractable if there exists
a polynomial-time algorithm which solves $\cal H$.
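The application of partial assignments from the previous subsection underlies
both the use of backdoors and the closure operation discussed below. As a small
companion to the toy instance sketched earlier, the helper below restricts a
cost function to the unassigned part of its scope; with the cost-table
representation this indeed amounts to pruning rows. The helper is again only an
illustration, and the names are ours.
\begin{verbatim}
def apply_partial(constraint, tau):
    """Return c|_tau: fix the variables assigned by tau, keep the rest."""
    scope, phi = constraint
    kept = tuple(v for v in scope if v not in tau)   # the new scope x'

    def phi_restricted(*vals):
        it = iter(vals)
        full = tuple(tau[v] if v in tau else next(it) for v in scope)
        return phi(*full)

    return kept, phi_restricted

# Example: fixing x = 1 in the binary constraint phi2 from above.
c2 = (("x", "y"), lambda a, b: 0 if a == b else 1)
scope2, phi2 = apply_partial(c2, {"x": 1})
print(scope2, phi2(0), phi2(1))   # ('y',) 1 0
\end{verbatim}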
For technical reasons, we will implicitly assume that every language contains all nullary cost functions (i.e., constants); it is easily seen that adding such cost functions into a language has no impact on its tractability. There are a few other properties of languages that will be required to formally state our results. A language $\Gamma$ is \emph{efficiently recognizable} if there exists a polynomial-time algorithm which takes as input a cost function $\phi$ and decides whether $\phi\in\Gamma$. We note that every finite language is efficiently recognizable. A language $\Gamma$ is \emph{closed under partial assignments} if for every instance $\mathcal{P}\in \textsc{VCSP}[\Gamma]$ and every partial assignment $\tau$ on $\mathcal{P}$ and every valued constraint $c=(\vec{x},\phi)$ in $\mathcal{P}$, the valued constraint $c|_{\tau}=(\vec{x}',\phi')$ satisfies $\phi'\in \Gamma$. The \emph{closure of a language~$\Gamma$ under partial assignments}, is the language $\Gamma'\supseteq \Gamma$ containing all cost functions that can be obtained from $\Gamma$ via partial assignments; formally, $\Gamma'$ contains a cost function $\phi'$ if and only if there exists a cost function $\phi\in \Gamma$ such that for a constraint $c=(\vec{x},\phi)$ and an assignment $\tau:X\rightarrow D$ defined on a subset $X\subseteq \text{\normalfont{\bfseries var}}(c)$ we have $c|_{\tau}=(\vec{x}',\phi')$. If a language $\Gamma$ is closed under partial assignments, then also $\textsc{VCSP}[\Gamma]$ is closed under partial assignments, which is a natural property and provides a certain robustness of the class. This robustness is also useful when considering backdoors into $\textsc{VCSP}[\Gamma]$ (see Section~\ref{sec:bdstandard}), as then every superset of a backdoor remains a backdoor. Incidentally, being closed under partial assignments is also a property of tractable classes defined in terms of a polynomial-time subsolver~\cite{WilliamsGomesSelman03,WilliamsGomesSelman03a} where the property is called \emph{self-reducibility}. A language is \emph{conservative} if it contains all unary cost functions~\cite{KolmogorovZivny}. We note that being closed under partial assignments is closely related to the well-studied property of conservativeness. Crucially, for every conservative globally tractable language $\Gamma$, its closure under partial assignments $\Gamma'$ will also be globally tractable; indeed, one can observe that every instance $\mathcal{P}\in \textsc{VCSP}[\Gamma']$ can be converted, in linear time, to a solution-equivalent instance $\mathcal{P}'\in \textsc{VCSP}[\Gamma]$ by using infinity-valued (or even sufficiently high-valued) unary cost functions to model the effects of partial assignments. \subsection{Parameterized Complexity} \label{sub:parcomp} We give a brief and rather informal review of the most important concepts of parameterized complexity. For an in-depth treatment of the subject we refer the reader to other sources \cite{CyganFKLMPPS15,DowneyFellows13,FlumGrohe06,Niedermeier06}. The instances of a parameterized problem can be considered as pairs $(I,k)$ where~$I$ is the \emph{main part} of the instance and $k$ is the \emph{parameter} of the instance; the latter is usually a non-negative integer. A parameterized problem is \emph{fixed-parameter tractable} (FPT) if instances $(I,k)$ of size $n$ (with respect to some reasonable encoding) can be solved in time $\mathcal{O}(f(k)n^c)$ where $f$ is a computable function and $c$ is a constant independent of $k$. 
The function $f$ is called the \emph{parameter dependence}, and algorithms with running time in this form are called \emph{fixed-parameter algorithms}. Since the parameter dependence is usually superpolynomial, we will often give the running times of our algorithms in $\mathcal{O}^*$ notation which suppresses polynomial factors. Hence the running time of an FPT algorithm can be simply stated as $\mathcal{O}^*(f(k))$. The exists a completeness theory which allows to obtain strong theoretical evidence that a parameterized problem is \emph{not} fixed-parameter tractable. This theory is based on a hierarchy of parameterized complexity classes $\W{1}\subseteq \W{2} \subseteq \dots$ where all inclusions are believed to be proper. If a parameterized problem is shown to be $\W{i}$-hard for some $i\geq 1$, then the problem is unlikely to be fixed-parameter tractable, similarly to an NP-complete problem being solvable in polynomial time \cite{CyganFKLMPPS15,DowneyFellows13,FlumGrohe06,Niedermeier06}. \section{Backdoors into Tractable Languages} \label{sec:bdstandard} This section is devoted to establishing the first general results for finding and exploiting backdoors for \textsc{VCSP}. We first present the formal definition of backdoors in the context of VCSP and describe how such backdoors once found, can be used to solve the VCSP instance. Subsequently, we show how to detect backdoors into a single tractable {\textsc{VCSP}} class with certain properties. In fact, our proof shows something stronger. That is, we show how to detect \emph{heterogeneous} backdoors into a finite set of VCSP classes which satisfy these properties. The notion of heterogeneous backdoors is based on that introduced by Gaspers \emph{et al.}~\cite{GaspersMisraOSZ14}. For now, we proceed with the definition of a backdoor. \begin{definition}\label{def:backdoors} Let $\cal H$ be a fixed class of VCSP instances over a domain $D$ and let ${\cal P}=(V,D,\cal C)$ be a VCSP instance. A \emph{backdoor} into $\cal H$ is a subset $X\subseteq V$ such that for each assignment $\tau:X\rightarrow D$, the reduced instance ${\cal P}|_{\tau}$ is in $\cal H$. \end{definition} We note that this naturally corresponds to the notion of a \emph{strong} backdoor in the context of Constraint Satisfaction and Satisfiability~\cite{WilliamsGomesSelman03,WilliamsGomesSelman03a}; here we drop the adjective ``strong'' because the other kind of backdoors studied on these structures (so-called \emph{weak} backdoors) do not seem to be useful in the general {\textsc{VCSP}} setting. Namely, in analogy to the CSP setting, one would define a weak backdoor of a VCSP instance ${\cal P}=(V,D,\cal C)$ into $\cal H$ as a subset $X\subseteq V$ such that for some assignment $\tau:X\rightarrow D$ (i)~the reduced instance ${\cal P}|_{\tau}$ is in $\cal H$ and (ii)~$\tau$ can be extended to an assignment to $V$ of minimum cost. However, in order to ensure (ii) we need to compare the cost of $\tau$ with the costs of all other assignments $\tau'$ to~$V$. If $X$ is not a strong backdoor, then some of the reduced instances ${\cal P}|_{\text{$\tau'$ restricted to $X$}}$ will be outside of ${\cal H}$, and so in general we have no efficient way of determining a minimum cost assignment for it. We begin by showing that small backdoors for globally tractable languages can always be used to efficiently solve {\textsc{VCSP}} instances as long as the domain is finite (assuming such a backdoor is known). 
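The lemma below makes this precise. Informally, the whole approach amounts to
the following sketch, in which \texttt{solve\_in\_C} stands for the assumed
polynomial-time algorithm for the base class $\mathcal{C}$ and
\texttt{apply\_assignment} for the reduction ${\mathcal P}|_{\tau}$; both
helpers and the instance encoding are hypothetical placeholders rather than
part of the formal development.
\begin{verbatim}
from itertools import product

def solve_with_backdoor(P, X, D, solve_in_C, apply_assignment):
    """Brute force over the backdoor X (a list), then use the base-class solver.

    solve_in_C(P') -> (assignment, cost) : assumed poly-time solver for C
    apply_assignment(P, tau) -> P|_tau   : reduction by a partial assignment
    """
    best_assignment, best_cost = None, float("inf")
    for values in product(D, repeat=len(X)):      # |D|^|X| branches
        tau = dict(zip(X, values))
        reduced = apply_assignment(P, tau)        # lies in C, since X is a backdoor
        alpha, a = solve_in_C(reduced)
        if a < best_cost:
            best_assignment, best_cost = {**alpha, **tau}, a
    return best_assignment, best_cost
\end{verbatim}
Since every reduced instance lies in $\mathcal{C}$ by the definition of a
backdoor, each call to \texttt{solve\_in\_C} is legitimate, and only the loop
over the $|D|^{|X|}$ assignments is exponential.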
\begin{lemma}
\label{lem:using}
Let $\cal H$ be a tractable class of \textsc{VCSP}{} instances over a finite
domain $D$. There exists an algorithm which takes as input a $\textsc{VCSP}$
instance $\cal P$ along with a backdoor $X$ of ${\mathcal P}=(V,D,\cal C)$ into
$\cal H$, runs in time $\mathcal{O}^*(|D|^{|X|})$, and solves ${\mathcal P}$.
\end{lemma}

\begin{proof}
Let $\mathcal{B}$ be a polynomial-time algorithm which solves every instance
${\mathcal P}$ in $\cal H$, i.e., outputs a minimum-cost assignment in
${\mathcal P}$; the existence of $\mathcal{B}$ follows from the tractability of
$\cal H$. Consider the following algorithm ${\mathcal A}$. First,
${\mathcal A}$ branches on the at most $|D|^{|X|}$ partial assignments of $X$.
In each branch, ${\mathcal A}$ then applies the selected partial
assignment~$\tau$ to obtain the instance ${\mathcal P}|_{\tau}$ in linear time.
In this branch, ${\mathcal A}$ proceeds by calling $\mathcal{B}$ on
${\mathcal P}|_{\tau}$, and stores the produced assignment along with its cost.
After the branching is complete, ${\mathcal A}$ reads the table of all of the
at most $|D|^{|X|}$ assignments and costs output by $\mathcal{B}$, and selects
one assignment (say $\alpha$) with a minimum value (cost) $a$. Let $\tau$ be
the particular partial assignment on $X$ which resulted in the branch leading
to $\alpha$. ${\mathcal A}$ then outputs the assignment $\alpha \cup \tau$
along with the value (cost) $a$.
\end{proof}

Already for crisp languages it is known that having a small backdoor does not
necessarily allow for efficient (i.e., fixed-parameter) algorithms when the
domain is not bounded. Specifically, the \W{1}-hard $k$-clique problem can be
encoded into a \textsc{CSP}{} with only~$k$
variables~\cite{PapadimitriouYannakakis99}, which trivially contains a backdoor
of size at most $k$ for every crisp language under the natural assumption that
the language contains the empty constraint. Hence the finiteness of the domain
in Lemma~\ref{lem:using} is a necessary condition for the statement to hold.

Next, we show that it is possible to find a small backdoor into
$\textsc{VCSP}[\Gamma]$ efficiently (or correctly determine that no such small
backdoor exists) as long as $\Gamma$ has two properties. First, $\Gamma$ must
be efficiently recognizable; it is easily seen that this condition is a
necessary one, since detection of an empty backdoor is equivalent to
determining whether the instance lies in $\textsc{VCSP}[\Gamma]$. Second, the
arity of $\Gamma$ must be bounded. This condition is also necessary since
already in the more restricted \textsc{CSP}{} setting it was shown that
backdoor detection for a wide range of natural crisp languages (of unbounded
arity) is \W{2}-hard~\cite{GaspersMisraOSZ14}.

Before we proceed, we introduce the notion of heterogeneous backdoors for
\textsc{VCSP}{}, which represents a generalization of backdoors into classes
defined in terms of a single language. For languages
$\Gamma_1,\dots,\Gamma_\ell$, a heterogeneous backdoor is a backdoor into the
class ${\cal H}=\textsc{VCSP}[\Gamma_1]\cup \dots \cup
\textsc{VCSP}[\Gamma_\ell]$; in other words, after each assignment to the
backdoor variables, all cost functions in the resulting instance must belong to
a language from our set. We now show that detecting small heterogeneous
backdoors is fixed-parameter tractable parameterized by the size of the
backdoor.
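The detection algorithm behind the next lemma is a depth-bounded branching
procedure. The following simplified sketch illustrates the idea for a single
language $\Gamma$; here \texttt{in\_language} stands for the assumed
polynomial-time recognizer for $\Gamma$, \texttt{apply\_assignment} for the
reduction ${\mathcal P}|_{\tau}$, and the instance encoding is hypothetical.
Lemma~\ref{lem:finding} below gives the general heterogeneous version together
with the precise running time.
\begin{verbatim}
from itertools import product

def find_backdoor(P, k, D, q, in_language, apply_assignment, B=frozenset()):
    """Try to extend B to a backdoor of size <= k into VCSP[Gamma].

    in_language(phi) -> bool : assumed poly-time recognizer for Gamma
    apply_assignment(P, tau).constraints : the (scope, phi) pairs of P|_tau
    q is an upper bound on the arity of Gamma.
    """
    # Look for an assignment of B under which some constraint leaves Gamma.
    for values in product(D, repeat=len(B)):
        tau = dict(zip(sorted(B), values))
        bad = next((c for c in apply_assignment(P, tau).constraints
                    if not in_language(c[1])), None)
        if bad is not None:
            break
    else:
        return set(B)        # every reduced instance lies in VCSP[Gamma]
    if len(B) == k:
        return None          # B cannot be extended any further
    # Any backdoor extending B must contain one of at most q+1 variables
    # from the scope of the witness constraint `bad`.
    for x in list(bad[0])[:q + 1]:
        Z = find_backdoor(P, k, D, q, in_language, apply_assignment, B | {x})
        if Z is not None:
            return Z
    return None
\end{verbatim}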
\enlargethispage*{5mm} \begin{lemma} \label{lem:finding} Let $\Gamma_1,\dots, \Gamma_\ell$ be efficiently recognizable languages over a domain $D$ of size at most $d$ and let $q$ be a bound on the arity of $\Gamma_i$ for every $i\in [\ell]$. There exists an algorithm which takes as input a $\textsc{VCSP}$ instance $\cal P$ over $D$ and an integer $k$, runs in time $O^*( (\ell \cdot d \cdot (q+1))^{k})$, and either outputs a backdoor $X$ of $\cal P$ into $\textsc{VCSP}[\Gamma_1]\cup \dots \cup \textsc{VCSP}[\Gamma_\ell]$ such that $|X|\leq k$ or correctly concludes that no such backdoor exists. \end{lemma} \begin{proof} The algorithm is a standard branching algorithm (see also \cite{GaspersMisraOSZ14}). Formally, the algorithm is called ${\sf Detectbd}$, takes as input an instance $\mathcal{P}=(V,D,\cal C)$, integer $k$, a set of variables $B$ of size at most $k$ and in time $O^*((\ell\cdot d \cdot (q+1))^k)$ either correctly concludes that $\cal P$ has no backdoor $Z\supseteq B$ of size at most $k$ into $\textsc{VCSP}[\Gamma_1]\cup \dots \cup \textsc{VCSP}[\Gamma_\ell]$ or returns a backdoor $Z$ of $\cal P$ into $\textsc{VCSP}[\Gamma_1]\cup \dots \cup \textsc{VCSP}[\Gamma_\ell]$ of size at most $k$. The algorithm is initialized with $B=\emptyset$. In the base case, if $|B|=k$, and $B$ is a backdoor of $\mathcal{P}$ into $\textsc{VCSP}[\Gamma_1]\cup \dots \cup \textsc{VCSP}[\Gamma_\ell]$ then we return the set $B$. Otherwise, we return {{\sc No}}. We now move to the description of the case when $|B|<k$. In this case, if for every $\sigma:B\to D$ there is an $i\in [\ell]$ such that $\mathcal{P}|_{\sigma}\in \textsc{VCSP}[\Gamma_i]$, then it sets $ Z=B$ and returns it. That is, if $B$ is already found to be a backdoor of the required kind, then the algorithm returns $B$. Otherwise, it computes an assignment $\sigma:B\to D$ and valued constraints $c_1,\dots, c_\ell$ in $\mathcal{P}|_{\sigma}$ such that for every $i\in [\ell]$, the cost function of $c_i$ is not in $\Gamma_i$. Observe that for some $\sigma$, such a set of constraints must exist. Furthermore, since every $\Gamma_i$ is efficiently recognizable and $B$ has size at most $k$, the selection of these valued constraints takes time $\mathcal{O}^*(d^k)$. The algorithm now constructs a set $Y$ as follows. Initially, $Y=\emptyset$. For each $i\in [\ell]$, if the scope of the constraint $c_i$ contains more than $q$ variables then it adds to $Y$ an arbitrary $q+1$-sized subset of the scope of $c_i$. Otherwise, it adds to $Y$ all the variables in the scope of $c$. This completes the definition of $Y$. Observe that any backdoor set for the given instance which contains $B$ must also intersect $Y$. Hence the algorithm now branches on the set $Y$. Formally, for every $x\in Y$ it executes the recursive calls ${\sf Detectbd}(\mathcal{P},k,B\cup \{x\})$. If for some $x\in Y$, the invoked call returned a set of variables, then it must be a backdoor set of the given instance and hence it is returned. Otherwise, the algorithm returns {{\sc No}}. Since the branching factor of this algorithm is at most $\ell \cdot (q+1)$ and the set $B$, whose size is upper bounded by $k$, is enlarged with each recursive call, the number of nodes in the search tree is bounded by $\mathcal{O}((\ell\cdot (q+1))^k)$. Since the time spent at each node is bounded by $\mathcal{O}^*(d^k)$, the running time of the algorithm ${\sf Detectbd}$ is bounded by $\mathcal{O}^*((\ell\cdot (q+1)\cdot d)^k)$. 
\end{proof}

Combining Lemmas~\ref{lem:using} and~\ref{lem:finding}, we obtain the main
result of this section.

\enlargethispage*{5mm}

\begin{corollary}
\label{cor:bd}
Let $\Gamma_1,\dots, \Gamma_\ell$ be globally tractable and efficiently
recognizable languages each of arity at most $q$ over a domain of size $d$.
There exists an algorithm which solves {\textsc{VCSP}} in time
$O^*( (\ell \cdot d\cdot (q+1))^{k^2+k})$, where $k$ is the size of a minimum
backdoor of the given instance into $\textsc{VCSP}[\Gamma_1]\cup \dots \cup
\textsc{VCSP}[\Gamma_\ell]$.
\end{corollary}

\section{Backdoors into Scattered Classes}
\label{sec:bdscattered}

Having established Corollary~\ref{cor:bd} and knowing that both the arity and
domain restrictions of the language are necessary, it is natural to ask whether
it is possible to push the frontiers of tractability for backdoors to more
general classes of VCSP instances. In particular, there is no natural reason
why the instances we obtain after each assignment into the backdoor should
necessarily belong to the class defined by the same language $\Gamma$, even if
$\Gamma$ itself is one among several globally tractable languages. In fact, it
is not difficult to show that as long as each ``connected component'' of the
instance belongs to some tractable class after each assignment into the
backdoor, we can use the backdoor in a similar fashion as in
Lemma~\ref{lem:using}. Such a generalization of backdoors from single languages
to collections of languages has recently been obtained in the \textsc{CSP}{}
setting~\cite{GanianRamanujanSzeider16} for conservative constraint languages.
We proceed by formally defining these more general classes of \textsc{VCSP}{}
instances, along with some other required notions.

\subsection{Scattered Classes}
\label{sub:scattered}

A {\textsc{VCSP}} instance $(V,D,\cal C)$ is \emph{connected} if for each
partition of its variable set into nonempty sets $V_1$ and $V_2$, there exists
at least one constraint $c\in \cal C$ such that
$\text{\normalfont{\bfseries var}}(c)\cap V_1\neq \emptyset$ and
$\text{\normalfont{\bfseries var}}(c)\cap V_2\neq \emptyset$. A
\emph{connected component} of $(V,D,\cal C)$ is a maximal connected subinstance
$(V',D,{\cal C}')$ for $V'\subseteq V$, ${\cal C}'\subseteq \cal C$. These
notions naturally correspond to the connectedness and connected components of
standard graph representations of {\textsc{VCSP}} instances.

Let $\Gamma_1,\dots,\Gamma_d$ be languages. Then the \emph{scattered class}
$\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_d)$ is the
class of all instances $(V,D,\cal C)$ which may be partitioned into pairwise
variable-disjoint subinstances $(V_1,D,{\cal C}_1),\dots,(V_d,D,{\cal C}_d)$
such that $(V_i,D,{\cal C}_i)\in \textsc{VCSP}[\Gamma_i]$ for each $i\in [d]$.
Equivalently, an instance ${\mathcal P}$ is in
$\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_d)$ iff each
connected component in ${\mathcal P}$ belongs to some
$\textsc{VCSP}[\Gamma_i]$, $i\in [d]$.

\begin{lemma}
Let $\Gamma_1,\dots, \Gamma_d$ be globally tractable languages. Then there
exists a polynomial-time algorithm solving {\textsc{VCSP}} for all instances
$P\in \textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_d)$.
\end{lemma} It is worth noting that while scattered classes on their own are a somewhat trivial extension of the tractable classes defined in terms of individual languages, backdoors into scattered classes can be much smaller than backdoors into each individual globally tractable language (or, more precisely, each individual class defined by a globally tractable language). That is because a backdoor can not only simplify cost functions to ensure they belong to a specific language, but it can also disconnect the instance into several ``parts'', each belonging to a different language, and furthermore the specific language each ``part'' belongs to can change for different assignments into the backdoor. As a simple example of this behavior, consider the boolean domain, let $\Gamma_1$ be the globally tractable crisp language corresponding to Horn constraints~\cite{Schaefer78}, and let $\Gamma_2$ be a globally tractable language containing only submodular cost functions~\cite{CohenJeavonsZivny08}. It is not difficult to construct an instance ${\mathcal P}=(V_1\cup V_2\cup \{x\},\{0,1\},\cal C)$ such that (\textbf{a}) every assignment to $x$ disconnects $V_1$ from $V_2$, (\textbf{b}) in ${\mathcal P}|_{x\mapsto 0}$, all valued constraints over $V_1$ are crisp Horn constraints and all valued constraints over $V_2$ are submodular, and (\textbf{c}) in ${\mathcal P}|_{x\mapsto 1}$, all valued constraints over $V_1$ are submodular and all valued constraints over $V_2$ are crisp Horn constraints. In the hypothetical example above, it is easy to verify that $x$ is a backdoor into $\textsc{VCSP}[\Gamma_1]\uplus \textsc{VCSP}[\Gamma_2]$ but the instance does not have a small backdoor into neither $\textsc{VCSP}[\Gamma_1]$ nor $\textsc{VCSP}[\Gamma_2]$. It is known that backdoors into scattered classes can be used to obtain fixed-parameter algorithms for \textsc{CSP}{}, i.e., both finding and using such backdoors is FPT when dealing with crisp languages of bounded arity and domain size~\cite{GanianRamanujanSzeider16}. Crucially, these previous results relied on the fact that every crisp language of bounded arity and domain size is finite (which is not true for valued constraint languages in general). We formalize this below. \begin{theorem}[{\cite[Lemma 1.1]{GanianRamanujanSzeider16}}] \label{thm:scatcsp} Let $\Gamma_1,\dots,\Gamma_\ell$ be globally tractable conservative crisp languages over a domain $D$, with each language having arity at most $q$ and containing at most $p$ relations. There exists a function $f$ and an algorithm solving {\textsc{VCSP}} in time $\mathcal{O}^*(f(\ell,|D|,q,k,p))$, where $k$ is the size of a minimum backdoor into $\textsc{VCSP}[\Gamma_1] \uplus \dots \uplus \textsc{VCSP}[\Gamma_\ell]$. \end{theorem} Observe that in the above theorem, when $q$ and $|D|$ are bounded, $p$ is immediately bounded. However, it is important that we formulate the running time of the algorithm in this form because in the course of our application, these parameters have to be bounded separately. Our goal for the remainder of this section is to extend Theorem~\ref{thm:scatcsp} in the {\textsc{VCSP}} setting to also cover infinite globally tractable languages (of bounded arity and domain size). Before proceeding, it will be useful to observe that if each $\Gamma_1,\dots,\Gamma_\ell$ is globally tractable, then the class $\textsc{VCSP}[\Gamma_1] \uplus \dots \uplus \textsc{VCSP}[\Gamma_\ell]$ is also tractable (since each connected component can be resolved independently of the others). 
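The observation in the preceding paragraph amounts to the following schematic Python sketch (included only as an illustration; {\tt components} and the per-language solvers {\tt solvers[i]} are assumed black boxes standing, respectively, for the decomposition into connected components and for the polynomial-time algorithms of the globally tractable languages):
\begin{verbatim}
def solve_scattered(P, components, solvers):
    # components(P): variable-disjoint connected subinstances of P.
    # solvers[i](Q): an optimal cost of Q if Q lies in VCSP[Gamma_i],
    #                and None otherwise.
    total = 0
    for Q in components(P):
        best = None
        for solve in solvers:       # try each tractable language in turn
            value = solve(Q)
            if value is not None:
                best = value
                break
        if best is None:            # Q lies outside every VCSP[Gamma_i]
            raise ValueError("instance is not in the scattered class")
        total += best               # the objective decomposes over components
    return total
\end{verbatim}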
\subsection{Finding Backdoors to Scattered Classes} \label{sub:findingscat} In this subsection, we prove that finding backdoors for $\textsc{VCSP}$ into scattered classes is fixed-parameter tractable. This will then allow us to give a proof of our main theorem, stated below. \begin{restatable}{theorem}{finaltheorem} \label{thm:vcsp} Let $\Delta_1,\dots,\Delta_\ell$ be conservative, globally tractable and efficiently recognizable languages over a finite domain and having constant arity. Then $\textsc{VCSP}$ is fixed-parameter tractable parameterized by the size of a smallest backdoor of the given instance into $\textsc{VCSP}{(\Delta_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Delta_\ell)}$. \end{restatable} Recall that the closure of a conservative and globally tractable language under partial assignments is also a globally tractable language. Furthermore, every backdoor of the given instance into $\textsc{VCSP}{(\Delta_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Delta_\ell)}$ is also a backdoor into $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$ where $\Gamma_i$ is the closure of $\Delta_i$ under partial assignments. Due to Lemma \ref{lem:using}, it follows that it is sufficient to compute a backdoor of small size into the scattered class $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$ where each $\Gamma_i$ is closed under partial assignments. Our strategy for finding backdoors to scattered classes defined in terms of (potentially infinite) globally tractable languages relies on a two-phase transformation of the input instance. In the first phase (Lemma \ref{lem:finitation}), we show that for every choice of $\Gamma_1,\dots,\Gamma_d$ (each having bounded domain size and arity), we can construct a set of finite languages $\Gamma'_1,\dots,\Gamma'_d$ and a new instance ${\mathcal P}'$ such that there is a one-to-one correspondence between backdoors of~${\mathcal P}$ into $\Gamma_1\uplus \dots \uplus \Gamma_d$ and backdoors of~${\mathcal P}'$ into $\Gamma'_1\uplus \dots \uplus \Gamma'_d$. This allows us to restrict ourselves to only the case of finite (but not necessarily crisp) languages as far as backdoor detection is concerned. In the second phase (Lemma \ref{lem:vcsptocsp}), we transform the instance and languages one more time to obtain another instance ${\mathcal P}''$ along with finite crisp languages $\Gamma''_1,\dots,\Gamma''_d$ such that there is a one-to-one correspondence between the backdoors of~${\mathcal P}''$ and backdoors of ${\mathcal P}'$. We crucially note that the newly constructed instances are equivalent \emph{only} with respect to backdoor detection; there is no correspondence between the solutions of these instances. Before proceeding, we introduce a natural notion of replacement of valued constraints which is used in our proofs. \begin{definition}\label{def:replace} Let $\mathcal{P}=(V,D,\mathcal{C})$ be a $\textsc{VCSP}$ instance and let $c=(\vec{x},\phi)\in \mathcal{C}$. Let $\phi'$ be a cost function over $D$ with the same arity as $\phi$. Then the operation of \emph{replacing} $\phi$ in $c$ with $\phi'$ results in a new instance $\mathcal{P}'=(V,D,({\mathcal{C}}\setminus \{c\})\cup \{(\vec{x},\phi')\})$. \end{definition} \begin{lemma} \label{lem:finitation} Let $\Gamma_1,\dots,\Gamma_\ell$ be efficiently recognizable languages closed under partial assignments, each of arity at most $q$ over a domain $D$ of size $d$. 
There exists an algorithm which takes as input a $\textsc{VCSP}$ instance ${\mathcal P}=(V,D,\cal C)$ and an integer $k$, runs in time $\mathcal{O}^*(f(\ell,d,k,q))$ for some function $f$ and either correctly concludes that $\mathcal{P}$ has no backdoor into $\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell)$ of size at most $k$ or outputs a $\textsc{VCSP}$ instance ${\mathcal P}'=(V,D',\cal C')$ and languages $\Gamma'_1,\dots,\Gamma'_{\ell}$ with the following properties. \begin{enumerate} \item For each $i\in [\ell]$, the arity of $\Gamma_i'$ is at most $q$ \item For each $i\in [\ell]$, $\Gamma'_i$ is over $D'$ and $D'\subseteq D$ \item Each of the languages $\Gamma_1',\dots, \Gamma_\ell'$ is closed under partial assignments and contains at most $g(\ell,d,k,q)$ cost functions for some function $g$. \item For each $X\subseteq V$, $X$ is a minimal backdoor of ${\mathcal P}$ into $\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell)$ of size at most $k$ if and only if $X$ is a minimal backdoor of ${\mathcal P}'$ into $\textsc{VCSP}(\Gamma'_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma'_{\ell})$ of size at most $k$. \end{enumerate} \end{lemma} \begin{proof} We will first define a function mapping the valued constraints in $\mathcal{C}$ to a finite set whose size depends only on $\ell,d,k$ and $q$. Subsequently, we will show that every pair of constraints in $\mathcal{C}$ which are mapped to the same element of this set are, for our purposes (locating a backdoor), interchangeable. We will then use this observation to define the new instance $\mathcal{P}'$ and the languages $\Gamma'_1,\dots,\Gamma'_{\ell'}$. To begin with, observe that if the arity of a valued constraint in $\mathcal{P}$ is at least $q+k+1$, then $\mathcal{P}$ has no backdoor of size at most $k$ into $\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell)$. Hence, we may assume without loss of generality that the arity of every valued constraint in $\mathcal{P}$ is at most $q+k$. Let $\mathcal{F}$ be the set of all functions from $[q+k]\times 2^{[q+k]}\times D^{[q+k]} \to 2^{[\ell]}\cup \{\bot\}$, where $\bot$ is used a special symbol expressing that $\mathcal{F}$ is ``out of bounds.'' Observe that $|\mathcal{F}|\leq \eta(\ell,d,k,q)=(2^\ell+1)^{(2d)^{(q+k)+\log (q+k)}}$. We will now define a function ${\sf Type}:\mathcal{C}\to \mathcal{F}$ as follows. We assume without loss of generality that the variables in the scope of each constraint in $\mathcal{C}$ are numbered from $1$ to $|\text{\normalfont{\bfseries var}}(c)|$ based on their occurrence in the tuple $\vec{x}$ where $c=(\vec{x},\phi)$. Furthermore, recall that $|\text{\normalfont{\bfseries var}}(c)|\leq q+k$. For $c\in \mathcal{C}$, we define ${\sf Type}(c)=\delta\in \mathcal{F}$ where $\delta$ is defined as follows. Let $r\leq q+k$, $Q\subseteq [q+k]$ and $\gamma:[q+k]\to D$. Let $\gamma[Q\cap [r]]$ denote the restriction of $\gamma$ to the set $Q\cap [r]$. Furthermore, recall that $c|_{\gamma[Q\cap [r]]}$ denotes the valued constraint resulting from applying the partial assignment $\gamma$ on the variables of $c$ corresponding to all those indices in $Q\cap [r]$. Then, $\delta(r,Q,\gamma)=\bot$ if $r\neq |\text{\normalfont{\bfseries var}}(c)|$. Otherwise, $\delta(r,Q,\gamma)=L\subseteq [\ell]$ where $i\in [\ell]$ is in $L$ if and only if $c|_{\gamma[Q\cap [r]]}\in \textsc{VCSP}({\Gamma_i})$. 
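In particular, ${\sf Type}(c)$ records, for every admissible arity, index set and assignment pattern, exactly which of the $\ell$ languages the correspondingly restricted constraint falls into. The bound on $|\mathcal{F}|$ stated above is a simple count: the domain of these functions has size \[(q+k)\cdot 2^{q+k}\cdot d^{\,q+k}=(q+k)\cdot (2d)^{q+k}\leq (2d)^{(q+k)+\log (q+k)},\] while the codomain has size $2^{\ell}+1$, which yields $|\mathcal{F}|\leq (2^{\ell}+1)^{(2d)^{(q+k)+\log (q+k)}}=\eta(\ell,d,k,q)$ (logarithms are to the base $2$ and we use $2d\geq 2$).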
This completes the description of the function ${\sf Type}$; observe that ${\sf Type}(c)$ can be computed in time which is upper-bounded by a function of $\ell, d, k, q$. For every $\delta\in \mathcal{F}$, if there is a valued constraint $c\in \mathcal{C}$ such that ${\sf Type}(c)=\delta$, we pick and fix one arbitrary such valued constraint $c^*_\delta=(\vec{x^*_\delta},\phi^*_\delta)$. We now proceed to the definition of the instance $\mathcal{P}'$ and the languages $\Gamma'_1,\dots, \Gamma'_{\ell'}$. Observe that for 2 constraints $c=(\vec{x_1},\phi),c'=(\vec{x'},\phi')\in \mathcal{C}$, if ${\sf Type}(c)={\sf Type}(c')$ then $|\text{\normalfont{\bfseries var}}(c)|=|\text{\normalfont{\bfseries var}}(c')|$. Hence, the notion of \emph{replacing} $\phi$ in $c$ with $\phi'$ is well-defined (see Definition \ref{def:replace}). We define the instance $\mathcal{P}'$ as the instance obtained from $\mathcal{P}$ by replacing each $c=(\vec{x},\phi)\in \mathcal{C}$ with the constraint $(\vec{x},\phi^*_\delta)$ where $\delta={\sf Type}(c)$. For each $i\in [\ell]$ and cost function $\phi\in \Gamma_i$, we add $\phi$ to the language $\Gamma_i'$ if and only if for some $\delta\in \mathcal{F}$ and some set $Q\subseteq \text{\normalfont{\bfseries var}}(c^*_\delta)$ and assignment $\gamma:Q\to D$, the constraint $c|_{\gamma[Q]}=(\vec{x}\setminus Q,\phi)$. Clearly, for every $i\in [\ell]$, $|\Gamma_i'|\leq d^q\cdot |\mathcal{F}|\leq d^q\cdot \eta(\ell,d,k,q)$. Finally, for each $\Gamma_i'$, we compute the closure of $\Gamma_i'$ under partial assignments and add each relation from this closure into $\Gamma_i'$. Since the size of each $\Gamma_i'$ is bounded initially in terms of $\ell,d,k,q$, computing this closure can be done in time $\mathcal{O}^*(\lambda(\ell,d,k,q))$ for some function $\lambda$. Since each cost function has arity $q$ and domain $D$, the size of the final language $\Gamma_i'$ obtained after this operation is blown up by a factor of at most $d^q$, implying that in the end, $|\Gamma_i'|\leq d^{2q}\cdot |\mathcal{F}|\leq d^{2q}\cdot \eta(\ell,d,k,q)$. Now, observe that the first two statements of the lemma follow from the definition of the languages $\{\Gamma_i'\}_{i\in [\ell]}$. Furthermore, the number of cost functions in each $\Gamma_i'$ is bounded by $d^q\cdot \eta(\ell,d,k,q)$, and so the third statement holds as well. Therefore, it only remains to prove the final statement of the lemma. Before we do so, we state a straightforward consequence of the definition of~$\mathcal{P}'$. \begin{observation}\label{obs:bijection} For every $Y\subseteq V$, $\gamma:Y\to D$ and connected component $\mathcal{H}'$ of $\mathcal{P}'|_\gamma$, there is a connected component $\mathcal{H}$ of $\mathcal{P}|_\gamma$ and a bijection $\psi:\mathcal{H}\to\mathcal{H}'$ such that for every $c\in \mathcal{H}$, ${\sf Type}(c)={\sf Type}(\psi(c))$. Furthermore, for every $c=(\vec{x},\phi)\in \mathcal{H}$, the constraint $\psi(c)$ is obtained by replacing $\phi$ in $c$ with $\phi^*_{{\sf Type}(c)}$. \end{observation} We now return to the proof of Lemma~\ref{lem:finitation}. Consider the forward direction and let $X$ be a backdoor of size at most $k$ for $\mathcal{P}$ into $\textsc{VCSP}(\Gamma_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell)$ and suppose that $X$ is \emph{not} a backdoor for $\mathcal{P}'$ into $\textsc{VCSP}(\Gamma'_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell')$. 
Then, there is an assignment $\gamma:X\to D$ such that for some connected component $\mathcal{H}'$ of $\mathcal{P}'|_\gamma$, there is no $i\in [\ell]$ such that all constraints in $\mathcal{H}'$ lie in $\textsc{VCSP}(\Gamma'_i)$. By Observation \ref{obs:bijection} above, there is a connected component $\mathcal{H}$ in $\mathcal{P}|_\gamma$ and a bijection $\psi:\mathcal{H}\to\mathcal{H}'$ such that for every $c\in \mathcal{H}$, ${\sf Type}(c)={\sf Type}(\psi(c))$. Since $X$ is a backdoor for $\mathcal{P}$, there is a $j\in [\ell]$ such that all constraints in $\mathcal{H}$ lie in $\textsc{VCSP}(\Gamma_j)$. Pick an arbitrary constraint $c=(\vec{x},\phi)\in \mathcal{H}$. Let $c'=(\vec{x},\phi^*_{{\sf Type}(c)})$ be the constraint $\psi(c)$. By definition of $\phi^*_{{\sf Type}(c)}$ it follows that $c'|_\gamma\in \textsc{VCSP}(\Gamma_j)$. The fact that this holds for an arbitrary constraint in $\mathcal{H}$ along with the fact that $\psi$ is a bijection implies that every constraint in $\mathcal{H}'$ is in fact in $\textsc{VCSP}(\Gamma_j')$, a contradiction. The argument in the converse direction is symmetric. This completes the proof of the final statement of the lemma. The time taken to compute $\mathcal{P}'$ and the languages $\Gamma_1',\dots, \Gamma_\ell'$ is dominated by the time required to compute the function ${\sf Type}$. Since the languages $\Gamma_1,\dots, \Gamma_\ell$ are efficiently recognizable, this time is bounded by $\mathcal{O}^*(|\mathcal{F}|)$, completing the proof of the lemma. \end{proof} \begin{lemma} \label{lem:vcsptocsp} Let $\Gamma_1,\dots,\Gamma_\ell$ be efficiently recognizable languages closed under partial assignments, each of arity at most $q$ over a domain $D$ of size $d$. Let $\mathcal{P}'=(V,D',\mathcal{C}')$ be the $\textsc{VCSP}$ instance and let $\Gamma_1',\dots,\Gamma_\ell'$ be the languages returned by the algorithm of Lemma \ref{lem:finitation} on input $\mathcal{P}$ and $k$. There exists an algorithm which takes as input $\mathcal{P}'$, these languages and $k$, runs in time $\mathcal{O}^*(f(\ell,d,k,q))$ for some function $f$ and outputs a \textsc{CSP}{} instance ${\mathcal P}''=(V''\supseteq V,D'',\cal C'')$ and crisp languages $\Gamma_1'',\dots, \Gamma_\ell''$ with the following properties. \begin{enumerate} \item For each $i\in [\ell]$, the arity of $\Gamma_i''$ is at most $q+1$. \item $D''\supseteq D'$ and $|D''|\leq \beta(q,d,k)$ for some function $\beta$. \item The number of relations in each of the languages $\Gamma_1'',\dots, \Gamma_\ell''$ is at most $\alpha(q,d,k)$ for some function $\alpha$. \item If $X$ is a minimal backdoor of size at most $k$ of ${\mathcal P}''$ into $\textsc{CSP}(\Gamma''_1)\uplus \dots \uplus \textsc{CSP}(\Gamma''_{\ell})$, then $X\subseteq V$. \item For each $X\subseteq V$, $X$ is a minimal backdoor of ${\mathcal P}'$ into $\textsc{VCSP}(\Gamma_1')\uplus \dots \uplus \textsc{VCSP}(\Gamma_\ell')$ if and only if $X$ is a minimal backdoor of ${\mathcal P}''$ into $\textsc{CSP}(\Gamma''_1)\uplus \dots \uplus \textsc{CSP}(\Gamma''_{\ell})$. \end{enumerate} \end{lemma} \begin{proof} We propose a fixed-parameter algorithm ${\mathcal A}$, and show that it has the claimed properties. It will be useful to recall that we do not distinguish between crisp cost functions and relations. We also formally assume that $D'$ does not intersect the set of rationals $\mathbb{Q}$; if this is not the case, then we simply rename elements of $D'$ to make sure that this holds.
Within the proof, we will use $\vec{a}\circ b$ to denote the concatenation of vector $\vec{a}$ by element $b$. First, let $T_i$ be the set of all values which are returned by at least one cost function from $\Gamma'_i$, $i\in [\ell]$, for at least one input. Let $T=\bigcup_{i\in [\ell]} T_i$. Observe that $|T|$ is upper-bounded by the size, domain size and arity of our languages. Let us now set $D''=D'\cup T\cup \{\epsilon\}$. Intuitively, our goal will be to represent the cost function in each valued constraint in $\mathcal{P}'$ by a crisp cost function with one additional variable which ranges over $T$, whose elements are the specific values occurring in our base languages. Note that this satisfies Condition~$2.$ of the lemma, and that $T$ can be computed in linear time from the cost tables of $\Gamma'_1,\dots,\Gamma'_\ell$. We will later construct $k+1$ such representations (each with its own additional variable) to ensure that the additional variables are never selected by minimal backdoors. Next, for each language $\Gamma_i'$, $i\in [\ell]$, we compute a new crisp language $\Gamma_i''$ as follows. For each $\phi\in \Gamma_i'$ of arity $t$, we add a new relation $\psi$ of arity $t+1$ into $\Gamma_i''$, and for each tuple $(x_1,\dots,x_t)$ of elements from $D'$ we add the tuple $(x_1,\dots,x_t,\phi(x_1,\dots,x_t))$ into $\psi$; observe that this relation exactly corresponds to the cost table of $\phi$. We then compute the closure of $\Gamma_i''$ under partial assignments and add each relation from this closure into $\Gamma_i''$. Observe that the number of relations in $\Gamma_i''$ is bounded by a function of $|T|$ and $|\Gamma_i'|$, and furthermore the number of tuples in each relation is upper-bounded by $|D'|^{q}$, and so Conditions~$1.$ and~$3.$ of the lemma hold. The construction of each $\Gamma_i''$ from $\Gamma_i'$ can also be done in linear time from the cost tables of $\Gamma'_1,\dots,\Gamma'_\ell$. Finally, we construct a new instance ${\mathcal P}''=(V'',D'',\cal C'')$ from ${\mathcal P}'=(V,D',\cal C')$ as follows. At the beginning, we set $V'':=V$. For each $c'=(\vec{x}',\phi')\in \mathcal{C}'$, we add $k+1$ unique new variables $v^1_{c'},\dots,v^{k+1}_{c'}$ into $V''$ and add $k+1$ constraints $c''^{1},\dots,c''^{k+1}$ into $\mathcal{C}''$. For $i\in [k+1]$, each $c''^{i}=(\vec{x}' \circ v^i_{c'},\psi'')$ where $\psi''$ is a relation that is constructed similarly to the relations in our new languages above. Specifically, for each tuple $(x_1,\dots,x_t)$ of elements from $D'$ we add the tuple $(x_1,\dots,x_t,\phi'(x_1,\dots,x_t))$ into $\psi''$, modulo the following exception. If $\phi'(x_1,\dots,x_t)\not \in D''$, then we instead add the tuple $(x_1,\dots,x_t,\epsilon)$ into $\psi''$. Clearly, the construction of our new instance ${\mathcal P}''$ takes time at most $\mathcal{O}(|{\cal C'}|\cdot (k+1)\cdot |D'|^{q+k})$. This concludes the description of ${\mathcal A}$. It remains to argue that Conditions~$4.$ and~$5.$ of the lemma hold. First, consider a minimal backdoor $X$ of size at most $k$ of ${\mathcal P}''$ into $\textsc{CSP}[\Gamma''_1]\uplus \dots \uplus \textsc{CSP}[\Gamma''_{\ell}]$, and assume for a contradiction that there exists some $c'=(\vec{x}',\phi')\in \mathcal{C}'$ and $i\in [k+1]$ such that $v^i_{c'}\in X$. Observe that this cannot happen if the whole scope of $c''^{i}$ lies in $X$. By the size bound on $X$, there exists $j\in [k+1]$ such that $v^j_{c'}\not \in X$.
Then for each partial assignment $\tau$ of $X$, the relation $\psi''$ in $c''^j$ belongs to the same globally tractable language as the rest of the connected component of $\mathcal{P}''$ containing the scope of $c''^j$ (after applying $\tau$). Since the relation $\psi''$ in $c''^j$ is precisely the same as in $c''^i$ and the scope of $c''^i$ must lie in the same connected component as that of $c''^j$, it follows that $X\setminus \{v_{c'}^i\}$ is also a backdoor of ${\mathcal P}''$ into $\textsc{CSP}(\Gamma''_1)\uplus \dots \uplus \textsc{CSP}(\Gamma''_{\ell})$. However, this contradicts the minimality of $X$. Finally, for Condition~$5.$, consider an arbitrary backdoor $X$ of ${\mathcal P}'$ into $\textsc{VCSP}(\Gamma'_1)\uplus \dots \uplus \textsc{VCSP}(\Gamma'_{\ell})$, and let us consider an arbitrary assignment from $X$ to $D''$. It will be useful to note that while the contents of relations and/or cost functions in individual (valued) constraints depend on the particular choice of the assignment to $X$, which variables actually occur in individual components depends only on the choice of $X$ and remains the same for arbitrary assignments. Now observe that each connected component ${\mathcal P}^{\text{CSP}}$ of ${\mathcal P}''$ after the application of the (arbitrarily chosen) assignment will fall into one of the following two cases. ${\mathcal P}^{\text{CSP}}$ could contain a single variable of the form $v^i_{c'}$ with a single constraint whose relation lies in every language $\Gamma''_i$, $i\in [\ell]$; this occurs precisely when the whole scope of a valued constraint $c'\in \mathcal{C}'$ lies in $X$, and the relation will either contain a singleton element from $T$ or be the empty relation. In this case, we immediately conclude that ${\mathcal P}^{\text{CSP}}\in \textsc{CSP}(\Gamma''_i)$ for each $i\in [\ell]$. Alternatively, ${\mathcal P}^{\text{CSP}}$ contains at least one variable $v\in V$. Let ${\mathcal P}^{\text{VCSP}}$ be the unique connected component of ${\mathcal P}'$ obtained after the application of an arbitrary assignment from $X$ to $D'$ which contains $v$. Observe that the variable sets of ${\mathcal P}^{\text{CSP}}$ and ${\mathcal P}^{\text{VCSP}}$ only differ in the fact that ${\mathcal P}^{\text{CSP}}$ may contain some of the newly added variables $v^i_{c'}$ for various constraints $c'$. Now let us consider a concrete assignment $\tau$ from $X$ to $D'$ along with an $i\in [\ell]$ such that after the application of $\tau$, the resulting connected component ${\mathcal P}^{\text{VCSP}}$ belongs to $\textsc{VCSP}(\Gamma'_i)$. It follows by our construction that applying the same assignment $\tau$ in ${\mathcal P}''$ will result in a connected component ${\mathcal P}^{\text{CSP}}$ corresponding to ${\mathcal P}^{\text{VCSP}}$ such that ${\mathcal P}^{\text{CSP}}\in \textsc{CSP}(\Gamma''_i)$; indeed, whenever $\Gamma'_i$ contains an arbitrary cost function $\phi(\vec{x})=\beta$, the language $\Gamma''_i$ will contain the relation $(\vec{x}\circ\beta)$. By the above, the application of an assignment from $X$ to $D'$ in ${\mathcal P}''$ will indeed result in an instance in $\textsc{CSP}(\Gamma_1'')\uplus \dots \uplus \textsc{CSP}(\Gamma_\ell'')$. But recall that the domain of ${\mathcal P}''$ is $D''$, which is a superset of $D'$; we need to argue that the above also holds for assignments $\tau$ from $X$ to $D''$. To this end, consider an arbitrary such $\tau$ and let $\tau_0$ be an arbitrary assignment from $X$ to $D'$ which matches $\tau$ on all mappings into $D'$.
Let us compare the instances ${\mathcal P}''_{\tau_0}$ and ${\mathcal P}''_{\tau}$. By our construction of ${\mathcal P}''$, whenever $\tau$ maps at least one variable from the scope of some constraint $c''$ to $D''\setminus D'$, the resulting relation will be the empty relation. It follows that each constraint in ${\mathcal P}''_{\tau}$ will either be the same as in ${\mathcal P}''_{\tau_0}$, or will contain the empty relation. But since the empty relation is included in every language $\Gamma_1'',\dots,\Gamma_\ell''$, we conclude that each connected component of ${\mathcal P}''_\tau$ must belong to at least one language $\Gamma_i''$, $i\in[\ell]$. This shows that $X$ must also be a backdoor of ${\mathcal P}''$ into $\textsc{CSP}[\Gamma''_1]\uplus \dots \uplus \textsc{CSP}[\Gamma''_\ell]$. For the converse direction, consider a minimal backdoor $X$ of ${\mathcal P}''$ into $\textsc{CSP}[\Gamma''_1]\uplus \dots \uplus \textsc{CSP}[\Gamma''_\ell]$. Since we already know that Condition~$4.$ holds, $X$ must be a subset of~$V$. The argument from the previous case can then simply be reversed to see that $X$ will also be a backdoor of ${\mathcal P}'$ into $\textsc{VCSP}[\Gamma'_1]\uplus \dots \uplus \textsc{VCSP}[\Gamma'_\ell]$; in fact, the situation in this case is much easier since only assignments into $D'$ need to be considered. Summarizing, we gave a fixed-parameter algorithm and then showed that it satisfies each of the required conditions, and so the proof is complete. \end{proof} We are now ready to prove Theorem \ref{thm:vcsp}, which we restate for the sake of convenience. \finaltheorem* \begin{proof} For each $i\in[\ell]$, let $\Gamma_i$ denote the closure of $\Delta_i$ under partial assignments. Observe that every backdoor of the given instance into $\textsc{VCSP}{(\Delta_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Delta_\ell)}$ is also a backdoor into $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$. Furthermore, each $\textsc{VCSP}{(\Gamma_i)}$ is tractable since $\Delta_i$ is conservative and globally tractable. Hence, it is sufficient to compute and use a backdoor of size at most $k$ into $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$. The claimed algorithm has two parts. The first one is finding a backdoor into $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$ and the second one is using the computed backdoor to solve $\textsc{VCSP}$. Given an instance $\mathcal{P}$ and $k$, we first execute the algorithm of Lemma \ref{lem:finitation} to compute the instance $\mathcal{P}'$, and the languages $\Gamma_1',\dots, \Gamma_\ell'$ with the properties stated in the lemma. We then execute the algorithm of Lemma \ref{lem:vcsptocsp} with input $\mathcal{P}'$, $k$, and $\Gamma_1',\dots, \Gamma_\ell'$ to compute the CSP instance $\mathcal{P}''$ and crisp languages $\Gamma_1'',\dots, \Gamma_\ell''$ with the stated properties. Following this, we execute the algorithm of Theorem \ref{thm:scatcsp} with input $\mathcal{P}'',k$. If this algorithm returns {{\sc No}} then we return {{\sc No}} as well. Otherwise we return the set returned by this algorithm as a backdoor of size at most $k$ for the given instance $\mathcal{P}$. Finally, we use the algorithm of Lemma \ref{lem:using} with $\cal H$ set to be the class $\textsc{VCSP}{(\Gamma_1)} \uplus \cdots \uplus \textsc{VCSP}{(\Gamma_\ell)}$, to solve the given instance.
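In outline, the algorithm just described can be summarized by the following Python-style sketch (a schematic illustration only; the four callables stand for the algorithms of Lemma \ref{lem:finitation}, Lemma \ref{lem:vcsptocsp}, Theorem \ref{thm:scatcsp} and Lemma \ref{lem:using}, and are treated as black boxes):
\begin{verbatim}
def solve_vcsp_via_scattered_backdoor(P, k, finitation, vcsp_to_csp,
                                      csp_backdoor, use_backdoor):
    reduced = finitation(P, k)
    if reduced is None:                   # no backdoor of size <= k exists
        return "NO"
    P1, langs1 = reduced                  # instance P' and languages Gamma'_i
    P2, langs2 = vcsp_to_csp(P1, langs1, k)   # CSP instance P'' and Gamma''_i
    X = csp_backdoor(P2, langs2, k)       # backdoor detection in the CSP setting
    if X is None:
        return "NO"
    return use_backdoor(P, X)             # evaluate P using the backdoor X
\end{verbatim}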
The correctness as well as running time bounds follow from those of Lemmas~\ref{lem:finitation} and~\ref{lem:vcsptocsp}, Theorem~\ref{thm:scatcsp}, and Lemma~\ref{lem:using}. This completes the proof of the theorem. \end{proof} \section{Concluding Remarks} \label{sec:conclusion} We have introduced the notion of backdoors to the VCSP setting as a means for augmenting a class of globally tractable VCSP instances to instances that are outside the class but of small distance to the class. We have presented fixed-parameter tractability results for solving VCSP instances parameterized by the size of a smallest backdoor into a (possibly scattered and heterogeneous) tractable class satisfying certain natural properties. Our work opens up several avenues for future research. Since our main objective was to establish the fixed-parameter tractability of this problem, we have not attempted to optimize the runtime bounds for finding backdoors to scattered classes. As a result, it is quite likely that a more focussed study of scattered classes arising from specific constraint languages will yield a significantly better runtime. A second interesting direction would be studying the parameterized complexity of detection of backdoors into tractable VCSP classes that are characterized by specific fractional polymorphisms. \end{document}
arXiv
Home > Teams > Turbulence & Instabilities > Research activities > Flows for material processing Study of crystal growth situations Séverine Millet, Valéry Botton, Daniel Henry, Hamda Ben Hadid, S. Kaddeche, F. Mokhtari, M. Albaric, D. Pelletier, J.P. Garandet, Y. Fautrelle, K. Zaidat The solidification situations we study are known as vertical Bridgman (solidification from the bottom, it is the case of Silicon growth at INES) or horizontal Bridgman (solidification from the side, it is the case of metallic alloys solidification in the Afrodite experiment at SIMAP/EPM). These studies have also been developed in the frame of the PHC Maghreb program "Modeling and optimization of the purification of Silicon by directional solidification", which associates our research team with three teams from Algeria, Morocco and Tunisia. In the case of Silicon growth, the objective is to get a good stirring of the melt for a better uniformity of its composition. Two approaches were studied: a stirring with an impeller or a non-intrusive stirring by acoustic streaming. The stirring by impeller has been studied experimentally (water setup and real melts) and numerically by M. Chatelain during his PhD (CEA funding), in collaboration between INES and our team [B25, A126, B26]. The thematic related with the stirring by acoustic streaming is presented in the rubric "Flows generated by ultrasound waves". In a more applied direction and in connection with INES, we want to be able to estimate the effect of these flows on the impurities segregation. According to the models developed at INES by J.P. Garandet, these solutal effects, such as the segregation, can be estimated from the shear at the interface. In the frame of the PhD of N. El Ghani, we have developed an electrochemical method allowing the measurement of the shear at a wall in water and were thus able to characterize the solutal boundary layer induced by an acoustic streaming flow impinging a wall. In the frame of the PHC Maghreb program, we have also studied the effect that could be obtained by vibration, effect on the convection in the melt as well as on the impurities segregation (PhD of S. Bouarab). Variation of the maximal velocity in a convective flow in a differentially heated cavity submitted to vibrations when the vibration angle $\alpha$ is changed (horizontal vibrations for $\alpha=0^\circ$ and vertical vibrations for $\alpha=90^\circ$). The pure convective flow turns in the trigonometric direction, but the vibrations ($90^\circ<\alpha<180^\circ$) can invert this flow. In the case of metallic alloys solidification in the Afrodite experiment, the ingots are transversally confined and a temperature difference is applied between the endwalls in order to promote convection and the stirring of the compounds in the melt during the solidification. On this problem, the PhD of R. Boussaa on the three-dimensional numerical simulation of solidification allowed us to find the freckles observed experimentally, which are sources of brittleness in the solidified ingots [A120]. A two-dimensional model with a Hele-Shaw approach (2D½ model) has also been developed [A106]. The PhD of I. Hamzaoui allowed us to show that this 2D½ model was able to catch both purely convective situations and solidification configurations [A130]. Flow in the melt and position of the interface at the equilibrium state of solidification for right and left endwalls temperatures below and above the solidification temperature, respectively [A130]. Older work: crystal growth situations. 
We studied more realistic crystal growth configurations taking into account a moving solid-liquid interface in the case of the Bridgman configuration. We particularly analyzed the convective flows in the liquid phase and the dopant concentration in the melt and in the crystal (macrosegregation phenomena) (PhD of S. Kaddeche). We later took into account periodic growth conditions and then considered pure solutal convection (PhD of F.Z. Haddad). These numerical studies were performed in collaboration with the Centre d'Etudes Nucléaires de Grenoble. They provided interesting results for the crystal growth community which were published in the Journal of Crystal Growth [A25, A27, A30, A46, A53, A54]. Within the frame of international collaborations, we also considered other crystal growth configurations (Czochralski, zone-melting) for which we particularly analyzed the flow transitions [A57, A78, A69]. Striations in the crystal due to a fluctuating interface velocity during horizontal Bridgman crystal growth [A46].
CommonCrawl
\begin{document} \title{Towards super-approximation in positive characteristic} \author{Brian Longo and Alireza Salehi Golsefidy} \address{Mathematics Dept, University of California, San Diego, CA 92093-0112} \email{[email protected]} \email{[email protected]} \thanks{A.S-G. was partially supported by the NSF grants DMS-1303121, DMS-1602137, DMS-1902090, and A. P. Sloan Research Fellowship. A.S-G. would like to thank the IAS for its hospitality; part of this work was done while A.S-G. was visiting the IAS. This article contains the main results proved in B.L.'s Ph.D. thesis which was done in the UCSD} \subjclass{Primary: 22E40, Secondary: 20G30, 05C81} \date{08/19/2019} \begin{abstract} In this note we show that the family of Cayley graphs of a finitely generated subgroup of $\GL_{n_0}(\mathbb{F}_p(t))$ modulo some {\em admissible} square-free polynomials is a family of expanders under certain algebraic conditions. Here is a more precise formulation of our main result. For a positive integer $c_0$, we say a square-free polynomial is $c_0$-admissible if degree of irreducible factors of $f$ are distinct integers with prime factors at least $c_0$. Suppose $\Omega$ is a finite symmetric subset of $\GL_{n_0}(\mathbb{F}_p(t))$, where $p$ is a prime more than $5$. Let $\Gamma$ be the group generated by $\Omega$. Suppose the Zariski-closure of $\Gamma$ is connected, simply-connected, and absolutely almost simple; further assume that the field generated by the traces of $\Ad(\Gamma)$ is $\mathbb{F}_p(t)$. Then for some positive integer $c_0$ the family of Cayley graphs ${\rm Cay}(\pi_{f(x)}(\Gamma),\pi_{f(x)}(\Omega))$ as $f$ ranges in the set of $c_0$-admissible polynomials is a family of expanders, where $\pi_{f(t)}$ is the quotient map for the congruence modulo $f(t)$. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{sectionintro} \subsection{Statement of the main results} Let $\Gamma$ be a subgroup of a compact group $G$. Suppose $\Omega$ is a finite symmetric (that means $\omega\in \Omega$ implies $\omega^{-1}\in \Omega$) generating set of $\Gamma$. Suppose $\overline{\Gamma}$ is the closure of $\Gamma$ in $G$ and $\mathcal{P}_{\Omega}$ is the probability counting measure on $\Omega$. Let \[ T_{\Omega}:L^2(\overline{\Gamma})\rightarrow L^2(\overline{\Gamma}), \hspace{1mm} T_{\Omega}(f):=\mathcal{P}_{\Omega}\ast f:=\frac{1}{|\Omega|} \sum_{\omega\in \Omega} L_{\omega}(f), \] where $L_{\omega}(f)(g):=f(\omega^{-1}g)$. Then it is well-known that $T_{\Omega}$ is a self-adjoint operator, $T_{\Omega}(\mathds{1}_{\overline{\Gamma}})=\mathds{1}_{\overline{\Gamma}}$ where $\mathds{1}_{\overline{\Gamma}}$ is the constant function on $\overline{\Gamma}$, and the operator norm $\|T_{\Omega}\|$ is 1. So the spectrum of $T_{\Omega}$ is a subset of $[-1,1]$ and $T_{\Omega}$ sends the space $L^2(\overline{\Gamma})^{\circ}$ orthogonal to the constant functions to itself. Let $T_{\Omega}^{\circ}$ be the restriction of $T_{\Omega}$ to $L^2(\overline{\Gamma})^{\circ}$. Let \[ \lambda(\mathcal{P}_{\Omega};G):=\sup\{|c||\hspace{1mm} c \text{ is in the spectrum of } T_{\Omega}^{\circ}\}. \] If $\lambda(\mathcal{P}_{\Omega};G)<1$, we say the left action $\Gamma\curvearrowright G$ of $\Gamma$ on $G$ has {\em spectral gap}. It is worth mentioning that, if $\Omega_1$ and $\Omega_2$ are two generating sets of $\Gamma$ and $\lambda(\mathcal{P}_{\Omega_1};G)<1$, then $\lambda(\mathcal{P}_{\Omega_2};G)<1$. 
So having spectral gap is a property of the action $\Gamma\curvearrowright G$, and it is independent of the choice of a generating set for $\Gamma$. The following is the main result of this article: \begin{thm}\label{t:SpectralGap} Let $\Omega$ be a finite symmetric subset of $\GL_{n_0}(\mathbb{F}_p[t,1/r_0(t)])$ where $p>5$ is prime and $r_0(t)\in \mathbb{F}_p[t]\setminus \{0\}$. Let $\Gamma$ be the group generated by $\Omega$. Suppose the Zariski-closure $\mathbb{G}$ of $\Gamma$ in $(\GL_{n_0})_{\mathbb{F}_p(t)}$ is a connected, simply-connected, absolutely almost simple group. Suppose the field generated by $\Tr(\Ad(\Gamma))$ is $\mathbb{F}_p(t)$. Then there is a positive integer $c_0$ such that \[ \sup_{\{\ell_i(t)\}_i\in I_{r_0,c_0}}\lambda(\mathcal{P}_{\Omega};\prod_{i=1}^{\infty} \GL_{n_0}(\mathbb{F}_p[t]/\langle \ell_i(t)\rangle))<1, \] where $\{\ell_i(t)\}_{i=1}^{\infty}\in I_{r_0,c_0}$ if and only if $\ell_i(t)$ are irreducible, $\ell_i(t)\nmid r_0(t)$, and $\{\deg \ell_i\}_{i=1}^{\infty}$ is a strictly increasing sequence consisting of integers more than $1$ with no prime factors less than $c_0$. \end{thm} It is well-known that Theorem~\ref{t:SpectralGap} has immediate application in the explicit construction of expanders. Let us quickly recall that a family of $d$-regular graphs $X_i$ is called a family of expanders if the size $|V(X_i)|$ of the set of vertices goes to infinity and there is a positive number $\delta_0$ such that for any subset $B$ of $V(X_i)$ we have \[ \frac{|e(B,V(X_i)\setminus B)|}{\min(|B|,|V(X_i)\setminus B|)}>\delta_0, \] where $e(B,C)$ is the set of edges that connect a vertex in $B$ to a vertex in $C$. Expanders have a lot of applications in theoretical computer science (see \cite{HLW} for a survey on such applications). Now we can give the equivalent formulation of Theorem~\ref{t:SpectralGap} in terms of expander graphs (see \cite[Remark 15]{SG:SAI} or \cite[Section 4.3]{Lub}). \begin{thm'}\label{thmmainthm} Let $\Omega$ be a finite symmetric subset of $\GL_{n_0}(\mathbb{F}_p[t,1/r_0(t)])$ where $p>5$ is prime and $r_0(t)\in \mathbb{F}_p[t]\setminus\{0\}$. Let $\Gamma$ be the subgroup generated by $\Omega$. Suppose the Zariski-closure $\mathbb{G}$ of $\Gamma$ in $(\GL_{n_0})_{\mathbb{F}_p(t)}$ is a connected, simply-connected, absolutely almost simple group. Suppose the field generated by $\Tr(\Ad(\Gamma))$ is $\mathbb{F}_p(t)$. Then there is a positive integer $c_0$ such that the family of Cayley graphs \[\{{\rm Cay}(\pi_{f(t)}(\Gamma),\pi_{f(t)}(\Omega))|\hspace{1mm} f(t)\in S_{r_0,c_0}\}\] is a family of expanders where $S_{r_0,c_0}$ consists of square-free polynomials $f(t)\in \mathbb{F}_p[t]$ with prime factors $\ell_i(t)$ such that (1) $\ell_i(t)\nmid r_0(t)$, (2) $\deg \ell_i>1$, (3) $\deg \ell_i \neq \deg \ell_j$ if $i\neq j$, and (4) $\deg \ell_i$ does not have a prime factor less than $c_0$ and $\pi_{f(t)}$ is induced by the quotient map $\pi_{f(t)}:\mathbb{F}_p[t,1/r_0(t)]\rightarrow \mathbb{F}_p[t]/\langle f(t)\rangle$. \end{thm'} \subsection{What super-approximation is and an ultimate speculation} In order to put Theorem~\ref{t:SpectralGap} in the perspective of previous works, let us say what {\em super-approximation} is in a very general setting. \begin{definition} Suppose $A$ is an integral domain, and $\Omega$ is a finite symmetric subset of $\GL_{n_0}(A)$. Let $\Gamma$ be the group generated by $\Omega$. Suppose $\mathcal{C}$ is a family of finite index ideals of $A$. 
We say $\Gamma$ has {\em super-approximation with respect to $\mathcal{C}$} if $\sup_{\mathfrak{a}\in \mathcal{C}}\lambda(\pi_{\mathfrak{a}}[\mathcal{P}_{\Omega}];\GL_{n_0}(A/\mathfrak{a}))<1$, where $\pi_{\mathfrak{a}}$ is the group homomorphism induced by the quotient map $A\rightarrow A/\mathfrak{a}$ and $\pi_{\mathfrak{a}}[\mathcal{P}_{\Omega}]$ is the push-forward of $\mathcal{P}_{\Omega}$ under $\pi_{\mathfrak{a}}$. We simply say $\Gamma$ has {\em super-approximation} if it has super-approximation with respect to the set of all the finite index ideals of $A$. \end{definition} Because of several groundbreaking results in the past decade (see~\cite{Hel1,Hel2,BGT,PS}, \cite{BG1}-\cite{BV}, \cite{Var}, \cite{SG:SAI}-\cite{SGV}), we have a very good understanding of the super-approximation property for finitely generated subgroups of linear groups over $A:=\mathbb{Z}[1/q_0]$ (a finitely generated subring of $\mathbb{Q}$). In this case, it is proved that $\Gamma$ has super-approximation with respect to fixed powers of square-free ideals \cite{SGV,SG:SAII} and powers of prime ideals \cite{SG:SAI,SG:SAII} if and only if the connected component $\mathbb{G}^{\circ}$ of the Zariski-closure of $\Gamma$ in $(\GL_{n_0})_{\mathbb{Q}}$ has trivial abelianization. Based on these results we have the following conjecture. \begin{conjecture}[Super-approximation conjecture over $\mathbb{Q}$] Suppose $\Omega$ is a finite symmetric subset of $\GL_{n_0}(\mathbb{Z}[1/q_0])$, $\Gamma=\langle\Omega\rangle$, and $\mathbb{G}^{\circ}$ is the connected component of the Zariski-closure of $\Gamma$ in $(\GL_{n_0})_{\mathbb{Q}}$. Then $\Gamma$ has super-approximation if and only if $\mathbb{G}^{\circ}$ has trivial abelianization. \end{conjecture} In his beautiful survey~\cite{Lub:survey}, Lubotzky goes further and makes an analogous conjecture (see~\cite[Conjecture 2.25]{Lub:survey}) for an arbitrary finitely generated integral domain $A$. Notice that such a conjecture implies that {\em super-approximation is a Zariski-topological property}; that means if two groups have equal Zariski-closures, then either both of them have super-approximation or neither has this property. It turns out that this conjecture is false in this generality (see~\cite[Example 5]{SGV}); there are two finitely generated subgroups of $\GL_{n_0}(\mathbb{Z}[i])$ such that (1) they have equal Zariski-closures in $(\GL_{n_0})_{\mathbb{Q}[i]}$, and (2) one of them has super-approximation and the other one does not. This shows that for an arbitrary integral domain $A$ one needs a more refined understanding of $\Gamma$ to determine if it has super-approximation. The key point is that super-approximation is about how well $\Gamma$ is distributed in its closure $\overline{\Gamma}$ in the compact group $\GL_{n_0}(\wh{A})$ where $\wh{A}:=\varprojlim_{|A/\mathfrak{a}|<\infty}A/\mathfrak{a}$ is the profinite closure of the ring $A$. When the field of fractions $Q(A)$ has a subfield $F$ such that $[Q(A):F]<\infty$, $\Gamma$ might satisfy some {\em hidden} polynomial relations over $F$ which {\em disappear} over $Q(A)$. Of course such polynomial relations are still satisfied in $\overline{\Gamma}$; and so these are vital in understanding the group structure of $\overline{\Gamma}$. To detect the mentioned hidden polynomial relations, one has to use Weil's restriction of scalars and view $\GL_{n_0}(Q(A))$ as the $F$-points of $R_{Q(A)/F}((\GL_{n_0})_{Q(A)})$.
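As a concrete illustration of this last point (a standard example, recalled here only for orientation), take $A=\mathbb{Z}[i]$, so that $Q(A)=\mathbb{Q}(i)$ and $F=\mathbb{Q}$. Restriction of scalars realizes $\GL_1(\mathbb{Q}(i))=\mathbb{Q}(i)^{\times}$ as the group of $\mathbb{Q}$-points of a $\mathbb{Q}$-subgroup of $\GL_2$, namely \[R_{\mathbb{Q}(i)/\mathbb{Q}}((\GL_1)_{\mathbb{Q}(i)})(\mathbb{Q})\simeq \left\{\begin{pmatrix} a & -b\\ b & a\end{pmatrix}\hspace{1mm}\Big|\hspace{1mm} a,b\in \mathbb{Q},\ a^2+b^2\neq 0\right\}\subseteq \GL_2(\mathbb{Q}).\] In this picture the norm-one condition $a^2+b^2=1$ cuts out a $\mathbb{Q}$-subgroup which does not come from any $\mathbb{Q}(i)$-subgroup of $\GL_1$; it is exactly this kind of hidden polynomial relation over the subfield $F$ that has to be detected.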
More or less what we are hoping for is to have a finitely generated ring $A_0$ and a group scheme $\mathcal{G}_0$ over $A_0$ such that $\overline{\Gamma}$ can be realized as an open subgroup of $\mathcal{G}_0(\wh{A_0})$ where $\wh{A_0}$ is the profinite closure of $A_0$. Strong approximation (see~\cite{MVW,Wei,Nor,PinkStrongApproximation}) gives us such a result under various extra algebraic conditions. This is partially responsible for some of the extra technical conditions in Theorem~\ref{t:SpectralGap} compared to the mentioned results over $\mathbb{Z}[1/q_0]$; it will be explained later why we need some additional technical conditions. In light of this discussion, it makes sense to formulate a conjecture for super-approximation property of a finitely generated subgroup $\Gamma$ of $\GL_{n_0}(A)$ {\em based on group theoretic properties of its closure $\overline{\Gamma}$ in $\GL_{n_0}(\wh{A})$}. As Lubotzky says in his survey~\cite[Conjecture 2.25]{Lub:survey} this conjecture is quite a {\em fantasy} at this point. \begin{conjecture}[Super-approximation conjecture for a finitely generated integral domain]\label{conj:ultimate} Suppose $A$ is a finitely generated integral domain, $\Omega$ is a finite symmetric subset of $\GL_{n_0}(A)$, and $\Gamma$ is the subgroup generated by $\Omega$. Let $\wh{A}:=\varprojlim_{|A/\mathfrak{a}|<\infty} A/\mathfrak{a}$ be the profinite closure of $A$ and $\overline{\Gamma}$ is the closure of $\Gamma$ in $\GL_{n_0}(\wh{A})$. Then $\Gamma$ has super-approximation if and only if any open subgroup $\overline{\Lambda}$ of $\overline{\Gamma}$ has finite abelianization; that means $|\overline{\Lambda}/[\overline{\Lambda},\overline{\Lambda}]|<\infty$. \end{conjecture} Let us make two remarks: (1) since $A$ is a finitely generated ring, for any maximal ideal $\mathfrak{m}$ we have that $A/\mathfrak{m}$ is a finitely generated ring and a field; and so $|A/\mathfrak{m}|<\infty$ if $\mathfrak{m}$ is a maximal ideal. Moreover, since $A$ is a finitely generated integral domain, it is a Jacobson ring which means intersection of its maximal ideals is zero. Hence $A$ can be (naturally) embedded into $\wh{A}$. Therefore it does make sense to talk about the closure of $\Gamma$ in $\GL_{n_0}(\wh{A})$. (2) Using the argument given in \cite[Proposition 8]{SG:SAII} one can get the ``only if" part of Conjecture~\ref{conj:ultimate}. It is worth mentioning that {\em super-approximation} (also known as {\em superstrong approximation}) has been found to be extremely instrumental in a wide range of problems; see \cite{ThingrpsandSSA} for a collection of its applications. \subsection{Best related result prior to this work} The best known result on super-approximation for linear groups over a global function field, prior to this work, is due to Bradford~\cite{Bra}. In \cite{Bra}, under the extra assumption that the degree $\deg \ell_i$ of irreducible factors $\ell_i$ are prime, a version of Theorem~\ref{thmmainthm} for subgroups of $\SL_2(\mathbb{F}_p[t])$ is proved. Bradford also highlights many of the subtleties involved in the positive characteristic case. \subsection{Notation}\label{sectionnotation} Throughout this paper for any group $G$ and a subgroup $H$, $Z(G)$ is the center of $G$, $C_G(H)$ is the centralizer of $H$ in $G$, and $N_G(H)$ is the normalizer of $H$ in $G$ as usual. If $G$ and $H$ are algebraic groups, then these notions are considered in the category of algebraic groups. 
For a finite subset $S$ of a group $G$, we denote by $\mathcal{P}_S$ the uniform probability measure supported on $S$; that means \begin{displaymath} \mathcal{P}_S(A) = |A\cap S|/|S| \end{displaymath} for any $A\subseteq G$. For any measure $\mu$ with finite support on $G$ and $g\in G$, we let $\mu(g):=\mu(\{g\})$. For any two measures with finite support $\mu,\nu$ on $G$, the convolution of $\mu$ and $\nu$ is denoted by $\mu\ast\nu$; and so \[(\mu\ast\nu)(g)=\sum_{h\in G} \mu(h)\nu(h^{-1}g).\] The $l$-fold convolution of $\mu$ with itself is denoted by $\mu^{(l)}$, and $\widetilde{\mu}$ denotes the measure such that $\widetilde{\mu}(g)=\mu(g^{-1}).$ For a measure $\mu$ with finite support on a group $G$ and a group homomorphism $\pi:G\rightarrow H$, we denote by $\pi[\mu]$ the push-forward of $\mu$ under $\pi$; that means $\pi[\mu](\overline{A}):=\mu(\pi^{-1}(\overline{A}))$ for any subset $\overline{A}$ of $H$. For subsets $A,A_1,\dots, A_n$ of a group $G$, we write \[\textstyle \prod_{i=1}^n A_i:=\{a_1a_2\dots a_n|a_i\in A_i\}\] for the product set of $A_1,\dots, A_n$ and we write \[\textstyle{\prod_k} A:=\{a_1a_2\dots a_k|a_i\in A,\ 1\le i \le k\}\] for the set consisting of products of $k$ elements of $A$. We denote by $\bigoplus_{i=1}^k G_i,$ the direct sum of the groups $G_1,\dots, G_k$. We use Vinogradov's notation $x\ll_A y$ to mean $|x|<Cy$ for some positive constant $C$ depending on the parameter $A$. For any constant $\delta$, $K=\Theta_A(\delta)$ means $\delta\ll_A K\ll_A \delta$. The subscript will be omitted from the above notation if either the constant is universal, or if the dependencies are clear from context. For any positive integer $n$, $[1..n]$ denotes the set of integers that are at least $1$ and at most $n$. We use $\pr_i:\bigoplus_{j\in I} G_j\rightarrow G_i$ to denote the projection to the $i^{\rm th}$ factor. For $J\subseteq I$, we identify the group $\bigoplus_{i\in J}G_i$ with its natural inclusion in $\bigoplus_{i\in I}G_i$. For any prime $p$ we let $\overline{\mathbb{F}}_p$ be an algebraic closure of a finite field $\mathbb{F}_p$ of order $p$. For any prime $p$ and positive integer $n$, $\mathbb{F}_{p^n}$ denotes the unique finite subfield of $\overline{\mathbb{F}}_p$ that has order $p^n$. For a field $F$, we let $F^{\times}:=F\setminus \{0\}$. For $f(t)\in \mathbb{F}_q[t]\setminus\{0\}$, we let $N(f):=|\mathbb{F}_q[t]/\langle f(t)\rangle|$. For an irreducible polynomial $\ell(t)\in \mathbb{F}_q[t]$ we let $v_\ell:\mathbb{F}_q(t)\rightarrow \mathbb{Z}\cup\{\infty\}$ be the $\ell$-valuation; that means for $r\in\mathbb{F}_q[t]\setminus \{0\}$ we let $v_\ell(r):=m$ if $\ell^m|r$ and $\ell^{m+1}\nmid r$, $v_{\ell}$ induces a group homomorphism from $\mathbb{F}_q(t)^{\times}$ to $\mathbb{Z}$, and $v_\ell(r)=\infty$ if and only if $r=0$. We let $v_{\infty}$ be the valuation associated to $1/t$; that means $v_{\infty}(r/s):=\deg s-\deg r$ for any $r,s\in \mathbb{F}_q[t]\setminus\{0\}$. The set of valuations of $\mathbb{F}_q(t)$ is denoted by $V_{\mathbb{F}_q(t)}$. For any valuation $v$, the $v$-adic norm of $r\in \mathbb{F}_q(t)$ is defined as \[ |r|_v:= \begin{cases} N(\ell)^{-v(r)} &\text{ if } v=v_\ell \text{ for some irreducible polynomial } \ell, \\ q^{-v(r)} &\text{ if } v=v_{\infty}. \end{cases} \] For any valuation $v$, the $v$-adic completion of $K:=\mathbb{F}_q(t)$ is denoted by $K_v$. The ring of $v$-adic integers is denoted by $\mathcal{O}_v$, and the residue field of $K_v$ is denoted by $K(v)$. 
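For instance, for $q=3$ and $r=t^2(t+1)\in \mathbb{F}_3[t]$ we have $v_t(r)=2$, $v_{t+1}(r)=1$ and $v_{\infty}(r)=-3$; hence $|r|_{v_t}=3^{-2}$, $|r|_{v_{t+1}}=3^{-1}$ and $|r|_{v_{\infty}}=3^{3}$. Moreover $K_{v_t}\simeq \mathbb{F}_3((t))$, $\mathcal{O}_{v_t}\simeq \mathbb{F}_3[[t]]$, and $K(v_t)\simeq \mathbb{F}_3$.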
For an irreducible polynomial $\ell$, we let $K(\ell):=\mathbb{F}_q[t]/\langle \ell\rangle\simeq K(v_\ell)$. For any valuation $v$ of $\mathbb{F}_q(t)$, we let $\deg v:=[K(v):\mathbb{F}_q]$; and so we have $\log_q N(f)=\sum_{v\in V_{\mathbb{F}_q(t)}\setminus\{v_{\infty}\}} v(f) \deg v$ for any $f\in \mathbb{F}_q[t]$. For $r_0(t)\in \mathbb{F}_q[t]$, we let $D(r_0)$ be either the set of irreducible factors of $r_0$ or $\{v_\ell\in V_{\mathbb{F}_q(t)}|\hspace{1mm} \ell \text{ is irreducible, } \ell|r_0\}$. For a finite subset $S$ of valuations of $\mathbb{F}_q(t)$, we let $\|r\|_S:=\max_{v\in S}|r|_v$. In this note for $h\in \GL_{n_0}(\mathbb{F}_q[t,1/r_0(t)])$, we let $\|h\|:=\max_{i,j} \|h_{ij}\|_{D(r_0)\cup\{v_{\infty}\}}$ where $h_{ij}$ is the $ij$-entry of $h$. We notice that this norm depends on $r_0(t)$, and $r_0(t)$ should be understood from the context. For a polynomial $r_0\in \mathbb{F}_p[t]\setminus\{0\}$ and positive integer $c_0$, we let $S_{r_0,c_0}$ be the set of all square-free polynomials $f(t)\in \mathbb{F}_p[t]$ with prime factors $\ell_i(t)$ such that (1) $\ell_i(t)\nmid r_0(t)$, (2) $\deg \ell_i>1$, (3) $\deg \ell_i \neq \deg \ell_j$ if $i\neq j$, and (4) $\deg \ell_i$ does not have a prime factor less than $c_0$. For a ring $A$, $A^{\times}$ is the group of units of $A$ and $\Spec (A)$ denotes the associated affine scheme; that means the points of this space are prime ideals of $A$. If $\mathcal{H}$ is a group scheme defined over a ring $A$ and $B$ is an $A$-algebra, then $\mathcal{H}\otimes_A B$ denotes the group scheme on the fiber product $\mathcal{H}\times_{\Spec A}\Spec B$. For a group scheme $\mathcal{H}$ defined over $A$ and $\ell\in A\setminus A^{\times}$, we let $\mathcal{H}_{\ell}:=\mathcal{H}\otimes_A A/\langle \ell\rangle$. For a ring $A$, $(\GL_n)_A$ denotes the $A$-group scheme given by the $n$-by-$n$ general linear group; so $(\GL_n)_A=(\GL_n)_{\mathbb{Z}}\otimes_{\mathbb{Z}} A$. For an algebraic group $\mathbb{G}$, $R_u(\mathbb{G})$ denotes its unipotent radical, and $\Lie \mathbb{G}$ is its Lie algebra. \subsection{Outline of proof and the key differences with the characteristic zero case}\label{sectionoutline} The general architecture of this article is as in Salehi Golsefidy-Varj\'{u}'s work \cite{SGV} where Bourgain-Gamburd's method~\cite{BG1} has been combined with Varj\'{u}'s multi-scale argument~\cite{Var}. By now there are many excellent surveys and lecture notes that explain the key ideas of the groundbreaking result of Bourgain and Gamburd (see~\cite{Bre-survey,Hel-survey,Kow-survey,Tao-book}); so here we will be very brief on that part and focus on the main difficulties that needed to be addressed. As in the characteristic zero case, we start with understanding the group structure of $\pi_{f}(\Gamma)$ for a square-free polynomial $f(t)\in \mathbb{F}_p[t]$. By Weisfeiler's strong approximation theorem~\cite{Wei}, we have that, if irreducible factors $\ell_i(t)$ of $f(t)$ have large degrees, then \[\pi_f(\Gamma)\simeq \bigoplus_{i=1}^n \mathbb{G}_{\ell_i}(\mathbb{F}_{N(\ell_i)})\] for some absolutely almost simple $\mathbb{F}_{N(\ell_i)}$-group $\mathbb{G}_{\ell_i}$ of dimension bounded by $n_0^2$. Notice that, since $\mathbb{F}_p(t)$ has many subfields, it is inevitable to have an assumption on the trace field of $\Gamma$ to get such a result; this is why we assume that $\mathbb{F}_p(t)$ is the field generated by $\Tr(\Ad(\Gamma))$.
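For a concrete instance of the shape of this decomposition (included only as an illustration), take $\Gamma=\SL_2(\mathbb{F}_p[t])$ and let $f=\ell_1\ell_2$ be a product of two distinct irreducible polynomials. Since $\mathbb{F}_p[t]/\langle f\rangle\simeq \mathbb{F}_{p^{\deg \ell_1}}\times \mathbb{F}_{p^{\deg \ell_2}}$ by the Chinese remainder theorem, and since $\SL_2$ of a finite field is generated by elementary matrices, which lift to $\SL_2(\mathbb{F}_p[t])$, we get \[\pi_f(\Gamma)=\SL_2(\mathbb{F}_p[t]/\langle f\rangle)\simeq \SL_2(\mathbb{F}_{p^{\deg \ell_1}})\oplus \SL_2(\mathbb{F}_{p^{\deg \ell_2}}).\]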
There is a positive number $c_0$ depending on $n_0$ such that all the factors $\mathbb{G}_{\ell_i}(\mathbb{F}_{N(\ell_i)})$ are $c_0$-quasirandom in the sense of Gowers \cite{Gow}; this implies that for any irreducible representation $\rho$ of $\pi_f(\Gamma)$ we have that $\dim \rho\ge |{\rm Im}\hspace{1mm} \rho|^{c_0}$. Based on the Sarnak-Xue trick~\cite{SX1} (see~\cite{Gow,NP}), it would be enough to find a good upper bound for the trace of $(T_{\pi_f(\Omega)}^{\circ})^l$ for some positive integer $l=\Theta_{n_0}(\log |\pi_f(\Gamma)|)$. This trace can be controlled in terms of the $L^2$-norm of $\pi_f[\mathcal{P}_{\Omega}]^{(l)}$. Following Bourgain-Gamburd's treatment we look at the sequence $\{\|\pi_f[\mathcal{P}_{\Omega}]^{(2^m)}\|_2\}_{m=1}^{\infty}$. It is easy to see that it is a decreasing sequence with a lower bound $\|\mathcal{P}_{\pi_f(\Gamma)}\|_2$ (the $L^2$-norm of the probability counting measure on $\pi_f(\Gamma)$). Roughly what Bourgain and Gamburd showed is that, if at some step $\|\pi_f[\mathcal{P}_{\Omega}]^{(2^m)}\|_2$ is still not close enough to the lower bound $\|\mathcal{P}_{\pi_f(\Gamma)}\|_2$ and does not get significantly smaller in the next step, there should be an algebraic reason: $\pi_f[\mathcal{P}_{\Omega}]^{(2^m)}$ should be concentrated on an approximate subgroup $X$; this roughly means $X$ is symmetric and almost closed under multiplication. (We refer the reader to the above-cited surveys and lecture notes and to \cite{Tao-approximate-subgroup} for a more thorough treatment of this subject; in this note we do not define approximate subgroups as they play an important role only in the background of our arguments.) Breakthrough results of Breuillard-Green-Tao~\cite{BGT} and Pyber-Szab\'{o}~\cite{PS} (these generalize works of Helfgott~\cite{Hel1,Hel2}) say that an approximate subgroup of a finite simple group of Lie type with bounded rank is very close to being a subgroup. The multi-scale argument of Varj\'{u}~\cite{Var} gives us an axiomatic way to reduce understanding of approximate subgroups of a finite product of finite groups to the same question for each one of the factors (see~\cite[Section 3]{Var}). One of Varj\'{u}'s assumptions on the factors (see \cite[Condition (A5), section 3]{Var}) demands a type of {\em bounded hierarchy} for the subgroups of the factors. The existence of such a bounded hierarchy of subgroups relies on Nori's result~\cite{Nor} which roughly says a subgroup of $\GL_{n_0}(\mathbb{F}_p)$ is more or less the $\mathbb{F}_p$-points of an algebraic subgroup of $(\GL_{n_0})_{\mathbb{F}_p}$; and the mentioned hierarchy comes from the dimension of the associated algebraic subgroup. Clearly this type of statement is not true for subgroups of $\GL_{n_0}(\mathbb{F}_{p^d})$ when $d$ gets arbitrarily large; consider a chain of subfield subgroups $\GL_{n_0}(\mathbb{F}_p)\subseteq \GL_{n_0}(\mathbb{F}_{p^{d_1}})\subseteq \cdots \subseteq \GL_{n_0}(\mathbb{F}_{p^{d_m}})\subseteq \GL_{n_0}(\mathbb{F}_{p^d})$ with $d_1\mid d_2\mid \cdots\mid d_m\mid d$ (the subfields of $\mathbb{F}_{p^d}$ correspond to the divisors of $d$, so such chains can be arbitrarily long). So an important part of this work is to modify Varj\'{u}'s argument to work in our setting (notice that we are presenting the overview of the proof in a backward fashion; and so this part of the proof appears towards the end of the article in Section~\ref{s:VarjuProductTheorem}). So far under the contrary assumption we have that $\pi_f[\mathcal{P}_{\Omega}^{(l_0)}]$ is concentrated on a proper subgroup $H$ of $\pi_f(\Gamma)$ for some positive integer $l_0=\Theta_{n_0}(\log |\pi_f(\Gamma)|)$.
Hence we need to have a good understanding of proper subgroups of $\pi_f(\Gamma)$ and escape them in a logarithmic number of steps. Here is another important difference with the case of $A=\mathbb{Z}[1/q_0]$ that we only partially address and which is responsible for some of the additional technical assumptions in Theorem~\ref{t:SpectralGap}. For the case of $A=\mathbb{Z}[1/q_0]$, we have to understand proper subgroups of $\GL_{n_0}(\mathbb{F}_\ell)$ where $\ell$ is a prime integer; and as it has been pointed out earlier, by a result of Nori~\cite{Nor} such groups are more or less the $\mathbb{F}_\ell$-points of an algebraic subgroup. When $A=\mathbb{F}_p[t,1/r_0(t)]$, we need to understand subgroups of $\GL_{n_0}(\mathbb{F}_{N(\ell)})$ where $\ell(t)\in \mathbb{F}_p[t]$ is an irreducible polynomial that does not divide $r_0(t)$. Using work of Larsen and Pink~\cite{LP}, we prove (see Section~\ref{sectiondesubgroupdich}) that if $\mathbb{G}_0$ is an absolutely almost simple group of adjoint type defined over a finite field $\mathbb{F}_q$ and $H\subseteq\mathbb{G}_0(\mathbb{F}_q)$ is a proper subgroup, then either there exists a proper algebraic subgroup $\mathbb{H}$ (with controlled complexity) of $\mathbb{G}_0$ with $H\subseteq \mathbb{H}({\mathbb{F}}_q)$, or there exists a subfield $\mathbb{F}_{q'}$ and a model $\mathbb{G}_1$ of $\mathbb{G}_0$ defined over $\mathbb{F}_{q'}$ (that means we can and will identify $\mathbb{G}_1\otimes_{\mathbb{F}_{q'}}\mathbb{F}_q$ with $\mathbb{G}_0$) such that \[[\mathbb{G}_1(\mathbb{F}_{q'}),\mathbb{G}_1(\mathbb{F}_{q'})]\subseteq H\subseteq \mathbb{G}_1(\mathbb{F}_{q'}).\] Subgroups of the former type are called {\em structural subgroups} while subgroups of the latter type are called {\em subfield type subgroups}. Currently we do not know how to escape subfield type subgroups, and this is why we need to add the extra technical assumptions on the largeness of prime factors of the degrees of the irreducible factors $\ell_i$ of $f$ in Theorem~\ref{t:SpectralGap}. In order to escape structural subgroups, we use similar ideas as in Salehi Golsefidy-Varj\'{u} \cite{SGV}; but since representations of a simple group over a positive characteristic field are not necessarily completely reducible, we face extra difficulties that need to be resolved. To be more precise we show that there is a polynomial $r_1(t)$ depending on $\Omega$ such that, if $f(t)\in\mathbb{F}_p[t]$ is a square-free polynomial and $\gcd(f,r_1)=1$, then (1) $\pi_f(\Gamma)=\prod_{i=1}^n \pi_{\ell_i}(\Gamma)$ where $\ell_i$'s are the irreducible factors of $f$ and $\pi_{\ell_i}(\Gamma)\simeq \mathbb{G}_{\ell_i}(\mathbb{F}_{N(\ell_i)})$ for absolutely almost simple $\mathbb{F}_{N(\ell_i)}$-groups $\mathbb{G}_{\ell_i}$; (2) if $H\subseteq \pi_f(\Gamma)$ is a proper subgroup such that $\pi_{\ell_i}(H)$ is a structural subgroup of $\mathbb{G}_{\ell_i}(\mathbb{F}_{N(\ell_i)})$ for any $i$, then the set of {\em small lifts} of $H$, \[\mathcal{L}_\delta(H):=\{h\in \mathbb{G}(\mathbb{F}_{p}[t,1/r_0(t)])\mid \pi_f(h)\in H\mbox{ and }\|h\|<[\pi_f(\Gamma):H]^\delta\}\] is contained in a proper algebraic subgroup of $\mathbb{G}$, where $\|h\|:=\max_{ij} \|h_{ij}\|_{D(r_1)\cup \{v_{\infty}\}}$ (when $\delta$ is small enough depending on $\Omega$). So we can escape a proper subgroup $H$ of $\pi_f(\Gamma)$ where $\pi_{\ell_i}(H)$ are structural subgroups if we manage to escape proper algebraic subgroups of $\mathbb{G}$.
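To illustrate this dichotomy in the smallest interesting case (again, only as an illustration and with no role in the proofs), take $\mathbb{G}_0=({\rm PGL}_2)_{\mathbb{F}_q}$ with $q$ a power of a prime $p>5$. A subgroup $H\subseteq {\rm PGL}_2(\mathbb{F}_q)$ contained in the image of the upper triangular matrices, or in the normalizer of a maximal torus, is contained in a proper algebraic subgroup of bounded complexity and so is a structural subgroup; on the other hand, for a subfield $\mathbb{F}_{q'}\subseteq \mathbb{F}_q$, any $H$ with
\[
[{\rm PGL}_2(\mathbb{F}_{q'}),{\rm PGL}_2(\mathbb{F}_{q'})]={\rm PSL}_2(\mathbb{F}_{q'})\subseteq H\subseteq {\rm PGL}_2(\mathbb{F}_{q'})
\]
is a subfield type subgroup, with model $\mathbb{G}_1=({\rm PGL}_2)_{\mathbb{F}_{q'}}$.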
Following \cite{SGV}, we show that there are finitely many non-trivial irreducible representations $\{\rho_i:\mathbb{G}\rightarrow \GL(\mathbb{V}_i)\}_{i=1}^m$ and affine representations $\{\rho_j':\mathbb{G}\rightarrow {\rm Aff}(\mathbb{W}_j)\}_{j=1}^{m'}$ of $\mathbb{G}$ such that (1) the linear part of $\rho_j'$ is non-trivial and irreducible, (2) $\mathbb{G}(\overline{\mathbb{F}_p(t)})$ does not fix any point of $\mathbb{W}_j(\overline{\mathbb{F}_p(t)})$, (3) for any proper algebraic subgroup $\mathbb{H}$ of $\mathbb{G}$ there are either $i$ and $v\in \mathbb{V}_i(\overline{\mathbb{F}_p(t)})$ such that $\rho_i(\mathbb{H}(\overline{\mathbb{F}_p(t)}))[v]=[v]$ where $[v]$ is the line in $\mathbb{V}_i(\overline{\mathbb{F}_p(t)})$ that is spanned by $v$ or $j$ and $w\in \mathbb{W}_j(\overline{\mathbb{F}_p(t)})$ which is fixed by $\mathbb{H}(\overline{\mathbb{F}_p(t)})$ (see Proposition~\ref{propreps}). Notice that, since the representation $\wedge^{\dim\mathbb{H}}\Ad$ is not necessarily completely reducible, we had to use affine representations even for the case where $\mathbb{G}$ is (semi)simple; this is an issue that can occur only in the positive characteristic case. Having this result we can apply the same {\em ping-pong} type argument as in~\cite[Proposition 21]{SGV} and find a finite symmetric subset $\Omega'$ of $\Gamma$ such that very few words in terms of $\Omega'$ fix a line in one of the irreducible representations $\rho_i$; and then we deduce that $\mathcal{P}_{\Omega'}^{(l)}(\mathbb{H}(\mathbb{F}_p(t)))\le e^{-O_{\Omega}(l)}$ for any proper algebraic subgroup $\mathbb{H}$ of $\mathbb{G}$. In order to be able to use $\Omega'$ instead of $\Omega$, we have to make sure that $\pi_f(\langle \Omega'\rangle)=\pi_f(\Gamma)$ when irreducible factors of $f$ have large degree. Unfortunately at this point, we cannot do this; and here is another place that the technical assumption on the largeness of prime divisors of the degree of irreducible factors of $f$ is needed. We suspect that this condition should not be needed here and the answer to the following question should be affirmative. \begin{question}\label{ques:AdelicToplogicalTitsAlternative} Let $\Omega$, $\Gamma$, and $\mathbb{G}$ be as in the hypotheses of Theorem~\ref{t:SpectralGap}. Let $K:=\mathbb{F}_{p}(t)$, $V_K$ be the set of valuations of $K$, $\mathcal{O}_v$ be the ring of integers of the completion $K_v$ of $K$ with respect to a valuation $v$, and $D(r_0):=\{v_{\ell}|\hspace{1mm} \ell \text{ is an irreducible factor of } r_0\}$. Let $\overline{\Gamma}$ be the closure of $\Gamma$ in $\prod_{v\in V_K\setminus (D(r_0)\cup\{v_{\infty}\})}\GL_{n_0}(\mathcal{O}_v)$. Then there is a finite subset $\Omega'_0$ of $\Gamma$ such that \begin{enumerate} \item $\Omega'_0$ freely generates a subgroup $\Gamma'$ of $\Gamma$. \item The closure $\overline{\Gamma'}$ of $\Gamma'$ in $\overline{\Gamma}$ is open. \item For any proper algebraic subgroup $\mathbb{H}$ of $\mathbb{G}$ we have $\mathcal{P}_{\Omega'}^{(l)}(\mathbb{H}(\mathbb{F}_{p}(t)))\le e^{-O_{\Omega}(l)}$ where $\Omega':=\Omega'_0\sqcup \Omega_0'^{-1}$. \end{enumerate} \end{question} It is worth mentioning that we do find $\Omega_0'$ that satisfies (1) and (3); but we cannot make sure that the trace field of $\Gamma'$ would be still $\mathbb{F}_{p}(t)$. Hence strong approximation does not imply (2). This issue does not occur over $\mathbb{Q}$ as it does not have any non-trivial subfield. Overall we get the following result. 
\begin{prop}[Escape from proper subgroups]\label{propmainescape} Let $\Omega$, $\Gamma$, and $\mathbb{G}$ be as in the hypotheses of Theorem \ref{t:SpectralGap}. Then there is a symmetric set $\Omega'\subset \Gamma$, a square-free polynomial $r_1$ divisible by $r_0$, and constants $c_0$ and $\delta_0$ depending only on $\Omega$ such that the following holds:\newline \indent For $f\in S_{r_1,c_0}$, suppose $H\le \pi_f(\Gamma)$ is a proper subgroup with the property that $\pi_\ell(H)$ is a structural subgroup of $\pi_\ell(\Gamma)$ for every irreducible factor $\ell$ of $f$. Then for $l\gg_{\Omega} \deg f$ we have \[\pi_f[\convolve{\mathcal{P}}{l}_{\Omega'}](H)\le [\pi_f(\Gamma):H]^{-\delta_0}\text{, and } \pi_f(\langle \Omega'\rangle)=\pi_f(\Gamma).\] \end{prop} As one can see, using Proposition~\ref{propmainescape} we can only show escape from proper subgroups with {\em structural factors}. On the other hand, roughly speaking an arbitrary proper subgroup $H$ can be embedded into a product of two groups, one with structural factors and the other with subfield subgroup factors. The extra technical condition on the largeness of prime factors of degrees of irreducible factors of $f$ implies that the subgroup with subfield factors is relatively small; so it can be disregarded, and we get the desired result.
\section*{Acknowledgments}
We would like to thank P. Varj\'{u} and M. Larsen for their quick replies to our questions in regard to their works. The second author is thankful to A. Mohammadi for many mathematical discussions related (and unrelated) to random walks in compact groups.
\section{A refinement of a theorem by Larsen and Pink}\label{s:RefinementOfLarsenPink}
In this section, we point out how Larsen and Pink's work~\cite{LP} gives us a concrete understanding of proper subgroups of $\pi_f(\Gamma)$ where $\Gamma\subseteq\GL_{n_0}(\mathbb{F}_p[t,1/r_0(t)])$ is as in Theorem~\ref{t:SpectralGap} (see Theorem~\ref{thm:FinalRefinementOfLP}). To avoid referring the reader to the {\em ideas} in that article, we present an argument that uses only a couple of results from \cite{LP} as a black box. That said, it is worth pointing out that most of the results in this section are hidden in the mentioned Larsen-Pink work.
\subsection{General setting and strong approximation}\label{ss:storngapproximation}
Let $\Omega\subset\GL_{n_0}(\mathbb{F}_{q_0}(t))$ be a finite symmetric set, and let $\Gamma=\grpgen{\Omega}$. Since $\Omega$ is finite, there exists a square-free polynomial $r_0\in\mathbb{F}_{q_0}[t]$ such that $\Omega\subset\GL_{n_0}(\mathbb{F}_{q_0}[t,1/r_0(t)])$. The set of polynomials in $n_0^2$ variables with coefficients in $\mathbb{F}_{q_0}(t)$ which vanish on $\Gamma$ defines a flat group scheme $\mathcal{G}$ of finite type over $\mathbb{F}_{q_0}[t,1/r_0]$. The Zariski closure $\mathbb{G}$ of $\Gamma$ in $(\mathbb{GL}_{n_0})_{\mathbb{F}_{q_0}(t)}$ can be viewed as the generic fiber \begin{equation} \label{grpsch}\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0]}\mathbb{F}_{q_0}(t)\end{equation} of $\mathcal{G}$. After possibly passing to a multiple of $r_0$, we may assume $\mathcal{G}$ is a smooth group scheme over $\mathbb{F}_{q_0}[t,1/r_0]$ and that all of its fibers are of constant dimension.
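To keep a concrete model case in mind (this is only an illustration and is not needed in what follows): when $\Gamma$ is Zariski-dense in $({\rm SL}_{n_0})_{\mathbb{F}_{q_0}(t)}$, after possibly enlarging $r_0$ one may take
\[
\mathcal{G}=({\rm SL}_{n_0})_{\mathbb{F}_{q_0}[t,1/r_0]},\qquad \mathbb{G}=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0]}\mathbb{F}_{q_0}(t)=({\rm SL}_{n_0})_{\mathbb{F}_{q_0}(t)},
\]
and in this case $\mathcal{G}$ is already smooth over $\mathbb{F}_{q_0}[t,1/r_0]$ with all fibers of dimension $n_0^2-1$.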
For any polynomial $f\in\mathbb{F}_{q_0}[t]$ that is coprime to $r_0$, we let $\mathcal{G}_f:=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0]}\mathbb{F}_{q_0}[t]/\langle f\rangle$; and the {\em reduction modulo $f$ homomorphism} is denoted by $\pi_f:\mathcal{G}(\mathbb{F}_{q_0}[t,1/r_0])\rightarrow \mathcal{G}_f(\mathbb{F}_{q_0}[t]/\langle f\rangle).$ For an irreducible polynomial $\ell$ which does not divide $r_0$, let $K(\ell):=\mathbb{F}_{q_0}[t]/\langle \ell\rangle$. Then $\mathcal{G}_{\ell}$ is an absolutely almost simple $K(\ell)$-group; and possibly after passing to a multiple of $r_0$, we can and will assume that all $\mathcal{G}_{\ell}\otimes_{K(\ell)} \overline{K(\ell)}$ are of the same type $\Phi$ as $\ell$ ranges through irreducible polynomials in $\mathbb{F}_{q_0}[t]$ that do not divide $r_0$; this means there is an adjoint Chevalley $\mathbb{Z}$-group scheme $\mathcal{G}^{\rm Che}$ (we refer the reader to \cite{Ste} for a thorough treatment of Chevalley group schemes) such that for any irreducible $\ell$ that does not divide $r_0$ we have a central isogeny \[ \mathcal{G}_{\ell}\otimes_{K(\ell)} \overline{K(\ell)}\rightarrow \mathcal{G}^{\rm Che}\otimes_{\mathbb{Z}} \overline{K(\ell)}. \] By Weisfeiler's strong approximation theorem \cite[Theorem 1.1]{Wei}, after possibly passing to a multiple of $r_0$, we have that if $f$ is a square-free polynomial coprime to $r_0$, then \begin{equation} \label{eqnSAsurj} \pi_f(\Gamma)=\mathcal{G}_f(\mathbb{F}_{q_0}[t]/\langle f\rangle);\end{equation} and by the Chinese Remainder Theorem $\mathbb{F}_{q_0}[t]/\langle f\rangle\simeq \bigoplus_{\ell\mid f, \ell \text{ irred.}} K(\ell)$, which implies \begin{equation} \label{eqnSAprod} \mathcal{G}_f(\mathbb{F}_{q_0}[t]/\langle f\rangle)\simeq \prod_{\ell\mid f, \ell \text{ irred.}}\mathcal{G}_{\ell}(K(\ell)). \end{equation} Throughout this paper, we may replace $r_0$ by the product of all irreducible polynomials of degree at most $C$ in $\mathbb{F}_{q_0}[t]$ for some $C\ll_\Omega 1$ as necessary. For the remainder of this section, $f$ is a fixed square-free polynomial coprime to $r_0$. In order to prove Proposition~\ref{propmainescape} we must understand proper subgroups of $\pi_f(\Gamma)$. In light of (\ref{eqnSAprod}) and (\ref{eqnSAsurj}), we must study proper subgroups of $\mathcal{G}_\ell(K(\ell))$ as $\ell$ ranges through all irreducible factors of $f$. \subsection{The dichotomy of proper subgroups of $\mathcal{G}_{\ell}(K(\ell))$}\label{sectiondesubgroupdich} In this section the mentioned theorem of Larsen-Pink is stated and based on that we define structure type and subfield type subgroups. Let $\mathbb{T}$ be a maximal torus of $\mathbb{G}$ and let $L$ be a minimal splitting field of $\mathbb{T}$. Then $L $ is a finite extension of $\mathbb{F}_{q_0}(t)$ of degree say $D'$. Let $\mathcal{G}^{Che}$ be the adjoint Chevalley $\mathbb{Z}$-group scheme of the same type $\Phi$ as $\mathbb{G}\otimes_{K} L$, where $K:=\mathbb{F}_{q_0}(t)$. Then there exists a central $L$-isogeny \begin{equation*} \mathbb{G}\otimes_{K}L\rightarrow \mathcal{G}^{\rm Che}\otimes_\mathbb{Z} L. \end{equation*} After passing to a multiple of $r_0$, if needed, we can extend this isogeny to a central $\mathcal{O}_L[1/r_0]$-isogeny \begin{equation*}\label{eqnglobliso} \phi:\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0]}\mathcal{O}_L[1/r_0]\rightarrow \mathcal{G}^{\rm Che}\otimes_\mathbb{Z}\mathcal{O}_L[1/r_0] \end{equation*} where $\mathcal{O}_L$ is the integral closure of $\mathbb{F}_{q_0}[t]$ in $L$. 
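As a minimal illustration of the objects just introduced (with a hypothetical choice of $L$, only to fix the notation): if $q_0$ is odd and $L=\mathbb{F}_{q_0}(s)$ with $s^2=t$, then
\[
\mathcal{O}_L=\mathbb{F}_{q_0}[s],
\]
and the primes of $\mathcal{O}_L$ lying over $\langle \ell\rangle$ correspond to the irreducible factors of $\ell(s^2)$ in $\mathbb{F}_{q_0}[s]$.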
For an irreducible polynomial $\ell$ coprime to $r_0$, let $\mathfrak{l}\in \Spec(\mathcal{O}_L)$ be in the fiber over $\langle \ell\rangle$; that means $\mathfrak{l}\cap \mathbb{F}_{q_0}[t]=\langle \ell\rangle$. Then $K(\ell):=\mathbb{F}_{q_0}[t]/\langle \ell\rangle$ can be embedded into $L(\mathfrak{l}):=\mathcal{O}_L/\mathfrak{l}$, and \[[L(\mathfrak{l}):K(\ell)]\le [L:K]\ll_\mathbb{G} 1.\] Hence, we obtain an induced central $L(\mathfrak{l})$-isogeny \begin{equation*}\label{phip}\phi_\ell:\mathcal{G}_\ell\otimes_{K(\ell)} L(\mathfrak{l})\rightarrow \mathcal{G}^{\rm Che}\otimes_{\mathbb{Z}} L(\mathfrak{l}). \end{equation*} With this preparation, we mention a theorem of Larsen and Pink which is key in understanding proper subgroups of $\mathcal{G}_\ell(K(\ell))$. \begin{thm}{\cite[Theorem 0.6]{LP}}\label{thmLP} Let $\mathcal{G}_0^{\rm Che}$ be an adjoint Chevalley $\mathbb{Z}$-group scheme with simple root system $\Phi_0$. Then there exists a representation \[\rho:\mathcal{G}_0^{\rm Che}\rightarrow (\GL_{n'_0})_{\mathbb{Z}}\] with the following property: Let $H$ be a finite subgroup of $\mathcal{G}_{0,p}^{\rm Che}(\overline{\mathbb{F}}_p)$ where $\mathcal{G}_{0,p}^{\rm Che}=\mathcal{G}_0^{\rm Che}\otimes_\mathbb{Z} \overline{\mathbb{F}}_p$ is the geometric fibre of $\mathcal{G}_0^{\rm Che}$ over $p$, where $p$ is a prime greater than 3. Then either there exists a proper subspace $W\subset (\overline{\mathbb{F}}_p)^{n_0'}$ that is stable under $\rho(H)$ but not under $\rho(\mathcal{G}_{0,p}^{\rm Che}(\overline{\mathbb{F}}_p))$, or there exists a finite field $\mathbb{F}_q\subset\overline{\mathbb{F}}_p$ and a model $\mathbb{G}_0$ of $\mathcal{G}_{0,p}^{\rm Che}$ over $\mathbb{F}_q$ (that means an $\mathbb{F}_q$-group $\mathbb{G}_0$ such that $\mathbb{G}_0\otimes_{\mathbb{F}_q}\overline{\mathbb{F}}_{p}\cong \mathcal{G}_{0,p}^{\rm Che}$) such that the commutator subgroup of $\mathbb{G}_0(\mathbb{F}_q)$ is simple and \begin{equation}\label{eqnsubfieldsubs}[\mathbb{G}_0(\mathbb{F}_q),\mathbb{G}_0(\mathbb{F}_q)]\subseteq H\subseteq \mathbb{G}_0(\mathbb{F}_q).\end{equation} \end{thm} \begin{definition} Subgroups that satisfy the first condition are said to be of {\em structural type} while subgroups that satisfy the second condition are said to be of {\em subfield type}. If for an irreducible polynomial $\ell$ that does not divide $r_0$, $H\subseteq \pi_\ell(\Gamma)\simeq\mathcal{G}_\ell(K(\ell))$ is a subgroup such that $\phi_\ell(H)$ is a subfield type subgroup (resp. structural type subgroup) of $\mathcal{G}_p^{\rm Che}(\overline{K(\ell)})$, then we call $H$ a {\em subfield} (resp. {\em structural}) {\em type subgroup} of $\pi_\ell(\Gamma)$.\end{definition} \subsection{Refined description of subfield type subgroups of $\mathcal{G}_{\ell}(K(\ell))$} In this section, we focus on subfield type subgroups of $\mathcal{G}_{\ell}(K(\ell))$; and we get a connection between the model $\mathbb{G}_0$ given in Theorem \ref{thmLP} and $\mathcal{G}_{\ell}$. We prove a stronger result (see Proposition~\ref{propLPsubfields}) which is of independent interest. A subfield type subgroup $H$ of $\mathcal{G}^{\rm Che}(\overline{\mathbb{F}}_p)$ gives us a finite field $F_H$ and a model $\mathbb{G}_H$ of $\mathcal{G}^{\rm Che}\otimes_{\mathbb{Z}} \overline{\mathbb{F}}_p$ over $F_H$.
Proposition~\ref{propLPsubfields} implies that if $H_1\subseteq H_2$ are two subfield type subgroups of $\mathcal{G}^{\rm Che}(\overline{\mathbb{F}}_p)$ and $p$ is large enough, then $F_{H_1}\subseteq F_{H_2}$ and $\mathbb{G}_{H_1}$ is a model of $\mathbb{G}_{H_2}$ over $F_{H_1}$. This statement can be proved by virtue of the argument given by Larsen and Pink. Here we give an independent self-contained proof. \begin{prop}\label{propLPsubfields} For $i=1,2$, let $\mathbb{G}_i$ be an absolutely almost simple group defined over a finite field $\mathbb{F}_{q_i}\subseteq \overline{\mathbb{F}}_p$. Suppose $\mathbb{F}_{q_i}$'s are of characteristic $p>5$, $q_1>9$, and that $\mathbb{G}_2$ is of adjoint type. Suppose $\widetilde{\theta}:\mathbb{G}_{1}\otimes_{\mathbb{F}_{q_1}}\overline{\mathbb{F}}_{p}\rightarrow \mathbb{G}_{2}\otimes_{\mathbb{F}_{q_2}}\overline{\mathbb{F}}_{p}$ is an isogeny such that \[\widetilde{\theta}(\mathbb{G}_{1}(\mathbb{F}_{q_1}))\subseteq \mathbb{G}_2(\mathbb{F}_{q_2}).\] Then $\mathbb{F}_{q_1}\subseteq \mathbb{F}_{q_2}$ and there exists an isogeny $\theta:\mathbb{G}_1\otimes_{\mathbb{F}_{q_1}} \mathbb{F}_{q_2}\rightarrow \mathbb{G}_2$ such that $\theta\otimes \id_{\overline{\mathbb{F}}_{q_1}}=\widetilde{\theta}$. \end{prop} \input{LPrefinement} \subsection{Refined description of structural type subgroups of $\mathcal{G}_\ell(K(\ell))$} Suppose $H$ is a structural subgroup of $\mathcal{G}_{\ell}(K(\ell))$; this means there is a proper algebraic subgroup $\mathbb{H}_{\ell}$ of $\mathcal{G}_{\ell}\otimes_{K(\ell)} \overline{K(\ell)}$ such that $H\subseteq \mathbb{H}_{\ell}(\overline{K(\ell)})$. In this section, we use almost the full strength of Larsen and Pink's result to give a control on the {\em complexity} of $\mathbb{H}_{\ell}$ and its {\em field of definition}. \begin{definition} Suppose $F$ is an algebraically closed field and $(\mathbb{A}^n)_F$ is the affine space over $F$. The {\em complexity} of a Zariski closed subset $X$ of $F^n$ is the minimum positive integer $D$ such that there are at most $D$ polynomials $p_i$ of degree at most $D$ in $F[x_1,\ldots,x_n]$ such that $X$ is the set of common zeros of the $p_i$'s. \end{definition} It is worth pointing out that one can use the language of algebraic geometry and use the degree of the closure of $X$ in the projective space $\mathbb{P}^n$ to capture the above-mentioned complexity of $X$; but we find it easier for the reader to work with the above-mentioned quantity. \begin{prop}\label{prop:structural-subgroups} Suppose $\Gamma$, $\mathcal{G}$, $\mathcal{G}_{\ell}$, and $K(\ell)$ are as above; that means $\Gamma$ is a finitely generated subgroup of $\GL_{n_0}(\mathbb{F}_{q_0}[t,1/r_0(t)])$ where $q_0$ is a power of a prime $p>3$ and the field generated by $\Tr(\Gamma)$ is $\mathbb{F}_{q_0}(t)$, $\mathcal{G}$ is the Zariski-closure of $\Gamma$ in $(\GL_{n_0})_{\mathbb{F}_{q_0}[t,1/r_0(t)]}$, and for any irreducible polynomial $\ell\in \mathbb{F}_{q_0}[t]$ that does not divide $r_0(t)$, $K(\ell):=\mathbb{F}_{q_0}[t]/\langle \ell\rangle$ and $\mathcal{G}_{\ell}:=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0(t)]} K(\ell)$. Suppose $\mathbb{G}:=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0(t)]} \mathbb{F}_{q_0}(t)$ is a connected, simply connected, absolutely almost simple group.
If $H\subseteq \pi_{\ell}(\Gamma)$ is a proper structural subgroup for some irreducible polynomial $\ell$ with $\deg \ell\gg_{\Gamma} 1$, then there is a proper algebraic subgroup $\mathbb{H}$ of $\mathcal{G}_{\ell}$ such that \begin{enumerate} \item the complexity of $\mathbb{H}$ is bounded by a function of $\Gamma$, \item $H\subseteq \mathbb{H}(K(\ell))\subsetneq \mathcal{G}_{\ell}(K(\ell))$. \end{enumerate} \end{prop} \begin{proof} As it has been mentioned earlier (see Section~\ref{ss:storngapproximation}), by Weisfeiler's strong approximation theorem there is a multiple $r_1$ of $r_0$ such that for any irreducible polynomial $\ell\in \mathbb{F}_{q_0}[t]$ that does not divide $r_1$, $\pi_{\ell}(\Gamma)=\mathcal{G}_\ell(K(\ell))$. By the discussion at the beginning of Section~\ref{sectiondesubgroupdich}, there are a finite separable extension $L$ of $\mathbb{F}_{q_0}(t)$, a multiple $r_2$ of $r_1$, and a central $\mathcal{O}_L[1/r_2(t)]$-isogeny \[ \phi:\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0(t)]} \mathcal{O}_L[1/r_2(t)]\rightarrow \mathcal{G}^{\rm Che}\otimes_{\mathbb{Z}} \mathcal{O}_L[1/r_2(t)] \] where $\mathcal{G}^{\rm Che}$ is an adjoint Chevalley $\mathbb{Z}$-group scheme and $\mathcal{O}_L$ is the integral closure of $\mathbb{F}_{q_0}[t]$ in $L$. By \cite[Theorem 0.5]{LP}, there is a scheme $\mathcal{T}$ of finite type over $\Spec \mathbb{Z}$ and a closed group scheme $\mathcal{H}$ of $\mathcal{G}^{\rm Che}\times_{\Spec \mathbb{Z}} \mathcal{T}$ such that \begin{enumerate} \item for any geometric point $s'$ of $\mathcal{T}$ over a geometric point $s$ of $\Spec \mathbb{Z}$, the geometric fiber $\mathcal{H}_{s'}$ is a proper subgroup of the geometric fiber $\mathcal{G}^{\rm Che}_s$ (this is the only place where we use the concept of geometric fiber; and so we do not give a precise definition of this concept. To illustrate what kind of objects these are, we only consider the example of a scheme $\mathcal{X}$ over $\Spec A$ where $A$ is a ring; for any $\mathfrak{p}\in \Spec A$, we let $k(\mathfrak{p}):=Q(A/\mathfrak{p})$ be the field of fractions of the integral domain $A/\mathfrak{p}$, and then $\mathcal{X} \times_{\Spec A} \Spec(\overline{k(\mathfrak{p})})$ is a geometric fiber of $\mathcal{X}$. Vaguely speaking, if $\mathcal{X}$ is affine and given by polynomial equations with coefficients in $A$, we are looking at those polynomials modulo $\mathfrak{p}\in \Spec A$ and then viewing them over the algebraic closure of the field of fractions of $A/\mathfrak{p}$.) \item If $\overline{H}$ is a finite subgroup of $\mathcal{G}^{\rm Che}(\overline{\mathbb{F}}_p)$ and $s'\in \mathcal{T}$ is a point over $p\mathbb{Z}$, then either $\overline{H}\subseteq \mathcal{H}_{s'}(\overline{k(s')})$ where $k(s')$ is the residue field of $s'$ or there are a finite field $F_{\overline{H}}$ and a model $\mathbb{G}_{\overline{H}}$ of $\mathcal{G}^{\rm Che}\otimes_{\mathbb{Z}} \overline{\mathbb{F}}_p$ over $F_{\overline{H}}$ such that \[ [\mathbb{G}_{\overline{H}}(F_{\overline{H}}),\mathbb{G}_{\overline{H}}(F_{\overline{H}})]\subseteq {\overline{H}} \subseteq \mathbb{G}_{\overline{H}}(F_{\overline{H}}).
\] \end{enumerate} By \cite[Proposition 2.3]{LP}, there is a representation $\rho:\mathcal{G}^{\rm Che}\rightarrow (\GL_{n_0})_{\mathbb{Z}}$ with the following property: suppose $\overline{H}$ is a finite subgroup of $\mathcal{G}^{\rm Che}(\overline{\mathbb{F}}_p)$ such that every subspace of $\overline{\mathbb{F}}_p^{n_0}$ which is invariant under $\overline{H}$ is also invariant under $\mathcal{G}^{\rm Che}(\overline{\mathbb{F}}_p)$; then $\overline{H}\not\subseteq \mathcal{H}_{s'}(\overline{k(s')})$ if $s'$ is a geometric point over $p\mathbb{Z}$. For an irreducible polynomial $\ell$ that does not divide $r_2$, let $\mathfrak{l}\in \Spec(\mathcal{O}_L)$ be in the fiber over $\langle \ell\rangle$. Set $L(\mathfrak{l}):=\mathcal{O}_L/\mathfrak{l}$. Let $\phi_{\mathfrak{l}}$ be the representation induced by the composite of $\rho$ and $\phi$ over $\mathfrak{l}$: \[ \phi_{\mathfrak{l}}: \mathcal{G}_{\ell}\otimes_{K(\ell)} L(\mathfrak{l}) \rightarrow (\GL_{n_0})_{L(\mathfrak{l})}. \] If $H\subseteq \mathcal{G}_{\ell}(K(\ell))$ is a proper structural subgroup, then by the above-mentioned results of Larsen-Pink there is a subspace $\wt{W}$ of $\overline{\mathbb{F}}_p^{n_0}=\overline{L(\mathfrak{l})}^{n_0}$ which is invariant under $H$ but not under $\mathcal{G}_{\ell}(\overline{L(\mathfrak{l})})$ (via the representation $\phi_{\mathfrak{l}}$). Since $\wt{W}$ is not invariant under $\mathcal{G}_{\ell}(\overline{L(\mathfrak{l})})$, the intersection of $\mathcal{G}_{\ell}\otimes_{K(\ell)}\overline{K(\ell)}$ with the stabilizer of $\wt{W}$ is a proper algebraic subgroup of $\mathcal{G}_{\ell}\otimes_{K(\ell)}\overline{K(\ell)}$. Hence the intersection of $\mathcal{G}_{\ell}\otimes_{K(\ell)}\overline{K(\ell)}$ with all the ${\rm Gal}(\overline{L(\mathfrak{l})}/L(\mathfrak{l}))$-conjugates of the stabilizer of $\wt{W}$ has a descent to a proper subgroup $\wt\mathbb{H}$ of $\mathcal{G}_{\ell}\otimes_{K(\ell)}L(\mathfrak{l})$; and since $\phi_{\mathfrak{l}}$ is defined over $L(\mathfrak{l})$ and $H$ leaves $\wt{W}$ invariant, $H\subseteq \wt\mathbb{H}(L(\mathfrak{l}))$. For any $\sigma\in {\rm Gal}(L(\mathfrak{l})/K(\ell))$, let $\wt\mathbb{H}^{\sigma}$ be the corresponding subgroup of $\mathcal{G}_{\ell}\otimes_{K(\ell)} L(\mathfrak{l})$; and let $\mathbb{H}$ be the subgroup of $\mathcal{G}_{\ell}$ that is the descent of $\bigcap_{\sigma\in {\rm Gal}(L(\mathfrak{l})/K(\ell))} \wt\mathbb{H}^{\sigma}$. Since $H\subseteq \mathcal{G}_{\ell}(K(\ell))\cap \wt{\mathbb{H}}(L(\mathfrak{l}))$, we have that $H\subseteq \mathbb{H}(K(\ell))$. We notice that the complexity of the stabilizer of a subspace via $\phi_{\mathfrak{l}}$ has a uniform upper bound which depends on $\rho$ and $\phi$ and is independent of $\mathfrak{l}$. Hence the complexity of $\wt{\mathbb{H}}$ is bounded as a function of $\Gamma$; moreover, the complexity does not change under the Galois action, which means the complexity of $\wt{\mathbb{H}}^{\sigma}$ is bounded by the same function of $\Gamma$. As $[L(\mathfrak{l}):K(\ell)]\le [L:K]\ll_{\Gamma} 1$, we deduce that the complexity of $\mathbb{H}$ is bounded by a function of $\Gamma$. Proposition 3.2 in \cite{LP} implies that, if $\deg \ell \gg_{\Gamma} 1$, then $\mathbb{H}(K(\ell))$ is a proper subgroup of $\mathcal{G}_{\ell}(K(\ell))$. For the sake of convenience, we include its short proof here. Since the complexity of $\mathbb{H}$ is bounded by a function of $\Gamma$, the number of its irreducible components is $O_{\Gamma}(1)$. Hence $|\mathbb{H}(K(\ell))|\ll_{\Gamma} |K(\ell)|^{\dim \mathbb{H}}$.
On the other hand, since the geometric fiber of $\mathcal{G}_{\ell}$ is connected, by Lang-Weil \cite[Theorem 1]{LW}, $|\mathcal{G}_{\ell}(K(\ell))|\gg_{\Gamma} |K(\ell)|^{\dim \mathcal{G}_{\ell}}$ (it is worth pointing out that an explicit formula for $|\mathcal{G}_{\ell}(K(\ell))|$ based on invariant factors and $|K(\ell)|$ is known; so the mentioned result of Lang-Weil is not really needed, but it is more conceptual). Hence for $|K(\ell)|\gg_{\Gamma} 1$, $\mathbb{H}(K(\ell))$ is a proper subgroup of $\mathcal{G}_{\ell}(K(\ell))$. \end{proof} \subsection{Refined version of the dichotomy of subgroups of $\mathcal{G}_{\ell}(K(\ell))$} Here we summarize what we have proved in the previous sections in regard to subgroups of $\pi_{\ell}(\Gamma)$. \begin{thm}\label{thm:FinalRefinementOfLP} Suppose $\Omega$, $\Gamma$, $\mathcal{G}$, $\mathcal{G}_{\ell}$, and $K(\ell)$ are as above; that means $\Gamma$ is a finitely generated subgroup of $\GL_{n_0}(\mathbb{F}_{q_0}[t,1/r_0(t)])$ where $q_0>7$ is a power of a prime $p>5$ and the field generated by $\Tr(\Gamma)$ is $\mathbb{F}_{q_0}(t)$, $\mathcal{G}$ is the Zariski-closure of $\Gamma$ in $(\GL_{n_0})_{\mathbb{F}_{q_0}[t,1/r_0(t)]}$, $\ell$ is an irreducible polynomial in $\mathbb{F}_{q_0}[t]$ that does not divide $r_0$, $K(\ell):=\mathbb{F}_{q_0}[t]/\langle \ell\rangle$, and $\mathcal{G}_{\ell}:=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0(t)]} K(\ell)$. Suppose $\mathbb{G}:=\mathcal{G}\otimes_{\mathbb{F}_{q_0}[t,1/r_0(t)]} \mathbb{F}_{q_0}(t)$ is a connected, simply connected, absolutely almost simple group. Suppose $\deg \ell\gg_{\Gamma} 1$; then for a subgroup $H$ of $\pi_{\ell}(\Gamma)$ we have that either \begin{enumerate} \item {\em $H$ is a structural type subgroup}: there are a proper subgroup $\mathbb{H}$ of $\mathcal{G}_{\ell}$ and a polynomial $f_{H}\in K(\ell)[x_{11},\cdots,x_{n_0n_0}]$ such that \begin{enumerate} \item the complexity of $\mathbb{H}$ is bounded by a function of $\Gamma$, and $H\subseteq \mathbb{H}(K(\ell))\subsetneq \mathcal{G}_{\ell}(K(\ell))$. \item $\deg f_H\ll_{\Gamma} 1$, $f_H(H)=0$, and for some $\gamma\in \Omega$, $f_H(\pi_{\ell}(\gamma))\neq 0$. \end{enumerate} \item {\em $H$ is a subfield type subgroup}: there are a subfield $F_{H}$ of $K(\ell)$ and an algebraic group $\mathbb{G}_H$ defined over $F_H$ such that \begin{enumerate} \item $\mathbb{G}_H\otimes_{F_H} K(\ell)=\Ad(\mathcal{G}_{\ell})$, \item $[\mathbb{G}_H(F_H),\mathbb{G}_H(F_H)]\subseteq \Ad H \subseteq \mathbb{G}_H(F_H)$. \end{enumerate} \end{enumerate} \end{thm} \begin{proof} By Proposition~\ref{prop:structural-subgroups}, if $\deg \ell\gg_{\Gamma}1$ and $H$ is a structural type subgroup, there is a proper subgroup $\mathbb{H}$ of $\mathcal{G}_{\ell}$ such that the complexity of $\mathbb{H}$ is $O_{\Gamma}(1)$, $H\subseteq \mathbb{H}(K(\ell))\subsetneq \mathcal{G}_{\ell}(K(\ell))$. Suppose $\mathbb{H}$ is defined by polynomials $\{f_i\in K(\ell)[x_{11},\ldots,x_{n_0n_0}]|1\le i\ll_{\Gamma}1\}$, where $\deg f_i\ll_{\Gamma} 1$. Since $\mathcal{G}_{\ell}(K(\ell))\neq \mathbb{H}(K(\ell))$ and by strong approximation $\mathcal{G}_{\ell}(K(\ell))$ is generated by $\pi_{\ell}(\Omega)$, there are $\gamma\in \Omega$ and $f_i$ such that $f_i(\pi_{\ell}(\gamma))\neq 0$. This implies the claim if $H$ is a structural type subgroup.
If $H$ is a subfield type subgroup, then there are a finite field $F_H\subseteq \overline{K(\ell)}$ and a model $\mathbb{G}_H$ of $\Ad\mathcal{G}_{\ell}\otimes_{K(\ell)} \overline{K(\ell)}$ over $F_H$ such that \[ [\mathbb{G}_H(F_H),\mathbb{G}_H(F_H)]\subseteq \Ad H\subseteq \mathbb{G}_H(F_H). \] Let $\wt\mathbb{G}_H$ be the simply connected cover of $\mathbb{G}_H$. Then $\wt\mathbb{G}_H$ is a model of $\mathcal{G}_{\ell}\otimes_{K(\ell)} \overline{K(\ell)}$; and so the adjoint homomorphism is a central isogeny \[\Ad: \wt\mathbb{G}_H\otimes_{F_H} \overline{K(\ell)}\rightarrow \Ad \mathcal{G}_{\ell} \otimes_{K(\ell)} \overline{K(\ell)} \text{ and } \Ad(\wt{\mathbb{G}}_H(F_H))\subseteq \Ad H\subseteq \Ad(\mathcal{G}_{\ell})(K(\ell)). \] Hence by Proposition~\ref{propLPsubfields}, $F_H\subseteq K(\ell)$ and the adjoint homomorphism has a descent to $K(\ell)$, $\Ad:\wt{\mathbb{G}}_H\otimes_{F_H} K(\ell) \rightarrow \Ad \mathcal{G}_{\ell}$; and so $\mathbb{G}_H:=\Ad \wt\mathbb{G}_H$ satisfies the claim. \end{proof} \subsection{A note on subfield type subgroups} In this section, we prove Proposition~\ref{prop:conjugates-subfield-type-subgroups} which will be used later in modifying Varj\'{u}'s multi-scale argument. \begin{prop}\label{prop:conjugates-subfield-type-subgroups} Let $q$ be a power of a prime $p>5$, and $n\in \mathbb{Z}^+$. Suppose $\mathbb{H}$ is an absolutely almost simple, connected, adjoint type $\mathbb{F}_q$-group. Then \[ T([\mathbb{H}(\mathbb{F}_q),\mathbb{H}(\mathbb{F}_q)],\mathbb{H}(\mathbb{F}_{q^n})):=\{g\in \mathbb{H}(\overline{\mathbb{F}}_p)|\hspace{1mm} g^{-1} [\mathbb{H}(\mathbb{F}_q),\mathbb{H}(\mathbb{F}_q)] g\subseteq \mathbb{H}(\mathbb{F}_{q^n})\}=\mathbb{H}(\mathbb{F}_{q^n}). \] \end{prop} The main idea of the proof is similar to the proof of Proposition~\ref{propliedescent}; but as the proof is fairly short we reproduce it here. \begin{lem}\label{lem:absoultely-simple-modules} Suppose $F$ is a field, $V$ is a finite-dimensional $F$-vector space, $H$ is a subgroup of ${\rm End}_F(V)$, and $V$ is an absolutely simple $H$-module; that means $V\otimes_F \overline{F}$ is a simple $\overline{F}[H]$-module where $\overline{F}$ is an algebraic closure of $F$ and $\overline{F}[H]$ is the $\overline{F}$-span of $H$ in ${\rm End}_{\overline{F}}(V\otimes_F \overline{F})$. Suppose $F\subseteq E\subseteq \overline{F}$ is an intermediate subfield. Let $F[H]$ be the $F$-span of $H$ in \[ {\rm End}_F(V)\subseteq {\rm End}_E(V\otimes_F E)\subseteq {\rm End}_{\overline{F}}(V\otimes_F \overline{F}). \] If $W\subseteq V\otimes_F E$ is an $F[H]$-submodule and $\dim_F W=\dim_F V$, then there is $\lambda\in E$ such that $W=V\otimes \lambda$. \end{lem} \begin{proof} First we notice that since $V$ is an absolutely simple $H$-module, by \cite[Theorem 7.5]{Lam} $F[H]={\rm End}_F(V)$; and so \begin{equation}\label{eq:H-isomorphism} {\rm End}_{F[H]}(V)=F. \end{equation} Suppose $\{\alpha_i\}_{i=1}^{\infty}$ is an $F$-basis of $E$. Then $V\otimes_F E=\bigoplus_{i=1}^{\infty} V\otimes \alpha_i$. For any $i$, let \[{\rm pr}_i:W\rightarrow V\otimes \alpha_i\] be the projection to the $i$-th summand according to this decomposition. We notice that, since $l_{\alpha_i}:V\rightarrow V\otimes \alpha_i, l_{\alpha_i}(v):=v\otimes \alpha_i$ is an $F[H]$-module isomorphism, $V\otimes \alpha_i$ is a simple $F[H]$-module. Hence either ${\rm pr}_i(W)=0$ or ${\rm pr}_i:W\rightarrow V\otimes \alpha_i$ is a surjective $F[H]$-module homomorphism.
As $\dim_F W=\dim_F V$, in the latter case ${\rm pr}_i$ is an $F[H]$-module isomorphism. Let $I:=\{i\in \mathbb{Z}^+|\hspace{1mm} {\rm pr}_i(W)\neq 0\}$. Then, for $i,j\in I$, \[ l_{\alpha_j}^{-1} \circ {\rm pr}_j \circ {\rm pr}_i^{-1}\circ l_{\alpha_i}: V\rightarrow V \] is an $F[H]$-module isomorphism. Therefore by \eqref{eq:H-isomorphism}, for $i,j\in I$, there is $a_{ij}\in F^{\times}$ such that \begin{equation}\label{eq:various-components} {\rm pr}_j\circ {\rm pr}_i^{-1}(v \otimes \alpha_i)=v \otimes a_{ij}\alpha_j. \end{equation} Since $\dim_F W=\dim_F V<\infty$, by \eqref{eq:various-components} $I$ is finite. Let $i_0\in I$; then by \eqref{eq:various-components} we have \[ \textstyle W=\{\sum_{j\in I} v\otimes a_{i_0j} \alpha_j|\hspace{1mm} v\in V\}=V\otimes (\sum_{j\in I} a_{i_0j}\alpha_j); \] and claim follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:conjugates-subfield-type-subgroups}] Since $p>5$, by \cite[Lemma 4.6]{Wei} $\underline{\hfr} (\overline{\mathbb{F}}_{q})/\underline{\mathfrak{z}}(\overline{\mathbb{F}}_q)$ is a simple $H$-module, where $H=[\mathbb{H}(\mathbb{F}_q),\mathbb{H}(\mathbb{F}_q)]$, $\underline{\hfr}={\rm Lie}(\mathbb{H})$, and $\underline{\mathfrak{z}}$ is the center of $\underline{\hfr}$. Hence \begin{equation}\label{eq:finite-Lie-alg} (\underline{\hfr}(\mathbb{F}_{q^n})+\underline{\mathfrak{z}}(\overline{\mathbb{F}}_q))/\underline{\mathfrak{z}}(\overline{\mathbb{F}}_q)\subseteq \underline{\hfr}(\overline{\mathbb{F}}_{q})/\underline{\mathfrak{z}}(\overline{\mathbb{F}}_q) \text{ is an absolutely simple $H$-module.} \end{equation} For $g\in T(H,\mathbb{H}(\mathbb{F}_{q^n}))$, $\Ad(g) \underline{\hfr}(\mathbb{F}_{q^n})$ is $H$-invariant as we have $H\subseteq g\mathbb{H}(\mathbb{F}_{q^n})g^{-1}$. Since $\dim_{\mathbb{F}_{q^n}} \Ad(g)\underline{\hfr} (\mathbb{F}_{q^n})=\dim_{\mathbb{F}_{q^n}} \underline{\hfr} (\mathbb{F}_{q^n})$, by Lemma~\ref{lem:absoultely-simple-modules} there is $\lambda(g)\in \overline{\mathbb{F}}_{q}$ such that \begin{equation}\label{eq:Adg-scalar} \Ad(g) \underline{\hfr} (\mathbb{F}_{q^n})+\underline{\mathfrak{z}} (\overline{\mathbb{F}}_q)=\lambda(g) \underline{\hfr} (\mathbb{F}_{q^n})+\underline{\mathfrak{z}} (\overline{\mathbb{F}}_q). \end{equation} Since $p>5$, $\underline{\hfr} (\mathbb{F}_{q^n})$ is a perfect Lie algebra. Therefore by \eqref{eq:Adg-scalar} we get that for any integer $m\ge 2$ we have \begin{equation}\label{eq:Adg-scalar-2} \Ad(g) \underline{\hfr} (\mathbb{F}_{q^n})=\lambda(g)^m \underline{\hfr} (\mathbb{F}_{q^n}). \end{equation} Notice that $\underline{\hfr} (\mathbb{F}_{q^n})$ and $\underline{\hfr} (\overline{\mathbb{F}}_q)$ are naturally isomorphic to $\mathfrak{h}\otimes_{\mathbb{F}_q} \mathbb{F}_{q^n}$ and $\mathfrak{h}\otimes_{\mathbb{F}_q} \overline{\mathbb{F}}_q$, respectively, where $\mathfrak{h}=\underline{\hfr} (\mathbb{F}_{q})$; and so $\lambda(g)^m \underline{\hfr} (\mathbb{F}_{q^n})$ can be identified with $\mathfrak{h}\otimes \lambda(g)^m \mathbb{F}_{q^n}$. Thus \eqref{eq:Adg-scalar-2} implies that $\lambda(g)\in \mathbb{F}_{q^n}$. Therefore $\Ad(g)\underline{\hfr} (\mathbb{F}_{q^n})=\underline{\hfr} (\mathbb{F}_{q^n})$, which means $g\in \mathbb{H}(\mathbb{F}_{q^n})$ as $\mathbb{H}$ is of adjoint form. \end{proof} \begin{cor}\label{cor:intersection-conjugate-subfield-type} Let $q$ be a power of a prime $p>5$. Let $\mathbb{H}$ be a connected, almost simple, adjoint type $\mathbb{F}_q$-group. Suppose $n$ is a positive integer and $m$ is a positive divisor of $n$. 
Then, for any $g\in \mathbb{H}(\mathbb{F}_{q^n})\setminus \mathbb{H}(\mathbb{F}_{q^m})$, $g\mathbb{H}(\mathbb{F}_{q^m})g^{-1}\cap \mathbb{H}(\mathbb{F}_{q^m})$ is a structural subgroup of $\mathbb{H}(\mathbb{F}_{q^m})$. \end{cor} \begin{proof} Suppose to the contrary that it is a subfield type subgroup. Then by Proposition~\ref{propLPsubfields} there is a subfield $F'$ of $\mathbb{F}_{q^m}$ and a model $\overline\mathbb{H}$ of $\mathbb{H}\otimes_{\mathbb{F}_q}\mathbb{F}_{q^m}$ over $F'$ such that \[ [\overline\mathbb{H}(F'),\overline\mathbb{H}(F')]\subseteq g\overline\mathbb{H}(\mathbb{F}_{q^m})g^{-1}\cap \overline\mathbb{H}(\mathbb{F}_{q^m}) \subseteq \overline\mathbb{H}(F'). \] Therefore $g\in T([\overline\mathbb{H}(F'),\overline\mathbb{H}(F')], \overline\mathbb{H}(\mathbb{F}_{q^m}))$; and so by Proposition~\ref{prop:conjugates-subfield-type-subgroups} we have that $g$ is in $\overline\mathbb{H}(\mathbb{F}_{q^m})=\mathbb{H}(\mathbb{F}_{q^m})$, which is a contradiction. \end{proof} \section{Escaping from the direct sum of structural type subgroups}\label{sectionproofofmainescape} For a square-free polynomial $f$ (with large degree irreducible factors), we say a proper subgroup $H$ of $\pi_f(\Gamma)$ is {\em purely structural} if $\pi_{\ell}(H)$ is a structural type subgroup of $\pi_{\ell}(\Gamma)=\mathcal{G}_{\ell}(K(\ell))$ for any irreducible factor $\ell$ of $f$. The goal of this section is to prove Proposition~\ref{propmainescape}; that roughly means we show that there exists a symmetric set $\Omega'\subseteq \Gamma$ with the following property: For any square-free polynomial $f\in \mathbb{F}_{q_0}[t]$ with large degree irreducible factors and for any purely structural subgroup $H$ of $\pi_f(\Gamma)$, the probability that an $l\sim \deg f$-step random walk lands in $H$ is small. \subsection{Small lifts of elements of a purely structural subgroup are in a proper algebraic subgroup}\label{sectionsmalllifts} Let us recall that for any $h\in \GL_{n_0}(\mathbb{F}_{q_0}[t,1/r_0])$, \[ \|h\|:=\max_{v\in D(r_0)\cup \{v_{\infty}\},i,j} |h_{ij}|_{v}, \] where $h_{ij}$ is the $i,j$-entry of $h$ and $|\cdot|_{v}$ is the $v$-adic norm (see Section~\ref{sectionnotation} for the definition of all the undefined symbols). For a subgroup $H$ of $\pi_f(\Gamma)$, let \[ \mathcal{L}_\delta(H):=\{h=(h_{ij})\in \Gamma|\pi_f(h)\in H\mbox{ and }\|h\|<[\pi_f(\Gamma):H]^\delta\}. \] In this section we show that, if $H$ is purely structural, then for some $\delta\ll_{\mathbb{G}} 1$, $\mathcal{L}_\delta(H)$ lies in a proper algebraic subgroup of $\mathbb{G}$. In light of Theorem~\ref{thm:FinalRefinementOfLP}, we follow the proof of \cite[Proposition 16]{SGV}. {\bf Standing assumptions.} In this section, we will be working with $\Omega$, $\Gamma$, $\mathcal{G}$, $\mathcal{G}_{\ell}$, $\mathbb{G}$, and $K(\ell)$ as before; that means $\Gamma$ is a finitely generated subgroup of $\GL_{n_0}(\mathbb{F}_{q_0}[t,1/r_0(t)])$ where $q_0>7$ is a power of a prime $p>5$ and the field generated by $\Tr(\Gamma)$ is $\mathbb{F}_{q_0}(t)$, $\mathcal{G}$ is the Zariski-closure of $\Gamma$ in $(\GL_{n_0})_{\mathbb{F}_{q_0}[t,1/r_0(t)]}$, $\mathbb{G}$ is the generic fiber of $\mathcal{G}$, $\mathbb{G}$ is a connected, simply-connected, absolutely almost simple group, for an irreducible polynomial $\ell$, $K(\ell)$ is $\mathbb{F}_{q_0}[t]/\langle \ell\rangle$, and $\mathcal{G}_{\ell}$ is the fiber of $\mathcal{G}$ over $\langle \ell\rangle$.
Here $f$ denotes a square-free polynomial with the property that the dichotomy mentioned in Theorem~\ref{thm:FinalRefinementOfLP} holds for any of its irreducible factors. In particular, for any irreducible factor $\ell$ of $f$ and any proper subgroup $H_{\ell}$ of $\pi_{\ell}(\Gamma)$, we have that \[ [\pi_{\ell}(\Gamma):H_{\ell}]\gg_{\Gamma} \begin{cases} |K(\ell)|^{\dim \mathbb{G}-\dim \mathbb{H}}\ge |K(\ell)| &\text{ if } H_{\ell} \text{ is a structural type subgroup,}\\ (|K(\ell)|/|F_{H_\ell}|)^{\dim \mathbb{G}}\ge |K(\ell)| &\text{ if } H_{\ell} \text{ is a subfield type subgroup.} \end{cases} \] This implies that \begin{equation}\label{eq:LowerboundIndex} [\pi_{\ell}(\Gamma):H_{\ell}]\gg_{\Gamma} |\pi_{\ell}(\Gamma)|^{c_0} \end{equation} for some positive number $c_0$ which depends only on $\mathbb{G}$. Moreover, we assume that, if $\ell$ and $\ell'$ are two different irreducible factors of $f$, then $\deg \ell\neq \deg \ell'$. This last condition is very restrictive, and in a desired result it has to be removed. Removing this condition is in the spirit of Open Problem 1.4 in \cite{LinVar}. We first approximate a proper subgroup $H$ of \[ \pi_f(\Gamma)\simeq \bigoplus_{\ell|f, \ell \text{ irred.}} \pi_{\ell}(\Gamma) = \bigoplus_{\ell|f, \ell \text{ irred.}} \mathcal{G}_{\ell}(K(\ell)) \] by a subgroup in product form. This is done by a variant of \cite[Lemma 15]{SGV}. \begin{lem}\label{lemsubgroupproductform} Suppose $\{G_i\}_{i\in I}$ is a finite collection of finite groups with the following properties: \begin{enumerate} \item $G_i=\bigoplus_{j\in J_i} L_{ij}$ where $L_{ij}/Z(L_{ij})$ is simple. \item $G_i$ is perfect; that means $G_i=[G_i,G_i]$. \item For $i\neq j$, no simple factor of $G_i/Z(G_i)$ is isomorphic to a simple factor of $G_j/Z(G_j)$. \item There is a positive number $c$ such that for any proper subgroup $H_i$ of $G_i$ we have $[G_i:H_i]\ge |G_i|^{c}$. \end{enumerate} Then for any subgroup $H$ of $G_I:=\bigoplus_{i\in I}G_i$ we have \[ \prod_{i\in I} [G_i:\pr_i(H)]\ge[G_I:H]^c, \] where $\pr_i:G_I\rightarrow G_i$ is the projection to the $i$-th component. \end{lem} \begin{proof} We proceed by strong induction on $|G_I|$. Let \[ I_1:=\{i\in I|\hspace{1mm} \pr_i(H)=G_i\}\text{, and } I_2:=\{i\in I|\hspace{1mm} \pr_i(H)\neq G_i\}. \] {\bf Claim 1.} {\em We can assume that $I_1\neq \varnothing$.} {\em Proof of Claim 1.} If $I_1=\varnothing$, then \[ \prod_{i\in I} [G_i:\pr_i(H)]\ge \prod_{i\in I} |G_i|^c\ge |G_I|^c\ge [G_I:H]^c; \] and the claim follows. So without loss of generality we can and will assume that $I_1\neq \varnothing$. {\bf Claim 2.} {\em The restriction to $H$ of the projection map $\pr_{I_1}$ to $G_{I_1}:=\bigoplus_{i\in I_1} G_i$ is surjective.} {\em Proof of Claim 2.} We proceed by induction on $|I_1|$. The base of induction is clear. Suppose $\pr_{I'}(H)=G_{I'}$ for some subset $I'$ of $I$ and $\pr_i(H)=G_i$ for some $i\in I\setminus I'$. Let $\overline{H}:=\pr_{I'\cup\{i\}}(H)$. Then $\pr_i(\overline{H})=G_i$ and $\pr_{I'}(\overline{H})=G_{I'}$. Let $\overline{H}(I'):=\overline{H}\cap G_{I'}$ and $\overline{H}(i):=\overline{H}\cap G_i$. Then projections induce isomorphisms $\overline{H}/\overline{H}(I')\rightarrow G_i$ and $\overline{H}/\overline{H}(i)\rightarrow G_{I'}$. Hence we get the following commuting diagram \begin{equation}\label{eq:graph-hom} \begin{tikzcd} &\overline{H}/(\overline{H}(i)\oplus \overline{H}(I')) \arrow[rd,"\simeq"]\arrow[ld,"\simeq"] & \\ G_i/\overline{H}(i) \arrow[rr,dashed,"\simeq"]&& G_{I'}/\overline{H}(I').
\end{tikzcd} \end{equation} If $\overline{H}(i)$ is a proper subgroup of $G_i$, then $\overline{H}(i)Z(G_i)$ is also a proper subgroup of $G_i$; this is because $[\overline{H}(i)Z(G_i),\overline{H}(i)Z(G_i)]=[\overline{H}(i),\overline{H}(i)]$ and $G_i$ is perfect. Therefore by \eqref{eq:graph-hom} a simple factor of $G_i/Z(G_i)$ is isomorphic to a simple factor of $G_j/Z(G_j)$ for some $j\in I'$; this contradicts our assumption. Hence $\overline{H}(i)=G_i$ and $\overline{H}(I')=G_{I'}$, which implies that $\overline{H}=G_{I'\cup\{i\}}$; and the claim follows. {\bf Claim 3.} {\em $[G_I:H]\le |G_{I_2}|$ and $\prod_{i\in I} [G_i:\pr_i(H)]\ge |G_{I_2}|^c$.} {\em Proof of Claim 3.} Let $H(I_2):=H\cap \ker \pr_{I_1}$ where $\pr_{I_1}:G_I\rightarrow G_{I_1}$ is the projection to $G_{I_1}$. Then by Claim 2, we have $|G_{I_1}|=[H:H(I_2)]$; and so \[ [G_I:H]=\frac{|G_{I_1}||G_{I_2}|}{|G_{I_1}||H(I_2)|}=\frac{|G_{I_2}|}{|H(I_2)|}\le |G_{I_2}|. \] We also have \[ \prod_{i\in I} [G_i:\pr_i(H)]=\prod_{i\in I_2} [G_i:\pr_i(H)]\ge \prod_{i\in I_2} |G_i|^c=|G_{I_2}|^c, \] where we have the last inequality because of our assumption and $\pr_i(H)$ being a proper subgroup of $G_i$ for any $i\in I_2$. Claim 3 implies that \[ \prod_{i\in I}[G_i:\pr_i(H)]\ge |G_{I_2}|^c \ge [G_I:H]^c; \] and the claim follows. \end{proof} \begin{prop}\label{propsmalllifts} Under the {\bf Standing assumptions} of this section, there exists a constant $\delta$ depending on $\Gamma$ such that the following holds: Let $H\subseteq \pi_f(\Gamma)$ be a purely structural subgroup; that means $\pi_\ell(H)$ is a structural subgroup of $\pi_\ell(\Gamma)$ for each irreducible factor $\ell$ of $f$. Then $\mathcal{L}_\delta(H)$ lies in a proper algebraic subgroup $\mathbb{H}$ of $\mathbb{G}$. \end{prop} \begin{proof} By Lemma \ref{lemsubgroupproductform} and \eqref{eq:LowerboundIndex}, there exists a positive constant $c_0$ which depends only on $\mathbb{G}$ such that \[[\pi_f(\Gamma):\bigoplus_{\ell\in D(f)}\pi_\ell(H)]\ge [\pi_f(\Gamma):H]^{c_0}.\] If $\mathcal{L}_\delta(\bigoplus_{\ell\in D(f)}\pi_\ell(H))$ lies in a proper algebraic subgroup of $\mathbb{G}$, then so does $\mathcal{L}_{c_0\delta}(H)$. Therefore we can and will replace $H$ with $\bigoplus_{\ell\in D(f)}\pi_\ell(H)$. Similarly, after replacing $f$ with the product of those irreducible factors satisfying $\pi_\ell(H)\neq\pi_\ell(\Gamma)$, we may assume $\pi_\ell(H)$ is a proper subgroup for each irreducible factor $\ell$ of $f$. By Theorem~\ref{thm:FinalRefinementOfLP}, there exists a constant $d_0:=d_0(\Gamma)$ such that for any $\ell\in D(f)$, there are a polynomial $f_\ell$ of degree at most $d_0$ and $\gamma_{\ell}\in \Omega$ such that $f_\ell(\pi_{\ell}(H))=0$ and (after rescaling $f_\ell$, if needed) $f_{\ell}(\pi_{\ell}(\gamma_{\ell}))=1$. First we show that $\mathcal{L}_{\delta}(H)$ lies in a {\em low complexity} proper algebraic {\em subset} of $\mathbb{G}$. To this end, we consider the degree $d_0$ monomial map \[\Psi:\mathbb{GL}_{n_0}\rightarrow \mathbb{A}_{d_1},\] where \[d_1=\begin{small}\left(\begin{array}{c} n_0^2+d_0 \\ d_0 \end{array} \right)\end{small}.\] Let $d$ be the dimension of the linear span of $\Psi(\mathbb{G}(\mathbb{F}_{q_0}(t)))$. To show $\mathcal{L}_{\delta}(H)$ lies in such an algebraic subset, it suffices to prove that $\Psi(\mathcal{L}_\delta(H))$ spans a subspace of dimension less than $d$ if $\delta$ is sufficiently small. Suppose to the contrary that the linear span of $\Psi(\mathcal{L}_\delta(H))$ is $d$-dimensional.
Hence there is a set of $d$ linearly independent elements $h_1,h_2,\dots, h_d$ of $\Psi(\mathcal{L}_\delta(H))$. Looking at the explicit formula for the number of elements of finite simple groups of Lie type~\cite[\textsection 11.1, \textsection 14.4]{Car}, we have $|\mathcal{G}_\ell(K(\ell))|\le |K(\ell)|^{\dim \mathbb{G}}=q_0^{\dim \mathbb{G}\cdot \deg \ell}$. Hence \[ |\pi_f(\Gamma)|\le q_0^{\dim \mathbb{G} \cdot \deg f}. \] Thus for $h\in \mathcal{L}_{\delta}(H)$ we have \[ \|h\|<[\pi_f(\Gamma):H]^{\delta}\le |\pi_f(\Gamma)|^{\delta}\le q_0^{\delta \dim \mathbb{G}\cdot \deg f}. \] This implies that the entries of the vectors $h_1,\ldots,h_d\in \mathbb{F}_{q_0}(t)^{d_1}$ are of the form $\frac{a}{\prod_{\ell\in D(r_0)} \ell^{e_{\ell}}}$ with $a\in \mathbb{F}_{q_0}[t]$, $\gcd(a, \prod_{\ell\in D(r_0)}\ell^{e_{\ell}})=1$, \begin{equation}\label{eqnsubdet1} \deg a-\sum_{\ell\in D(r_0)}e_\ell \deg \ell< d_0\delta \dim\mathbb{G} \cdot\deg f, \end{equation} and for each $\ell\in D(r_0)$ \begin{equation}\label{eqnsubdet2} {e_\ell}\deg \ell< d_0\delta\dim\mathbb{G}\cdot \deg f. \end{equation} By the contrary assumption, the determinant $s(t)\in \mathbb{F}_{q_0}(t)$ of some $d$-by-$d$ submatrix of the matrix $X$ that has the vectors $h_1,\ldots,h_d$ as its rows is non-zero. By \eqref{eqnsubdet1} and \eqref{eqnsubdet2}, we have that $s(t)=\frac{a'}{\prod_{\ell\in D(r_0)} \ell^{e'_{\ell}}}$ for some $a'\in \mathbb{F}_{q_0}[t]$ and $e'_{\ell}\in \mathbb{Z}^{\ge 0}$ such that \[ \deg a'\le \delta ((|D(r_0)|+1)dd_0\dim \mathbb{G}) \deg f\le \delta ((|D(r_0)|+1)|D(r_0)| dd_0\dim \mathbb{G}) \max_{\ell\in D(f)}\deg \ell. \] Hence for $\delta< ((|D(r_0)|+1)|D(r_0)| dd_0\dim \mathbb{G})^{-1}$, there is an irreducible factor $\ell_0$ of $f$ such that $\deg a'<\deg \ell_0$; in particular, $\pi_{\ell_0}(s(t))\neq 0$. This implies that $\pi_{\ell_0}(h_1),\ldots,\pi_{\ell_0}(h_d)$ are $K(\ell_0)$-linearly independent in $K(\ell_0)^{d_1}$; and so the right kernel of $\pi_{\ell_0}(X)$ is zero. By the definition of $\mathcal{L}_{\delta}(H)$, we have that $\pi_{\ell_0}(h_i)\in \Psi(\pi_{\ell_0}(H))$. Since $f_{\ell_0}(\pi_{\ell_0}(H))=0$ and $\deg f_{\ell_0}\le d_0$, we have that the coefficients of $f_{\ell_0}$ form a column vector in the right kernel of $\pi_{\ell_0}(X)$, which is a contradiction. Therefore there is a proper algebraic subset $\mathbb{X}$ of $\mathbb{G}$ whose complexity is $O_{\Gamma}(1)$, and $\mathcal{L}_\delta(H)$ is a subset of $\mathbb{X}(\mathbb{F}_{q_0}(t))$. By \cite[Proposition 3.2]{EMO}, if $A\subseteq \mathbb{G}(\mathbb{F}_{q_0}(t))$ is a generating set of a Zariski-dense subgroup of $\mathbb{G}$, then there exists a positive integer $N$ depending on the complexity of $\mathbb{X}$ such that $\prod_N A\not\subseteq \mathbb{X}(\mathbb{F}_{q_0}(t))$. It should be pointed out that the statement of \cite[Proposition 3.2]{EMO} is written for algebraic varieties and groups over $\mathbb{C}$. Its proof, however, is based on a generalized B\'{e}zout theorem that has a positive characteristic counterpart (see \cite[Pg. 519]{Sch}, \cite[Ex. 12.3.1]{Fu}, and \cite[III. Thm 2.2]{Da}). Altogether one can see that the proof of \cite[Proposition 3.2]{EMO} is valid over any algebraically closed field.
Since \[ \textstyle \prod_N\mathcal{L}_{\delta/N}(H)\subseteq \mathcal{L}_\delta(H)\subseteq \mathbb{X}(\mathbb{F}_{q_0}(t)), \] we deduce that the group generated by $\mathcal{L}_{\delta/N}(H)$ is not Zariski-dense in $\mathbb{G}$; that means that $\mathcal{L}_{\delta/N}(H)$ lies in a proper algebraic subgroup of $\mathbb{G}$; and claim follows as $N=O_{\Gamma}(1)$. \end{proof} \subsection{Invariant theoretic description of proper positive dimensional subgroups of a simple group: the positive characteristic case} In this section, we provide an {\em invariant theoretic} (or one can say a {\em geometric}) description of proper positive dimensional algebraic subgroups of an absolutely almost simple group over a field of {\em positive characteristic}. This is the positive characteristic counter part of \cite[Proposition 17, part (1)]{SGV}; and later it plays an important role in the proof of Proposition~\ref{propmainescape}. In this section we slightly deviate from our {\bf Standing assumptions}, and let $\mathbb{G}$ be a simply connected absolutely almost simple algebraic group defined over a positive characteristic {\em algebraically closed field $k$}. \begin{prop}\label{propreps} Let $\mathbb{G}$ be an absolutely almost simple group defined over an algebraically closed field $k$ of positive characteristic. Then there are finitely many group homomorphisms $\{\rho_i:\mathbb{G}\rightarrow (\mathbb{GL})_{\mathbb{V}_i}\}_ {i=1}^d$ and $\{\rho'_j:\mathbb{G}\rightarrow {\rm Aff}(\mathbb{W}_j)\}_{j=1}^{d'}$ such that \begin{enumerate} \item for any $i$, $\rho_i$ is irreducible and non-trivial. \item for any $j$, $\rho'_j(g)(v):=\rho'_{{\rm lin},j}(g)(v)+w_j(g)$ where $\rho'_{{\rm lin},j}:\mathbb{G}\rightarrow \mathbb{GL}(\mathbb{W}_j)$ is irreducible and non-trivial, and $w_j(g)\in \mathbb{W}_j(k)$; and no point of $\mathbb{W}_j(k)$ is fixed by $\mathbb{G}(k)$ under the affine action given by $\rho_j'$. \item for every positive dimensional closed subgroup $\mathbb{H}$ of $\mathbb{G}$, either there is an index $i$ and a non-zero vector $v\in \mathbb{V}_i(k)$ such that $\rho_i(\mathbb{H}(k))[v]=[v]$ where $[v]$ is the line in $\mathbb{V}_i(k)$ spanned by $v$, or there is an index $j$ and a point $w$ in $\mathbb{W}_j(k)$ such that $\rho_j'(\mathbb{H}(k))(w)=w$. \end{enumerate} \end{prop} Let us remark that in the characteristic zero case any affine representation $\mathbb{V}$ of a semisimple group has a fixed point; here is a quick argument: suppose $g\cdot v:=\rho(g)(v)+c(g)$. We identify the affine space of $\mathbb{V}(k)$ with the hyperplane $\{(v,1)| v\in\mathbb{V}(k)\}$ of $W:=\mathbb{V}(k)\oplus k$; and so $ \wh\rho(g):= \begin{pmatrix} \rho(g) & c(g) \\ 0 & 1 \end{pmatrix} $ is a group homomorphism and $(g\cdot v,1)=\wh{\rho}(g)(v,1)$. In the characteristic zero case any module is completely reducible; and so there is a line $[v]$ which is invariant under $\mathbb{G}(k)$ and $W=\mathbb{V}(k)\oplus [v]$. As $\mathbb{G}$ is semisimple, it does not have a non-trivial character. Hence any point on $[v]$ is a fixed point of $\mathbb{G}(k)$. As $[v]\not\subseteq \mathbb{V}(k)$, after rescaling, if needed, we can and will assume that $v=(v_0,1)$ for some $v_0\in \mathbb{V}(k)$. Therefore $\wh\rho(g)(v)=v$ implies that $g\cdot v_0=v_0$. In the positive characteristic case, however, there are affine transformations of $\mathbb{G}(k)$ that have no fixed points: there are irreducible representations $\mathbb{V}$ of $\mathbb{G}$ such that $H^1(\mathbb{G}(k),\mathbb{V}(k))\neq 0$. 
Hence there is a non-trivial cocycle $c:\mathbb{G}(k)\rightarrow \mathbb{V}(k)$. Since $c$ is a cocycle, $g\cdot v:=\rho(g)(v)+c(g)$ is a group action. If $g\cdot v_0=v_0$ for some $v_0$, then $c(g)=v_0-\rho(g)(v_0)$ which means $c$ is a trivial cocycle; and this contradicts our assumption. That said, it is not clear to the authors whether the mentioned affine representations are needed in Proposition~\ref{propreps} or not. \begin{question}\label{ques:subgroups-affine} Suppose $\mathbb{G}$ is a connected, absolutely almost simple group and $\mathbb{H}$ is a positive dimensional proper subgroup of $\mathbb{G}$. Is there a non-trivial irreducible representation $\rho:\mathbb{G}\rightarrow \mathbb{GL}(\mathbb{V})$ of $\mathbb{G}$ and a non-zero vector $v\in \mathbb{V}(k)\setminus \{0\}$ such that $\rho(\mathbb{H}(k))([v])=[v]$? \end{question} As we will see in the proof of Proposition~\ref{propreps}, the mentioned affine representations arise as submodules of wedge powers of the adjoint representation of $\mathbb{G}(k)$. When the characteristic of the field $k$ is large compared to the dimension of $\mathbb{G}$, all these representations are completely reducible; and so by a similar argument as in the characteristic zero case, one can see that such affine representations do not occur. Hence one gets an affirmative answer to Question~\ref{ques:subgroups-affine}. \begin{proof}[Proof of Proposition~\ref{propreps}] Since $\mathbb{H}$ is a proper positive dimensional subgroup, $\mathfrak{h}:=\Lie(\mathbb{H})(k)$ is a non-trivial proper subspace of $\mathfrak{g}:=\Lie(\mathbb{G})(k)$. Since $\mathbb{G}$ is an absolutely almost simple group, $\mathfrak{g}/\mathfrak{z}$ is a simple $G:=\mathbb{G}(k)$-module where $\mathfrak{z}:=Z(\mathfrak{g})$ is the center of $\mathfrak{g}$ and $\mathfrak{g}$ is a perfect Lie algebra; that means $\mathfrak{g}=[\mathfrak{g},\mathfrak{g}]$. Therefore $(\mathfrak{h}+\mathfrak{z})/\mathfrak{z}$ is a proper subspace of $\mathfrak{g}/\mathfrak{z}$ and it is not $G$-invariant. Thus $\mathfrak{h}$ is not invariant under $G$. From here we deduce that $l_H:=\wedge^{\dim_k \mathfrak{h}}\mathfrak{h}$ is not invariant under $G$, where $G$ acts on $\wedge^{\dim_k \mathfrak{h}} \mathfrak{g}$ via the representation $\wedge^{\dim_k \mathfrak{h}} \Ad$. Suppose \[ 0:=V_0\subset V_1 \subset \cdots \subset V_m:=\wedge^{\dim_k \mathfrak{h}}\mathfrak{g} \] is a composition series of $\wedge^{\dim_k \mathfrak{h}}\mathfrak{g}$. Let $m'$ be the smallest index such that $l_H\subseteq V_{m'}$. Hence $l_H\not\subseteq V_{m'-1}$, which implies $l_H\oplus V_{m'-1}\subseteq V_{m'}$. {\bf Step 1.} (Composition factor is non-trivial) If $\dim_k V_{m'}/V_{m'-1}>1$, then $V_{m'}/V_{m'-1}$ is a non-trivial simple $G$-module that has a line which is $H$-invariant; here $H:=\mathbb{H}(k)$. {\bf Step 2.} (Triviality of the composition factor gives us an affine action whose linear part is irreducible) If $\dim_k V_{m'}/V_{m'-1}=1$, then $l_H\oplus V_{m'-1}=V_{m'}$. Let $V:=V_{m'-1}/V_{m'-2}$ and $W:=V_{m'}/V_{m'-2}$; and so $W/V$ is a one-dimensional $G$-module. Since $\mathbb{G}$ has no non-trivial character, $G$ acts trivially on $W/V$. Suppose $w\in W\setminus V$; then for any $g\in G$, $c_w(g):=\rho_W(g)(w)-w\in V$.
For $v\in V$ and $g\in G$, we let $g\cdot v:=\rho_V(g)(v)+c_w(g)$; then \begin{align*} g_1\cdot(g_2\cdot v)= & \rho_V(g_1)(g_2\cdot v)+c_w(g_1)\\ = & \rho_V(g_1)(\rho_V(g_2)(v)+c_w(g_2))+c_w(g_1) \\ = & \rho_V(g_1g_2)(v)+\rho_W(g_1)(\rho_W(g_2)(w)-w)+(\rho_W(g_1)(w)-w)\\ = & \rho_V(g_1g_2)(v)+\rho_W(g_1g_2)(w)-\rho_W(g_1)(w)+\rho_W(g_1)(w)-w \\ = & \rho_V(g_1g_2)(v)+(\rho_W(g_1g_2)(w)-w)\\ = & \rho_V(g_1g_2)(v)+c_w(g_1g_2)=(g_1g_2)\cdot v. \end{align*} So $(g,v)\mapsto g\cdot v$ defines an affine action of $G$ on $V$. Suppose $x_H\in l_H\setminus\{0\}$; then $x_H=c_0 w+v_0$ for some $c_0\in k^{\times}$ and $v_0\in V$. For any $h\in H$, we have $\rho_W(h)(x_H)=x_H$, which implies that $c_0 (\rho_W(h)(w)-w)=v_0-\rho_V(h)(v_0)$. Therefore for any $h\in H$, \begin{equation}\label{eq:coboundary} c_w(h)=c_0^{-1}(v_0-\rho_V(h)(v_0)). \end{equation} Since $x_H$ is not fixed by $G$, there is $g_0\in G$ such that $\rho_W(g_0)(x_H)\neq x_H$, which implies \begin{equation}\label{eq:not-equal} c_w(g_0)\neq c_0^{-1}(v_0-\rho_V(g_0)(v_0)). \end{equation} {\bf Step 3.} (Affine action has a fixed point) If the above affine action has a $G$-fixed point $v_1\in V$, then for any $g\in G$, \begin{equation}\label{eq:fixedpoint} v_1=\rho_V(g)(v_1)+c_w(g). \end{equation} By \eqref{eq:coboundary} and \eqref{eq:fixedpoint}, for any $h\in H$, we have $v_1-\rho_V(h)(v_1)=c_0^{-1}(v_0-\rho_V(h)(v_0)),$ which implies \begin{equation}\label{eq:existence-of-fixed-point} \rho_V(h)(c_0^{-1}v_0-v_1)=c_0^{-1}v_0-v_1. \end{equation} By \eqref{eq:not-equal} and \eqref{eq:fixedpoint}, we have $\rho_V(g_0)(c_0^{-1}v_0-v_1)\neq (c_0^{-1}v_0-v_1)$. Therefore $\rho_V$ is a non-trivial irreducible representation of $G$ that has a non-zero vector fixed by $H$. {\bf Step 4.} (Affine action does not have a fixed point) Now suppose that the above affine action does not have a $G$-fixed point; then by \eqref{eq:coboundary} for any $h\in H$, \[ h\cdot (c_0^{-1}v_0)=\rho_V(h)(c_0^{-1}v_0)+c_w(h)=\rho_V(h)(c_0^{-1}v_0)+c_0^{-1}(v_0-\rho_V(h)(v_0))=c_0^{-1}v_0, \] which means that $c_0^{-1}v_0$ is a point of $V$ fixed by $H$; and so the claim follows. \end{proof} \subsection{Invariant theoretic description of small lifts of purely structural subgroups} In this section, based on Proposition~\ref{propsmalllifts} and Proposition~\ref{propreps}, we give an invariant theoretic understanding of small lifts of purely structural subgroups of $\pi_f(\Gamma)$ under the {\bf Standing assumption} (see the 2nd paragraph of Section \ref{sectionsmalllifts}). \begin{prop}\label{prop:SmallLiftsInvariantTheoretic} Let $\Gamma, \mathbb{G}, f$ be as in the {\bf Standing assumption}. Then \begin{enumerate} \item there are local fields $\mathcal{K}_i$ and $\mathcal{K}_j'$ that are field extensions of $\mathbb{F}_{q_0}(t)$. \item there are homomorphisms $\rho_i:\mathbb{G}\otimes_{\mathbb{F}_{q_0}(t)} \mathcal{K}_i\rightarrow \mathbb{GL}(\mathbb{V}_i)$ and $\rho_j':\mathbb{G}\otimes_{\mathbb{F}_{q_0}(t)} \mathcal{K}_j'\rightarrow {\rm Aff}(\mathbb{W}_j)$ such that \begin{enumerate} \item $\rho_i$'s are non-trivial irreducible representations over a geometric fiber; that means after a base change to an algebraic closure of $\mathcal{K}_i$, $\rho_i$ is non-trivial and irreducible, \item the linear parts $\rho_{{\rm lin},j}'$'s of the affine representations $\rho_j'$ are non-trivial irreducible representations over a geometric fiber, \item $\mathbb{G}(\mathcal{K}_j')$ does not fix any point of $\mathbb{W}_j(\mathcal{K}_j')$.
\item $\rho_i(\Gamma)\subseteq \GL(\mathbb{V}_i(\mathcal{K}_i))$ and $\rho_{{\rm lin},j}'(\Gamma)\subseteq \GL(\mathbb{W}_j(\mathcal{K}_j'))$ are unbounded subgroups. \end{enumerate} \item there is $\delta>0$ depending on $\Gamma$ such that for any purely structural subgroup $H$ of $\pi_f(\Gamma)$ one of the following conditions hold: \begin{enumerate} \item the group generated by $\mathcal{L}_{\delta}(H)$ is a finite subgroup of $\Gamma$. \item for some $i$, there is a non-zero $v\in \mathbb{V}_i(\mathcal{K}_i)$ such that for any $h\in \mathcal{L}_{\delta}(H)$, $\rho_i(h)([v])=[v]$. \item for some $j$, there is $w\in \mathbb{W}_j(\mathcal{K}_j')$ such that for any $h\in \mathcal{L}_{\delta}(H)$, $\rho_j'(h)(w)=w$. \end{enumerate} \end{enumerate} \end{prop} \begin{proof} Let $k$ be an algebraic closure of $\mathbb{F}_{q_0}(t)$; then by Proposition~\ref{propreps} the geometric fiber $\wt\mathbb{G}:=\mathbb{G}\otimes_{\mathbb{F}_{q_0}(t)}k$ of $\mathbb{G}$ has representations $\{\wt\rho_i\}_i$ and $\{\wt\rho_j'\}_j$ that can describe positive dimensional proper subgroups of $\wt\mathbb{G}$ (as in the statement of Proposition~\ref{propreps}). There is a finite Galois extension $L$ of $\mathbb{F}_{q_0}(t)$ such that $\wt\rho_i$ and $\wt\rho_j'$ have Galois descents $\wh\rho_i$ and $\wh\rho_j'$ to $\mathbb{G}\otimes_{\mathbb{F}_{q_0}(t)}L$. As $\Gamma$ is a discrete subgroup of $\prod_{v\in D(r_0)\cup\{v_{\infty}\}} \mathbb{G}(K_v)$ where $K_v$ is the $v$-adic completion of $\mathbb{F}_{q_0}(t)$, for any $i$ and $j$ there are some $v_i,v_j'\in D(r_0)\cup\{v_{\infty}\}$ and extensions $\nu_i,\nu_j'\in V_L$ of $v_i$ and $v_j'$, respectively, such that $\wh\rho_i(\Gamma)\subseteq \GL(\mathbb{V}_i(L_{\nu_i}))$ and $\wh\rho_{j}'(\Gamma)\subseteq \GL(\mathbb{W}_j(L_{\nu_j'}))$ are unbounded. So $\mathcal{K}_i:=L_{\nu_i}$, $\mathcal{K}_j':=L_{\nu_j'}$, $\rho_i:=\wh\rho_i\otimes {\rm id}_{\mathcal{K}_i}$, and $\rho_j':=\wh\rho_j'\otimes {\rm id}_{\mathcal{K}_j'}$ satisfy parts (1) and (2). Let $\delta$ be as in Proposition~\ref{propsmalllifts}; then for any purely structural subgroup $H$ of $\pi_f(\Gamma)$, there is a proper subgroup $\mathbb{H}$ of $\mathbb{G}$ such that $\mathcal{L}_{\delta}(H)\subseteq \mathbb{H}(k)$. If $\mathbb{H}$ is zero-dimensional, then the group generated by $\mathcal{L}_{\delta}(H)$ is a finite group. If $\mathbb{H}$ is positive dimensional, then Proposition~\ref{propreps} implies that either (3.b) or (3.c) holds; and the claim follows. \end{proof} \subsection{Ping-pong argument} Let's recall that under the {\bf Standing assumptions} (see the 2nd paragraph in Section~\ref{sectionsmalllifts}), we want to show that a random walk with respect to the probability counting measure on $\pi_f(\Omega)$ after $O(\deg f)$-many steps lands in a purely structural subgroup $H$ of $\pi_f(\Gamma)$ with {\em small probability}. Considering the lift of this random walk to $\Gamma$, we have to show that after $O(\delta_0 \deg f)$-many steps, the probability of landing in $\mathcal{L}_{\delta_0}(H)$ is small. By Proposition \ref{prop:SmallLiftsInvariantTheoretic}, it is enough to make sure that the probability of landing in a proper algebraic subgroup of $\mathbb{G}$ is small. In this section, we point out that the characteristic of the involved fields is irrelevant in the {\em ping-pong type} argument in \cite[Section 3.2]{SGV}, and we get similar statements in the global function field case.
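To illustrate the invariant theoretic description provided by Proposition~\ref{propreps} in the simplest case (this example is only meant as an illustration and is not used in the rest of the argument): suppose $\mathbb{G}={\rm SL}_2$ and the characteristic of $k$ is not $2$. Then the adjoint representation $\Ad:{\rm SL}_2\rightarrow \mathbb{GL}(\mathfrak{sl}_2)$ is non-trivial and irreducible, and every proper positive dimensional closed subgroup of ${\rm SL}_2$ fixes a line in $\mathfrak{sl}_2(k)$: such a subgroup is contained either in a Borel subgroup, which fixes the line spanned by a root vector, or in the normalizer of a maximal torus, which fixes the line spanned by the corresponding coroot. So in this toy case no affine representations are needed, and escaping proper algebraic subgroups amounts to escaping stabilizers of lines in a single representation.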
After finding the needed {\em ping-pong players}, using Proposition~\ref{propreps}, we end up with a finite symmetric subset $\Omega_0$ such that a random walk with respect to the probability counting measure on $\Omega_0$ has an exponentially small chance of landing in a proper algebraic subgroup of $\mathbb{G}$. In this note, we do not repeat any of the proofs presented in \cite{Var, SGV}, and we refer the readers to those articles for the details of the arguments. For a subset $\Omega'$ of a group and a positive integer $l$, we let \[ B_l(\Omega'):=\{g_1\cdots g_l|\hspace{1mm} g_i\in \Omega'\cup \Omega'^{-1}, g_i\neq g_{i+1}^{-1}\}; \] so the support of the $l$-step random walk with respect to the probability counting measure on $\Omega'\cup \Omega'^{-1}$ is $\bigcup_{2k\le l} B_{l-2k}(\Omega')$. \begin{prop}\label{Propping1} Let $\Gamma,\mathbb{G}$ be as in the {\bf Standing assumptions}. Let $\mathcal{K}_i$, $\mathcal{K}_j'$, $\rho_i$, and $\rho_j'$ be as in Proposition~\ref{prop:SmallLiftsInvariantTheoretic}. Then there exists a subset $\Omega'\subset \Gamma$ that freely generates a subgroup $\Gamma'$ with the following properties: \begin{enumerate} \item For any $i$, any positive integer $\ell$, and any non-zero vector $v\in \mathbb{V}_i(\mathcal{K}_i)$, \[ |\{g\in B_\ell(\Omega')|\rho_i(g)([v])=[v]\}| < |B_\ell(\Omega')|^{1-c'}. \] \item For any $j$, any positive integer $\ell$, and any point $w\in \mathbb{W}_j(\mathcal{K}_j')$, \[ |\{g\in B_\ell(\Omega')|\rho_j'(g)(w)=w\}| < |B_\ell(\Omega')|^{1-c'}. \] \end{enumerate} where $c'>0$ is a constant depending only on $\Omega'$ and the representations. \end{prop} \begin{proof} See proof of \cite[Proposition 20]{SGV}. \end{proof} \subsection{Escaping purely structural subgroups: finishing the proof of Proposition~\ref{propmainescape}} This proof is almost identical to the proof of \cite[Proposition 7]{SGV}. Let $\Gamma, \mathcal{G}, \mathbb{G},$ and $f$ be as in the {\bf Standing assumptions}. Let $\Omega'$ be the set given by Proposition \ref{Propping1}. Suppose $H\subseteq\pi_f(\Gamma)$ is a purely structural subgroup. Let $\delta$ be as in Proposition~\ref{prop:SmallLiftsInvariantTheoretic}. As $\pi_f[\mathcal{P}_{\Omega'}]^{(l)}(H)^2\le \pi_f[\mathcal{P}_{\Omega'}]^{(2l)}(H)$, it is enough to prove the claim for even positive integers $l$. We notice that for any positive integer $l$ \[ \pi_f[\mathcal{P}_{\Omega'}]^{(2l)}(H)= \mathcal{P}_{\Omega'}^{(2l)}\left(\bigcup_{0\le k\le l}(\pi_f^{-1}(H)\cap B_{2l-2k}(\Omega'))\right); \] and for any $\gamma\in \pi_f^{-1}(H)\cap B_l(\Omega')$, $\|\gamma\|\le (\max_{w\in\Omega'} \|w\|)^l$. Hence for $l\ll_{\Omega'} \delta \log[\pi_f(\Gamma):H]$ and $\deg f\gg_{\Omega'} 1$ we have \begin{equation}\label{eq:random-walk-lift-bound} \mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le \sum_{0\le k\le l} \mathcal{P}_{\Omega'}^{(2l)}(\mathcal{L}_{\delta}(H)\cap B_{2l-2k}(\Omega')). \end{equation} We notice that, since $\Omega'=\Omega'_0\sqcup\Omega_0'^{-1}$ and $\Omega'_0$ freely generates a subgroup, for $\gamma, \gamma'\in B_{2r}(\Omega')$ we have $\mathcal{P}_{\Omega'}^{(2l)}(\gamma)=\mathcal{P}_{\Omega'}^{(2l)}(\gamma')$; let $P_l(r):=\mathcal{P}_{\Omega'}^{(2l)}(\gamma)$ for some $\gamma\in B_{2r}(\Omega')$. Hence by \eqref{eq:random-walk-lift-bound} we have \begin{equation} \mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le \sum_{0\le r\le l} |\mathcal{L}_{\delta}(H)\cap B_{2r}(\Omega')| P_l(r).
\end{equation} Combining Propositions \ref{prop:SmallLiftsInvariantTheoretic} and \ref{Propping1}, we have \begin{equation}\label{eq:upper-bound-number-of-elements-small-lifts} |\mathcal{L}_{\delta}(H)\cap B_{2r}(\Omega')|< |B_{2r}(\Omega')|^{1-c'} \end{equation} where $c'$ is the constant from Proposition~\ref{Propping1}. Let us recall a few well-known results related to random walks (in a free group); for any $\gamma\in \langle \Omega'\rangle$, by the Cauchy-Schwarz inequality, we have \begin{equation}\label{eq:return-to-identity} \mathcal{P}_{\Omega'}^{(2l)}(\gamma)=\sum_{\gamma'} \mathcal{P}_{\Omega'}^{(l)}(\gamma') \mathcal{P}_{\Omega'}^{(l)}(\gamma'^{-1}\gamma)\le \|\mathcal{P}_{\Omega'}^{(l)}\|_2^2=\sum_{\gamma'} \mathcal{P}_{\Omega'}^{(l)}(\gamma')\mathcal{P}_{\Omega'}^{(l)}(\gamma'^{-1})=\mathcal{P}_{\Omega'}^{(2l)}(I) \end{equation} where $I$ is the identity matrix; and so $P_l(r)\le P_l(0)$ for any non-negative integer $r$. Since $P_{l_1}(0)P_{l_2}(0)\le P_{l_1+l_2}(0)$, by Fekete's lemma we have $\sqrt[l]{P_l(0)}\le \lim_{l'\rightarrow \infty}\sqrt[l']{P_{l'}(0)}$ for every $l$. Hence by Kesten's result~\cite[Theorem 3]{Kes}, we have \begin{equation}\label{eq:Kesten} P_l(r)\le P_l(0)\le \left(\frac{2M-1}{M^2}\right)^l, \end{equation} where $|\Omega'|=2M$. We also have $|B_{2r}(\Omega')|=2M(2M-1)^{2r-1}$. Therefore by \eqref{eq:random-walk-lift-bound}, \eqref{eq:upper-bound-number-of-elements-small-lifts}, and \eqref{eq:Kesten}, for $l=\Theta_{\Omega'}(\log[\pi_f(\Gamma):H])$, we have \begin{align*} \mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le & \sum_{0\le r\le l/20} |\mathcal{L}_{\delta}(H) \cap B_{2r}(\Omega')| P_l(0) + \sum_{l/20<r\le l} |\mathcal{L}_{\delta}(H) \cap B_{2r}(\Omega')| P_l(r) \\ \le & \left( 1+ 2M\sum_{1\le r\le l/20}(2M-1)^{2r-1}\right) \left(\frac{2M-1}{M^2}\right)^l+ \sum_{l/20<r\le l} |B_{2r}(\Omega')|^{1-c'} P_l(r) \\ \le & \frac{(2M)^{11l/10+1}}{M^{2l}}+(2M(2M-1)^{l/10})^{-c'} \sum_{l/20<r\le l} |B_{2r}(\Omega')| P_l(r) \\ \le & \frac{(2M)^{11l/10+1}}{M^{2l}}+(2M(2M-1)^{l/10})^{-c'} \le [\pi_f(\Gamma):H]^{-O_{\Omega'}(1)}. \end{align*} Suppose, for a positive integer $l$, the desired inequality holds for $2l$; then \[ \mathcal{P}_{\pi_f(\Omega')}^{(l)}(gH)^2\le \mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le [\pi_f(\Gamma):H]^{-\delta_0} \] for any $g\in \pi_f(\Gamma)$. Hence for any $l'\ge l$ we have \[ \mathcal{P}_{\pi_f(\Omega')}^{(l')}(H)=\sum_{g\in \pi_f(\Gamma)} \mathcal{P}_{\pi_f(\Omega')}^{(l'-l)}(g^{-1}) \mathcal{P}_{\pi_f(\Omega')}^{(l)}(gH)\le [\pi_f(\Gamma):H]^{-\delta_0/2}. \] So it remains to show that for large enough $c_0$, if $f\in S_{r_1,c_0}$, then $\pi_f(\Gamma)=\pi_f(\langle \Omega'\rangle)$. By Propositions~\ref{prop:SmallLiftsInvariantTheoretic} and \ref{Propping1}, we have that the group $\Gamma'$ generated by $\Omega'$ is Zariski-dense in $\mathbb{G}$. Let $\mathbb{F}_p(s(t)/r(t))$ be the trace field of $\Gamma'$. Then by \cite[Theorem 0.2, Theorem 3.7, Proposition 4.2]{PinkStrongApproximation} (or \cite[Theorem 1.1]{Wei}) we have that, if $f$ is a square-free polynomial with irreducible factors of large degree (in particular, we can and will assume that $\gcd(f,r)=1$), then \[ \pi_f(\Gamma')\simeq \prod_{\ell|f} \mathcal{G}_{\ell}(\mathbb{F}_p[s(t)/r(t)]/\langle \ell\rangle). \] Notice that $\mathbb{F}_p[s(t)/r(t)]/\langle \ell\rangle$ can be embedded into $\mathbb{F}_p[t]/\langle \ell\rangle$, and the degree of this extension is at most $[\mathbb{F}_p(t):\mathbb{F}_p(s(t)/r(t))]=\max(\deg s,\deg r)$.
Hence $[\mathbb{F}_p[t]/\langle \ell\rangle:\mathbb{F}_p[s(t)/r(t)]/\langle \ell\rangle]$ is a divisor of $\deg \ell$ that is at most $\max(\deg s,\deg r)$. So if all the prime divisors of $\deg \ell$ are more than $c_0:=\max(\deg s,\deg r)$, then \[ \mathbb{F}_p[s(t)/r(t)]/\langle \ell\rangle=\mathbb{F}_p[t]/\langle \ell\rangle; \] and the claim follows. \section{A variation of Varj\'{u}'s Product Theorem}\label{s:VarjuProductTheorem} In \cite{Var}, Varj\'{u} introduced a technique for proving a multi-scale product result for the direct product of an infinite family of certain finite groups. He provided a series of conditions on each of the factors for this {\em gluing} process to work. One of the important conditions is on the structure of the subgroups of each factor; it was assumed that subgroups can be divided into $O(1)$ families of different {\em dimensions}. This condition was modeled on Nori's theorem on the description of subgroups of $\GL_n(\mathbb{F}_p)$, which roughly says that any such subgroup is {\em very close} to being the $\mathbb{F}_p$-points of an algebraic subgroup. As we discussed in Section~\ref{s:RefinementOfLarsenPink}, subgroups of $\GL_n(\mathbb{F}_{p^m})$ might be either {\em structural} or {\em subfield type}; and the subfield type subgroups cannot be grouped into $O_n(1)$ families of subgroups. We, however, use the fact that the intersection of two distinct conjugate subgroups of subfield type is a structural subgroup (see Corollary~\ref{cor:intersection-conjugate-subfield-type}), and modify Varj\'{u}'s axioms and arguments accordingly (see Proposition~\ref{propvarjuproduct}). Most of Varj\'{u}'s arguments and results stay the same even after the modifications of the assumptions; but we reproduce some of those arguments. It should be pointed out that there is an error in the proof of \cite[Corollary 14]{Var}. Our modified axioms help us to resolve this issue; Varj\'{u} has also communicated to us a way to correct the proof {\em without} changing the original assumptions. \subsection{Modified assumptions} Before stating our modified assumptions, let us introduce some notation and recall the definition of {\em quasi-random} groups (this concept was introduced by Gowers~\cite{Gow}). For two subgroups $H$ and $H'$ of a finite group $G$ and a positive integer $L$, we write $H\preceq_L H'$ if $[H:H'\cap H]<L$. \begin{definition}\label{QRdef} For a positive constant $c$, we say a finite group $G$ is {\em $c$-quasi-random} if for any non-trivial irreducible representation $\rho$ of $G$ we have $\dim\rho>|G|^c$. \end{definition} Our set of axioms depends on two parameters $L$ and $\delta_0$, where $L$ is a positive integer and $\delta_0:\mathbb{R}^+\rightarrow \mathbb{R}^+$ is a function. {\bf Assumptions (V1)$_{L}$-(V3)$_{L}$ and (V4)$_{\delta_0}$} \begin{enumerate} \item[(V1)$_L$] $G$ is an almost simple group with $|Z(G)|<L$. \item[(V2)$_L$] $G$ is $L^{-1}$-quasi-random (see Definition \ref{QRdef}). \item[(V3)$_L$] There exists an integer $m<L$, and classes of proper subgroups $\mathcal{H}_j$ for $0\le j\le m$ and $\mathcal{H}_i'$ for $1\le i\le m'$ where $m'\le L \log |G|$ with the following properties: \begin{enumerate}[(i)] \item For each $i$, $\mathcal{H}_i$ and $\mathcal{H}_i'$ are closed under conjugation by elements of $G$. \item $\mathcal{H}_0=\{Z(G)\}$. \item For each proper subgroup $H$ of $G$ there exist an index $i$ and a subgroup $H^\sharp\in \mathcal{H}_i$ or $\mathcal{H}_i'$ such that $H\preceq_L H^\sharp$.
\item For each $i$ and for each pair of distinct subgroups $H_1$, $H_2\in \mathcal{H}_i$, there exist $j<i$ and a subgroup $H^\sharp\in \mathcal{H}_j$ such that $H_1\cap H_2\preceq_L H^\sharp$. For any $H\in \mathcal{H}_i$, there are $j$ and $H^\sharp\in \mathcal{H}_j$ such that $N_G(H)\preceq_L H^\sharp$. \item For each $i$ and for each pair of distinct subgroups $H_1'$ and $H_2' \in \mathcal{H}_i'$, there exist $j$ and a subgroup $H^{\sharp}\in \mathcal{H}_j$ such that $H_1'\cap H_2'\preceq_L H^\sharp$. For any $H\in \mathcal{H}'_i$, $[N_G(H):H]\le L$. \end{enumerate} \item[(V4)$_{\delta_0}$] If $S\subseteq G$ is a generating set and $|S|<|G|^{1-\varepsilon}$ for a positive number $\varepsilon$, then $|S\cdot S \cdot S|\ge |S|^{1+\delta_0(\varepsilon)}$. \end{enumerate} \begin{prop}\label{propvarjuproduct} For $L\in \mathbb{Z}^+$, $\delta_0:\mathbb{R}^+\rightarrow\mathbb{R}^+$, suppose $\{G_i\}_{i=1}^{\infty}$ is a family of pairwise non-isomorphic finite groups that satisfy assumptions (V1)$_{L}$-(V3)$_{L}$ and (V4)$_{\delta_0}$. Then for any $\varepsilon>0$, there is $\delta>0$ such that for any $n\in \mathbb{Z}^+$ and any symmetric subset $S$ of $G:=\bigoplus_{i=1}^nG_i$ satisfying \begin{equation} |S|<|G|^{1-\varepsilon}\mbox{ and } \mathcal{P}_S(gH)<[G:H]^{-\varepsilon}|G|^\delta \text{ for any subgroup } H \text{ of } G \text{ and } g\in G, \end{equation} we have \begin{equation*} |\Pi_3S|\gg_\varepsilon |S|^{1+\delta}. \end{equation*} \end{prop} Let us reiterate that there are two key differences between Proposition \ref{propvarjuproduct} and \cite[Proposition 14]{Var}: (1) In Varj\'{u}'s setting we have only $O(L)$ families of proper subgroups, and this parameter resembles the dimension of an algebraic subgroup. In our setting, however, we have two types of families of proper subgroups, and only one of the two types consists of at most $O(L)$ families of proper subgroups. These types resemble the structural and the subfield type subgroups. For the structural subgroups we more or less use the dimension of the underlying algebraic groups to parametrize them, and for the subfield type subgroups the order of the subfield gives us the needed parametrization. It is clear that in this case the number of such possible families can grow as $|G|$ goes to infinity; but it is at most $L\log |G|$. (2) We are assuming a {\em product type} result for each factor (see (V4)$_{\delta_0}$) instead of an $l^2$-flattening assumption for measures with {\em large} $l^2$-norm (see (A4) in \cite[Section 3]{Var}). This modification helps us resolve the mentioned error in \cite[Corollary 14]{Var}. \subsection{A detailed overview of Varj\'{u}'s proof} Before getting to the multi-scale setting of Proposition~\ref{propvarjuproduct}, we recall Bourgain-Gamburd's result which gives us a way to measure how the product of two random variables becomes {\em substantially more random} unless there is an algebraic obstruction (see \cite[Proposition 2]{BG1} and \cite[Lemma 15]{Var}). \begin{lem}\label{lemmaBGapproxsub} Let $\mu$ and $\nu$ be two probability measures on an arbitrary finite group $G$, and let $K$ be a real number greater than $2$. If $\|\mu\ast \nu\|_2> \frac{1}{K}\|\mu\|_2^{1/2}\|\nu\|_2^{1/2}$, then there is a symmetric subset $A\subseteq G$ with the following properties: \begin{enumerate} \item (Size of $A$ is comparable with $\|\mu\|_2^{-2}$) $K^{-R}\|\mu\|_2^{-2}\le |A|\le K^{R}\|\mu\|_2^{-2} $. \item (An approximate subgroup) $|A\cdot A\cdot A|\le K^R|A|$.
\item (Almost equidistribution on $A$) $\min_{a\in A} (\wt{\mu}\ast \mu)(a)\ge K^{-R}|A|^{-1}$, \end{enumerate} where $R$ is a universal constant and $\wt{\mu}(g):=\mu(g^{-1})$. \end{lem} One can use various forms of entropy to quantify how random a measure is. \begin{definition} Suppose $X$ is a random variable on a finite set $S$ and has distribution $\mu$; then the (Shannon) {\em entropy} of $X$ is \[ H(X):=\sum_{s\in S}-\log (\mathbb{P}(X=s)) \mathbb{P}(X=s), \] where $\mathbb{P}(X=s)$ is the probability of having $X=s$. The {\em R\'{e}nyi entropy} of $X$ is \[ H_2(X):=-\log \left(\sum_{s\in S} \mathbb{P}(X=s)^2\right)=-\log \|\mu\|_2^2. \] We let $H_{\infty}(X):=-\log (\max_{s\in {\rm supp}(X)} \mathbb{P}(X=s))$ and $H_0(X):=\log |{\rm supp}(X)|$, where ${\rm supp}(X)$ is the support of $X$. Suppose $Y$ is another random variable on $S$. Then {\em the entropy of $X$ conditioned on $Y$} is \begin{align} \notag H(X|Y):=&\sum_{y\in S} \mathbb{P}(Y=y) H(X|Y=y) \\ =& -\sum_{y\in S} \mathbb{P}(Y=y) \sum_{x\in S}\mathbb{P}(X=x|Y=y) \log\mathbb{P}(X=x|Y=y), \end{align} where $X|Y=y$ is the random variable $X$ conditioned on the random variable $Y$ taking the value $y$, and $\mathbb{P}(X = x|Y = y)$ is the probability of having $X = x$ conditioned on $Y = y$. The {\em R\'{e}nyi entropy of $X$ conditioned on $Y$} is \[ H_2(X|Y):=\sum_{y\in S} \mathbb{P}(Y=y) H_2(X|Y=y). \] \end{definition} Here are some of the basic properties of entropy that will be used in this note. \begin{lem}\label{lem:properties-entropy} Suppose $S$ is a finite set, and $X$ and $Y$ are random variables with values in $S$. Then \begin{enumerate} \item $H(X,Y)=H(X)+H(Y|X)$. \item $H(X)\ge H(X|Y)$. \item $H_0(X)\ge H(X)\ge H_2(X)\ge H_{\infty}(X)$. \item $H(X|f(Y))\ge H(X|Y)$ where $f$ is a function. \end{enumerate} \end{lem} \begin{proof} These are all well-known facts; for instance see \cite[Theorem 2.4.1, Theorem 2.5.1, Theorem 2.6.4, Lemma 2.10.1, Problem 2.1]{CT}. \end{proof} It is very intuitive to say that the product of two independent random variables with values in a group should be at least as random as the initial random variables. The next lemma says that this intuition is compatible with how various types of entropy measure the randomness of a distribution. \begin{lem}\label{lem:product-entropy-trivial-bound} Suppose $X$ and $Y$ are two independent random variables with values in a group $H$ and finite supports. Then $H_i(XY)\ge \max(H_i(X),H_i(Y))$ for $i\in \{0,1,2,\infty\}$ where $H_1(X):=H(X)$. \end{lem} \begin{proof} Notice that ${\rm supp}(XY)={\rm supp}(X){\rm supp}(Y)$; and so $H_0(XY)\ge \max(H_0(X),H_0(Y))$. For any $h\in H$, we have \[ \mathbb{P}(XY=h)=\sum_{x\in H} \mathbb{P}(X=x)\mathbb{P}(Y=x^{-1}h) \le \max_{y\in H} \mathbb{P}(Y=y); \] and so $H_{\infty}(XY)\ge H_{\infty}(Y)$. By symmetry we get the claim for $i=\infty$. Since $x\mapsto x^2$ is a convex function, we have \begin{align*} \mathbb{P}(XY=h)^2= & (\sum_{x\in H} \mathbb{P}(X=x)\mathbb{P}(Y=x^{-1}h))^2 \\ \le & \sum_{x\in H} \mathbb{P}(X=x)\mathbb{P}(Y=x^{-1}h)^2. \end{align*} Therefore $\sum_{h\in H}\mathbb{P}(XY=h)^2\le \sum_{h\in H} \sum_{x\in H} \mathbb{P}(X=x)\mathbb{P}(Y=x^{-1}h)^2= \sum_{y\in H} \mathbb{P}(Y=y)^2$, which implies the claim for $i=2$. By Lemma~\ref{lem:properties-entropy}, we have \[ H(XY)\ge H(XY|Y)=H(X|Y)=H(X); \] and the claim follows.
\end{proof} Lemma~\ref{lemmaBGapproxsub} says how much the R\'{e}nyi entropy of the product of two independent random variables increases unless there is an algebraic obstruction: if $X$ and $Y$ are two independent random variables with values in a group $G$, then we have \begin{equation}\label{eq:RenyEntropyIncrease} H_2(XY)\ge \frac{H_2(X)+H_2(Y)}{2}+\log K, \end{equation} unless there is a symmetric subset $A$ of $G$ such that \[ |\log |A|-H_2(X)|\le R\log K,\hspace{0.5cm} |A\cdot A\cdot A|\le K^R|A|, \] and for any $a\in A$ \[ \mathbb{P}(X'^{-1}X=a)\ge K^{-R} |A|^{-1}, \] where $X'$ is a random variable that is independent of $X$ and has the same distribution as $X$. Based on this result, one can prove a meaningful increase in the R\'{e}nyi entropy of the product of two independent random variables, one of which satisfies a {\em Diophantine type} condition, with values in a group that has a {\em product type property} (similar to the condition (V4)$_{\delta_0}$). \begin{definition} Suppose $G$ is a finite group and $X$ is a random variable with values in $G$. We say $X$ is of $(\alpha,\beta)${\em -Diophantine type} if for any proper subgroup $H$ of $G$ with $|H|\ge |G|^{\alpha}$ and for any $g\in G$, we have $\mathbb{P}(X\in gH)\le [G:H]^{-\beta}$. \end{definition} \begin{lem}\label{lem:GrowthOfRenyiEntropySingleScale} Suppose $G$ is a finite group and $X$ and $Y$ are two independent random variables with values in $G$. Suppose $G$ satisfies the following properties: \begin{enumerate} \item {\em (Quasi-randomness)} It is an $L^{-1}$-quasi-random group for some positive integer $L$. \item {\em (Product property)} For every positive number $\varepsilon$, there is a positive number $\delta_0:=\delta_0(\varepsilon)$ such that if $A$ is a generating set of $G$ and $|A|<|G|^{1-\varepsilon}$, then $|A\cdot A\cdot A|\ge |A|^{1+\delta_0}$. \end{enumerate} Suppose the random variable $X$ satisfies the following properties: \begin{enumerate} \item {\em (Diophantine condition)} For some $\alpha,\beta>0$, $X$ is of $(\alpha,\beta)$-Diophantine type. \item {\em (Initial entropy)} $\alpha' \log |G|\le H_2(X)$ for some $\alpha'>2\alpha$. \item {\em (Room for improvement)} $H_2(X)\le (1-\alpha'') \log |G|$ for some $\alpha''>0$. \end{enumerate} Then \[ H_2(XY)\ge \frac{H_2(X)+H_2(Y)}{2}+\gamma_0 \log |G|, \] where $\gamma_0$ is a positive constant that only depends on $\alpha',\alpha'',\beta$, and the function $\delta_0$. \end{lem} \begin{proof} Suppose $H_2(XY)<\frac{H_2(X)+H_2(Y)}{2}+\gamma \log |G|$ for some $\gamma>0$; then by Bourgain-Gamburd's result and the above discussion there is a symmetric subset $A$ of $G$ such that \begin{align} \label{eq:order-of-almost-subgroup} |\log |A|-H_2(X)| &\le R \gamma \log |G| & \text{(Controlling the order)} \\ \label{eq:almost-subgroup} |A\cdot A \cdot A|&\le |G|^{R\gamma} |A| & \text{(Almost subgroup)} \\ \label{eq:almost-equidistributed} \forall a\in A, \mathbb{P}(X'^{-1}X=a)&\ge |G|^{-R\gamma} |A|^{-1} &\text{(Almost equidistribution)} \end{align} where $X'$ is a random variable that is independent of $X$ and has the same distribution as $X$, and $R$ is an absolute positive constant. Let $H$ be the group generated by $A$.
Then by \eqref{eq:almost-equidistributed} \begin{equation}\label{eq:probability-of-hitting-H} \max_{g\in G}\mathbb{P}(X\in gH)\ge \mathbb{P}(X'^{-1}X\in H)\ge \mathbb{P}(X'^{-1}X\in A)\ge |G|^{-R\gamma} \end{equation} (notice that $\mathbb{P}(X'^{-1}X\in H)=\sum_{gH\in G/H}\mathbb{P}(X\in gH)^2\le \max_{g\in G}\mathbb{P}(X\in gH)$), and by the lower bound on the R\'{e}nyi entropy of $X$ \begin{equation}\label{eq:order-of-subgroup} |H|\ge |A|\ge |G|^{-R\gamma}e^{H_2(X)}\ge |G|^{\alpha'/2} \end{equation} for $\gamma\le \frac{\alpha'}{2R}$. Since $X$ is of $(\alpha,\beta)$-Diophantine type and $\alpha'>2\alpha$, by \eqref{eq:order-of-subgroup} and \eqref{eq:probability-of-hitting-H} we get \begin{equation}\label{eq:Diophantine-condition} [G:H]^{-\beta}\ge \max_{g\in G}\mathbb{P}(X\in gH)\ge |G|^{-R\gamma}. \end{equation} Since $G$ is an $L^{-1}$-quasi-random group, we have $[G:H]\ge |G|^{1/L}$ if $H$ is a proper subgroup; and so by \eqref{eq:probability-of-hitting-H} and \eqref{eq:Diophantine-condition} we get \[ \gamma \ge \frac{\beta}{2RL}. \] Therefore for $\gamma\le \frac{\beta}{4RL}$, we have $G=H$, which means $A$ is a generating set of $G$. By the upper bound on the R\'{e}nyi entropy of $X$ and \eqref{eq:order-of-almost-subgroup} we have \[ |A|\le |G|^{1-\alpha''}|G|^{R\gamma}\le |G|^{1-\frac{\alpha''}{2}} \] for $\gamma\le \frac{\alpha''}{2R}$. Hence by the product property of $G$ there is $\delta_0:=\delta_0(\alpha''/2)$ such that \begin{equation}\label{eq:product-property-implies} |A\cdot A\cdot A|\ge |A|^{1+\delta_0}. \end{equation} By \eqref{eq:almost-subgroup} and \eqref{eq:product-property-implies} we deduce that \[ |G|^{R\gamma}\ge |A|^{\delta_0}; \] together with \eqref{eq:order-of-almost-subgroup} and the lower bound on the R\'{e}nyi entropy of $X$ we get \[ |G|^{R\gamma}\ge |A|^{\delta_0}\ge (|G|^{-R\gamma}e^{H_2(X)})^{\delta_0} \ge |G|^{\alpha'\delta_0/2}. \] Hence we deduce that for $\gamma=\gamma_0:=\min(\frac{\alpha'\delta_0(\alpha''/2)}{4R}, \frac{\beta}{4RL})$, we have \[ H_2(XY)\ge \frac{H_2(X)+H_2(Y)}{2}+\gamma \log|G|; \] and the claim follows. \end{proof} Now suppose that $X_{j}:=(X_j^{(i)})_{i=1}^n$ are i.i.d. random variables with values in $G:=\bigoplus_{i=1}^n G_i$ and distribution $\mathcal{P}_A$. We notice that (see Lemma~\ref{lem:properties-entropy}) \[ \textstyle \log |\prod_l A|=H_0(X_{1}\cdots X_{l})\ge H(X_{1}\cdots X_{l}); \] and by the mentioned basic properties of entropy (see Lemma~\ref{lem:properties-entropy}) we have \begin{align*} H(X_{1}\cdots X_{l})= & \sum_{j=1}^n H(X_1^{(j)}\cdots X_l^{(j)}| X_1^{(1)}\cdots X_l^{(1)},\ldots,X_{1}^{(j-1)}\cdots X_{l}^{(j-1)})\\ \ge & \sum_{j=1}^n H(X_1^{(j)}\cdots X_l^{(j)}| X_i^{(k)}, 1\le i\le l, 1\le k\le j-1)\\ \ge & \sum_{j=1}^n H_2(X_1^{(j)}\cdots X_l^{(j)}| X_i^{(k)}, 1\le i\le l, 1\le k\le j-1). \end{align*} At this point we are almost in the setting of Bourgain-Gamburd's result, and we would like to apply Lemma~\ref{lem:GrowthOfRenyiEntropySingleScale}. By (V2)$_{L}$ and (V4)$_{\delta_0}$, $G_j$ satisfies the conditions of Lemma~\ref{lem:GrowthOfRenyiEntropySingleScale}; but the random variables $X_j^{(i)}$'s do not necessarily satisfy the required conditions. Here are the steps that we take to get the desired conditions: {\bf Step 1}. By a regularization argument, we find a subset $A$ of $S$ such that (a) for any $(g_1,\ldots,g_{j-1})\in \bigoplus_{k=1}^{j-1} G_k$ the conditional random variables $X_i^{(j)}|X_i^{(k)}=g_k, 1\le k\le j-1$ are uniformly distributed in their support.
(b) $H(X_i^{(j)}|X_i^{(k)}=g_k, 1\le k\le j-1)$ is the same for any $(g_1,\ldots,g_{j-1}) \in \pr_{[1..j-1]}(A)$ where $\pr_I:\bigoplus_{k=1}^n G_k\rightarrow \bigoplus_{k\in I} G_k$ is the projection map. (c) (Initial entropy) For any $(g_1,\ldots,g_{j-1}) \in \pr_{[1..j-1]}(A)$, either $H(X_i^{(j)}|X_i^{(k)}=g_k, 1\le k\le j-1)=0$ or $H(X_i^{(j)}|X_i^{(k)}=g_k, 1\le k\le j-1)\ge \alpha \log |G_j|$. (d) $\log |A|>\log |S|-2\alpha \log |G|$. This process (more or less) gives us the {\em initial entropy} condition. {\bf Step 2}. At this step, we focus on the {\em scales} where the entropy is already large and does not have much {\em room for improvement}. In the influential work~\cite{Gow} where Gowers defined quasi-random groups, he proved the following result (see \cite[Theorem 3.3]{Gow} and also \cite[Corollary 1]{NP}). \begin{thm}\label{thm:Gowers} Suppose $G$ is an $L^{-1}$-quasi-random group. Suppose $X_1,X_2,X_3$ are three independent random variables with values in $G$. If \[\frac{H_0(X_1)+H_0(X_2)+H_0(X_3)}{3}>(1-\frac{1}{3L})\log |G|,\] then $H_0(X_1X_2X_3)=\log |G|$. \end{thm} We apply Theorem~\ref{thm:Gowers} to the conditional random variables $X_i^{(j)}|X_i^{(k)}=g_i^{(k)}, 1\le k\le j-1$ for $(g_i^{(1)},\ldots,g_{i}^{(j-1)}) \in \pr_{[1..j-1]}(A)$ and at the {\em scales} where \begin{equation}\label{eq:large-entropy} H(X_i^{(j)}|X_i^{(k)}=g_i^{(k)}, 1\le k\le j-1)\ge (1-\frac{1}{3L})\log |G_j|, \end{equation} and deduce that $\bigoplus_{i\in I_{\rm l}}G_i=\pr_{I_{\rm l}}(A\cdot A\cdot A)$ where $I_{\rm l}$ consists of $j$'s such that \eqref{eq:large-entropy} holds. Next we let $I_{\rm s}:=[1..n]\setminus I_{\rm l}$; and define the following metric on $\bigoplus_{i\in I_{\rm s}}G_i$ \[ d(g,g'):=\sum_{i\in I_{\rm s}, \pr_i(g)\neq \pr_i(g')} \log |G_i|. \] Let $T:=\max \{d(g_{\rm s},1)|\hspace{1mm} g_{\rm s}\in \pr_{I_{\rm s}}(\prod_9 S\cap \{1\}\oplus \bigoplus_{i\in I_{\rm s}}G_i)\}$. Then one gets a $T$-almost group homomorphism $\psi:\bigoplus_{i\in I_{\rm l}}G_i\rightarrow \bigoplus_{i\in I_{\rm s}}G_i$. By a result of Farah~\cite{Far} on approximate homomorphisms, $\psi$ is close to a group homomorphism. Based on this and a certain Diophantine property of $S$, one can deduce that \[ \textstyle \exists (1,g_{\rm s})\in \prod_9 S\cap \{1\}\oplus \bigoplus_{i\in I_{\rm s}}G_i, d(g_{\rm s},1)\gg \varepsilon^2 \log |G|. \] Now considering $H:=C_G((1,g_{\rm s}))$ and using the assumed upper bound on $\mathcal{P}_S(gH)$, one gets a strong lower bound for $|\prod_{14} S|$ unless almost all the {\em scales} have {\em room for improvement}. {\bf Step 3}. At this step we focus on the {\em scales} where there is an {\em initial entropy} and {\em room for improvement} as required in Lemma~\ref{lem:GrowthOfRenyiEntropySingleScale}. The last condition that is needed is a {\em Diophantine type} condition. Varj\'{u} (essentially) proves the following result in order to deal with this issue. \begin{prop}\label{prop:generic-Diophantine-property} Suppose $L$ is a positive integer and $G$ is a finite group that satisfies properties (V1)$_L$-(V3)$_L$. Let $m$ be as in (V3)$_{L}$. Suppose $X_1,\ldots,X_{2^{m+1}}$ are independent random variables with values in $G$ and $H_{\infty}(X_i)\ge \alpha' \log |G|$ for some positive number $\alpha'$ and any index $i$. For $\overrightarrow{y}:=(y_1,\ldots,y_{2^{m+1}-1})\in \bigoplus_{i=1}^{2^{m+1}-1} G$, let $X_{\overrightarrow{y}}:=X_1 y_1 X_2y_2\cdots y_{2^{m+1}-1} X_{2^{m+1}}$. Suppose $Y_1,\ldots,Y_{2^{m+1}-1}$ are i.i.d.
random variables with values in $G$. Suppose $Y_i$'s are of $(\alpha,\beta)$-Diophantine type for some positive numbers $\alpha$ and $\beta$ such that $\beta\ge 4\alpha$; further, assume that for any $g\in G$ and $H\in \bigcup_{i=1}^{m} \mathcal{H}_i$, $\mathbb{P}(Y_1\in gH)\le [G:H]^{-\beta}$. Then assuming $|G|\gg_{\alpha',\beta,L} 1$, we have \[ \mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})=\overrightarrow{y} \text{ such that } X_{\overrightarrow{y}} \text{ is not of } (0,\beta'/2)\text{-Diophantine type}) \le |G|^{-\frac{\beta}{4L}} \] where $\beta':=\frac{1}{8^{m+1}}\min(\frac{\beta}{5L},\frac{\alpha'}{2})$. \end{prop} Using Proposition~\ref{prop:generic-Diophantine-property}, Lemma~\ref{lem:product-entropy-trivial-bound}, and Lemma~\ref{lem:GrowthOfRenyiEntropySingleScale}, one gets the following result. \begin{prop}\label{prop:VarjuMutation} Suppose $L$ is a positive integer and $\delta_0:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is a function. Suppose $G$ satisfies conditions (V1)$_{L}$-(V3)$_{L}$ and (V4)$_{\delta_0}$. Let $m$ be as in the condition (V3)$_{L}$. Suppose random variables $X_1,\ldots,X_{2^{m+1}+1}$ satisfy the following properties: \begin{enumerate} \item {\em (Initial entropy)} $\alpha' \log |G|\le H_{\infty}(X_i)$ for some $\alpha'>0$ and any index $i$. \item {\em (Room for improvement)} $H_2(X_i)\le (1-\alpha'')\log |G|$ for some $\alpha''>0$. \end{enumerate} Suppose the i.i.d. random variables $Y_1,\ldots,Y_{2^{m+1}-1}$ satisfy the following property: \begin{center} {\em (Diophantine condition)} For some $0\le \alpha<\min(\beta/4,\alpha'/2)$, $Y_1$ is of $(\alpha,\beta)$-Diophantine type; and for any $H\in \bigcup_{i=1}^m \mathcal{H}_i$ and $g\in G$, $\mathbb{P}(Y_1\in gH)\le [G:H]^{-\beta}$. \end{center} Then, assuming $|G|\gg_{\alpha',\alpha'',\beta,L,\delta_0}1$, we have \[ H_2(X_1Y_1X_2\cdots Y_{2^{m+1}-1}X_{2^{m+1}}X_{2^{m+1}+1}|Y_1,\ldots,Y_{2^{m+1}-1})\ge \min_i H_2(X_i)+ \gamma \log |G| \] where $\gamma$ is a positive constant that only depends on $\alpha',\alpha'', \beta, L$, and the function $\delta_0$. \end{prop} Finally Varj\'{u} finds a subset $B$ of $S$ such that, if $Y=(Y^{(1)},\ldots,Y^{(n)})$ is a random variable with distribution $\mathcal{P}_B$, then for {\em lots of} $i$'s $Y^{(i)}$ is of $(0,\varepsilon')$-Diophantine type where $\varepsilon'\gg_{\varepsilon,L} 1$ ($\varepsilon$ and $L$ are given in Proposition~\ref{propvarjuproduct}); overall one gets \[\textstyle \log |\prod_{2^{m+2}} S|-\log |S|\gg_{\varepsilon,L} \log|S|.\] One can finish the proof of Proposition~\ref{propvarjuproduct} using \cite[Lemma 2.2]{Hel1} which says \[\textstyle (k-2)(\log |\prod_3 S|-\log |S|)\ge \log|\prod_{k} S|-\log |S|\] for any integer $k\ge 3$. \subsection{Regularization and a needed inequality}\label{ss:regular-subset} Let $L$, $\delta_0$, and $\{G_i\}_{i=1}^{\infty}$ be as in the statement of Proposition~\ref{propvarjuproduct}. Since $G_i$'s are pairwise non-isomorphic, $\lim_{i\rightarrow \infty}|G_i|=\infty$. \begin{lem}\label{lem:required-inequality} Suppose $m:\mathbb{R}^+\rightarrow \mathbb{Z}^+$ is a function. If the claim of Proposition~\ref{propvarjuproduct} holds for $\varepsilon$, $\delta(\varepsilon)$, and the subfamily $\{G_i|\hspace{1mm} 1\le i, |G_i|> m(\varepsilon)\}$, then Proposition~\ref{propvarjuproduct} holds with $\delta(\varepsilon)/2$ in place of $\delta$ and with a possibly larger implied constant in the final claimed inequality.
\end{lem} Let us remark that $\delta$ also depends on $L$ and $\delta_0$; but we are assuming that those are fixed throughout this section. \begin{proof}[Proof of Lemma~\ref{lem:required-inequality}] Suppose $S$ is a symmetric subset of $G:=\bigoplus_{i=1}^n G_i$ such that \begin{equation}\label{eq:assumption} |S|<|G|^{1-\varepsilon} \text{ and } \mathcal{P}_S(gH)<[G:H]^{-\varepsilon}|G|^{\delta(\varepsilon)/2} \end{equation} for any subgroup $H$ of $G$ and $g\in G$. Let \[ N:=\bigoplus_{|G_i|\le m(\varepsilon), 1\le i\le n} G_i. \] Since $G_i$'s are pairwise non-isomorphic, $|N|<f(\varepsilon)$ for some function $f:\mathbb{R}^+\rightarrow \mathbb{Z}^+$. Let $\overline{S}:=\pi_N(S)$ where \[ \pi_N:G\rightarrow \overline{G}:=\bigoplus_{|G_i|> m(\varepsilon), 1\le i\le n} G_i \] is the natural projection. For any subgroup $\overline{H}$ of $\overline{G}$ and $\overline{g}\in \overline{G}$, by \eqref{eq:assumption} we have \[ \mathcal{P}_S((\overline{g},1)\overline{H}\oplus N)<[G:\overline{H}\oplus N]^{-\varepsilon}|G|^{\delta(\varepsilon)/2}; \] and so \[ \frac{1}{|N|}\mathcal{P}_{\overline{S}}(\overline{g}\overline{H})\le \frac{|\overline{S}|}{|S|}\mathcal{P}_{\overline{S}}(\overline{g}\overline{H}) < [\overline{G}:\overline{H}]^{-\varepsilon} |\overline{G}|^{\delta(\varepsilon)/2}|N|^{\delta(\varepsilon)/2}. \] This implies that \[ \mathcal{P}_{\overline{S}}(\overline{g}\overline{H})\le [\overline{G}:\overline{H}]^{-\varepsilon} |\overline{G}|^{\delta(\varepsilon)/2}f(\varepsilon)^{1+\delta(\varepsilon)/2}. \] If $|\overline{G}|>f(\varepsilon)^{\frac{1+\delta(\varepsilon)/2}{\delta(\varepsilon)/2}}$, then we get $ \mathcal{P}_{\overline{S}}(\overline{g}\overline{H})\le [\overline{G}:\overline{H}]^{-\varepsilon} |\overline{G}|^{\delta(\varepsilon)}. $ Therefore by our assumption \[ |\overline{S}|^{1+\delta(\varepsilon)} \le C(\varepsilon)|\overline{S}\cdot \overline{S}\cdot \overline{S}|. \] Hence \[ |S|^{1+\delta(\varepsilon)}\le f(\varepsilon)^{1+\delta(\varepsilon)} C(\varepsilon) |S\cdot S\cdot S|. \] If $|\overline{G}|<f(\varepsilon)^{\frac{1+\delta(\varepsilon)/2}{\delta(\varepsilon)/2}}$, then $|S|\le |G|<f(\varepsilon)^{1+\frac{1+\delta(\varepsilon)/2}{\delta(\varepsilon)/2}}$. Overall we get \[ |S|^{1+\delta(\varepsilon)}\le C'(\varepsilon) |S\cdot S\cdot S|, \] where $C'(\varepsilon):=\max\{f(\varepsilon)^{2+\frac{2+\delta(\varepsilon)}{\delta(\varepsilon)/2}}, f(\varepsilon)^{1+\delta(\varepsilon)} C(\varepsilon)\}$; and the claim follows. \end{proof} We show that for small enough $\varepsilon$ we can take \begin{equation}\label{eq:initial-delta} \delta(\varepsilon):=\min\{\varepsilon^5,1\}/(8L). \end{equation} For the given $\delta(\varepsilon)$ and a positive valued function $C''(\varepsilon)$, we let \[ m(\varepsilon):=\sup\{x\in \mathbb{R}^+|\hspace{1mm} C''(\varepsilon) \log x\ge x^{\delta(\varepsilon)^2}\}. \] By Lemma~\ref{lem:required-inequality}, we can and will assume that \begin{equation}\label{eq:required-inequality} C''(\varepsilon) \log |G_i|< |G_i|^{\delta(\varepsilon)^2} \end{equation} for any $i$. Throughout the proof of Proposition~\ref{propvarjuproduct} we will be assuming inequalities of the type given in \eqref{eq:required-inequality}. As discussed at the beginning of \cite[Section 3.2]{Var}, by passing to the groups $G_i/Z(G_i)$, using an argument similar to Lemma~\ref{lem:required-inequality} and based on an inequality of type \eqref{eq:required-inequality}, we can and will assume that the $G_i$'s are simple groups.
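For instance (this is only meant to indicate the order of magnitude of the parameters involved and is not used later), for $L=2$ and $\varepsilon=1/2$ we have \[ \delta(\varepsilon)=\frac{\min\{2^{-5},1\}}{8\cdot 2}=2^{-9}, \qquad \delta(\varepsilon)^2=2^{-18}, \] and \eqref{eq:required-inequality} asks that $C''(1/2)\log|G_i|<|G_i|^{2^{-18}}$; since $x^{2^{-18}}$ eventually dominates $C''(1/2)\log x$, this inequality can fail only for the finitely many indices $i$ with $|G_i|\le m(1/2)$, and by Lemma~\ref{lem:required-inequality} discarding those factors is harmless.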
For any non-empty subset $I$ of $[1..n]$, we let $G_I:=\bigoplus_{i\in I}G_i$; sometimes we view $G_I$ as a subgroup of $G_J$ when $I\subseteq J$. We let $G_{\varnothing}=\{1\}$. For any $I\subseteq J\subseteq [1..n]$, we let $\pr_I:G_J\rightarrow G_I$ be the natural projection map. \begin{definition}\label{def:regular} A subset $A$ of $\bigoplus_{i=1}^n G_i$ is called $(m_0,\ldots,m_{n-1})$-regular if for any $0\le k< n$ and $\overline{x}\in \pr_{[1..k]}(A)$ we have \[ |\{x\in \pr_{[1..{k+1}]}(A)| \pr_{[1..k]}(x)=\overline{x}\}|=m_k. \] \end{definition} For a random variable $X$ with values in $\bigoplus_{i=1}^n G_i$, we write $X=(X_1,\ldots,X_n)$ and get random variables $X_i$ with values in $G_i$. \begin{lem}\label{lem:random-variables-regular-sets} Suppose $A\subseteq \bigoplus_{i=1}^n G_i$ is an $(m_1,\ldots,m_n)$-regular subset. Let $X$ be a random variable with respect to the probability counting measure on $A$. Then \begin{enumerate} \item $\pr_{[1..k]}(X)$ is a random variable with respect to the probability counting measure on $\pr_{[1..k]}(A)$. \item For any $(a_1,\ldots,a_n)\in A$, the conditional probability measure \[\mathbb{P}(X_k|X_1=a_1,\ldots, X_{k-1}=a_{k-1})\] is a probability counting measure on a set of size $m_k$. \end{enumerate} \end{lem} \begin{proof} Both of the above claims are easy consequences of the fact that $A$ is a regular set (see \cite[Lemma 22]{SG:sum-product}). \end{proof} The filtration $\{1\}=G_{\varnothing} \subseteq G_{\{1\}}\subseteq \cdots \subseteq G_{[1..i]} \subseteq \cdots \subseteq G_{[1..n]}$ gives us a rooted tree structure, where the vertices at the level $i$ are the elements of $G_{[1..i]}$; and the {\em children} of $(a_1,\ldots,a_i)$ are elements of $(a_1,\ldots,a_i)\oplus G_{i+1}$. To a non-empty subset $A$ of $G_{[1..n]}$, we associate the rooted subtree consisting of paths from the root to the elements of $A$. So a subset $A$ is $(m_0,\ldots,m_{n-1})$-regular precisely when every vertex at level $i$ of the associated rooted tree of $A$ has exactly $m_i$ children. As discussed in \cite[Section 3.2]{Var}, by \cite[Lemma 5.2]{BGS} and inequality~\eqref{eq:required-inequality} (see also \cite[A.3]{BG3} and \cite[Section 2.2]{SG:SAI}), we get that there is a $(D_0,\ldots,D_{n-1})$-regular subset $A$ of $S$ such that the following holds. \begin{enumerate} \item For any $i$, either $D_i>|G_i|^{\delta}$ or $D_i=1$. \item $|A|>(\prod_{i=1}^n|G_i|)^{-2\delta} |S|$. \end{enumerate} \subsection{Scales with no room for improvement} This section is identical to \cite[Section 3.4]{Var}. The change in the assumptions has no effect on this part of the proof. We have decided to include the proofs for the convenience of the reader. Let $I_{\rm l}:=\{i\in [0..n-1]|\hspace{1mm} D_i\ge |G_i|^{1-1/(3L)}\}$, and $I_{\rm s}:=[0..n-1]\setminus I_{\rm l}$. Suppose $X=(X_1,\ldots,X_n)$ is the random variable with respect to the probability counting measure on $A$. \begin{lem}\label{lem:scales-with-no-room-for-improvement} In the above setting, $\pr_{I_{\rm l}}(A\cdot A\cdot A)=G_{I_{\rm l}}$. \end{lem} \begin{proof} (See the beginning of Section 3.4 in \cite{Var}) Let's recall that $A$ is a $(D_0,\ldots,D_{n-1})$-regular set. Suppose $I$ is a subset of $[0..n-1]$ such that for any $i\in I$, $D_i>|G_i|^{1-1/(3L)}$. By induction on $|I|$, we prove that $\pr_I(A\cdot A\cdot A)=G_I$. The base of the induction follows from Theorem~\ref{thm:Gowers}. Suppose $I=\{i_1,\ldots,i_{m+1}\}$.
By the induction hypothesis, \[ \pr_{\{i_1,\ldots,i_m\}}(A\cdot A\cdot A)=G_{I\setminus\{i_{m+1}\}}. \] So for any $(g_{i_1},\ldots,g_{i_m})\in G_{I\setminus\{i_{m+1}\}}$, there are $a_1,a_2,a_3\in A$ such that \begin{equation}\label{eq:first-m-components} \pr_{I\setminus\{i_{m+1}\}}(a_1a_2a_3)=(g_{i_1},\ldots,g_{i_m}). \end{equation} Let \[ A(a_j):=\{a\in A|\hspace{1mm} \pr_{[1..i_{m+1}-1]}(a)=\pr_{[1..i_{m+1}-1]}(a_j)\}. \] Then $|\pr_{i_{m+1}}(A(a_j))|=D_{i_{m+1}}>|G_{i_{m+1}}|^{1-1/(3L)}$; and so by Theorem~\ref{thm:Gowers}, we have \begin{equation}\label{eq:last-component} \pr_{i_{m+1}}(A(a_1)A(a_2)A(a_3))=\pr_{i_{m+1}}(A(a_1))\pr_{i_{m+1}}(A(a_2))\pr_{i_{m+1}}(A(a_3))=G_{i_{m+1}}. \end{equation} By \eqref{eq:first-m-components} and \eqref{eq:last-component}, we have that \[ \pr_I(A\cdot A\cdot A)=G_I; \] and the claim follows. \end{proof} Let $I_{\rm s}:=[0..n-1]\setminus I_{\rm l}$, and for $g,g'\in G_{I_{\rm s}}$, let \[ d(g,g'):=\sum_{i\in I_{\rm s}, \pr_i(g)\neq \pr_i(g')} \log |G_i|. \] It is easy to see that $d(\cdot,\cdot)$ defines a metric on $G_{I_{\rm s}}$. Let \[ \textstyle T:=\max \{d(g_{\rm s},1)|\hspace{1mm} g_{\rm s}\in \bigcup_{i=1}^3 \pr_{I_{\rm s}}((\prod_{3i} S) \cap (\{1\}\oplus G_{I_{\rm s}}))\}; \] here we rearrange the components of $G_{[1..n]}$ and identify it with $G_{I_{\rm l}}\oplus G_{I_{\rm s}}$. For any $g_{\rm l}\in G_{I_{\rm l}}$, let $\psi(g_{\rm l})\in G_{I_{\rm s}}$ be such that \[ (g_{\rm l},\psi(g_{\rm l}))\in A\cdot A\cdot A; \] notice that by Lemma~\ref{lem:scales-with-no-room-for-improvement} there is such a $\psi(g_{\rm l})$. \begin{lem}\label{lem:approximate-hom} In the above setting $\psi:G_{I_{\rm l}}\rightarrow G_{I_{\rm s}}$ is a $T$-approximate homomorphism; that means for any $g,g'\in G_{I_{\rm l}}$ we have \[ d(\psi(gg'),\psi(g)\psi(g'))\le T \text{ and } d(\psi(g^{-1}),\psi(g)^{-1})\le T. \] \end{lem} \begin{proof} For $g,g'\in G_{I_{\rm l}}$, we have $(g,\psi(g)),(g',\psi(g')),(gg',\psi(gg'))\in A\cdot A\cdot A$; and so \[ \textstyle (1,\psi(gg')\psi(g')^{-1}\psi(g)^{-1})\in \prod_9 S \cap (\{1\}\oplus G_{I_{\rm s}}) \text{ and } (1,\psi(g^{-1})\psi(g)) \in \prod_6 S \cap (\{1\}\oplus G_{I_{\rm s}}); \] and the claim follows as $d(\cdot,\cdot)$ is $G_{I_{\rm s}}$-bi-invariant. \end{proof} By \cite[Theorem 2.1]{Far}, there is a group homomorphism $\wt{\psi}:G_{I_{\rm l}}\rightarrow G_{I_{\rm s}}$ such that for any $g\in G_{I_{\rm l}}$ \begin{equation}\label{eq:close-to-hom} d(\psi(g),\wt{\psi}(g))\le 24 T. \end{equation} \begin{lem}\label{lem:one-element-in-subgroup} In the above setting, let $H$ be the graph of $\wt{\psi}$; then for any $g\in S$, there is $I_{\rm s}(g)\subseteq I_{\rm s}$ such that the following holds: \begin{enumerate} \item $g\in H G_{I_{\rm s}(g)}$ where $G_{I_{\rm s}(g)}$ is viewed as a subgroup of $G_{[1..n]}$. \item $|G_{I_{\rm s}(g)}| \le 2^{25 T}$. \end{enumerate} \end{lem} \begin{proof} Suppose $g=(g_{\rm l},g_{\rm s})$ for some $g_{\rm l}\in G_{I_{\rm l}}$ and $g_{\rm s}\in G_{I_{\rm s}}$; then $d(\psi(g_{\rm l}),g_{\rm s})\le T$. By \eqref{eq:close-to-hom}, we have $d(\wt{\psi}(g_{\rm l}),\psi(g_{\rm l}))\le 24 T$; and so \begin{equation}\label{eq:distance-from-H} d(g_{\rm s},\wt{\psi}(g_{\rm l}))\le 25 T. \end{equation} Let $h:=(g_{\rm l},\wt{\psi}(g_{\rm l}))\in H$; and consider $h^{-1}g=(1,\wt{\psi}(g_{\rm l})^{-1} g_{\rm s})$. Let \[ I_{\rm s}(g):=\{j\in I_{\rm s}|\hspace{1mm} \pr_j(g_{\rm s})\neq \pr_j(\wt\psi(g_{\rm l}))\}; \] and so $h^{-1}g\in G_{I_{\rm s}(g)}$.
By \eqref{eq:distance-from-H}, we have \[ \sum_{j\in I_{\rm s}(g)} \log |G_j|\le 25 T, \] which implies that $|G_{I_{\rm s}(g)}|\le 2^{25T}$; and the claim follows. \end{proof} \begin{lem}\label{lem:finiding-small-height-element-with-large-centralizer} In the above setting, under the assumptions that $\delta\ll \varepsilon^2\ll 1$ (as in \eqref{eq:initial-delta}) and an inequality of type \eqref{eq:required-inequality} hold, either $|\prod_3 S|>|G_{[1..n]}|^{1-\varepsilon+\delta}$ or $T\gg \varepsilon^2 \log |G_{[1..n]}|$; that means either $|\prod_3 S|>|G_{[1..n]}|^{1-\varepsilon+\delta}$ or there is \[ \textstyle (1,g_{\rm s})\in \bigcup_{i=1}^3 (\prod_{3i} S) \cap (\{1\}\oplus G_{I_{\rm s}}) \] such that $d(g_{\rm s},1)\gg \varepsilon^2 \log |G_{[1..n]}|$. \end{lem} This can be interpreted as the existence of an element with {\em small height} and {\em large centralizer}; it has some conceptual similarities with \cite[Proposition 57]{SG:SAI}. \begin{proof}[Proof of Lemma~\ref{lem:finiding-small-height-element-with-large-centralizer}] By Lemma~\ref{lem:one-element-in-subgroup}, we have \[ S\subseteq \bigcup_{I'\subseteq I_{\rm s}, |G_{I'}|\le 2^{25T}} H G_{I'}. \] Therefore we have \begin{align} \notag 1=&\mathcal{P}_{S}(\bigcup_{I'\subseteq I_{\rm s}, |G_{I'}|\le 2^{25T}} H G_{I'}) \le \sum_{I'\subseteq I_{\rm s}, |G_{I'}|\le 2^{25T}} \mathcal{P}_S(H G_{I'}) \\ \notag \le & 2^{|I_{\rm s}|} 2^{25T} [G_{[1..n]}:H]^{-\varepsilon}|G_{[1..n]}|^{\delta} = 2^{|I_{\rm s}|} 2^{25T} |G_{I_{\rm s}}|^{-\varepsilon} |G_{[1..n]}|^{\delta} \\ \label{eq:lower-bound-union-subgroups} \le & 2^{25 T} |G_{I_{\rm s}}|^{-\varepsilon/2} |G_{[1..n]}|^{\delta}. & (2^{|I_{\rm s}|}\le |G_{I_{\rm s}}|^{\varepsilon/2} \text{ by inequality~\eqref{eq:required-inequality}}) \end{align} Let's assume that $|\prod_3 S|\le |G_{[1..n]}|^{1-\varepsilon+\delta}$; then \[ \textstyle |G_{I_{\rm l}}|\le |\prod_3 S|\le |G_{[1..n]}|^{1-\varepsilon+\delta}, \] which implies \begin{equation}\label{eq:scales-with-small-entropy-are-large} |G_{[1..n]}|^{\varepsilon/2}\le |G_{[1..n]}|^{\varepsilon-\delta}\le |G_{I_{\rm s}}|. \end{equation} By \eqref{eq:lower-bound-union-subgroups} and \eqref{eq:scales-with-small-entropy-are-large}, we have \[ 2^{25T}\ge |G_{[1..n]}|^{(\varepsilon/2)^2-\delta} \ge |G_{[1..n]}|^{\varepsilon^2/8}; \] and the claim follows. \end{proof} \begin{lem}\label{lem:lower-bound-number-of-conjugates} In the above setting, for any $g\in G_{[1..n]}$, we have \[ |\{sgs^{-1}|\hspace{1mm} s\in S\}|\ge |\Cl(g)|^{\varepsilon}|G_{[1..n]}|^{-\delta}, \] where $\Cl(g)$ is the conjugacy class of $g$ in $G_{[1..n]}$. \end{lem} \begin{proof} Because of the bijection between conjugates of $g$ and cosets of the centralizer $C_{G_{[1..n]}}(g)$ of $g$ in $G_{[1..n]}$, we have that \[ |\{sgs^{-1}|\hspace{1mm} s\in S\}|=|\underbrace{\{sC_{G_{[1..n]}}(g)|\hspace{1mm} s\in S\}}_{\mathcal{C}(g;S)}|. \] On the other hand, \[ 1=\mathcal{P}_S(\bigcup_{s\in S} sC_{G_{[1..n]}}(g))=\mathcal{P}_S(\bigcup_{\overline{s}\in \mathcal{C}(g;S)}\overline{s}) \le \sum_{\overline{s}\in \mathcal{C}(g;S)} \mathcal{P}_S(\overline{s}); \] and by our assumption \[ \mathcal{P}_S(\overline{s})\le [G_{[1..n]}:C_{G_{[1..n]}}(g)]^{-\varepsilon}|G_{[1..n]}|^{\delta}=|\Cl(g)|^{-\varepsilon} |G_{[1..n]}|^{\delta}, \] for any $\overline{s}\in \mathcal{C}(g;S)$. Hence we have \[ |\Cl(g)|^{\varepsilon} |G_{[1..n]}|^{-\delta} \le |\mathcal{C}(g;S)|; \] and the claim follows.
\end{proof} \begin{prop}\label{proplargeindices} In the above setting, either $|\prod_3 S|>|G_{[1..n]}|^{1-\varepsilon+\delta}$ or \[ \textstyle |\prod_{14} S|\ge |G_{[1..n]}|^{\Theta_L(\varepsilon^3)}|G_{I_{\rm l}}|. \] \end{prop} \begin{proof} If $|\prod_3 S|\le|G_{[1..n]}|^{1-\varepsilon+\delta}$, then by Lemma~\ref{lem:finiding-small-height-element-with-large-centralizer} there is \[ \textstyle (1,g_{\rm s})\in \bigcup_{i=1}^3 (\prod_{3i} S) \cap (\{1\}\oplus G_{I_{\rm s}}) \] such that $d(g_{\rm s},1)\gg \varepsilon^2 \log |G_{[1..n]}|$. Notice that \begin{equation}\label{eq:conjugacy-class} |\Cl(1,g_{\rm s})|=[G_{[1..n]}:C_{G_{[1..n]}}(1,g_{\rm s})]=\prod_{i\in I_{\rm s}}[G_i:C_{G_i}(\pr_i g_{\rm s})]\ge 2^{d(g_{\rm s},1)/L}\ge |G_{[1..n]}|^{\Theta_L(\varepsilon^2)}. \end{equation} Let us recall that there is a function $\psi:G_{I_{\rm l}}\rightarrow G_{I_{\rm s}}$ such that the graph $H_{\psi}$ of $\psi$ is a subset of $\prod_3 S$. Since $\Cl(1,g_{\rm s})\subseteq \{1\}\oplus G_{I_{\rm s}}$, we have \begin{equation}\label{eq:growth-after-conjugation} \textstyle |\Cl(1,g_{\rm s})H_{\psi}|=|\Cl(1,g_{\rm s})||G_{I_{\rm l}}|. \end{equation} By \eqref{eq:conjugacy-class}, \eqref{eq:growth-after-conjugation}, and Lemma~\ref{lem:lower-bound-number-of-conjugates}, we have \[ \textstyle |\prod_{14} S|\ge |G_{[1..n]}|^{\Theta_L(\varepsilon^3)}|G_{[1..n]}|^{-\delta} |G_{I_{\rm l}}| \ge |G_{[1..n]}|^{\Theta_L(\varepsilon^3)}|G_{I_{\rm l}}|; \] and the claim follows. \end{proof} \subsection{Combining the Diophantine property of a distribution with the entropy of another one} The main goal of this section is to prove Proposition~\ref{prop:generic-Diophantine-property}. So this section is all about a single scale. Roughly speaking, we start with two distributions on a finite group that satisfies (V1)$_{L}$-(V3)$_{L}$; we assume one of them has a certain Diophantine property and the other one has entropy proportional to the entropy of the uniform distribution. We will show that most of a certain family of convolutions of these distributions have both of these properties at the same time. In this section, $G$ is a finite group that satisfies (V1)$_{L}$-(V3)$_{L}$, and $m\le L$ and $m'\le L \log |G|$ are positive integers given in (V3)$_{L}$. The next lemma says that if a distribution $\nu$ has a Diophantine type property with respect to subgroups of a given complexity, then not many subgroups of the next level of complexity can fail a Diophantine type property of a similar order. We notice that, because of (V3)$_{L}$-(v), the extra families $\{\mathcal{H}_i'\}_{i=1}^{m'}$ of subgroups do not cause any problem and Varj\'{u}'s argument works in our setting as well. \begin{lem}\label{lem:exceptional-subgroups} Suppose $G$ is a finite group that satisfies (V3)$_{L}$ and $m\le L$ is a positive integer given in (V3)$_{L}$. Suppose $\nu$ is a probability measure on $G$, $1\le k\le m$ is an integer, and $0<p,p'<1$ are real numbers with the following properties. \begin{enumerate} \item For any $H\in \bigcup_{i=1}^k \mathcal{H}_i$ and for any $g\in G$, $\nu(gH)<p$. \item $p'>\sqrt{2Lp}$. \end{enumerate} If $k<m$, let \[ E_{k+1}(\nu;p,p'):=\{H\in \mathcal{H}_{k+1}|\hspace{1mm} \wt{\nu}\ast \nu(H)>p'\}. \] If $k=m$, for any $i\in [1..m']$, let \[ E'_i(\nu;p,p'):=\{H\in \mathcal{H}'_i|\hspace{1mm} \wt\nu\ast \nu(H)>p'\}. \] Then $|E_{k+1}(\nu;p,p')|$ and $|E'_i(\nu;p,p')|$ are less than $\sqrt{\frac{2}{Lpp'}}$. \end{lem} \begin{proof} (See \cite[Towards the end of proof of Lemma 18]{Var}) First we consider the case $k<m$.
For two distinct elements $H,H'\in E_{k+1}(\nu;p,p')$, there is $H^\sharp\in \mathcal{H}_j$ for some $j\le k$ such that $[H\cap H':H^\sharp\cap H\cap H']\le L$. Hence $\nu(g(H\cap H'))\le Lp$, which implies \begin{equation}\label{eq:volume-intersection} \wt\nu\ast \nu(H\cap H')\le Lp. \end{equation} For any $1\le l\le |E_{k+1}(\nu;p,p')|$, suppose $H_1,\ldots,H_l$ are distinct elements of $E_{k+1}(\nu;p,p')$; then \begin{equation}\label{eq:inclusion-exclusion} 1\ge \wt\nu\ast \nu(\bigcup_{i=1}^l H_i) \ge \sum_{i=1}^l \wt\nu\ast \nu(H_i)- \sum_{1\le i<j\le l} \wt\nu\ast \nu(H_i\cap H_j) \ge lp'- {l\choose 2} Lp. \end{equation} Let $f(x):=-\frac{Lp}{2}x^2+(p'+\frac{Lp}{2}) x-1$; then by \eqref{eq:inclusion-exclusion} for any $l\in [1..|E_{k+1}(\nu;p,p')|]$, $f(l)\le 0$. We notice that \begin{align*} f\left(\frac{2}{p'+Lp/2}\right)= & -\frac{Lp}{2}\left(\frac{2}{p'+Lp/2}\right)^2+(p'+Lp/2) \left(\frac{2}{p'+Lp/2}\right)-1 \\ =& -\frac{Lp}{2}\left(\frac{2}{p'+Lp/2}\right)^2+1. \end{align*} Since $p'>\sqrt{2Lp}$, we have $p'+Lp/2> \sqrt{2Lp}$, which implies $\frac{p'+Lp/2}{2}> \sqrt{\frac{Lp}{2}}$. Hence \[ f\left(\frac{2}{p'+Lp/2}\right)>0. \] By the concavity of $f$, the facts that $f(-\infty)=-\infty$ and $1\le 2/(p'+Lp/2)$, and the above discussion, we deduce that \[ |E_{k+1}(\nu;p,p')|< \frac{2}{p'+Lp/2}\le \sqrt{\frac{2}{Lpp'}}; \] and the claim follows in this case. For the case of $k=m$, we notice that for any two distinct elements $H,H'\in \mathcal{H}'_i$, there is $H^\sharp\in \mathcal{H}_j$ for some $j\le m$ such that $[H\cap H':H^\sharp\cap H\cap H']\le L$. So an identical argument as in the previous case works here as well. \end{proof} Let us first recall the setting of Proposition~\ref{prop:generic-Diophantine-property}; we will be working in this setting for the rest of this section. \begin{enumerate} \item $G$ is a finite group that satisfies (V2)$_L$, (V3)$_{L}$ and $|G|\gg_{\alpha',\beta,L} 1$ (the implied constant will be specified later); \item $X_1,\ldots,X_{2^{m+1}}$ are independent random variables such that $H_{\infty}(X_i)\ge \alpha' \log |G|$; \item for any $l$-tuple $\overrightarrow{y}:=(y_1,\ldots,y_l)$, $X_{\overrightarrow{y}}:=X_1y_1X_2 \cdots y_{l}X_{l+1}$; in addition we let \[X'_{\overrightarrow{y}}:=X_{l+2}y_1X_{l+3}\cdots y_{l}X_{2l+2};\] \item $Y_1,\ldots,Y_{2^{m+1}-1}$ are i.i.d. random variables with values in $G$ such that for any $H$ in $\bigcup_{i=1}^m \mathcal{H}_i$ and $g\in G$, $\mathbb{P}(Y_1\in gH)\le [G:H]^{-\beta}$; and moreover for any subgroup $H$ with order at least $|G|^{\alpha}$ and any $g\in G$, we have the same inequality; that means $\mathbb{P}(Y_1\in gH)\le [G:H]^{-\beta}$. \item $\beta\ge 4\alpha$; in fact it is enough to assume $(1-\frac{1}{L})\beta\ge \alpha$. \end{enumerate} \begin{lem}\label{lem:inductive-set-up-structural} In the above setting, let $\beta_0:=\min(\frac{\beta}{5L},\frac{\alpha'}{2})$, $\beta_k:=\frac{\beta_0}{8^k}$ and $p_k:=(2^k-1)|G|^{-\beta/(2L)}$; then for any $1\le k\le m$ \[ \textstyle \mathbb{P}((Y_1,\ldots,Y_{2^k-1})=\overrightarrow{y} \text{ such that } \exists H\in \bigcup_{j=0}^k \mathcal{H}_j, \exists g\in G, \mathbb{P}(X_{\overrightarrow{y}}\in gH)\ge |G|^{-\beta_k})\le p_k. \] \end{lem} \begin{proof} (See \cite[Proof of Lemma 18]{Var}) We proceed by induction on $k$. In order to deal with the base of the induction in the same vein as the induction step, we consider the case of $k=0$ as well; in the sense that we show why we have $\mathbb{P}(X_1\in gZ(G))<|G|^{-\beta_0}$ for any $g\in G$.
Since $H_{\infty}(X_1)\ge \alpha'\log|G|$ and $|Z(G)|\le L$, we have $\mathbb{P}(X_1\in gZ(G))<L |G|^{-\alpha'}\le |G|^{-\alpha'/2}$ (the second inequality holds as $|G|\gg_{\alpha',L} 1$); and this implies the case of $k=0$. Next we focus on the induction step; let \begin{equation}\label{eq:def-exceptional-left} \mathcal{E}_k:=\{\overrightarrow{y} \in \bigoplus_{i=1}^{2^k-1} G|\hspace{1mm} \exists H\in \bigcup_{i=0}^k \mathcal{H}_i, \exists g\in G, \mathbb{P}(X_{\overrightarrow{y}} \in gH)\ge |G|^{-\beta_k}\}, \end{equation} and \begin{equation}\label{eq:def-exceptional-right} \mathcal{E}'_k:=\{\overrightarrow{y} \in \bigoplus_{i=1}^{2^k-1} G|\hspace{1mm} \exists H\in \bigcup_{i=0}^k \mathcal{H}_i, \exists g\in G, \mathbb{P}(X'_{\overrightarrow{y}} \in gH)\ge |G|^{-\beta_k}\}. \end{equation} By the induction hypothesis, we have that these are {\em exceptional sets}: \begin{equation}\label{eq:induction-hypo-exceptional-sets} \mathbb{P}((Y_1,\ldots,Y_{2^k-1})\in \mathcal{E}_k)\le p_k \text{ and } \mathbb{P}((Y_{2^k+1},\ldots,Y_{2^{k+1}-1})\in \mathcal{E}'_k)\le p_k. \end{equation} Suppose $\overrightarrow{y}:=(\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}_{k+1}$ where $\overrightarrow{y}_{\rm l}$ and $\overrightarrow{y}_{\rm r}$ are the left and the right $2^k-1$ components of $\overrightarrow{y}$, respectively, and $y\in G$. Then $X_{\overrightarrow{y}}=X_{\overrightarrow{y}_{\rm l}}yX'_{\overrightarrow{y}_{\rm r}}$ and there are $H\in \bigcup_{i=0}^{k+1}\mathcal{H}_i$ and $g\in G$ such that \begin{equation}\label{eq:being-exceptional} |G|^{-\beta_{k+1}}\le \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}yX'_{\overrightarrow{y}_{\rm r}}\in gH) =\sum_{j=1}^{[G:H]}\mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)\mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H), \end{equation} where $\{g_j\}_{j=1}^{[G:H]}$ is a set of right coset representatives of $H$. Let \[ I_{\rm l}:=\{j\in [1..[G:H]]| \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)\le \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H)\} \] and \[ I_{\rm r}:=\{j\in [1..[G:H]]| \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)> \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H)\}. \] Let $q_{\rm l}:=\max_{j\in I_{\rm l}} \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)$ and $q_{\rm r}:=\max_{j\in I_{\rm r}} \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H)$; then \begin{align*} \sum_{j\in I_{\rm l}}\mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)\mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H)& \le q_{\rm l}, \text{ and} \\ \sum_{j\in I_{\rm r}}\mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_j)\mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_j^{-1}H)& \le q_{\rm r}. \end{align*} Therefore by \eqref{eq:being-exceptional}, we have \[ \frac{1}{2}|G|^{-\beta_{k+1}}\le \max(q_{\rm l},q_{\rm r}), \] which implies that there is $j_0$ such that \begin{equation}\label{eq:cosets-with-large-prob} \frac{1}{2}|G|^{-\beta_{k+1}}\le \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in gHg_{j_0}) \text{ and } \frac{1}{2}|G|^{-\beta_{k+1}} \le \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_{j_0}^{-1}H). \end{equation} For a random variable $U$ with values in $G$, let $\wt{U}$ be a random variable independent of $U$ and with the same distribution as $U^{-1}$. 
Then by \eqref{eq:cosets-with-large-prob} we have \begin{equation}\label{eq:subgroups-large-prob} \mathbb{P}(\widetilde{X}_{\overrightarrow{y}_{\rm l}}X_{\overrightarrow{y}_{\rm l}}\in g_{j_0}^{-1}Hg_{j_0})\ge \frac{1}{4}|G|^{-2\beta_{k+1}} \text{ and } \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\widetilde{X}'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g_{j_0}^{-1}Hg_{j_0}y)\ge \frac{1}{4}|G|^{-2\beta_{k+1}}. \end{equation} If $\overrightarrow{y}_{\rm l}\not\in \mathcal{E}_k$, then $\mathbb{P}(X_{\overrightarrow{y}_{\rm l}}\in \overline{g}\overline{H})<|G|^{-\beta_k}$ for any $\overline{H}\in \bigcup_{i=0}^k \mathcal{H}_i$ and any $\overline{g}\in G$. This implies that \begin{equation}\label{eq:exceptional-subgroup-left} g_{j_0}^{-1}Hg_{j_0} \in E_{k+1}(\lambda_{\overrightarrow{y}_{\rm l}}; |G|^{-\beta_k}, \frac{1}{4}|G|^{-2\beta_{k+1}}), \end{equation} where $\lambda_{\overrightarrow{y}_{\rm l}}$ is the distribution of the random variable $X_{\overrightarrow{y}_{\rm l}}$ and $E_{k+1}$ is the set defined in Lemma~\ref{lem:exceptional-subgroups}. By a similar argument, if $\overrightarrow{y}_{\rm r}\not\in \mathcal{E}'_k$, then \begin{equation}\label{eq:exceptional-subgroup-right} y^{-1}g_{j_0}^{-1}Hg_{j_0}y \in E_{k+1}(\wt{\lambda'}_{\overrightarrow{y}_{\rm r}}; |G|^{-\beta_k}, \frac{1}{4}|G|^{-2\beta_{k+1}}), \end{equation} where $\wt{\lambda'}_{\overrightarrow{y}_{\rm r}}$ is the distribution of the random variable $\widetilde{X}'_{\overrightarrow{y}_{\rm r}}$. So far, by \eqref{eq:exceptional-subgroup-left}, \eqref{eq:exceptional-subgroup-right}, and \eqref{eq:induction-hypo-exceptional-sets}, we have \begin{align} \notag \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}_{k+1})\le & \mathbb{P}(\overrightarrow{y}_{\rm l}\in \mathcal{E}_k)+\mathbb{P}(\overrightarrow{y}_{\rm r}\in \mathcal{E}'_k)+ \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}_{k+1}, \overrightarrow{y}_{\rm l}\not\in \mathcal{E}_k, \overrightarrow{y}_{\rm r}\not\in \mathcal{E}'_{k}) \\ \notag \le & 2p_k+\sum_{\overrightarrow{y}_{\rm l}\not\in \mathcal{E}_k, \overrightarrow{y}_{\rm r}\not\in\mathcal{E}'_k} \mathbb{P}((Y_1,\ldots,Y_{2^k-1})=\overrightarrow{y}_{\rm l})\mathbb{P}((Y_{2^k+1},\ldots,Y_{2^{k+1}-1})=\overrightarrow{y}_{\rm r}) \\ \label{eq:initial-upper-bound-prob} & \left(\sum_{H_1,H_2}\mathbb{P}(Y_{2^k}^{-1}H_1Y_{2^k}=H_2)\right) \end{align} where $H_1$ ranges in $E_{k+1}(\lambda_{\overrightarrow{y}_{\rm l}};|G|^{-\beta_k},\frac{1}{4}|G|^{-2\beta_{k+1}})$ and $H_2$ ranges in $E_{k+1}(\wt{\lambda'}_{\overrightarrow{y}_{\rm r}}; |G|^{-\beta_k}, \frac{1}{4}|G|^{-2\beta_{k+1}})$. For given $H_1$ and $H_2$ that are conjugate to each other and are in $\mathcal{H}_i$ for some $i$, there is $g'\in G$ such that \[ \mathbb{P}(Y_{2^k}^{-1} H_1 Y_{2^k}=H_2)=\mathbb{P}(Y_{2^k}\in g'N_G(H_1)); \] and by our assumption there is $H^\sharp\in \mathcal{H}_j$ for some $j$ such that $N_G(H_1)\preceq_L H^\sharp$. Hence \begin{equation}\label{eq:conjugation-prob} \mathbb{P}(Y_{2^k}^{-1} H_1 Y_{2^k}=H_2)\le L [G:H^\sharp]^{-\beta}\le L|G|^{-\beta/L}. 
\end{equation} By \eqref{eq:initial-upper-bound-prob}, \eqref{eq:conjugation-prob}, and Lemma~\ref{lem:exceptional-subgroups}, we have \begin{align} \notag \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}_{k+1})\le & 2p_k+\sum_{\overrightarrow{y}_{\rm l}\not\in \mathcal{E}_k, \overrightarrow{y}_{\rm r}\not\in\mathcal{E}'_k} \mathbb{P}((Y_1,\ldots,Y_{2^k-1})=\overrightarrow{y}_{\rm l})\mathbb{P}((Y_{2^k+1},\ldots,Y_{2^{k+1}-1})=\overrightarrow{y}_{\rm r}) \\ \notag & \left( L|G|^{-\beta/L}\frac{2}{L|G|^{-\beta_k}|G|^{-2\beta_{k+1}}/4}\right) \\ \label{eq:second-upper-bound-prob} \le & 2p_k+8|G|^{-\frac{\beta}{L}+\beta_k+2\beta_{k+1}}. \end{align} By \eqref{eq:second-upper-bound-prob}, to prove the claim it is enough to show that $2p_k+8|G|^{-\frac{\beta}{L}+\beta_k+2\beta_{k+1}}\le p_{k+1}$. We notice that $p_{k+1}-2p_k=|G|^{-\beta/(2L)}$, and \[ -\frac{\beta}{L}+\beta_k+2\beta_{k+1}=-\frac{\beta}{L}+\frac{1}{8^k}\left(1+\frac{1}{4}\right)\beta_0\le -\frac{3\beta}{4L}. \] Hence it is enough to show $8|G|^{-\frac{3\beta}{4L}}\le |G|^{-\frac{\beta}{2L}}$, which clearly holds for $|G|\gg_{\beta,L} 1$. \end{proof} \begin{lem}\label{lem:Diophantine-subfield-type} In the above setting, let $p:=(2^{m+1}-1)|G|^{-\beta/(2L)}$, $\beta_0:=\min(\frac{\beta}{5L},\frac{\alpha'}{2})$, and $\beta':=\frac{\beta_0}{8^{m+1}}$; then \[ \textstyle \mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})=\overrightarrow{y} \text{ s.t. } \exists H\in \bigcup_{j=0}^{m'} \mathcal{H}'_j, \exists g\in G, \mathbb{P}(X_{\overrightarrow{y}}\in gH)\ge |G|^{-\beta'})\le p. \] \end{lem} \begin{proof} We follow an identical argument as in the proof of Lemma~\ref{lem:inductive-set-up-structural}. Let \begin{equation}\label{eq:exceptional-subfield-type} \mathcal{E}':= \{\overrightarrow{y}\in \bigoplus_{i=1}^{2^{m+1}-1} G|\hspace{1mm} \exists H\in \bigcup_{i=1}^{m'} \mathcal{H}'_i, \exists g\in G, \mathbb{P}(X_{\overrightarrow{y}}\in gH)\ge |G|^{-\beta'}\}. \end{equation} Suppose $\overrightarrow{y}:=(\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in\mathcal{E}'$ where $\overrightarrow{y}_{\rm l}$ and $\overrightarrow{y}_{\rm r}$ are the left and the right $2^m-1$ components of $\overrightarrow{y}$, respectively, and $y\in G$. Then $X_{\overrightarrow{y}}=X_{\overrightarrow{y}_{\rm l}}yX'_{\overrightarrow{y}_{\rm r}}$, and there are $1\le i\le m'$, $H\in\mathcal{H}'_i$, and $g\in G$ such that $|G|^{-\beta'}\le \mathbb{P}(X_{\overrightarrow{y}_{\rm l}}yX'_{\overrightarrow{y}_{\rm r}}\in gH)$. As in the proof of Lemma~\ref{lem:inductive-set-up-structural}, there is $g'\in G$ such that \begin{equation}\label{eq:subgroups-large-prob-2} \mathbb{P}(\widetilde{X}_{\overrightarrow{y}_{\rm l}}X_{\overrightarrow{y}_{\rm l}}\in g'^{-1}Hg')\ge \frac{1}{4}|G|^{-2\beta'} \text{ and } \mathbb{P}(X'_{\overrightarrow{y}_{\rm r}}\widetilde{X}'_{\overrightarrow{y}_{\rm r}}\in y^{-1}g'^{-1}Hg'y)\ge \frac{1}{4}|G|^{-2\beta'}. \end{equation} If $\overrightarrow{y}_{\rm l}\not\in \mathcal{E}_m$ where $\mathcal{E}_m$ is defined in \eqref{eq:def-exceptional-left}, then by Lemma~\ref{lem:inductive-set-up-structural} for any $\overline{H}\in \bigcup_{i=0}^m \mathcal{H}_i$ and any $\overline{g}\in G$ we have $\mathbb{P}(X_{\overrightarrow{y}}\in \overline{g}\overline{H})<|G|^{-\beta_m}$ where $\beta_m=\frac{\beta_0}{8^m}$. 
This implies that \begin{equation}\label{eq:among-exceptional-subgroups} g'^{-1}Hg'\in E'_i(\lambda_{\overrightarrow{y}_{\rm l}};|G|^{-\beta_m},\frac{1}{4}|G|^{-2\beta'}), \end{equation} where $E'_i$ is the set defined in Lemma~\ref{lem:exceptional-subgroups}. Similarly if $\overrightarrow{y}_{\rm r}\not\in \mathcal{E}'_m$ where $\mathcal{E}'_m$ is defined in \eqref{eq:def-exceptional-right}, then \begin{equation}\label{eq:among-exceptional-subgroups-right} y^{-1}g'^{-1}Hg'y\in E'_i(\wt{\lambda'}_{\overrightarrow{y}_{\rm r}};|G|^{-\beta_m},\frac{1}{4}|G|^{-2\beta'}). \end{equation} Following the same argument as in the proof of Lemma~\ref{lem:inductive-set-up-structural}, we get that \begin{align} \notag \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}')\le & \mathbb{P}(\overrightarrow{y}_{\rm l}\in \mathcal{E}_m)+\mathbb{P}(\overrightarrow{y}_{\rm r}\in \mathcal{E}'_m)+ \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}', \overrightarrow{y}_{\rm l}\not\in \mathcal{E}_m, \overrightarrow{y}_{\rm r}\not\in \mathcal{E}'_m) \\ \notag \le & 2p_m+\sum_{\overrightarrow{y}_{\rm l}\not\in \mathcal{E}_m, \overrightarrow{y}_{\rm r}\not\in\mathcal{E}'_m} \mathbb{P}((Y_1,\ldots,Y_{2^m-1})=\overrightarrow{y}_{\rm l})\mathbb{P}((Y_{2^m+1},\ldots,Y_{2^{m+1}-1})=\overrightarrow{y}_{\rm r}) \\ \notag & \left(\sum_{i=1}^{m'}\sum_{H_1,H_2}\mathbb{P}(Y_{2^m}^{-1}H_1Y_{2^m}=H_2)\right) \end{align} where $p_m$ is given in Lemma~\ref{lem:inductive-set-up-structural}, $H_1$ ranges in $E'_i(\lambda_{\overrightarrow{y}_{\rm l}};|G|^{-\beta_m},\frac{1}{4}|G|^{-2\beta'})$, and $H_2$ ranges in $E'_i(\wt{\lambda'}_{\overrightarrow{y}_{\rm r}}; |G|^{-\beta_m}, \frac{1}{4}|G|^{-2\beta'})$ for the given $i$. We notice that for given $H_1$ and $H_2$ that are conjugate to each other and are in $\mathcal{H}'_i$, there is $g''\in G$ such that \[ \mathbb{P}(Y_{2^m}^{-1} H_1 Y_{2^m}=H_2)=\mathbb{P}(Y_{2^m}\in g''N_G(H_1)); \] and by our assumption $[N_G(H_1):H_1]\le L$. Hence \[ \mathbb{P}(Y_{2^m}^{-1} H_1 Y_{2^m}=H_2)\le L\max_{\overline{g}\in G} \mathbb{P}(Y_{2^m}\in \overline{g}H_1). \] Now we consider two cases based on whether $|H_1|\ge |G|^{\alpha}$ or not. {\bf Case 1.} $|H_1|\ge |G|^{\alpha}$. In this case, by our assumption, \[ \max_{\overline{g}\in G} \mathbb{P}(Y_{2^m}\in \overline{g}H_1)\le [G:H_1]^{-\beta}\le |G|^{- \frac{\beta}{L}}; \] and so by the same analysis as in the proof of Lemma~\ref{lem:inductive-set-up-structural} and our assumption that $m'\le L \log |G|$, we deduce that \begin{equation}\label{eq:initial-upper-bound-subfield} \mathbb{P}((\overrightarrow{y}_{\rm l},y,\overrightarrow{y}_{\rm r})\in \mathcal{E}') \le 2p_m+8L(\log |G|) |G|^{-\frac{\beta}{L}+\beta_m+2\beta'}. \end{equation} By \eqref{eq:initial-upper-bound-subfield}, to prove the claim in this case it is enough to show $ 2p_m+8L(\log |G|) |G|^{-\frac{\beta}{L}+\beta_m+2\beta'}\le p. $ We notice that $p-2p_m=|G|^{-\beta/(2L)}$, and $ -\frac{\beta}{L}+\beta_m+2\beta'\le -\frac{3\beta}{4L}. $ Hence it is enough to show $8L(\log|G|) |G|^{-\frac{3\beta}{4L}}\le |G|^{-\frac{\beta}{2L}}$, which clearly holds for $|G|\gg_{\beta,L} 1$. {\bf Case 2.} $|H_1|<|G|^{\alpha}$. In this case, we have \[ \max_{\overline{g}\in G} \mathbb{P}(Y_{2^m}\in \overline{g}H_1)\le |G|^{-\beta}|H_1| \le|G|^{-\beta}|G|^{\alpha}\le |G|^{-\frac{\beta}{L}}, \] where the last inequality holds as $(1-\frac{1}{L})\beta\ge \alpha$. 
Now we can follow the same analysis as in the first case; and the claim follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:generic-Diophantine-property}] First we notice that we can and will let $\mathcal{H}_{m+1}:=\mathcal{H}_m$ and get the claim of Lemma~\ref{lem:inductive-set-up-structural} for $k=m+1$ as well. Hence by Lemma~\ref{lem:inductive-set-up-structural} and Lemma~\ref{lem:Diophantine-subfield-type} we get \[ \mathbb{P}(\overrightarrow{y}\in \mathcal{E}_{m+1}\cup \mathcal{E}')\le p_{m+1}+p=2(2^{m+1}-1)|G|^{-\frac{\beta}{2L}}\le |G|^{-\frac{\beta}{4L}}, \] where $\mathcal{E}_{m+1}$ and $\mathcal{E}'$ are defined in \eqref{eq:def-exceptional-left} and \eqref{eq:exceptional-subfield-type}, respectively, and the last inequality holds for $|G|\gg_{L,\beta} 1$. Suppose $\overrightarrow{y}\not\in \mathcal{E}_{m+1}\cup \mathcal{E}'$. For any proper subgroup $H$ of $G$, there is $H^\sharp\in \bigcup_{i=0}^m\mathcal{H}_i \cup \bigcup_{j=1}^{m'} \mathcal{H}'_j$ such that $[H:H\cap H^{\sharp}]\le L$. Then for any $g\in G$ \[ \mathbb{P}(X_{\overrightarrow{y}}\in gH)\le L \max_{g'\in G} \mathbb{P}(X_{\overrightarrow{y}}\in g'H^\sharp)\le L|G|^{-\beta'}\le |G|^{-\beta'/2}, \] where the last inequality holds for $|G|\gg_{\beta,L} 1$. \end{proof} \subsection{Gaining conditional entropy: proof of Proposition~\ref{prop:VarjuMutation}} By the definition of the conditional R\'{e}nyi entropy, we have \begin{align} \notag H_2(X_1Y_1\cdots Y_{2^{m+1}-1}X_{2^{m+1}}X_{2^{m+1}+1}|Y_1,\ldots,Y_{2^{m+1}-1}) & = \hspace{3cm} \\ \label{eq:def-conditional-entropy} \sum_{\overrightarrow{y}\in \bigoplus_{i=1}^{2^{m+1}-1}G} \mathbb{P} ((Y_1,\ldots,Y_{2^{m+1}-1})&=\overrightarrow{y}) H_2(X_{\overrightarrow{y}}X_{2^{m+1}+1}). \end{align} Let \[ \mathcal{E}'':=\{\overrightarrow{y} \text{ such that } X_{\overrightarrow{y}} \text{ is not of } (0,\beta'/2)\text{-Diophantine type}\} \] where $\beta':=\frac{1}{8^{m+1}}\min(\frac{\beta}{5L},\frac{\alpha'}{2})$; and so by Proposition~\ref{prop:generic-Diophantine-property} we have \begin{equation}\label{eq:measure-exceptional-set} \mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})\in \mathcal{E}'')\le |G|^{-\frac{\beta}{4L}}. \end{equation} For $\overrightarrow{y}\in \mathcal{E}''$, we use the trivial bound given in Lemma~\ref{lem:product-entropy-trivial-bound} \begin{equation}\label{eq:trivial-bound-for-exceptional-set} H_2(X_{\overrightarrow{y}}X_{2^{m+1}+1})\ge \max_{i=1}^{2^{m+1}+1}H_2(X_i)\ge \min_{i=1}^{2^{m+1}+1} H_2(X_i)=:h_{\rm min}; \end{equation} we notice that $H_2(Xg)=H_2(X)$ for any random variable $X$ with values in $G$ and $g\in G$. For $\overrightarrow{y}\not\in \mathcal{E}''$, we have that \begin{enumerate} \item $X_{\overrightarrow{y}}$ is of $(0,\beta'/2)$-Diophantine type. \item $H_2(X_{\overrightarrow{y}})\ge \max_{i=1}^{2^{m+1}} H_2(X_i)\ge \max_{i=1}^{2^{m+1}} H_\infty(X_i)\ge \alpha' \log |G|$ by Lemma~\ref{lem:properties-entropy}, Lemma~\ref{lem:product-entropy-trivial-bound}, and the fact that $H_2(X_iy_i)=H_2(X_i)$ for any $i$. \end{enumerate} Then either $H_2(X_{\overrightarrow{y}})>(1-\frac{\alpha''}{2})\log |G|$ or by Lemma~\ref{lem:GrowthOfRenyiEntropySingleScale} \[ H_2(X_{\overrightarrow{y}}X_{2^{m+1}+1})\ge \frac{H_2(X_{\overrightarrow{y}})+H_2(X_{2^{m+1}+1})}{2}+\gamma_0 \log |G| \] for some positive $\gamma_0$ which depends only on $\alpha',\alpha'',\beta,$ and the function $\delta_0$. 
Since \[\max_{i=1}^{2^{m+1}+1} H_2(X_i)\le (1-\alpha'')\log |G|,\] in either case we get \begin{equation}\label{eq:gaining-entropy-outside-exceptional-set} H_2(X_{\overrightarrow{y}}X_{2^{m+1}+1})\ge h_{\rm min}+\gamma_0 \log |G|. \end{equation} In what follows, let $p_{\overrightarrow{y}}:=\mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})=\overrightarrow{y})$ for simplicity. By \eqref{eq:def-conditional-entropy}, \eqref{eq:trivial-bound-for-exceptional-set}, and \eqref{eq:gaining-entropy-outside-exceptional-set}, we have \begin{align*} H_2(X_1Y_1\cdots Y_{2^{m+1}-1}&X_{2^{m+1}}X_{2^{m+1}+1}|Y_1,\ldots,Y_{2^{m+1}-1}) \hspace{3cm} \\ = & \sum_{\overrightarrow{y} \in \mathcal{E}''} p_{\overrightarrow{y}} h_{{\rm min}} + \sum_{\overrightarrow{y}\not\in \mathcal{E}''} p_{\overrightarrow{y}} (h_{{\rm min}}+\gamma_0 \log |G|) \\ = & h_{{\rm min}}+\gamma_0 \mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})\not\in \mathcal{E}'')\log |G| \\ \ge & h_{{\rm min}}+ \gamma_0 (1-|G|^{-\frac{\beta}{4L}})\log |G| &(\text{By } \eqref{eq:measure-exceptional-set}) \\ \ge &h_{{\rm min}}+\frac{\gamma_0}{2} \log |G|, \end{align*} for $|G|\gg_{L,\beta,\alpha',\alpha'',\delta_0} 1$; and the claim follows. \subsection{Scales with room for improvement} In this section, we use the gain of conditional entropy (given in Proposition~\ref{prop:VarjuMutation}) at levels where we have room for improvement to prove a growth statement in the multi-scaled setting of Proposition~\ref{propvarjuproduct}. In order to use Proposition~\ref{prop:VarjuMutation}, we need to have an auxiliary random variable with some Diophantine property. We recall a result of Varj\'{u} that provides us with such a random variable. \begin{lem}\label{lem:Varj-aux-random-variable}\cite[Lemma 17]{Var} Suppose $\{G_i\}_i$ is a sequence of finite groups that are $L^{-1}$-quasi-random. Suppose $0<\varepsilon<1$, $0<\delta<\varepsilon/(8L)$, and $S\subseteq G:=\bigoplus_{i=1}^n G_i$ is such that for any proper subgroup $H$ of $G$ and $g\in G$ we have \[ \mathcal{P}_S(gH)\le [G:H]^{-\varepsilon}|G|^{\delta}. \] Then there are $B\subseteq S$ and $J_{\rm g}\subseteq [1..n]$ (think about it as the set of {\em good} indexes) with the following properties. Let $Y:=(Y^{(1)},\ldots,Y^{(n)})$ be the random variable with the uniform distribution on $B$, and $Y^{(i)}$ be the induced random variable with values in $G_i$. \begin{enumerate} \item For $i\in J_{\rm g}$, $Y^{(i)}$ is of $(0,\frac{\varepsilon}{2L})$-Diophantine type. \item Let $J_{\rm b}:=[1..n]\setminus J_{\rm g}$ (think about it as the set of {\em bad} indexes); then $|G_{J_{\rm b}}|\le |G|^{\frac{\delta}{\varepsilon/(2L)}}$, where $G_{J_{\rm b}}:=\bigoplus_{i\in J_{\rm b}} G_i$. \end{enumerate} \end{lem} \begin{proof} See \cite[Proof of Lemma 17]{Var}. \end{proof} Let us recall some of our assumptions and earlier results that will be used in the remaining of this section. \begin{enumerate} \item $\{G_i\}_{i=1}^{\infty}$ is a family of pairwise non-isomorphic finite groups that satisfy assumptions (V1)$_L$-(V3)$_L$ and (V4)$_{\delta_0}$. \item $0<\varepsilon<1$ and $\delta:=\varepsilon^5/(8L)$. \item Suppose $|G_i|\gg_{\varepsilon,L} 1$ such that Proposition~\ref{prop:VarjuMutation} holds for the variables $\alpha':=\delta$, $\alpha'':=1/(3L)$, $\beta:=\varepsilon/(2L)$, $L$, and $\delta_0$; we are allowed to make this assumption thanks to Lemma~\ref{lem:required-inequality}. Let $\gamma$ be the constant given by Proposition~\ref{prop:VarjuMutation} for the same set of variables. 
\item $S\subseteq G:=\bigoplus_{i=1}^n G_i$ such that for any proper subgroup $H$ of $G$ and $g\in G$ \[ \mathcal{P}_S(gH)\le [G:H]^{-\varepsilon}|G|^{\delta}. \] \item Let $A\subseteq S$ be the $(D_0,\ldots,D_{n-1})$-regular subset that is given at the end of Section~\ref{ss:regular-subset}; that means $D_i$ is either 1 or at least $|G_i|^{\delta}$ and $|A|>|G|^{-2\delta} |S|$. \item Let $I_{\rm l}:=\{i\in [0..n-1]| D_i>|G_i|^{1-1/(3L)}\}$ and $I_{\rm s}:=[0..n-1]\setminus I_{\rm l}$. \item Let $B\subseteq S$ be given by Lemma~\ref{lem:Varj-aux-random-variable}; and let $J_{\rm g}$ and $J_{\rm b}$ be given the sets given in same lemma. \end{enumerate} In the above setting, we let $X_i:=(X^{(1)}_i,\ldots,X^{(n)}_i)$ be i.i.d. random variables with distribution $\mathcal{P}_A$ for $1\le i\le 2^{m+1}+1$, and $Y_i:=(Y^{(1)}_i,\ldots,Y^{(n)}_i)$ be i.i.d. random variables with distribution $\mathcal{P}_B$ for $1\le i\le 2^{m+1}-1$. For $\overrightarrow{y}=(\overrightarrow{y}^{(1)},\ldots,\overrightarrow{y}^{(n)})\in \bigoplus_{i=1}^{2^{m+1}-1} G$, we let \[ p_{\overrightarrow{y}}:=\mathbb{P}((Y_1,\ldots,Y_{2^{m+1}-1})=\overrightarrow{y}), \text{ and } p_{\overrightarrow{y}^{(i)}}:=\mathbb{P}((Y^{(i)}_1,\ldots,Y^{(i)}_{2^{m+1}-1})=\overrightarrow{y}^{(i)}) \] and \begin{align*} X_{\overrightarrow{y}}:= & X_1y_1X_2\ldots y_{2^{m+1}-1}X_{2^{m+1}} \\ = &(X^{(1)}_1y^{(1)}_1X^{(1)}_2\cdots y^{(1)}_{2^{m+1}-1}X^{(1)}_{2^{m+1}}, \ldots, X^{(n)}_1y^{(n)}_1X^{(n)}_2\cdots y^{(n)}_{2^{m+1}-1}X^{(n)}_{2^{m+1}}). \end{align*} \begin{lem}\label{lem:main-gain-scales-with-room} In the above setting, we have \[ \sum_{\overrightarrow{y}\in \bigoplus_{j=1}^{2^{m+1}-1} G} p_{\overrightarrow{y}} H(X_{\overrightarrow{y}} X_{2^{m+1}+1})\ge \log |S|+\gamma \log |G_{I_{\rm s}}|-\gamma \frac{\delta}{\varepsilon/(3L)} \log |G|. \] \end{lem} \begin{proof} By Lemma~\ref{lem:properties-entropy}, we have \begin{align} \notag H(X_{\overrightarrow{y}}X_{2^{m+1}+1})= &\sum_{i=1}^n H(X^{(i)}_{\overrightarrow{y}^{(i)}}X^{(i)}_{2^{m+1}+1}|\pr_{[1..i-1]}(X_{\overrightarrow{y}}X_{2^{m+1}+1})) \\ \label{eq:initial-inequality-average-conditional-entropy} \ge & \sum_{i=1}^n H(X^{(i)}_{\overrightarrow{y}^{(i)}}X^{(i)}_{2^{m+1}+1}|\{\pr_{[1..i-1]}(X_j)\}_{j=1}^{2^{m+1}+1}) \end{align} By \eqref{eq:initial-inequality-average-conditional-entropy}, we get \begin{align} \notag \sum_{\overrightarrow{y}\in \bigoplus_{j=1}^{2^{m+1}-1} G} p_{\overrightarrow{y}} H(X_{\overrightarrow{y}} X_{2^{m+1}+1})\ge & \sum_{i=1}^n \sum_{\overrightarrow{y}\in \bigoplus_{j=1}^{2^{m+1}-1} G} p_{\overrightarrow{y}} H(X^{(i)}_{\overrightarrow{y}^{(i)}}X^{(i)}_{2^{m+1}+1}|\{\pr_{[1..i-1]}(X_j)\}_{j=1}^{2^{m+1}+1}) \\ \label{eq:second-inequality-conditional-entropy} = & \sum_{i=1}^n \sum_{\overrightarrow{y}^{(i)}\in \bigoplus_{j=1}^{2^{m+1}-1} G_i} p_{\overrightarrow{y}^{(i)}} H(X^{(i)}_{\overrightarrow{y}^{(i)}}X^{(i)}_{2^{m+1}+1}|\{\pr_{[1..i-1]}(X_j)\}_{j=1}^{2^{m+1}+1}) \end{align} We notice that for a given $1\le i\le n$, we have \begin{align} \notag h_i:=\sum_{\overrightarrow{y}^{(i)}\in \bigoplus_{j=1}^{2^{m+1}-1} G_i} p_{\overrightarrow{y}^{(i)}} H(X^{(i)}_{\overrightarrow{y}^{(i)}}X^{(i)}_{2^{m+1}+1}| & \{\pr_{[1..i-1]}(X_j)\}_{j=1}^{2^{m+1}+1})= \\ \label{eq:connection-previous-result-conditional-entropy} H(X^{(i)}_1Y^{(i)}_1X^{(i)}_2\cdots Y^{(i)}_{2^{m+1}-1} X^{(i)}_{2^{m+1}} X^{(i)}_{2^{m+1}+1}| & \{\pr_{[1..i-1]}(X_j)\}_{j=1}^{2^{m+1}+1}, \{Y^{(i)}_j\}_{j=1}^{2^{m+1}-1}). 
\end{align} Hence, if $i\in J_{\rm g}\cap I_{\rm s}$ and $D_i\neq 1$, by Proposition~\ref{prop:VarjuMutation}, we have \begin{equation}\label{eq:gained-conditional-entropy} h_i\ge \log D_i+ \gamma \log |G_i|. \end{equation} By Lemma~\ref{lem:product-entropy-trivial-bound}, we have the trivial bound \begin{equation}\label{eq:trivial-bound-entropy} h_i\ge \log D_i, \end{equation} for any $i$. By \eqref{eq:second-inequality-conditional-entropy}, \eqref{eq:connection-previous-result-conditional-entropy}, \eqref{eq:gained-conditional-entropy}, and \eqref{eq:trivial-bound-entropy}, we get \begin{equation}\label{eq:total} \sum_{\overrightarrow{y}\in \bigoplus_{j=1}^{2^{m+1}-1} G} p_{\overrightarrow{y}} H(X_{\overrightarrow{y}} X_{2^{m+1}+1})\ge \log |A|+\gamma \log |G_{J_{\rm g}\cap I_{\rm s}}|. \end{equation} We notice that \begin{equation}\label{eq:adding-bad-indexes} \log |G_{I_s}| = \log |G_{J_{\rm g}\cap I_{\rm s}}|+\log |G_{J_{\rm b}\cap I_{\rm s}}| \le \log |G_{J_{\rm g}\cap I_{\rm s}}| + \log |G_{J_{\rm b}}| \le \log |G_{J_{\rm g}\cap I_{\rm s}}| + \frac{\delta}{\varepsilon/(2L)} \log |G|, \end{equation} and $\log |A|\ge \log |S|-2\delta \log |G|$. Hence by \eqref{eq:total} and \eqref{eq:adding-bad-indexes}, we have \[ \sum_{\overrightarrow{y}\in \bigoplus_{j=1}^{2^{m+1}-1} G} p_{\overrightarrow{y}} H(X_{\overrightarrow{y}} X_{2^{m+1}+1})\ge \log |S|+ \gamma \log |G_{I_s}| - \gamma \delta (2+\frac{2L}{\varepsilon}) \log |G|; \] and the claim follows. \end{proof} \begin{cor}\label{cor:multi-scaled-expansion-scales-with-room} In the above setting, we have \[ \textstyle \log |\prod_{2^{m+2}} S|\ge \log |S|+\gamma \log |G_{I_{\rm s}}|-\gamma \frac{\delta}{\varepsilon/(3L)} \log |G|. \] \end{cor} \begin{proof} By Lemma~\ref{lem:main-gain-scales-with-room}, there is $\overrightarrow{y}\in B\times \cdots \times B$ such that \[ H(X_{\overrightarrow{y}}X_{2^{m+1}+1})\ge \log |S|+\gamma \log |G_{I_{\rm s}}|-\gamma \frac{\delta}{\varepsilon/(3L)} \log |G|. \] On the other hand, $H(X_{\overrightarrow{y}}X_{2^{m+1}+1})\le H_0(X_{\overrightarrow{y}}X_{2^{m+1}+1})\le \log |\prod_{2^{m+2}} S|$; and the claim follows. \end{proof} \subsection{Multi-scale product result: proof of Proposition~\ref{propvarjuproduct}} In this section, we still work in the setting listed in the previous section, and finish the proof of Proposition~\ref{propvarjuproduct}. By Proposition~\ref{proplargeindices}, we either have $|\prod_3 S|\ge |G|^{1-\varepsilon+\delta}$ in which case the claim follows or \begin{equation}\label{eq:first-bound} \textstyle \log |\prod_{14} S| \ge \log |G_{I_{\rm l}}|+\Theta_L(\varepsilon^3) \log |G|. \end{equation} By Corollary~\ref{cor:multi-scaled-expansion-scales-with-room}, we have \begin{equation}\label{eq:second-bound} \textstyle \log |\prod_{2^{m+2}} S|\ge \log |S|+\gamma \log |G_{I_{\rm s}}|-\Theta_L(\gamma \varepsilon^4) \log |G|. \end{equation} By \eqref{eq:first-bound} and \eqref{eq:second-bound}, we get \[ \textstyle (1+\gamma) \log |\prod_{2^{m+2}} S| \ge \log |S|+\gamma \log |G|. \] As $\log |S|\le (1-\varepsilon)\log |G|$, we deduce \[ \textstyle (1+\gamma)\log |\prod_{2^{m+2}} S| \ge \log |S|+\frac{\gamma}{1-\varepsilon} \log |S|\ge (1+\gamma) \log |S|+\gamma\varepsilon \log |S|. \] Hence by \cite[Lemma 2.2]{Hel1} we get \[ \textstyle (1+\gamma)(2^{m+2}-2)(\log |\prod_3 S|-\log |S|)\ge (1+\gamma)(\log |\prod_{2^{m+2}} S|-\log |S|)\ge \gamma\varepsilon \log |S|; \] and the claim follows. 
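Let us also record, for the reader's convenience, the elementary algebraic step behind the last deduction; nothing beyond the stated inequalities and $0<\varepsilon<1$ is used. Since $\frac{1}{1-\varepsilon}=1+\varepsilon+\varepsilon^2+\cdots\ge 1+\varepsilon$, we have
\[
\textstyle \log |S|+\frac{\gamma}{1-\varepsilon} \log |S|\ge (1+\gamma)\log |S|+\gamma\varepsilon \log |S|,
\]
which is exactly the inequality used above.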
\section{Super-approximation: proof of Theorem~\ref{t:SpectralGap}}~\label{sec:finishing-proof-SA} As has been pointed out by Bradford (see~\cite[Theorem 1.14]{Bra}), Varj\'{u} has already proved a multi-scale version of Bourgain-Gamburd's result which in combination with Proposition~\ref{propvarjuproduct} can be formulated as follows (see~\cite[Sections 3 and 5]{Var}). \begin{thm}\label{thm:VarjuBourgainGamburdMachine} Suppose $L$ is a positive integer, $\delta_0:\mathbb{R}^+\rightarrow \mathbb{R}^+$, and $\{G_i\}_{i=1}^{\infty}$ is a family of finite groups that satisfy V(1)$_L$-V(3)$_L$ and V(4)$_{\delta_0}$. Suppose $\overline{\Omega}$ is a symmetric generating set of $G:=\bigoplus_{i=1}^n G_i$. Suppose there are $\eta>0$, $C_0$, and $l<C_0\log |G|$ such that for any proper subgroup $H$ of $G$, we have $\mathcal{P}_{\overline{\Omega}}^{(2l)}(H)\le [G:H]^{-\eta}$. Then \[ 1-\lambda(\mathcal{P}_{\overline{\Omega}};G)\gg_{L,\delta_0,\eta,C_0,|\overline{\Omega}|} 1. \] \end{thm} Let us recall that by the discussion in Section~\ref{ss:storngapproximation}, we can and will assume \begin{equation}\label{eq:group-structure} \pi_f(\Gamma)\simeq \bigoplus_{\ell|f, \ell \text{ irred.}} \mathcal{G}_{\ell}(K(\ell)) \end{equation} where $K(\ell)$ is the finite field $\mathbb{F}_{p_0}[t]/\langle \ell\rangle$, $\mathcal{G}_{\ell}$ is an absolutely almost simple, simply connected, $K(\ell)$-group, and the absolute types of all the $\mathcal{G}_{\ell}$'s are the same. By Proposition~\ref{propmainescape}, there is a symmetric subset $\Omega'$ of $\Gamma$, a square-free polynomial $r_1$, and positive numbers $c_0$ and $\delta$ such that for any $f\in S_{r_1,c_0}$ (that means $f$ and $r_1$ are coprime and the degree of any irreducible factor of $f$ does not have a prime factor less than $c_0$), any purely structural subgroup $H$ of $\pi_f(\Gamma)$ and $l\gg_{\Omega} \deg f$, we have \begin{equation}\label{eq:escape-purely-structural-and-generation} \pi_f(\langle \Omega'\rangle)=\pi_f(\Gamma),\text{ and } \mathcal{P}^{(l)}_{\pi_f(\Omega')}(H)\le [\pi_f(\Gamma):H]^{-\delta}; \end{equation} moreover $\Omega'=\Omega'_0\sqcup {\Omega'_0}^{-1}$ and $\Omega'_0$ freely generates a subgroup of $\Gamma$. By \eqref{eq:escape-purely-structural-and-generation}, to prove Theorem~\ref{t:SpectralGap}, it is enough to prove \[ 1-\lambda(\mathcal{P}_{\pi_f(\Omega')};\pi_f(\Gamma))\gg_{\Omega} 1. \] By \eqref{eq:group-structure}, \eqref{eq:escape-purely-structural-and-generation}, and Theorem~\ref{thm:VarjuBourgainGamburdMachine}, to prove Theorem~\ref{t:SpectralGap}, it is enough to prove the following. \begin{enumerate} \item There are $L$ and $\delta_0$ such that $\mathcal{G}_{\ell}(K(\ell))$ satisfies V(1)$_L$-V(3)$_L$, and V(4)$_{\delta_0}$ if $\ell$ is an irreducible polynomial that does not divide $r_1$. \item There are $\eta>0$, $C_0$, and $c_0'\ge c_0$ such that for any $f\in S_{r_1,c_0'}$ and any proper subgroup $H$ of $\pi_f(\Gamma)$ we have $\mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le [\pi_f(\Gamma):H]^{-\eta}$ for some $l<C_0 \deg f$. \end{enumerate} In the rest of this section, we will prove these items. \subsection{Verifying Varj\'{u}'s assumptions V(1)$_L$-V(3)$_L$, and V(4)$_{\delta_0}$ for $\mathcal{G}_{\ell}(K(\ell))$} Since the $\mathcal{G}_{\ell}$'s are absolutely almost simple, simply connected, $K(\ell)$-groups, and all of them have the same absolute type, by \cite{LS-quasi}, they satisfy V(1)$_L$ and V(2)$_L$ for some positive integer $L$. 
By the groundbreaking results \cite[Corollary 2.4]{BGT} and \cite[Theorem 4]{PS}, there is a function $\delta_0$ such that $\mathcal{G}_{\ell}(K(\ell))$'s satisfy V(4)$_{\delta_0}$. Now we introduce the families of subgroups $\mathcal{H}_i$ and $\mathcal{H}'_j$, and prove that they satisfy V(3)$_L$ for some positive integer $L$ that is independent of irreducible polynomials $\ell$'s. By Theorem~\ref{thm:FinalRefinementOfLP}, for a structural subgroup $H$ of $\mathcal{G}_{\ell}(K(\ell))$, there is a proper subgroup $\mathbb{H}$ of $\mathcal{G}_{\ell}$ with complexity $O_{\Gamma}(1)$ such that $H\subseteq \mathbb{H}(K(\ell))$. Since the complexity of $\mathbb{H}$ is $O_{\Gamma}(1)$, $[\mathbb{H}(K(\ell)):\mathbb{H}^{\circ}(K(\ell))]\ll_{\Gamma}1$ and the complexity of $\mathbb{H}^{\circ}$ is also bounded by a function of $\Gamma$, where $\mathbb{H}^{\circ}$ is the connected component of the identity of $\mathbb{H}$ in the Zariski topology. For $0\le i< \dim \mathbb{G}$, initially we let \[ \mathcal{H}_i:=\{\mathbb{H}(K(\ell))|\hspace{1mm} \mathbb{H} \lneq \mathcal{G}_{\ell}, \dim \mathbb{H}=i, \mathbb{H}=\mathbb{H}^{\circ} \text{, its complexity is bounded as above}\} \] Next for smaller dimension subgroups, we allow slightly larger complexity to include the connected components of the intersections of larger dimension connected proper subgroups. Then by Theorem~\ref{thm:FinalRefinementOfLP} a subgroup $H$ of $\mathcal{G}_\ell(K(\ell))$ is a structural subgroup if and only if there is $H^\sharp$ in $\mathcal{H}_i$ for some $i$ such that $H\preceq_L H^\sharp$ where $L:=O_{\Gamma}(1)$. Moreover $\mathcal{H}_i$'s satisfy the condition (V3)$_L$,(i)-(ii). By Theorem~\ref{thm:FinalRefinementOfLP}, we know that if $H$ is a proper subgroup of subfield type of $\mathcal{G}_{\ell}(K(\ell))$, then there is a subfield $F_H$ and an $F_H$-model $\mathbb{G}_H$ of $\Ad(\mathcal{G}_{\ell})$ such that \[[\mathbb{G}_H(F_H),\mathbb{G}_H(F_H)]\subseteq \Ad(H)\subseteq \mathbb{G}_H(F_H).\] This implies that $H\preceq \wt{\mathbb{G}}_H(F_H)$ where $\wt{\mathbb{G}}_H$ is a simply-connected cover of $\mathbb{G}_H$. \begin{lem}~\label{lem:subfield-type-subgroups-up-to-conjugation} For a subgroup $H$ of $\mathcal{G}_{\ell}(K(\ell))$, let ${\rm Con}(H)$ be the set of all the conjugates of $H$ in $\mathcal{G}_{\ell}(K(\ell))$. For a subfield $F$ of $K(\ell)$, let \[ n_F:=|\{{\rm Con}(\wt{\mathbb{G}}(F))|\hspace{1mm} \wt{\mathbb{G}}\text{ is an } F\text{-model of } \mathcal{G}_{\ell}\}|. \] Then $n_F\ll_{\mathbb{G}} 1$. \end{lem} \begin{proof} Since $\wt\mathbb{G}$ has the same absolute type as $\mathcal{G}_{\ell}$, up to $F$-isomorphism there are only two choices; either $\wt\mathbb{G}$ is the unique simply connected $F$-split group of the given absolute type or it is the unique quasi-split outer form of the given absolute type defined over $F$. So without loss of generality, we fix an $F$-model $\wt\mathbb{G}_0$ of $\mathcal{G}_{\ell}$ and we want to show that \[ |\{{\rm Con}(\wt{\mathbb{G}}(F))|\hspace{1mm} \wt{\mathbb{G}}\simeq \wt{\mathbb{G}}_0\text{ as } F\text{-groups}\}|\ll_{\mathbb{G}} 1. \] Notice that if $\wt\mathbb{G}$ is an $F$-model of $\mathcal{G}_{\ell}$ which is $F$-isomorphic to $\wt{\mathbb{G}}_0$, then there is an $F$-isomorphism $\phi:\wt\mathbb{G}_0\xrightarrow{\simeq} \wt{\mathbb{G}}$ which induces an automorphism of $\mathcal{G}_{\ell}$ after base change. 
Since $[\Aut(\mathcal{G}_{\ell}):\Ad(\mathcal{G}_{\ell})]\ll 1$, without loss of generality we can and will assume that $\phi$ induces an inner automorphism of $\mathcal{G}_{\ell}$. Hence we can and will assume that there is $g\in \mathcal{G}_{\ell}(\overline{K(\ell)})$ such that $g\wt{\mathbb{G}}_0(F)g^{-1}=\wt{\mathbb{G}}(F)\subseteq \wt{\mathbb{G}}_0(K(\ell))$. Therefore by Proposition~\ref{prop:conjugates-subfield-type-subgroups}, $\Ad(g)\in \Ad(\wt{\mathbb{G}}_0)(K(\ell))=\Ad(\mathcal{G}_{\ell})(K(\ell))$. Since $[\Ad(\mathcal{G}_{\ell})(K(\ell)):\Ad(\mathcal{G}_{\ell}(K(\ell)))]\ll_{\mathbb{G}} 1$, the claim follows. \end{proof} We let $\mathcal{H}_i'$'s be the sets of conjugacy classes of the groups of the form $\wt\mathbb{G}(F)$ where $F$ is a proper subfield of $K(\ell)$ and $\wt\mathbb{G}$ is an $F$-model of $\mathcal{G}_{\ell}$. By Lemma~\ref{lem:subfield-type-subgroups-up-to-conjugation}, there are at most $L\log |\mathcal{G}_{\ell}(K(\ell))|$ such conjugacy classes, where $L$ depends only on the absolute type of the $\mathcal{G}_{\ell}$'s. By Corollary~\ref{cor:intersection-conjugate-subfield-type}, $\mathcal{H}_i'$'s satisfy property (V3)$_L$-(v); and our claim follows. \subsection{Escaping proper subgroups} The main goal of this short section is to show that Proposition~\ref{propmainescape} is good enough to show the needed escaping from an arbitrary proper subgroup of $\pi_f(\Gamma)$ for $f\in S_{r_1,c_0}$. \begin{lem}\label{lem:final-escaping-proper-subgroups} In the setting described at the beginning of Section~\ref{sec:finishing-proof-SA}, there are $\eta>0$, $C_0$, and $c_0'\ge c_0$ such that for any $f\in S_{r_1,c_0'}$ and any proper subgroup $H$ of $\pi_f(\Gamma)$ we have \[ \mathcal{P}_{\pi_f(\Omega')}^{(2l)}(H)\le [\pi_f(\Gamma):H]^{-\eta} \] for some $l<C_0\deg f$. \end{lem} \begin{proof} We assume that $f\in S_{r_1,c_0'}$ for a sufficiently large $c_0'\ge c_0$ (to be specified later). We split the set of irreducible divisors of $f$ into three disjoint sets: \[ D_1(f;H):=\{\ell|f \text{ s.t. } \ell\text{ is irreducible, } \pi_\ell(H) \text{ is a structural subgroup}\}, \] \[ D_2(f;H):=\{\ell|f \text{ s.t. } \ell\text{ is irreducible, } \pi_\ell(H) \text{ is a subfield type subgroup}\}, \text{ and } \] \[ D_3(f;H):=\{\ell|f \text{ s.t. } \ell\text{ is irreducible, } \pi_\ell(H)=\pi_\ell(\Gamma)\}. \] Let $f_i:=\prod_{\ell\in D_i(f;H)} \ell$ and $H_i:=\prod_{\ell\in D_i(f;H)} \pi_\ell(H)$; then by Lemma~\ref{lemsubgroupproductform} we have that \begin{equation}\label{eq:approximating-independent-subgroups} [\pi_{f_1}(\Gamma):H_1][\pi_{f_2}(\Gamma):H_2]=[\pi_f(\Gamma):H_1\oplus H_2\oplus H_3] \ge [\pi_f(\Gamma):H]^{1/L}. \end{equation} By Proposition~\ref{propmainescape}, we have \begin{equation}\label{eq:escaping-purely-structural-part} \mathcal{P}_{\pi_{f_1}(\Omega')}^{(2l)}(H_1)\le [\pi_{f_1}(\Gamma):H_1]^{-\delta_0}, \end{equation} where $\delta_0$ is a positive number which just depends on $\Omega$. On the other hand, by Kesten's result on random walks in a free group (see~\cite[Theorem 3]{Kes}), for some $l_0\ll_{\Omega} \deg f_2$ and $c_1>0$, we have \[ \| \mathcal{P}_{\pi_{f_2}(\Omega')}^{(2l_0)}\|_{\infty} \le |\pi_{f_1}(\Gamma)|^{-c_1}; \] and so \begin{equation}\label{eq:escaping-purely-subfield-type-1} \mathcal{P}_{\pi_{f_2}(\Omega')}^{(2l_0)}(H_2)\le |\pi_{f_1}(\Gamma)|^{-c_1}|H_2|\le |\pi_{f_1}(\Gamma)|^{-c_1}\prod_{\ell|f_2, \text{irr. }} |\pi_{\ell}(H_2)|\le |\pi_{f_1}(\Gamma)|^{-c_1} |\pi_{f_2}(\Gamma)|^{1/c'_0}. 
\end{equation} So if $c_0'>2/c_1$, then \eqref{eq:escaping-purely-subfield-type-1} implies that \begin{equation}\label{eq:escaping-purely-subfield-type-2} \mathcal{P}_{\pi_{f_2}(\Omega')}^{(2l_0)}(H_2)\le |\pi_{f_1}(\Gamma)|^{-c_1/2}. \end{equation} By \eqref{eq:escaping-purely-subfield-type-2} and the fact that $\Omega'$ is a symmetric set, we have $\mathcal{P}_{\pi_{f_2}(\Omega')}^{(l_0)}(gH_2)\le |\pi_{f_1}(\Gamma)|^{-c_1/4}$; and so for any $l\ge l_0$, we have \begin{equation}\label{eq:escaping-purely-subfield-type} \mathcal{P}_{\pi_{f_2}(\Omega')}^{(l)}(H_2)\le |\pi_{f_1}(\Gamma)|^{-c_1/4}. \end{equation} By \eqref{eq:escaping-purely-structural-part} and \eqref{eq:escaping-purely-subfield-type}, we deduce that \begin{align} \notag \mathcal{P}_{\pi_{f}(\Omega')}^{(2l)}(H)\le & \min(\mathcal{P}_{\pi_{f_1}(\Omega')}^{(2l)}(H_1),\mathcal{P}_{\pi_{f_2}(\Omega')}^{(2l)}(H_2)) \\ \notag \le & (\mathcal{P}_{\pi_{f_1}(\Omega')}^{(2l)}(H_1)\mathcal{P}_{\pi_{f_2}(\Omega')}^{(2l)}(H_2))^{1/2} \\ \label{eq:almost-finished} \le & |\pi_{f_1f_2}(\Gamma)|^{-\min(\delta_0,c_1/4)}\le [\pi_f(\Gamma):H_1\oplus H_2\oplus H_3]^{-\min(\delta_0,c_1/4)}. \end{align} By \eqref{eq:almost-finished} and \eqref{eq:approximating-independent-subgroups}, we get \[ \mathcal{P}_{\pi_{f}(\Omega')}^{(2l)}(H) \le [\pi_f(\Gamma):H]^{-\min(\delta_0/L,c_1/(4L))}; \] and the claim follows. \end{proof} \end{document}
arXiv
Non-separable wavelet Non-separable wavelets are multi-dimensional wavelets that are not directly implemented as tensor products of wavelets on some lower-dimensional space. They have been studied since 1992.[1] They offer a few important advantages. Notably, using non-separable filters leads to more parameters in design, and consequently better filters.[2] The main difference, when compared to the one-dimensional wavelets, is that multi-dimensional sampling requires the use of lattices (e.g., the quincunx lattice). The wavelet filters themselves can be separable or non-separable regardless of the sampling lattice. Thus, in some cases, the non-separable wavelets can be implemented in a separable fashion. Unlike separable wavelet, the non-separable wavelets are capable of detecting structures that are not only horizontal, vertical or diagonal (show less anisotropy). Examples • Red-black wavelets[3] • Contourlets[4] • Shearlets[5] • Directionlets[6] • Steerable pyramids[7] • Non-separable schemes for tensor-product wavelets[8] References 1. J. Kovacevic and M. Vetterli, "Nonseparable multidimensional perfect reconstruction filter banks and wavelet bases for Rn," IEEE Trans. Inf. Theory, vol. 38, no. 2, pp. 533–555, Mar. 1992. 2. J. Kovacevic and M. Vetterli, "Nonseparable two- and three-dimensional wavelets," IEEE Transactions on Signal Processing, vol. 43, no. 5, pp. 1269–1273, May 1995. 3. G. Uytterhoeven and A. Bultheel, "The Red-Black Wavelet Transform," in IEEE Signal Processing Symposium, pp. 191–194, 1998. 4. M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, Dec. 2005. 5. G. Kutyniok and D. Labate, "Shearlets: Multiscale Analysis for Multivariate Data," 2012. 6. V. Velisavljevic, B. Beferull-Lozano, M. Vetterli and P. L. Dragotti, "Directionlets: anisotropic multi-directional representation with separable filtering," IEEE Trans. on Image Proc., Jul. 2006. 7. E. P. Simoncelli and W. T. Freeman, "The Steerable Pyramid: A Flexible Architecture for Multi-Scale Derivative Computation," in IEEE Second Int'l Conf on Image Processing. Oct. 1995. 8. D. Barina, M. Kula and P. Zemcik, "Parallel wavelet schemes for images," J Real-Time Image Proc, vol. 16, no. 5, pp. 1365–1381, Oct. 2019.
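The quincunx lattice mentioned above can be illustrated with a short NumPy sketch. The checkerboard mask below is only an illustration of the non-separable sampling step (the sublattice generated by Q = [[1, 1], [1, -1]], of index 2); it does not implement any particular filter bank from the references.

import numpy as np

def quincunx_mask(shape):
    # sites (i, j) with i + j even: the quincunx (checkerboard) sublattice
    i, j = np.indices(shape)
    return (i + j) % 2 == 0

def quincunx_downsample(x):
    # keep only the samples on the quincunx lattice (half of the data);
    # any non-separable filtering would be applied before this step
    return x[quincunx_mask(x.shape)]

x = np.arange(16, dtype=float).reshape(4, 4)
y = quincunx_downsample(x)
assert y.size == x.size // 2  # |det Q| = 2, so the sampling rate is halved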
Wikipedia
In this paper, we introduce the class of ideals with $(d_1,\ldots,d_m)$-linear quotients generalizing the class of ideals with linear quotients. Under suitable conditions we control the numerical invariants of a minimal free resolution of ideals with $(d_1,\ldots,d_m)$-linear quotients. In particular we show that their first module of syzygies is a componentwise linear module.
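For orientation, here is a standard example of the classical notion being generalized; it is folklore and is not taken from the paper itself. The monomial ideal $I=(x^2,xy,y^2)$ in $k[x,y]$ has linear quotients with respect to the ordering $u_1=x^2$, $u_2=xy$, $u_3=y^2$ of its generators: the colon ideals $(u_1):u_2=(x)$ and $(u_1,u_2):u_3=(x)$ are generated by variables, i.e. by linear forms.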
CommonCrawl
In the diagram, what is the perimeter of $\triangle PQS$? [asy] size(5cm); import olympiad; // Lines pair q = (0, 0); pair r = (0, -5); pair p = (12, 0); pair s = (0, -20); draw(q--p--s--cycle); draw(r--p); // Labels label("$Q$", q, NW); label("$P$", p, NE); label("$R$", r, W); label("$S$", s, W); label("$5$", r / 2, W); label("$13$", r + (p - r) / 2, 1.2 * S); label("$37$", s + (p - s) / 2, SE); markscalefactor = 0.1; draw(rightanglemark(s, q, p)); [/asy] By the Pythagorean Theorem in $\triangle PQR$, $$PQ^2 = PR^2 - QR^2 = 13^2 - 5^2 = 144,$$so $PQ=\sqrt{144}=12$. By the Pythagorean Theorem in $\triangle PQS$, $$QS^2 = PS^2 - PQ^2 = 37^2 - 12^2 = 1225,$$so $QS = \sqrt{1225}=35$. Therefore, the perimeter of $\triangle PQS$ is $12+35+37=\boxed{84}$.
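A quick numeric check of the two Pythagorean steps (a verification sketch, not part of the original solution):

import math

PQ = math.sqrt(13**2 - 5**2)   # leg of right triangle PQR: 12.0
QS = math.sqrt(37**2 - PQ**2)  # leg of right triangle PQS: 35.0
print(PQ, QS, PQ + QS + 37)    # 12.0 35.0 84.0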
Math Dataset
Mumford measure In mathematics, a Mumford measure is a measure on a supermanifold constructed from a bundle of relative dimension 1|1. It is named for David Mumford. References • Voronov, Alexander A. (1988), "A formula for the Mumford measure in superstring theory", Akademiya Nauk SSSR. Funktsionalnyii Analiz I Ego Prilozheniya, 22 (2): 67–68, doi:10.1007/BF01077608, ISSN 0374-1990, MR 0947611
Wikipedia
Euler–Lotka equation In the study of age-structured population growth, probably one of the most important equations is the Euler–Lotka equation. Based on the age demographic of females in the population and female births (since in many cases it is the females that are more limited in the ability to reproduce), this equation allows for an estimation of how a population is growing. The field of mathematical demography was largely developed by Alfred J. Lotka in the early 20th century, building on the earlier work of Leonhard Euler. The Euler–Lotka equation, derived and discussed below, is often attributed to either of its origins: Euler, who derived a special form in 1760, or Lotka, who derived a more general continuous version. The equation in discrete time is given by $1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a)$ where $\lambda $ is the discrete growth rate, ℓ(a) is the fraction of individuals surviving to age a and b(a) is the number of offspring born to an individual of age a during the time step. The sum is taken over the entire life span of the organism. Derivations Lotka's continuous model A.J. Lotka in 1911 developed a continuous model of population dynamics as follows. This model tracks only the females in the population. Let B(t)dt be the number of births during the time interval from t to t+dt. Also define the survival function ℓ(a), the fraction of individuals surviving to age a. Finally define b(a) to be the birth rate for mothers of age a. The product B(t-a)ℓ(a) therefore denotes the number density of individuals born at t-a and still alive at t, while B(t-a)ℓ(a)b(a) denotes the number of births in this cohort, which suggest the following Volterra integral equation for B: $B(t)=\int _{0}^{t}B(t-a)\ell (a)b(a)\,da.$ We integrate over all possible ages to find the total rate of births at time t. We are in effect finding the contributions of all individuals of age up to t. We need not consider individuals born before the start of this analysis since we can just set the base point low enough to incorporate all of them. Let us then guess an exponential solution of the form B(t) = Qert. Plugging this into the integral equation gives: $Qe^{rt}=\int _{0}^{t}Qe^{r(t-a)}\ell (a)b(a)\,da$ or $1=\int _{0}^{t}e^{-ra}\ell (a)b(a)\,da.$ This can be rewritten in the discrete case by turning the integral into a sum producing $1=\sum _{a=\alpha }^{\beta }e^{-ra}\ell (a)b(a)$ letting $\alpha $ and $\beta $ be the boundary ages for reproduction or defining the discrete growth rate λ = er we obtain the discrete time equation derived above: $1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a)$ where $\omega $ is the maximum age, we can extend these ages since b(a) vanishes beyond the boundaries. From the Leslie matrix Let us write the Leslie matrix as: ${\begin{bmatrix}f_{0}&f_{1}&f_{2}&f_{3}&\ldots &f_{\omega -1}\\s_{0}&0&0&0&\ldots &0\\0&s_{1}&0&0&\ldots &0\\0&0&s_{2}&0&\ldots &0\\0&0&0&\ddots &\ldots &0\\0&0&0&\ldots &s_{\omega -2}&0\end{bmatrix}}$ where $s_{i}$ and $f_{i}$ are survival to the next age class and per capita fecundity respectively. Note that $s_{i}=\ell _{i+1}/\ell _{i}$ where ℓ i is the probability of surviving to age $i$, and $f_{i}=s_{i}b_{i+1}$, the number of births at age $i+1$ weighted by the probability of surviving to age $i+1$. Now if we have stable growth the growth of the system is an eigenvalue of the matrix since $\mathbf {n_{i+1}} =\mathbf {Ln_{i}} =\lambda \mathbf {n_{i}} $. 
Therefore, we can use this relationship row by row to derive expressions for $n_{i}$ in terms of the values in the matrix and $\lambda $. Introducing the notation $n_{i,t}$ for the population in age class $i$ at time $t$, we have $n_{1,t+1}=\lambda n_{1,t}$. However also $n_{1,t+1}=s_{0}n_{0,t}$. This implies that $n_{1,t}={\frac {s_{0}}{\lambda }}n_{0,t}.\,$ By the same argument we find that $n_{2,t}={\frac {s_{1}}{\lambda }}n_{1,t}={\frac {s_{0}s_{1}}{\lambda ^{2}}}n_{0,t}.$ Continuing inductively we conclude that generally $n_{i,t}={\frac {s_{0}\cdots s_{i-1}}{\lambda ^{i}}}n_{0,t}.$ Considering the top row, we get $n_{0,t+1}=f_{0}n_{0,t}+\cdots +f_{\omega -1}n_{\omega -1,t}=\lambda n_{0,t}.$ Now we may substitute our previous work for the $n_{i,t}$ terms and obtain: $\lambda n_{0,t}=\left(f_{0}+f_{1}{\frac {s_{0}}{\lambda }}+\cdots +f_{\omega -1}{\frac {s_{0}\cdots s_{\omega -2}}{\lambda ^{\omega -1}}}\right)n_{0,t}.$ Now substitute the definition of the per-capita fertility, $f_{i}=s_{i}b_{i+1}$, and divide through by the left-hand side $\lambda n_{0,t}$: $1={\frac {s_{0}b_{1}}{\lambda }}+{\frac {s_{0}s_{1}b_{2}}{\lambda ^{2}}}+\cdots +{\frac {s_{0}\cdots s_{\omega -1}b_{\omega }}{\lambda ^{\omega }}}.$ Now we note the following simplification. Since $s_{i}=\ell _{i+1}/\ell _{i}$ and $\ell _{0}=1$, we note that $s_{0}\ldots s_{i}={\frac {\ell _{1}}{\ell _{0}}}{\frac {\ell _{2}}{\ell _{1}}}\cdots {\frac {\ell _{i+1}}{\ell _{i}}}=\ell _{i+1}.$ This sum collapses to: $\sum _{i=1}^{\omega }{\frac {\ell _{i}b_{i}}{\lambda ^{i}}}=1,$ which is the desired result. Analysis of expression From the above analysis we see that the Euler–Lotka equation is in fact the characteristic polynomial of the Leslie matrix. We can analyze its solutions to find information about the eigenvalues of the Leslie matrix (which has implications for the stability of populations). Considering the continuous expression $f(r)=\int e^{-ra}\ell (a)b(a)\,da$ as a function of $r$, we can examine where it takes the value 1. We notice that as $r$ tends to negative infinity the function grows to positive infinity, and as $r$ tends to positive infinity the function approaches 0. Differentiating under the integral sign, the first derivative is $-\int ae^{-ra}\ell (a)b(a)\,da<0$ and the second derivative is $\int a^{2}e^{-ra}\ell (a)b(a)\,da>0$. The function is therefore decreasing, concave up and takes on all positive values. It is also continuous by construction, so by the intermediate value theorem it takes the value 1 exactly once. Therefore, there is exactly one real solution, which is therefore the dominant eigenvalue of the matrix, the equilibrium growth rate. This same derivation applies to the discrete case. Relationship to replacement rate of populations If we let λ = 1, the sum in the discrete formula becomes the replacement rate of the population, $\sum _{a=1}^{\omega }\ell (a)b(a)$. Further reading • Coale, Ansley J. (1972). The Growth and Structure of Human Populations. Princeton: Princeton University Press. pp. 61–70. ISBN 0-691-09357-1. • Hoppensteadt, Frank (1975). Mathematical Theories of Populations: Demographics, Genetics and Epidemics. Philadelphia: SIAM. pp. 1–5. ISBN 0-89871-017-0. • Kot, M. (2001). "The Lotka integral equation". Elements of Mathematical Ecology. Cambridge: Cambridge University Press. pp. 353–64. ISBN 0-521-80213-X. • Pollard, J. H. (1973). "The deterministic population models of T. Malthus, A. J. Lotka, and F. R. Sharpe and A. J. Lotka". Mathematical models for the growth of human populations. Cambridge University Press. pp. 22–36. ISBN 0-521-20111-X. 
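Numerical example Since the left-hand side of the discrete Euler–Lotka equation is strictly decreasing in λ, the growth rate can be computed by simple root finding. The following sketch uses a small hypothetical life table; the survivorship and fertility values are invented for illustration only.

import numpy as np

ages = np.array([1.0, 2.0, 3.0, 4.0])
l = np.array([0.8, 0.6, 0.4, 0.1])   # fraction surviving to age a
b = np.array([0.0, 1.2, 1.5, 0.8])   # offspring produced at age a

def euler_lotka(lam):
    # sum_a lam**(-a) * l(a) * b(a) - 1, strictly decreasing in lam
    return np.sum(lam ** (-ages) * l * b) - 1.0

lo, hi = 0.5, 5.0                    # euler_lotka(lo) > 0 > euler_lotka(hi)
for _ in range(100):                 # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if euler_lotka(mid) > 0 else (lo, mid)

print("growth rate lambda:", round(0.5 * (lo + hi), 4))
print("replacement rate R0:", float(np.sum(l * b)))  # lambda > 1 exactly when R0 > 1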
Leonhard Euler • Euler–Lagrange equation • Euler–Lotka equation • Euler–Maclaurin formula • Euler–Maruyama method • Euler–Mascheroni constant • Euler–Poisson–Darboux equation • Euler–Rodrigues formula • Euler–Tricomi equation • Euler's continued fraction formula • Euler's critical load • Euler's formula • Euler's four-square identity • Euler's identity • Euler's pump and turbine equation • Euler's rotation theorem • Euler's sum of powers conjecture • Euler's theorem • Euler equations (fluid dynamics) • Euler function • Euler method • Euler numbers • Euler number (physics) • Euler–Bernoulli beam theory • Namesakes • Category
Wikipedia
\begin{document} \title{\LARGE \vskip-8mm Chains of compact cylinders for cusp-generic\\ nearly integrable convex systems on ${\mathbb A}^3$} \author{Jean-Pierre Marco \thanks{Universit\'e Paris 6, 4 Place Jussieu, 75005 Paris cedex 05. E-mail: [email protected] }} \date{} \maketitle \begin{abstract} This paper is the first of a series of three dedicated to a proof of the Arnold diffusion conjecture for perturbations of {convex} integrable Hamiltonian systems on ${\mathbb A}^3={\mathbb T}^3\times{\mathbb R}^3$. We consider systems of the form $H({\theta},r)=h(r)+f({\theta},r)$, where $h$ is a $C^\kappa$ strictly convex and superlinear function on ${\mathbb R}^3$ and $f\in C^\kappa({\mathbb A}^3)$, $\kappa\geq2$. Given ${\bf e}>\mathop{\rm Min\,}\limits h$ and a finite family of arbitrary open sets $O_i$ in ${\mathbb R}^3$ intersecting $h^{-1}({\bf e})$, a {\em diffusion orbit} associated with these data is an orbit of $H$ which intersects each open set $\widehat O_i={\mathbb T}^3\times O_i\subset{\mathbb A}^3$. The first main result of this paper (Theorem I) states the existence (under cusp-generic conditions on $f$ in Mather's terminology) of ``chains of compact and normally hyperbolic invariant $3$-dimensional cylinders'' intersecting each $\widehat O_i$. Diffusion orbits drifting along these chains are then proved to exist in \cite{GM,M}. The second main result (Theorem II) consists in a precise description of the hyperbolic features of classical systems (sum of a quadratic kinetic energy and a potential) on ${\mathbb A}^2={\mathbb T}^2\times{\mathbb R}^2$, which is a crucial step to prove Theorem I. The cylinders are either diffeomorphic to ${\mathbb T}^2\times[0,1]$ or to the product of ${\mathbb T}$ with a sphere with three holes. A chain at energy ${\bf e}$ for $H$ is a finite family of such cylinders, which are contained in $H^{-1}({\bf e})$ and admit heteroclinic connections between them. The cylinders satisfy additional dynamical properties which ensure the existence, up to an arbitrarily small perturbation, of orbits of $H$ drifting along them (and so along the chain). The content of Theorem I is the following. Assuming $\kappa$ large enough, we prove that for every $\bf f$ in an {\em open dense} subset of the unit sphere in $C^\kappa({\mathbb A}^3)$, there is a lower semicontinuous threshold ${\boldsymbol \varepsilon}({\bf f})>0$ for which, when ${\varepsilon}\in\,]0,{\boldsymbol \varepsilon}(f)[$, the system $H=h+{\varepsilon} \bf f$ admits a chain at energy ${\bf e}$ which intersects each $\widehat O_i$. To prove this result we approximate the system $H$ by local normal forms near resonances, and we distinguish between ``strong double resonance'' points and ``simple resonance'' curves. In both cases we first detect normally hyperbolic objects invariant under the normal forms obtained by averaging with respect to two fast angles (in the simple resonance case) or a single fast angle (in the double resonance case). Along simple resonance curves, the approximate systems are one-parame\-t\-er families of pendulums on ${\mathbb A}$, while the main role at strong double resonance points is played by classical systems on ${\mathbb A}^2$, whose study in the generic case is the content of Theorem II. 
Given a generic classical system on ${\mathbb A}^2$, for any integer homology class~$c$ we first prove the existence of an associated ``chain of heteroclinically connected $2$-annuli realizing~$c$,'' which is asymptotic both to the critical energy (maximal value of the potential) and to the infinite energy. We then prove the existence of a singular annulus, and we finally prove that for any $c$, the associated chain admits heteroclinic connections with that singular annulus. Along simple resonance curves, the normalized cylinders are the product of the one-parameter family of fixed points of the pendulums with the torus ${\mathbb T}^2$ of fast angles, while near the double resonance points the cylinders are the product of the annuli (or the singular annuli) in the classical system with the circle ${\mathbb T}$ of the fast angle. We get the corresponding invariant objects for $H$ by normally hyperbolic persistence and KAM type results to deal with the invariance of the boundaries ($2$-dimensional tori). We finally prove the existence of a rich homoclinic and heteroclinic structure for these objects, which gives rise to the chains. \end{abstract} \begin{center} {\bf\LARGE Introduction and main results} \end{center} \vskip.5cm Given $n\geq1$, we denote by ${\mathbb A}^n={\mathbb T}^n\times{\mathbb R}^n$ the cotangent bundle of the torus ${\mathbb T}^n$, endowed with its natural angle-action coordinates $({\theta},r)$ and its usual symplectic structure. This paper is the first of a series of three dedicated to a proof of the Arnold diffusion conjecture for nearly integrable Hamiltonian systems on ${\mathbb A}^3$, in the ``convex setting'' which was introduced by Mather. Two other approaches of the same problem are developped in \cite{C,KZ}. In this paper we focus on the geometric part of our construction, that is, the existence of a ``hyperbolic skeleton'' for diffusion, formed by chains of compact invariant normally hyperbolic $3$-dimensional cylinders, whose existence is the content of Theorem~I. The proof of the existence of diffusion orbits drifting along chains is the object of \cite{GM,M}. The proof of Theorem~I necessitates in particular a detailed analysis of the hyperbolic properties of generic classical systems (sum of a quadratic energy and a potential function) on ${\mathbb A}^2$, which constitutes the second main result of the present paper (Theorem~II). \section{The general setting} \paraga In \cite{A64}, Arnold introduced the first example of an ``unstable'' family of Hamiltonian systems on ${\mathbb A}^3$, namely: \begin{equation}\label{eq:Arnold1} H_{{\varepsilon}}({\theta},r)=r_1+{\tfrac{1}{2}} (r_2^2+r_3^2)+{\varepsilon}(\cos {\theta}_3-1)+\mu({\varepsilon})(\cos {\theta}_3-1) g({\theta}), \end{equation} where $g$ is a suitably chosen trigonometric polynomial, ${\varepsilon}>0$ is small enough and $\mu({\varepsilon})<\!\!<{\varepsilon}$. The main result of Arnold is the existence of ${\varepsilon}_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$, the system $H_{\varepsilon}$ admits an ``unstable solution'' $\gamma_{\varepsilon}(t)=\big({\theta}(t),r(t)\big)$ such that \begin{equation}\label{eq:unstablesol} r_2(0)<0,\qquad r_2(T_{\varepsilon})>1, \end{equation} for some (large) $T_{\varepsilon}$. 
In view of this result and the associated constructions, Arnold conjectured (see \cite{A94}) that for ``typical'' systems of the form $H_{\varepsilon}({\theta},r)=h(r)+{\varepsilon} f({\theta},r,{\varepsilon})$ on ${\mathbb A}^n$, $n\geq 3$, the projection in action of some orbits should visit any element of a prescribed collection of arbitrary open sets intersecting a connected component of a level set of $h$. Orbits experiencing this behavior are said to be {\em diffusion orbits}. This conjecture motivated a number of works, first in a sligthly different context. Namely, setting ${\varepsilon}=1$ in (\ref{eq:Arnold1}) yields a simpler class of systems for which the unperturbed part no longer depends on the actions only, but still remains completely integrable (with nondegenerate hyperbolicity). It became a challenging question to prove the existence of unstable solutions (\ref{eq:unstablesol}) for the slightly more general class of systems \begin{equation}\label{eq:Arnold2} G_{\mu}({\theta},r)=r_1+{\tfrac{1}{2}} (r_2^2+r_3^2)+(\cos {\theta}_3-1)+\mu g({\theta},r), \end{equation} where $g$ belongs to a residual subset of a small enough ball in some appropriate function space (finitely or infinitely differentiable, Gevrey, analytic). This setting (with its natural generalizations) is now called the {\em a priori unstable} case of Arnold diffusion. In \cite{GM} we set out a geometric framework to deal with such systems, see \cite{B08,BKZ13,BT99,CY09,DLS00,DLS06a,DLS06b,FM03,GT,GL06,GR13,GLS,M02,T04} amongst others for different approaches. \paraga In this paper we focus on the so-called {\em a priori} stable case, that is, we consider perturbations of integrable systems on ${\mathbb A}^3$ which depend only on the actions. Our goal is to analyze the hyperbolic structure of such systems (under nondegeneracy conditions) and prove the existence of ``many'' $3$ and $4$ dimensional hyperbolic invariant submanifolds with a rich homoclinic structure, which in addition form well-defined ``chains'' (in the spirit of the initial approach of Arnold in \cite{A64}). This geometric framework will in turn enable us in \cite{M} to use in the {\em a priori} stable setting the ``{\em a priori} unstable techniques'' introduced in \cite{GM}, and prove the existence of orbits drifting along such chains. \paraga Let us briefly describe our setting, beginning with the functional spaces. For $2\leq \kappa <+\infty$, we equip $C^\kappa({\mathbb A}^3):=C^\kappa({\mathbb A}^3,{\mathbb R})$ with the uniform seminorm $$ \norm{f}_\kappa=\sum_{k\in{\mathbb N}^6,\ 0\leq \abs{k}\leq \kappa}\norm{\partial^kf}_{C^0({\mathbb A}^3)}\leq+\infty $$ and we set $ C_b^\kappa({\mathbb A}^3)=\big\{f\in C^\kappa({\mathbb A}^3)\mid \norm{f}_\kappa<+\infty\big\}, $ so that $\big(C_b^\kappa({\mathbb A}^3),\norm{\ }_\kappa\big)$ is a Banach algebra. We consider systems on ${\mathbb A}^3$, of the form \begin{equation}\label{eq:hampert1} H({\theta},r)=h(r)+ f({\theta},r), \end{equation} where $h:{\mathbb R}^3\to{\mathbb R}$ is $C^\kappa$ and the perturbation $f\in C_b^\kappa({\mathbb A}^3)$ is small enough. \paraga A first restriction imposed by Mather in \cite{Mat04} in order to use variational methods is that the unperturbed part $h$ is strictly convex with superlinear growth at infinity (that is, $\lim_{\norm{r}\to+\infty} h(r)/\norm{r}\to+\infty$). Such Hamiltonians are referred to as {\em Tonelli Hamiltonians}. 
We will also limit ourselves to Tonelli Hamiltonians here, since convexity will be necessary in our constructions in the neighborhood of double resonance points.
\paraga A usual way to deal with the smallness condition on $f$, as already illustrated by (\ref{eq:Arnold1}), is to prove the occurrence of diffusion orbits for all systems in ``segments'' originating at $h$, of the form
\begin{equation}\label{eq:hampert2}
\big\{H_{\varepsilon}({\theta},r)=h(r)+ {\varepsilon} f({\theta},r)\mid {\varepsilon}\in\,]0,{\varepsilon}_0]\big\}
\end{equation}
where $f$ is a fixed function. It is then natural to let the smallness threshold ${\varepsilon}_0$ explicitly depend on $f$; however, this would not be appropriate in our setting, since it seems difficult to prove the existence of diffusion over {\em whole} segments such as (\ref{eq:hampert2}). To take this observation into account, following Mather, we use a more global framework and introduce ``anisotropic balls'' in which the diffusion phenomenon can be expected to occur generically. Let ${\mathcal S}^\kappa$ be the unit sphere in $C_b^\kappa({\mathbb A}^3)$. Given
$
{\boldsymbol \varepsilon}_0:{\mathcal S}^\kappa\to [0,+\infty[
$
(a ``threshold function''), we define the associated ${\boldsymbol \varepsilon}_0$-ball:
\begin{equation}\label{eq:cuspball}
{\mathscr B}^\kappa({\boldsymbol \varepsilon}_0):=
\big\{{\varepsilon} {\bf f} \mid {\bf f}\in{\mathcal S}^\kappa,\ {\varepsilon}\in\,]0,{\boldsymbol \varepsilon}_0({\bf f})[\big\}.
\end{equation}
\paraga This yields the following version of the diffusion conjecture\footnote{Mather's formulation is in fact more precise and involved}, to be compared with \cite{A94}.
\vskip2mm
{\bf Diffusion conjecture in the convex setting.} {\em Consider a $C^\kappa$ integrable Tonelli Hamiltonian $h$ on ${\mathbb A}^3$. Fix an energy~${\bf e}$ larger than $\mathop{\rm Min\,}\limits h$ and a finite family of arbitrary open sets $O_1,\ldots,O_m$ which intersect $h^{-1}({\bf e})$. Then for $\kappa\geq {\kappa_0}$ large enough, there exists a lower semicontinuous function
\begin{equation}
{\boldsymbol \varepsilon}_0:{\mathcal S}^\kappa\to[0,+\infty[
\end{equation}
with positive values on an open dense subset of ${\mathcal S}^\kappa$ such that for $f$ in an open and dense subset of ${\mathscr B}^\kappa({\boldsymbol \varepsilon}_0)$, the system
\begin{equation}
H({\theta},r)=h(r)+ f({\theta},r)
\end{equation}
admits an orbit which intersects each ${\mathbb T}^3\times O_i$.
}
\paraga The zeros of ${\boldsymbol \varepsilon}_0$ correspond to directions along which diffusion cannot occur. Simple examples show that such directions exist in general: for instance if $h(r)={\tfrac{1}{2}}(r_1^2+r_2^2+r_3^2)$, each system $H_{\varepsilon}=h+{\varepsilon} f$ with $f({\theta})=\sin {\theta}_i$ ($i=1,2,3$) is completely integrable and does not admit diffusion orbits for ${\varepsilon}$ small enough. Note also that since ${\boldsymbol \varepsilon}_0$ is assumed to be lower semicontinuous, the associated ball is open in $C^\kappa_b({\mathbb A}^3)$. In view of the shape of ${\mathscr B}^\kappa({\boldsymbol \varepsilon}_0)$, a residual subset in such a ball is said to be {\em cusp-residual} and a property which holds on a cusp-residual subset is said to be {\em cusp-generic}.
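For definiteness, consider the case $f({\theta})=\sin{\theta}_1$ of the previous example. The Hamiltonian $H_{\varepsilon}={\tfrac{1}{2}}(r_1^2+r_2^2+r_3^2)+{\varepsilon}\sin{\theta}_1$ does not depend on ${\theta}_2$ and ${\theta}_3$, so that $r_2$, $r_3$ and ${\tfrac{1}{2}} r_1^2+{\varepsilon}\sin{\theta}_1$ are pairwise commuting first integrals. In particular, along any orbit
$$
\big|r_1(t)^2-r_1(0)^2\big|=2{\varepsilon}\,\big|\sin{\theta}_1(0)-\sin{\theta}_1(t)\big|\leq 4{\varepsilon},
$$
so that the action variables remain in an $O(\sqrt{\varepsilon})$-neighborhood of their initial values and no drift of order $1$ in the actions can occur.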
\begin{figure}
\caption{A generalized ball}
\label{Fig:genball}
\end{figure}
\vskip-5mm
\paraga Our purpose in this paper is to set out a list of nondegeneracy conditions on the perturbation $f$ which yield the existence of ``a small amount of hyperbolicity'' in the system $H=h+f$, from which we can deduce the existence of chains of normally hyperbolic objects which intersect the collection of open sets ${\mathbb T}^3\times O_i$. We then prove that these conditions are satisfied for {\em any}\footnote{an additional perturbation will be necessary in order to get the diffusion orbits drifting along the chains, which explains the restriction to residual subsets of generalized balls in the previous conjecture} $f$ in some generalized ball ${\mathscr B}^\kappa({\boldsymbol \varepsilon}_0)$, where the threshold function satisfies the conditions of the previous conjecture. This is the content of Theorem~I stated in Section~\ref{sec:TheoremI} of this Introduction. As in Nekhoroshev's approach to exponential stability, our analysis requires us to discriminate between ``strong double resonances'' and ``almost simple resonances'' of the unperturbed Hamiltonian $h$. While the analysis along simple resonances is quite straightforward, the neighborhood of strong double resonances requires a precise description of the hyperbolic behavior of generic classical systems on the annulus~${\mathbb A}^2$. This is the second main result of this paper (Theorem~II), stated in Section~\ref{sec:TheoremII} of this Introduction.
\setcounter{paraga}{0}
\section{Cylinders, chains and Theorem I}\label{sec:TheoremI}
\paraga Before stating Theorem I we briefly describe the various objects involved in our construction. More precise definitions are given in Section~\ref{sec:normhyp} of Part I. Let $X$ be a $C^1$ complete vector field on a smooth manifold $M$, with flow $\Phi$. Let $p$ be an integer $\geq 1$.
\vskip1.5mm
${\bullet}$ We say that ${\mathscr C}\subset M$ is a {\em $C^p$ invariant cylinder with boundary} for $X$ if ${\mathscr C}$ is a submanifold of $M$, $C^p$--diffeomorphic to ${\mathbb T}^2\times [0,1]$, which is invariant under the flow of $X$: $\Phi^t({\mathscr C})={\mathscr C}$ for all $t\in{\mathbb R}$.
\vskip1.5mm
${\bullet}$ We denote by $\boldsymbol{\mathsf Y}$ any realization of the two-sphere $S^2$ minus three open discs with nonintersecting closures, so that $\partial \boldsymbol{\mathsf Y}$ is the union of three circles. We say that ${\mathscr C}_{\bullet}\subset M$ is an {\em invariant singular cylinder} for $X$ if ${\mathscr C}_{\bullet}$ is a $C^1$ submanifold of $M$, diffeomorphic to ${\mathbb T}\times \boldsymbol{\mathsf Y}$ and invariant under $\Phi$. The boundary of a singular cylinder is the disjoint union of three tori.
\vskip1.5mm
Throughout this paper we will consider vector fields generated by Hamiltonian functions $H\in C^\kappa({\mathbb A}^3)$, $\kappa\geq 2$. The cylinders or singular cylinders will be contained in regular levels of~$H$.
\begin{figure}
\caption{$3$-dimensional cylinder and singular cylinder}
\label{Fig:cylsingcyl}
\end{figure}
\paraga The notion of normal hyperbolicity for submanifolds with boundary requires some care. We introduce in Section~\ref{sec:normhyp} of Part I and Appendix~\ref{app:normhyp} a simple definition for the normal hyperbolicity of cylinders and singular cylinders, which coincides with the usual one (see \cite{C04,C08}) but is better adapted to our subsequent constructions.
In particular, normally hyperbolic cylinders and singular cylinders admit well-defined $4$-dimensional stable and unstable manifolds, contained in their energy level.
\paraga In addition to the normal hyperbolicity, we will require our cylinders to admit global Poincar\'e sections, diffeomorphic to ${\mathbb T}\times[0,1]$, whose associated Poincar\'e maps satisfy a twist condition (with a similar property for singular cylinders). This enables us to define a particular class of $2$-dimensional invariant tori contained in these cylinders, which we call {\em essential tori}. Analogous (but slightly more involved) notions will be defined for singular cylinders. Moreover, we will require specific homoclinic conditions to be satisfied by the cylinders, which yields the notion of {\em admissible cylinders}.
\paraga Finally, we will introduce various heteroclinic conditions which will have to be satisfied by pairs of cylinders. This makes it possible to define {\em admissible chains}, that is, finite families $({\mathscr C}_k)_{1\leq k\leq k_*}$ of admissible cylinders or singular cylinders, in which two consecutive elements satisfy these heteroclinic conditions.
\paraga The main result of Part~I is the following.
\vskip2mm
\noindent {\bf Theorem I. (Cusp-generic existence of admissible chains.)} {\it Consider a $C^\kappa$ integrable Tonelli Hamiltonian $h$ on ${\mathbb A}^3$. Fix ${\bf e}>\mathop{\rm Min\,}\limits h$ and a finite family of open sets $O_1,\ldots,O_m$ which intersect $h^{-1}({\bf e})$. Fix $\delta>0$. Then for $\kappa\geq \kappa_0$ large enough, there exists a lower semicontinuous function
$$
{\boldsymbol \varepsilon}_0:{\mathcal S}^\kappa\to{\mathbb R}^+
$$
with positive values on an open dense subset of ${\mathcal S}^\kappa$ such that for $f\in{\mathscr B}^\kappa({\boldsymbol \varepsilon}_0)$ the system
\begin{equation}\label{eq:hamstatement}
H({\theta},r)=h(r)+ f({\theta},r)
\end{equation}
admits an admissible chain of cylinders and singular cylinders, such that each open set ${\mathbb T}^3\times O_k$ contains the $\delta$-neighborhood in ${\mathbb A}^3$ of some essential torus of the chain.}
\vskip3mm
\paraga One can be more precise and localize the previous chain. Since $h$ is a Tonelli Hamiltonian, one readily checks that $\omega:=\nabla h$ is a diffeomorphism from ${\mathbb R}^3$ onto ${\mathbb R}^3$, and that the level set $h^{-1}({\bf e})$ is diffeomorphic to $S^2$. Given an indivisible vector $k\in{\mathbb Z}^3\setminus\{0\}$, set
$$
{\Gamma}_k=\omega^{-1}(k^\bot)\cap h^{-1}({\bf e}),
$$
where $k^\bot$ is the plane orthogonal to $k$ for the Euclidean structure of ${\mathbb R}^3$. Then one checks that ${\Gamma}_k$ is diffeomorphic to a circle, and that if $k$ and $k'$ are linearly independent then ${\Gamma}_k$ and ${\Gamma}_{k'}$ intersect at exactly two points (such intersection points are said to be {\em double resonance points}). Since the rational directions are dense, it is possible to choose a family $k_1,\ldots,k_{m-1}$ of indivisible and pairwise independent vectors of ${\mathbb Z}^3$ such that
\vskip1.5mm
${\bullet}$ ${\Gamma}_{k_1}$ intersects $O_1$ and ${\Gamma}_{k_{m-1}}$ intersects $O_{m}$;
\vskip1.5mm
${\bullet}$ for $2\leq i\leq m-1$, ${\Gamma}_{k_{i-1}}\cap {\Gamma}_{k_{i}}$ contains a point $a_i\in O_i$.
\vskip1.5mm
\noindent Fix $a_1\in {\Gamma}_{k_1}\cap O_1$ and $a_m\in {\Gamma}_{k_{m-1}}\cap O_m$. Fix an arbitrary orientation on each circle ${\Gamma}_{k_i}$ and let $[a_i,a_{i+1}]_{{\Gamma}_{k_i}}$ be the segment of ${\Gamma}_{k_i}$ bounded by $a_i$ and $a_{i+1}$ according to this orientation.
Set finally
$$
{\boldsymbol \Gamma}=\bigcup_{1\leq i\leq m-1}[a_i,a_{i+1}]_{{\Gamma}_{k_i}}.
$$
\begin{figure}
\caption{A ``broken line'' ${\boldsymbol \Gamma}$ of resonance arcs}
\label{Fig:brokenline }
\end{figure}
We will prove that one can choose ${\boldsymbol \varepsilon}_0$ in Theorem I so that for $f\in {\mathscr B}^\kappa({\boldsymbol \varepsilon}_0)$ the projection to ${\mathbb R}^3$ of the admissible chain is located in a $\rho(f)$-tubular neighborhood of ${\boldsymbol \Gamma}$, whose radius $\rho(f)$ tends to $0$ when $f\to 0$ in $C^\kappa({\mathbb A}^3)$.
\setcounter{paraga}{0}
\section{Generic hyperbolic properties of classical systems on ${\mathbb A}^2$}\label{sec:TheoremII}
A {\em classical system on ${\mathbb A}^2$} is a Hamiltonian of the form
\begin{equation}\label{eq:classham}
C({\theta},r)={\tfrac{1}{2}} T(r)+ U({\theta}),\qquad ({\theta},r)\in{\mathbb A}^2
\end{equation}
where $T$ is a positive definite quadratic form on ${\mathbb R}^2$ and $U$ a $C^\kappa$ potential function on ${\mathbb T}^2$, where $\kappa\geq 2$. In the sequel we will require the potential $U$ to admit a single maximum at some $x^0$, which is nondegenerate in the sense that the Hessian of $U$ at $x^0$ is negative definite. Consequently, the lift of $x^0$ to the zero section of ${\mathbb A}^2$ is a hyperbolic fixed point which we denote by $O$. We set $\overline e=\mathop{\rm Max\,}\limits U$ and we say that $\overline e$ is the {\em critical energy} for $C$. Such systems appear (generically), {\em up to a non-symplectic rescaling $r-r^0=\sqrt{\varepsilon}\, \overline r$}, in the neighborhood of a double resonance point $r^0$ of the initial system (\ref{eq:hamstatement}), as the main part of normal forms (we did not change the notation of the variables here). The energy of $C$ is not directly related to the initial energy ${\bf e}$ of (\ref{eq:hamstatement}), but the difference $e-\overline e$ should rather be thought of as the distance to the double resonance point (in projection to the action space) rescaled by the factor $\sqrt{\varepsilon}$. The aim of Part II is to describe some hyperbolic properties of $C$, when {\em $T$ is fixed} and $U$ belongs to a residual subset of $C^\kappa({\mathbb T}^2)$, $\kappa$ large enough.
\paraga The following definition will be used throughout the paper.
\begin{Def}\label{def:ann}
Let $c\in H_1({\mathbb T}^2,{\mathbb Z})$. Let $I\subset{\mathbb R}$ be an interval. An {\em annulus for $X^C$ realizing $c$ and defined over $I$} is a $2$-dimensional submanifold ${\mathsf A}$, contained in $C^{-1}(I)\subset{\mathbb A}^2$, such that for each $e\in I$, ${\mathsf A}\cap C^{-1}(e)$ is the orbit of a periodic solution $\gamma_e$ of $X^C$, which is hyperbolic in $C^{-1}(e)$ and such that the projection $\pi\circ\gamma_e$ on ${\mathbb T}^2$ realizes $c$. We also require the period of the orbits to increase with the energy and that for each $e\in I$, the periodic orbit $\gamma_e$ admits a homoclinic orbit along which $W^\pm(\gamma_e)$ intersect transversely in $C^{-1}(e)$. Finally, we require the existence of a finite partition $I=I_1\cup\cdots\cup I_n$ into consecutive intervals such that the previous homoclinic orbit varies continuously for $e\in I_i$, $1\leq i\leq n$.
\end{Def}
When $I$ is compact, the annulus ${\mathsf A}$ is clearly normally hyperbolic in the usual sense (the boundary causes no trouble in this simple setting). The stable and unstable manifolds of ${\mathsf A}$ are well-defined, as the unions of those of the periodic solutions $\gamma_e$.
Moreover, ${\mathsf A}$ can be continued to an annulus defined over a slightly larger interval $I'\supset I$. In the aforementioned normalization process of (\ref{eq:hamstatement}) near a double resonance point $r^0$, the interval over which an annulus is defined will be crucial for its localization with respect to~$r^0$.
\paraga Note that, due to the reversibility of $C$, the solutions of the vector field $X^C$ occur in ``opposite pairs'', whose time parametrizations are exchanged by the symmetry $t\mapsto-t$.
\begin{Def}\label{def:singann}
Let $c\in H_1({\mathbb T}^2,{\mathbb Z})\setminus\{0\}$. A {\em singular annulus for $X^C$ realizing $\pm c$} is a $C^1$ compact invariant submanifold $\boldsymbol{\mathsf Y}$ of ${\mathbb A}^2$, diffeomorphic to the sphere $S^2$ minus three open discs with pairwise disjoint closures (so that $\partial \boldsymbol{\mathsf Y}$ is the disjoint union of three circles), such that there exist constants $e_*<\overline e<e^*$ which satisfy:
\vskip1.5mm
${\bullet}$ $\boldsymbol{\mathsf Y}\cap\, C^{-1}(\overline e)$ is the union of the hyperbolic fixed point $O$ and a pair of opposite homoclinic orbits,
\vskip1.5mm
${\bullet}$ $\boldsymbol{\mathsf Y}\cap C^{-1}(]\overline e,e^*])$ admits two connected components $\boldsymbol{\mathsf Y}^+$ and $\boldsymbol{\mathsf Y}^-$, which are annuli defined over the interval $]\overline e,e^*]$ and realizing $c$ and $-c$ respectively,
\vskip1.5mm
${\bullet}$ $\boldsymbol{\mathsf Y}^0=\boldsymbol{\mathsf Y}\cap C^{-1}([e_*,\overline e[)$ is an annulus realizing the null class $0$.
\end{Def}
\begin{figure}
\caption{A singular $2$-dimensional annulus}
\label{Fig:singcyl}
\end{figure}
\vskip-2mm
A singular annulus, endowed with its induced dynamics, is essentially the phase space of a simple pendulum from which an open neighborhood of the elliptic fixed point has been removed. According to the remark on the interpretation of the energy of $C$, a singular annulus is to be thought of as located ``at the center'' of the double resonance for the initial system (\ref{eq:hamstatement}).
\paraga We will need the following notion of chains\footnote{we keep the same terminology as for the cylinders, with a slightly different sense here} of annuli for $C$, from which we will deduce the existence and properties of the chains of cylinders near the double resonance points.
\begin{Def}\label{def:chains}
Let $c\in H_1({\mathbb T}^2,{\mathbb Z})$. We say that a family $(I_i)_{1\leq i\leq i_*}$ of nontrivial intervals, contained and closed in the energy interval $]\overline e,+\infty[$, is {\em ordered} when $\mathop{\rm Max\,}\limits I_i=\mathop{\rm Min\,}\limits I_{i+1}$ for $1\leq i\leq i_*-1$. A {\em chain of annuli realizing $c$} is a family $({\mathsf A}_i)_{1\leq i\leq i_*}$ of annuli realizing $c$, defined over an ordered family $(I_i)_{1\leq i\leq i_*}$, with the additional property
$$
W^-({\mathsf A}_{i})\cap W^+({\mathsf A}_{i+1})\neq \emptyset,\qquad W^+({\mathsf A}_{i})\cap W^-({\mathsf A}_{i+1})\neq \emptyset,
$$
for $1\leq i\leq i_*-1$.
\end{Def}
The last condition is equivalent to assuming that the boundary periodic orbits of ${\mathsf A}_i$ and ${\mathsf A}_{i+1}$ at energy $e=\mathop{\rm Max\,}\limits I_i=\mathop{\rm Min\,}\limits I_{i+1}$ admit heteroclinic orbits\footnote{but the previous formulation is more appropriate when hyperbolic continuations of the annuli are involved}.
\paraga We can now state the main result of Part II.
We say that $c\in H_1({\mathbb T}^2,{\mathbb Z})\setminus\{0\}$ is {\em primitive} when the equality $c=mc'$ with $m\in{\mathbb Z}$ and $c'\in H_1({\mathbb T}^2,{\mathbb Z})$ implies $m=\pm1$. We denote by ${\bf H}_1({\mathbb T}^2,{\mathbb Z})$ the set of primitive homology classes, by ${\bf d}$ the Hausdorff distance for compact subsets of ${\mathbb R}^2$ and by $\Pi:{\mathbb A}^2\to{\mathbb R}^2$ the canonical projection.
\vskip3mm
\noindent{\bf Theorem II.} {\bf (Generic hyperbolic properties of classical systems).} {\it Let $T$ be a positive definite quadratic form on ${\mathbb R}^2$ and for $\kappa\ge2$, let ${\mathscr U}^\kappa_0\subset C^\kappa({\mathbb T}^2)$ be the set of potentials with a single and nondegenerate maximum. Then for $\kappa\geq\kappa_0$ large enough, there exists a residual subset
\begin{equation}\label{eq:defUT}
{\mathscr U}(T)\subset{\mathscr U}_0^\kappa
\end{equation}
in $C^\kappa({\mathbb T}^2)$ such that for $U\in {\mathscr U}(T)$, the associated classical system $C={\tfrac{1}{2}} T+U$ satisfies the following properties.
\begin{enumerate}
\item For each $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$ there exists a chain ${\bf A}(c)=({\mathsf A}_0,\ldots,{\mathsf A}_m)$ of annuli realizing~$c$, defined over ordered intervals $I_0,\ldots,I_m$, such that the first and last intervals are of the form
$$
I_0=\,]\mathop{\rm Max\,}\limits U,e_m]\quad \textit{and}\quad I_m=[e_P,+\infty[,
$$
for suitable constants $e_m$ and $e_P$ (which we call the Poincar\'e energy).
\item Given two primitive classes $c\neq c'$, there exists ${\sigma}\in\{-1,+1\}$ such that the two chains ${\bf A}(c)=({\mathsf A}_i)_{0\leq i\leq m}$ and ${\bf A}(\sigma c')=({\mathsf A}'_i)_{0\leq i\leq m'}$ satisfy
$$
W^-({\mathsf A}_0)\cap W^+({\mathsf A}'_0)\neq\emptyset \quad \textit{and}\quad W^-({\mathsf A}'_0)\cap W^+({\mathsf A}_0)\neq\emptyset,
$$
both heteroclinic intersections being transverse in ${\mathbb A}^2$.
\item There exists a singular annulus $\boldsymbol{\mathsf Y}$ which admits transverse heteroclinic connections with the first annulus of the chain ${\bf A}(c)$, for all $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$.
\item Under the canonical identification of $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$ and for $e>0$, let us set, for a given primitive class $c\sim(c_1,c_2)\in{\mathbb Z}^2$:
$$
Y_c(e)=\frac{\sqrt {2e}\,c}{\sqrt{c_1^2+c_2^2}}\in{\mathbb R}^2.
$$
Let ${\bf A}(c)=({\mathsf A}_0,\ldots,{\mathsf A}_m)$ be the associated chain and set ${\Gamma}_e={\mathsf A}_m\cap C^{-1}(e)$ for $e\in [e_P,+\infty[$. Then
$$
\lim_{e\to+\infty}{\bf d}\big(\Pi({\Gamma}_e),\{Y_c(e)\}\big)=0.
$$
\end{enumerate}
}
\vskip2mm
We say that a chain with $I_0$ and $I_m$ as in the first item is {\em biasymptotic to $\overline e:=\mathop{\rm Max\,}\limits U$ and to $+\infty$}. We will consider not only chains formed by annuli, but also ``generalized'' ones in which we will allow one annulus to be singular. With this terminology, one can rephrase the content of items 1 and 3 of the previous theorem in the following concise way: for $U\in{\mathscr U}(T)$ and for each pair of classes $c,c'\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$, there exists a generalized chain:
$$
{\mathsf A}_m\leftrightarrow\cdots\leftrightarrow{\mathsf A}_1\leftrightarrow\boldsymbol{\mathsf Y}\leftrightarrow{\mathsf A}'_1\leftrightarrow\cdots\leftrightarrow{\mathsf A}'_{m'}
$$
(where $\leftrightarrow$ stands for the heteroclinic connections) which is biasymptotic to $+\infty$ and whose two branches realize $c$ and $c'$ respectively.
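Item 4 admits a simple heuristic interpretation, which we state only as an illustration in the model case where $T$ is the Euclidean form and the potential is neglected (of course $U\equiv0$ does not satisfy our standing assumptions). For $C_0({\theta},r)={\tfrac{1}{2}}(r_1^2+r_2^2)$, the periodic orbits realizing $c=(c_1,c_2)$ in the level $C_0^{-1}(e)$ are the linear orbits ${\theta}(t)={\theta}(0)+t\,r$ with $r$ positively proportional to $c$ and $r_1^2+r_2^2=2e$, that is, $r=Y_c(e)$. At high energy the potential becomes negligible, and item 4 expresses that the projection in action of the last annulus of ${\bf A}(c)$ indeed concentrates around this unperturbed value.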
The existence of such generalized chains is indeed the main ingredient of our subsequent constructions: it yields the part of the chains of cylinders located in the neighborhood of the ``double resonance points''. Item 4 will enable us to precisely localize the extremal cylinders, while item 2, which we find interesting in itself, will not be used in the construction of our chains.
\vskip2mm
In the $r$--plane, one therefore gets the following symbolic picture for the projection of 6 generalized chains of annuli, where the annuli are represented by fat segments, the singular annulus by a fat segment with a circle and the various heteroclinic connections are represented by $\leftrightarrow$.
\begin{figure}
\caption{Projections in action of chains of annuli}
\label{fig:classicannuli}
\end{figure}
\vskip-3mm
The projections of the annuli on the action space are in fact more complicated than lines: they are rather $2$--dimensional submanifolds with boundary, which tend to a line when the energy grows to infinity.
\vskip3mm
\section{Outline of the proofs}
\setcounter{paraga}{0}
Part I essentially relies on the result of Part II, which will be described separately.
\subsection{Outline of the proof of Theorem I}
\vskip2mm\noindent ${\bullet}$ In this description we look at a simplified model of the form $H_{\varepsilon}=h+{\varepsilon} f$, where we assume $h(r)={\tfrac{1}{2}}(r_1^2+r_2^2+r_3^2)$. We fix an energy ${\bf e}>0$ and consider the broken line ${\boldsymbol \Gamma}$ defined in Section~\ref{sec:TheoremI}. Fix an arc ${\Gamma}={\Gamma}_{k_i}$ from ${\boldsymbol \Gamma}$ and assume, again for simplicity, that $k_i=(0,0,1)$, so that ${\Gamma}$ is contained in the plane $r_3=0$, and $r\in{\Gamma}$ if and only if:
$$
\omega(r)=\nabla h(r)=(r_1,r_2,0),\qquad h(r_1,r_2,0)={\bf e}.
$$
One can assume without loss of generality that the endpoints of ${\Gamma}$ are double resonance points, that is, the frequency $\widehat\omega(r):=(r_1,r_2)$ lies on a rational line of ${\mathbb R}^2$. To prove the existence of cylinders whose projection in action lies along ${\Gamma}$, we will first average the perturbation {\em as much as possible} in order to get simplified systems which admit cylinders. We then use normally hyperbolic persistence to prove that these cylinders give rise to cylinders in the initial system, provided that the averaged systems are close enough to the initial one.
\vskip2mm\noindent ${\bullet}$ Given $r^0\in{\Gamma}$, when $\widehat\omega(r^0)$ is ``sufficiently nonresonant'', one proves that the system $H_{\varepsilon}$ is conjugate to the normal form
\begin{equation}\label{eq:exnormform}
N_s({\theta},r)=h(r)+{\varepsilon} V({\theta}_3,r)+R_s({\theta},r,{\varepsilon})
\end{equation}
in the neighborhood of ${\mathbb T}^3\times \{r^0\}$, with
\begin{equation}
\qquad V({\theta}_3,r):=\int_{{\mathbb T}^2}f\big((\widehat{\theta},{\theta}_3),r\big)\,d\widehat{\theta},\qquad \widehat{\theta}=({\theta}_1,{\theta}_2),
\end{equation}
and where $R_s$ is small in some $C^k$ topology.
\vskip2mm\noindent ${\bullet}$ When $r$ varies on a small closed segment $S\subset {\Gamma}$ around $r^0$, the truncated normal form
\begin{equation}\label{eq:truncnormform}
{\tfrac{1}{2}}(r_1^2+r_2^2)+\big[{\tfrac{1}{2}} r_3^2+{\varepsilon} V({\theta}_3,r)\big]
\end{equation}
appears as the skew-product of the unperturbed Hamiltonian ${\tfrac{1}{2}}(r_1^2+r_2^2)$ with a family of ``generalized pendulums'', functions of $({\theta}_3,r_3)\in {\mathbb A}$, parametrized by $r\in S$ (the fact that $r_3$ itself appears in the parameter is here innocuous).
For each value of the parameter, the latter pendulums are therefore completely integrable. Assume moreover that $V(\,\cdot\,,r)$ admits a single and nondegenerate maximum at some point ${\theta}_3^*(r)$, and, for simplicity, that $V\big({\theta}_3^*(r),r\big)=0$. Then the point $({\theta}_3^*(r),r_3=0)$ is hyperbolic for the Hamiltonian ${\tfrac{1}{2}} r_3^2+{\varepsilon} V({\theta}_3,r)$ and one immediately gets the existence of a normally hyperbolic cylinder ${\mathcal C}$ at energy ${\bf e}$ for $N_s$ by taking the product of the torus ${\mathbb T}^2$ of the angles $\widehat{\theta}$ with the curve
$$
r\in S,\quad {\theta}_3={\theta}_3^*(r).
$$
Note that ${\mathcal C}$ is diffeomorphic to ${\mathbb T}^2\times[0,1]$, so that its boundary is the disjoint union of two $2$-dimensional isotropic tori.
\vskip2mm\noindent ${\bullet}$ When the remainder $R_s$ is small enough in the $C^2$ topology, the previous cylinder persists by normal hyperbolicity {\em provided that its boundary persists}, which will come from KAM-type results. This requires both that $R_s$ be small in the $C^k$ topology for $k$ large enough and that some frequency be Diophantine, which in turn necessitates a careful choice of the endpoints of the segment $S$. One main task in Part I is to determine {\em maximal} subsegments $S$ of ${\Gamma}$ to which the previous description applies.
\vskip2mm\noindent ${\bullet}$ We will treat the smallness condition of $R_s$ and the KAM conditions separately. The first remark (see \cite{B10}) is that under appropriate nondegeneracy conditions on $f$, the smallness condition on $R_s$ holds outside a {\em finite set} $D\subset {\Gamma}$ of ``strong double resonance points''. Consequently, our first step will be to divide ${\Gamma}$ into ``$s$--segments'' (where $s$ stands for ``purely simple'') limited by a finite number of consecutive strong double resonance points ($\bigcirc\hskip-2.9mm\bullet$ in the following picture).
\begin{figure}
\caption{The arc ${\Gamma}$ with the strong double resonance points}
\end{figure}
\vskip-5mm
We prove that global normal forms exist along such segments, which enable us to detect ``normally hyperbolic'' cylinders (without boundary) which are everywhere tangent to the Hamiltonian vector field, but not necessarily invariant under its flow. Obviously the notion of normal hyperbolicity has to be relaxed beyond its usual sense in this case, which will be done in Section~\ref{sec:normhyp} of Part I. These pseudo invariant cylinders become genuine normally hyperbolic invariant manifolds once the existence of $2$-dimensional invariant tori close to their boundaries is proved. We call them the $s$-cylinders.
\vskip2mm\noindent ${\bullet}$ To prove this existence, and overcome the lack of precise estimates on the size of the remainder $R_s$, we will begin by proving the existence of genuine invariant cylinders {\em in the neighborhood of the double resonance points}. These cylinders will be called $d$-cylinders in the following. Thanks to the existence of extremely precise normal forms in the neighborhood of double resonance points\footnote{in domains whose size tends to $0$ when ${\varepsilon}\to0$}, their existence is easy to prove taking Theorem II for granted. In particular, we will be able to prove the existence of many $2$-dimensional persisting tori inside these cylinders.
We will then turn back to the determination of the maximal segments $S$, by ``interpolating'' between two $d$-cylinders located near two consecutive double resonance points, by means of the previous global normal form. This way, the boundaries of the $s$-cylinders will be proved to belong to the previous family of $2$-dimensional tori.
\vskip2mm\noindent ${\bullet}$ Let us now describe the construction of the $d$-cylinders in the neighborhood of a double resonance point. Given such a point $r^0$, for instance $r^0=(1,0,0)$ for simplicity, the first task is to prove the existence of a conjugacy between the initial system and the normal form
\begin{equation}\label{eq:exnormform2}
\begin{array}{lll}
N_d({\theta},r)={\tfrac{1}{2}} r_1^2+\big[{\tfrac{1}{2}}(r_2^2+r_3^2)+{\varepsilon} U({\theta}_2,{\theta}_3)\big]+R_d({\theta},r,{\varepsilon}),\\[5pt]
\qquad U({\theta}_2,{\theta}_3):=\displaystyle\int_{{\mathbb T}}f\Big(\big({\theta}_1,({\theta}_2,{\theta}_3)\big),r^0\Big)\,d{\theta}_1,
\end{array}
\end{equation}
where now the remainder $R_d$ can be proved to be extremely small (in the $C^k$ topology with large $k$) over a neighborhood of $r^0$ of diameter ${\varepsilon}^\nu$, where $\nu$ can be arbitrarily chosen in $]0,{\tfrac{1}{2}}]$ provided that $\kappa$ is large enough.
\vskip2mm\noindent ${\bullet}$ After performing a $\sqrt{\varepsilon}$ dilatation in action, the main role in (\ref{eq:exnormform2}) will be played by the classical system
$$
C(\overline{\theta},\overline r)={\tfrac{1}{2}}(r_2^2+r_3^2)+ U({\theta}_2,{\theta}_3)
$$
which we will assume to satisfy the genericity conditions of Theorem~II (an explicit toy example of the averaged potentials $V$ and $U$ is given at the end of this outline). This will provide us with a large family of invariant $2$-dimensional annuli for $C$, realizing any primitive integer homology class of ${\mathbb T}^2$, together with a singular annulus. They constitute chains and ``generalized chains'' along lines of rational slope in projection to the action space (see Figure~\ref{fig:classicannuli}).
\vskip2mm\noindent ${\bullet}$ In the truncated normal form
$$
{\tfrac{1}{2}} r_1^2+\big[{\tfrac{1}{2}}(r_2^2+r_3^2)+{\varepsilon} U({\theta}_2,{\theta}_3)\big]
$$
each previous annulus ${\mathsf A}$ of $C$ gives rise (up to a rescaling in action) to a cylinder, product of ${\mathsf A}$ with the circle ${\mathbb T}$ of the angle ${\theta}_1$. Again, this cylinder is diffeomorphic to ${\mathbb T}^2\times [0,1]$. Now we can moreover take advantage of the smallness of $R_d$ to prove the persistence of the boundaries by KAM techniques (we will use here Herman's version of the invariant curve theorem). This way we prove the existence in the initial system of a $d$-cylinder attached to each annulus of $C$, which lies along a simple resonance curve, whose equation is directly related to the homology class which is realized by ${\mathsf A}$. The same method enables us to prove the existence of a singular cylinder attached to the singular annulus of $C$, which is located ``at the center of the double resonance''. The length of these cylinders is $O(\sqrt{\varepsilon})$, due to the rescaling. One can also prove the existence of heteroclinic orbits between them, as soon as the annuli of $C$ from which they are deduced admit heteroclinic connections. Finally, a crucial remark is that the extremal cylinders (attached to the extremal annuli of $C$) can be {\em continued to a distance $O({\varepsilon}^\nu)$ from the double resonance point}.
One therefore deduces from Figure~\ref{fig:classicannuli} the following symbolic picture, now in the initial system and near $r^0$.
\begin{figure}
\caption{$d$-cylinders and singular cylinder near a double resonance point}
\end{figure}
\vskip-4mm
We did not represent the heteroclinic connections since they are immediately deduced from those of Figure~\ref{fig:classicannuli}. In particular, the four cylinders located close to the singular cylinder admit heteroclinic connections with it. The chains of cylinders so obtained lie along the simple resonance curves getting to the double resonance point, and admit connections with the singular cylinder. This enables us to ``cross'' the double resonance along a simple resonance curve, or to ``pass from'' one resonance curve to another one.
\vskip2mm\noindent ${\bullet}$ Once the existence of the $d$-cylinders is proved for each double resonance point $\bigcirc\hskip-2.9mm\bullet$\ \ on the segment ${\Gamma}$, we can ``interpolate along ${\Gamma}$'' between the extremal cylinders attached to two consecutive such points. These extremal cylinders are those attached to the extremal annuli of the classical systems realizing the homology corresponding to the resonance curve ${\Gamma}$. This yields the existence of an $s$--cylinder, whose projection in action lies along the segment of ${\Gamma}$ limited by the double resonance points, and whose ``ends'' moreover ``match'' with both extremal cylinders at these points. The situation is in fact slightly more complicated, due to the possible generic occurrence of {\em bifurcation points} for the two-phase averaged systems~(\ref{eq:truncnormform}). These are the points $r\in{\Gamma}$ where the potential $V(\,\cdot\,,r)$ admits {\em two} nondegenerate global maxima instead of a single one. In the neighborhood of these points two cylinders coexist, for which we prove the existence of heteroclinic connections. This yields the following final picture between two double resonance points.
\begin{figure}
\caption{Interpolation between two extremal $d$-cylinders}
\label{fig:interpolation}
\end{figure}
\vskip-3mm
\vskip2mm\noindent ${\bullet}$ This way one obtains a chain of cylinders and singular cylinders along the segment ${\Gamma}$, by concatenation of the previous chains between consecutive double resonance points. This construction works for each segment ${\Gamma}_{k_i}$ of the initial broken line. To get a chain along the full broken line one only has to use the previous description at a double resonance point: the ``incoming chain'' along ${\Gamma}_{k_i}$ is connected to the ``outgoing chain'' along ${\Gamma}_{k_{i+1}}$ since the singular cylinder at the point $a_{i+1}$ admits heteroclinic connections with the ``initial cylinders'' in both chains.
\begin{figure}
\caption{Transition between two arcs at a double resonance point}
\label{fig:transition}
\end{figure}
\vskip2mm\noindent ${\bullet}$ The previous constructions are possible if $f$ is subject to a list of nondegeneracy conditions, both along the simple resonance curves involved in the construction of the broken line ${\boldsymbol \Gamma}$ and in the neighborhood of the strong double resonance points (or the intersection points of two distinct curves in ${\boldsymbol \Gamma}$). The last step is to prove that these conditions are cusp-residual.
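To make the two previous averaging procedures concrete, here is a purely illustrative example, which plays no role in the sequel. With the normalization ${\mathbb T}={\mathbb R}/{\mathbb Z}$, take
$$
f({\theta},r)=\cos (2\pi{\theta}_3)+\cos\big(2\pi({\theta}_2+{\theta}_3)\big).
$$
Along the simple resonance ${\Gamma}$, averaging over $\widehat{\theta}=({\theta}_1,{\theta}_2)$ as in (\ref{eq:exnormform}) kills the second term and yields the one-degree-of-freedom potential $V({\theta}_3,r)=\cos(2\pi{\theta}_3)$, while at the double resonance point $r^0$, averaging over ${\theta}_1$ alone as in (\ref{eq:exnormform2}) yields the genuinely two-dimensional potential $U({\theta}_2,{\theta}_3)=\cos(2\pi{\theta}_3)+\cos\big(2\pi({\theta}_2+{\theta}_3)\big)$. This illustrates why a pendulum-like system suffices along the simple resonance, whereas the dynamics near a double resonance is governed by a classical system with two degrees of freedom.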
\vskip-.5cm
\subsection{Outline of the proof of Theorem II}
In this part we consider a classical system $C({\theta},r)={\tfrac{1}{2}} T(r)+U({\theta})$ on $T^*{\mathbb T}^2$, under the generic assumption that $U$ admits a single and nondegenerate maximum at ${\theta}_0$. The lift $O=({\theta}_0,0)$ to the zero section is therefore a hyperbolic fixed point for the vector field $X^C$. We set $\overline e=\mathop{\rm Max\,}\limits U$.
\vskip2mm\noindent ${\bullet}$ For $e>\overline e$, the so-called Jacobi metric induced by $C$ at energy $e$ is defined for $v\in T_{\theta} {\mathbb T}^2$ by
\begin{equation}
\abs{v}_e=\big(2(e-U({\theta}))\big)^{{\tfrac{1}{2}}}\norm{v},
\end{equation}
where $\norm{\ }$ stands for the norm on ${\mathbb R}^2$ associated with the dual of $T$. The Jacobi-Maupertuis principle states that, up to reparametrization, the solutions of the Hamiltonian vector field $X^C$ in $C^{-1}(e)$ and those of the geodesic vector field $X_e$ induced by $\abs{\ }_e$ in the unit tangent bundle are in one-to-one correspondence.
\vskip2mm\noindent ${\bullet}$ Fix a primitive class $c\in H_1({\mathbb T}^2,{\mathbb Z})$. By a simple minimization argument, there exist length-minimizing closed geodesics in the class $c$ for the metric $\abs{\,\cdot\,}_e$. As a consequence, for each $e>\overline e$, there exist periodic orbits of $X^C$ contained in $C^{-1}(e)$ and realizing $c$, which we will call minimizing too. It turns out that, generically in $U$, these orbits are hyperbolic. Moreover, still generically, there is a discrete subset $B(c)\subset\,]\overline e,+\infty[$ such that for $e\in \,]\overline e,+\infty[\setminus B(c)$, the level $C^{-1}(e)$ contains a single minimizing periodic orbit realizing $c$, while $C^{-1}(e)$ contains exactly two such orbits when $e\in B(c)$. Finally, Hedlund's theorem proves that when $e\in \,]\overline e,+\infty[\setminus B(c)$, the corresponding minimizing periodic orbit admits homoclinic orbits, while the two minimizing orbits at $e\in B(c)$ are connected by heteroclinic orbits.
\vskip2mm\noindent ${\bullet}$ Since the orbits are hyperbolic, varying the energy $e$ in the previous description proves the existence of a (possibly infinite) family of annuli $(A_j)_{j\in J}$, defined over the ordered family of intervals $(I_j)_{j\in J}$ limited by consecutive points of $B(c)$ (the constraint of monotonicity of the periods and the existence of continuously varying homoclinic orbits in Definition~\ref{def:ann} come from more refined considerations). Moreover, each pair of annuli defined over consecutive intervals admits heteroclinic connections by Hedlund's theorem. It therefore remains to prove that the chain ``stabilizes'' at both ends, that is, that one can assume $J$ to be finite, of the form $\{1,\ldots,m\}$, with $I_1=\,]\overline e, e_m[$ and $I_m=\,]e_P,+\infty[$. We refer to the latter as the ``high energy annulus'' and to the former as the ``low energy annulus''.
\vskip2mm\noindent ${\bullet}$ {\bf The high energy annuli.} To see that the family stabilizes at high energies, we use the fact that a classical system of the form $C(x,p)={\tfrac{1}{2}} T(p)+U(x)$ at high energy appears as a perturbation of the completely integrable system ${\tfrac{1}{2}} T$. The scaling $\overline p=\sqrt{\varepsilon}\, p$ reduces the study of $C$ at high energies $e$ to that of
$$
C_{\varepsilon}(x,\overline p)={\tfrac{1}{2}} T(\overline p)+{\varepsilon} U(x)
$$
for small ${\varepsilon}\sim 1/e$. We canonically identify $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$.
Given $c\in{\mathbb Z}^2$, we define the $c$--averaged potential associated with $U$ as the function
\begin{equation}\label{eq:avpot}
U_c(\varphi)=\int_0^1 U\big(\varphi+s\,(c_1,c_2)\big)\,ds
\end{equation}
where $\varphi$ belongs to the circle ${\mathbb T}^2/T_c\sim {\mathbb T}$, with $T_c=\{\lambda(c_1,c_2)\ [{\mathbb Z}^2]\mid \lambda\in{\mathbb R}\}$. Assume that $U_c$ admits a single maximum, which is nondegenerate (see the example at the end of this outline). Then the classical Poincar\'e theorem on the creation of hyperbolic periodic solutions by perturbation of periodic tori can be applied at each point $\overline p$ with $\norm{\overline p}\geq \mu_0$, for $\mu_0$ large enough, on the simple resonance line $T^{-1}({\mathbb R} c)$. As a result, going back to the system $C$ by the inverse scaling, we get an annulus ${\mathsf A}$ of class $C^\kappa$ formed by the union of the rescaled periodic orbits, which is defined over an interval of the form $]e_P,+\infty[$ and realizes $c$. One can moreover prove that these orbits are minimizing in the previous sense.
\vskip2mm\noindent ${\bullet}$ {\bf The low energy annuli.} The proof of existence of a single low energy annulus realizing a given class is more involved and requires the study of the symbolic dynamics created by the hyperbolic fixed point $O$ together with its homoclinic orbits (such orbits were proved to exist in \cite{Bo78} and we will give here a proof close to that of \cite{Be00}, based on discrete weak KAM theory, which enables us to localize them more precisely). This requires some additional (generic) nondegeneracy assumptions on the eigenvalues of the fixed point. We obtain a family (parametrized by the energies $e$ slightly larger than $\overline e$) of horseshoes for Poincar\'e sections of the Hamiltonian flow in $C^{-1}(e)$. This is reminiscent of the Shilnikov-Turaev study for hyperbolic fixed points of Hamiltonian systems with homoclinic orbits which are transverse in their critical energy level, here with more precise estimates on the structure and localization of the horseshoes. The result is the existence of a family of annuli realizing each primitive homology class, which admit heteroclinic connections between them provided that some compatibility condition is satisfied. This will prove the stabilization property at low energy for each class, together with the third item in Theorem II.
\vskip2mm\noindent ${\bullet}$ {\bf The singular annulus.} We get the existence of (at least) one singular annulus by gluing together the annuli corresponding to the homology classes $\pm c$, where $c$ is determined by a minimization condition on the homoclinic orbits of the hyperbolic fixed point, together with an annulus of periodic orbits realizing the zero homology class. This proves the existence of an invariant manifold which contains the fixed point together with a pair of opposite homoclinic orbits (satisfying special minimization properties), on which a one-parameter family of null homology periodic orbits accumulates, together with two families of periodic orbits realizing opposite homology classes. The main point is that the union of the periodic orbits and the homoclinic orbits is a $C^1$ normally hyperbolic manifold, which is due to the nondegeneracy assumptions on the eigenvalues of the fixed point. The rich heteroclinic structure induced by the family of horseshoes in turn yields the existence of the heteroclinic connections between the first annuli in each chain and the singular annulus.
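As an illustration of the averaging formula (\ref{eq:avpot}), with a purely illustrative potential which is not needed elsewhere, take $U({\theta})=\cos(2\pi{\theta}_1)+\cos(2\pi{\theta}_2)+\cos\big(2\pi({\theta}_1+{\theta}_2)\big)$ and $c=(1,0)$. Then
$$
U_c(\varphi)=\int_0^1\Big[\cos\big(2\pi(\varphi_1+s)\big)+\cos(2\pi\varphi_2)+\cos\big(2\pi(\varphi_1+s+\varphi_2)\big)\Big]\,ds=\cos(2\pi\varphi_2),
$$
which depends only on the coordinate $\varphi_2$ on the quotient circle ${\mathbb T}^2/T_c$ and admits a single nondegenerate maximum, so that the nondegeneracy assumption made in the high energy analysis above is satisfied for this class.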
\vskip5mm
{\bf Structure of the paper.} The paper is split into two parts and seven appendices. Part I introduces the various notions related to chains of cylinders and contains the proof of Theorem~I, taking for granted the generic properties of classical systems. Part II is dedicated to the various definitions and statements relative to classical systems and contains the proof of Theorem~II. The first four appendices present technical results related to Part I: Appendix A recalls basic results on normally hyperbolic manifolds in our setting, Appendices B and C are devoted to normal forms, and Appendix D states a finitely differentiable version of the invariant curve theorem for twist maps. The last three appendices are related to Part II: in Appendix E we prove the existence of orbits homoclinic to the hyperbolic fixed points for generic classical systems on ${\mathbb A}^2$, Appendix F is devoted to a proof of the Hamiltonian Birkhoff-Smale theorem, while Appendix G recalls some elements of Moser's construction of horseshoes.
\vskip3mm
{\bf Acknowledgements.} I warmly thank Marc Chaperon, Alain Chenciner, Jacques F\'ejoz and Pierre Lochak for their constant support and encouragement. I am indebted to Laurent Lazzarini for the proof of the invariant curve theorem and for lots of discussions at several stages of the preparation of this work. Cl\'emence Labrousse carefully read and corrected several parts of this paper; my warmest thanks to her.
\setcounter{section}{0}
\begin{center}
{\bf\LARGE Part I. Cusp-generic chains}
\end{center}
\vskip.5cm
This part is devoted to the proof of Theorem I.
\begin{itemize}
\item In Section~\ref{sec:normhyp} we introduce precise definitions for normally hyperbolic annuli and cylinders.
\item In Section~\ref{sec:nondeg} we list the nondegeneracy conditions imposed on the perturbed systems we consider.
\item In Section~\ref{sec:cylinders} we introduce the definitions of $d$-cylinders and $s$-cylinders, which depend on the resonance zones they are located in. We also introduce the twist property and the twist sections attached to a cylinder.
\item In Section~\ref{sec:proofscyl} we prove the existence of $d$ and $s$-cylinders with twist sections under the nondegeneracy conditions of Section~\ref{sec:nondeg}.
\item In Section~\ref{sec:chains} we describe the homoclinic and heteroclinic intersection conditions which are satisfied by the cylinders and which serve to define the notion of {\em admissible chains}. We prove their existence under the same nondegeneracy conditions.
\item Finally, Section~\ref{sec:cuspgen} proves the cusp-genericity of our nondegeneracy conditions and ends the proof of Theorem I.
\end{itemize}
\setcounter{paraga}{0}
\section{Normally hyperbolic annuli and cylinders}\label{sec:normhyp}
In this section we first introduce particular definitions for the ``normal hyperbolicity'' of manifolds which are not necessarily invariant under a vector field, whose occurrence is unavoidable in the perturbed systems we will consider. We then obtain genuine normally hyperbolic manifolds (with boundary) by considering codimension 1 invariant subsets contained in the previous ones. We refer to \cite{C04,Berg10} for direct presentations of the normal hyperbolicity of manifolds with boundary.
\paraga In this paper, a {\em $2\ell$-dimensional $C^p$ annulus} will be a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb A}^\ell$.
A {\em singular annulus} will be a $C^1$ manifold $C^1$-diffeomorphic to ${\mathbb T}\times\,]0,1[\,\times {\mathcal Y}$, where ${\mathcal Y}$ is (any realization of) the sphere $S^2$ minus three points. We will have to consider $2$-dimensional annuli embedded in ${\mathbb A}^2$ and $4$-dimensional annuli or singular annuli embedded in ${\mathbb A}^3$, which we abbreviate as $2$-annuli, $4$-annuli and singular $4$-annuli.
\paraga We now define the main objects of concern in this part, which are all $3$-dimensional manifolds.
\vskip1.5mm
${\bullet}$ A {\em $C^p$ cylinder without boundary} is a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb T}^2\times {\mathbb R}$.
\vskip1.5mm
${\bullet}$ A {\em $C^p$ cylinder} is a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb T}^2\times [0,1]$, so that a cylinder is compact and its boundary has two components diffeomorphic to ${\mathbb T}^2$.
\vskip1.5mm
${\bullet}$ A {\em $C^p$ singular cylinder} is a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb T}\times \boldsymbol{\mathsf Y}$, where~$\boldsymbol{\mathsf Y}$ is (any realization of) the sphere $S^2$ minus three open discs with nonintersecting closures. A singular cylinder is compact and its boundary has three components, diffeomorphic to ${\mathbb T}^2$.
\paraga Let $M$ be a $C^\infty$ manifold and $X$ a complete vector field on $M$ with flow $\Phi$. A submanifold $N\subset M$ (possibly with boundary) is said to be {\em pseudo invariant} for $X$ when the vector field $X$ is tangent to $N$ at each point of $N$. A submanifold $N$ is said to be {\em invariant} when $\Phi(t,N)=N$ for all $t\in{\mathbb R}$. Invariant manifolds are pseudo invariant. When $N$ is invariant with $\partial N\neq\emptyset$, $\partial N$ is invariant too.
\paraga We now endow ${\mathbb A}^3$ with its standard symplectic form $\Omega$, and we assume that $X=X_H$ is the vector field generated by $H\in C^\kappa({\mathbb A}^3)$, $\kappa\geq 2$. A pseudo invariant $4$-annulus ${\mathscr A}\subset {\mathbb A}^3$ for $X$ is said to be {\em pseudo normally hyperbolic in ${\mathbb A}^3$} when there exist
\vskip1.5mm
${\bullet}$ an open subset $O$ of ${\mathbb A}^3$ containing ${\mathscr A}$,
\vskip1.5mm
${\bullet}$ an embedding $\Psi:O\to {\mathbb A}^2\times {\mathbb R}^2$ whose image has compact closure, such that $\Psi_*\Omega$ extends to a symplectic form $\overline\Omega$ on ${\mathbb A}^2\times {\mathbb R}^2$ which satisfies condition (\ref{eq:assumpsymp}) of Appendix~\ref{app:normhyp},
\vskip1.5mm
${\bullet}$ a vector field ${\mathscr V}$ on ${\mathbb A}^2\times {\mathbb R}^2$ satisfying the assumptions of the normally hyperbolic persistence theorem, in particular~(\ref{eq:addcond}), together with those of the symplectic normally hyperbolic theorem (Appendix~\ref{app:normhyp}) for the form $\overline\Omega$, such that, with the notation of this theorem:
\begin{equation}
\Psi({\mathscr A})\subset {\rm A}({\mathscr V})\quad{\rm and}\quad \Psi_*X(x)={\mathscr V}(x),\quad \forall x\in \Psi(O).
\end{equation}
Such an annulus ${\mathscr A}$ is therefore of class $C^p$ and symplectic. We define similarly pseudo normally hyperbolic singular $4$-annuli, with in this case $p=1$.
\paraga When the previous $4$-annulus ${\mathscr A}$ is moreover invariant for $X_H$, we say that it is normally hyperbolic.
In this case the image $\Psi({\mathscr A})\subset {\rm A}({\mathscr V})$ is invariant for ${\mathscr V}$ and admits well-defined invariant manifolds $W^\pm\big(\Psi({\mathscr A})\big)$, with center-stable and center-unstable foliations $\big(W^\pm\big(\Psi(x)\big)\big)_{x\in{\mathscr A}}$. We then define the {\em local} invariant manifolds $W^\pm_{\ell}({\mathscr A})$ for $X$, with respect to $(O,\Psi)$, as the subsets
\begin{equation}
\Psi^{-1}\Big(W_\ell^\pm\big(\Psi({\mathscr A})\big)\Big),
\end{equation}
where $W_\ell^\pm\big(\Psi({\mathscr A})\big)$ stands for the connected component of $\Psi(O)\cap W^\pm\big(\Psi({\mathscr A})\big)$ which contains ${\rm A}({\mathscr V})$. Similarly, we define the local center stable and unstable manifolds of the points of ${\mathscr A}$:
\begin{equation}
W_\ell^\pm(x)=\Psi^{-1}\Big(W_\ell^\pm\big(\Psi(x)\big)\Big),\quad x\in{\mathscr A}.
\end{equation}
The global manifolds $W^\pm({\mathscr A})$ and $W^\pm(x)$ for $X$ are then defined in the usual way, by forward or backward transport of the corresponding local ones by the flow of $X$. By compactness of the image of $\Psi$, one immediately checks that the global manifolds $W^\pm({\mathscr A})$ and $W^\pm(x)$ are independent of the choice of $(O,\Psi)$. These manifolds are of class $C^p$, coisotropic, and their characteristic foliations coincide with their center-stable and center-unstable foliations.
\paraga We define similarly the invariant manifolds of invariant (normally hyperbolic) singular $4$-annuli, which are therefore symplectic and whose invariant manifolds satisfy the same properties as above. Observe moreover that, by definition, given an invariant normally hyperbolic singular $4$-annulus ${\mathscr A}_{\bullet}$, there exists an open neighborhood $O$ of ${\mathscr A}_{\bullet}$ in ${\mathbb A}^3$ and a Hamiltonian $H_\circ$ defined on an open subset ${\mathscr O}$ containing $O$ such that:
\vskip1mm
${\bullet}$ $H_\circ$ coincides with $H$ on $O$,
\vskip1mm
${\bullet}$ $H_\circ$ admits a normally hyperbolic invariant $4$-annulus which contains ${\mathscr A}_{\bullet}$.
\paraga We still assume that $X=X_H$ is the vector field generated by $H\in C^\kappa({\mathbb A}^3)$, $\kappa\geq 2$. Let ${\bf e}$ be a regular value of $H$.
\vskip1.5mm
${\bullet}$ A pseudo invariant cylinder without boundary ${\mathscr C}\subset H^{-1}({\bf e})$ is {\em pseudo normally hyperbolic in $H^{-1}({\bf e})$} when there exists a pseudo invariant and pseudo normally hyperbolic $4$-annulus ${\mathscr A}$ for $X_H$ such that ${\mathscr C}\subset{\mathscr A}\cap H^{-1}({\bf e})$.
\vskip1.5mm
${\bullet}$ An invariant cylinder (with boundary) ${\mathscr C}\subset H^{-1}({\bf e})$ is {\em normally hyperbolic in $H^{-1}({\bf e})$} when there exists an invariant normally hyperbolic $4$-annulus ${\mathscr A}$ for $X_H$ such that ${\mathscr C}\subset{\mathscr A}\cap H^{-1}({\bf e})$. Any such ${\mathscr A}$ is said to be {\em associated with ${\mathscr C}$}.
\vskip1.5mm
${\bullet}$ An invariant singular cylinder ${\mathscr C}_{\bullet}\subset H^{-1}({\bf e})$ is {\em normally hyperbolic in $H^{-1}({\bf e})$} when there is an invariant normally hyperbolic singular $4$-annulus ${\mathscr A}_{\bullet}$ for $X_H$ such that ${\mathscr C}_{\bullet}\subset{\mathscr A}_{\bullet}\cap H^{-1}({\bf e})$. Any such ${\mathscr A}_{\bullet}$ is said to be {\em associated with ${\mathscr C}_{\bullet}$}.
\vskip1.5mm
One immediately sees that normally hyperbolic invariant cylinders or singular cylinders, contained in $H^{-1}({\bf e})$, admit well-defined $4$-dimensional stable and unstable manifolds with boundary, also contained in $H^{-1}({\bf e})$, together with their center-stable and center-unstable foliations.
\vskip1.5mm
\paraga From the remark on the singular $4$-annuli, one deduces that given a singular cylinder ${\mathscr C}_{\bullet}$, there exists an open neighborhood $O$ of ${\mathscr C}_{\bullet}$ in ${\mathbb A}^3$ and a Hamiltonian $H_\circ$ defined on an open subset ${\mathscr O}$ containing $O$ such that:
\vskip1mm
${\bullet}$ $H_\circ$ coincides with $H$ on $O$,
\vskip1mm
${\bullet}$ $H_\circ$ admits a normally hyperbolic cylinder.
\vskip1.5mm
This remark will enable us to deal with singular cylinders in the same way as with usual cylinders in our subsequent constructions.
\section{Averaged systems, $\delta$-double resonances and conditions {\bf(S)}}\label{sec:nondeg}
We first describe the geometry of simple and double resonances at fixed energy of a Tonelli Hamiltonian $h\in C^\kappa({\mathbb R}^3)$ and, given a perturbation $f\in C_b^\kappa({\mathbb A}^3)$, we define the averaged systems associated with $H=h+f$. We then introduce the set of {\em $\delta$-strong double resonance points} on a resonance circle at fixed energy, where $\delta>0$ will be the main control parameter of our construction. This enables us to set out a list of nondegeneracy conditions {\bf(S)} for the system $H$ along a resonance circle, which will be used throughout Part I and yield the ``cusp-generic part'' of Theorem I.
\subsection{Simple and double resonances}
We identify the action space ${\mathbb R}^3$ with its dual, the frequency space, by Euclidean duality. We fix a $C^\kappa$ Tonelli Hamiltonian $h$ on ${\mathbb R}^3$, $\kappa\geq2$, and set $\omega=\nabla h$. Let us first state some direct geometric consequences of the convexity and superlinearity of $h$.
\paraga The map $\omega$ is a $C^{\kappa-1}$ diffeomorphism from ${\mathbb R}^3$ onto ${\mathbb R}^3$. Being convex and coercive, $h$ admits a single absolute minimum at some point $p$, which satisfies $\omega(p)=0$. For ${\bf e}>h(p)$, the level surface $h^{-1}({\bf e})$ bounds a convex domain containing $p$, so $h^{-1}({\bf e})$ is diffeomorphic to $S^2$, and its image by $\omega$ contains $0$ in its ``interior\footnote{the bounded connected component of its complement}.'' Moreover, the map
$
{\varpi}\mapsto \frac{{\varpi}}{\norm{{\varpi}}_2}
$
(where $\norm{\ }_2$ stands for the Euclidean norm) defines a $C^{\kappa-1}$ diffeomorphism from the set $\omega\big(h^{-1}({\bf e})\big)$ onto the sphere $S^2$.
\paraga Let $\pi:{\mathbb R}^3\to{\mathbb T}^3$ be the canonical projection. Fix ${\varpi}\in{\mathbb R}^3\setminus\{0\}$ and consider the {\em resonance module} ${\mathcal M}_{\varpi}= {\varpi}^\bot\cap{\mathbb Z}^3$ associated with ${\varpi}$. Clearly $\pi({\mathcal M}_{\varpi}^\bot)$ is a subtorus of ${\mathbb T}^3$, which is invariant under the flow generated by the constant vector field ${\varpi}$. This flow is dynamically minimal.
\paraga Given ${\varpi}\in{\mathbb R}^3\setminus\{0\}$, ${\mathcal M}_{\varpi}={\varpi}^\bot \cap\,{\mathbb Z}^3$ is a submodule of ${\mathbb Z}^3$ whose rank is the {\em multiplicity of resonance} of ${\varpi}$. We say that ${\varpi}$ is a simple resonance frequency when ${\rm rank\,}{\mathcal M}_{\varpi}=1$ and a double resonance frequency when ${\rm rank\,}{\mathcal M}_{\varpi}=2$.
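As a purely illustrative example, ${\varpi}=(\sqrt2,1,0)$ is a simple resonance frequency: the relation $k\cdot{\varpi}=0$ with $k\in{\mathbb Z}^3$ forces $k_1=k_2=0$, so that ${\mathcal M}_{\varpi}={\mathbb Z}\,(0,0,1)$ has rank $1$; on the other hand, ${\varpi}=(1,0,0)$ is a double resonance frequency, since ${\mathcal M}_{\varpi}=\{0\}\times{\mathbb Z}^2$ has rank $2$.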
A point $r\in{\mathbb R}^3$ is a simple or double resonance action when $\omega(r)$ is a simple or double resonance frequency.
\paraga Given a submodule ${\mathcal M}$ of ${\mathbb Z}^3$ of rank $m=1$ or $2$, the vector subspace ${\mathcal M}^\bot$ is said to be the {\em resonance subspace associated with ${\mathcal M}$} (a resonance plane when $m=1$ and a resonance line when $m=2$). In the action space, the corresponding resonance $\omega^{-1}({\mathcal M}^\bot)$ is said to be a resonance surface when $m=1$ and a resonance curve when $m=2$. Note that any point on a resonance curve is a double resonance action, while a point on a resonance surface can be either a simple resonance action or a double resonance action.
\paraga Resonance curves and surfaces in the action space are {\em transverse} to the levels $h^{-1} ({\bf e})$ for ${\bf e}>\mathop{\rm Min\,}\limits h$. As a consequence, the resonance surfaces intersect the energy levels along (topological) {\em resonance circles}, while the resonance curves intersect the levels at isolated {\em double resonance points}. Moreover, two distinct resonance circles at energy ${\bf e}>\mathop{\rm Min\,}\limits h$ in the action space intersect at exactly two double resonance points.
\paraga Recall that a submodule of ${\mathbb Z}^n$ is {\em primitive} when it is not strictly contained in a submodule with the same rank. Primitive rank $1$ submodules are generated by indivisible vectors of ${\mathbb Z}^n$. Note that the resonances can always be defined by primitive submodules; this will always be the case in the following.
\paraga Given a rank $m$ primitive submodule ${\mathcal M}$ of ${\mathbb Z}^3$, there exists a ${\mathbb Z}$--basis of ${\mathbb Z}^3$ whose last $m$ vectors form a ${\mathbb Z}$--basis of ${\mathcal M}$ (see for instance \cite{Art}). Let $P$ be the matrix in ${\rm GL_3}({\mathbb Z})$ whose $i^{th}$ column is formed by the components of the $i^{th}$ vector of this basis. Let ${\mathscr R}=\omega^{-1}({\mathcal M}^\bot)$. The symplectic linear coordinate change in ${\mathbb A}^3$ defined by
\begin{equation}\label{eq:adcoord}
{\theta}=\,^tP^{-1} \widetilde{\theta}\ \ [{\rm mod}\ {\mathbb Z}^3],\qquad r=P \,\widetilde r,
\end{equation}
transforms $h$ into a new Hamiltonian $\widetilde h$ such that, setting $\widetilde\omega=(\widetilde\omega_1,\widetilde\omega_2,\widetilde\omega_3)=\nabla\widetilde h$, the transformed resonance $\widetilde {\mathscr R}=P^{-1}\,{\mathscr R}$ admits the equation
$$
\widetilde\omega_{3-m+1}=\cdots=\widetilde\omega_3=0.
$$
Such coordinates are said to be {\em adapted to ${\mathcal M}$}.
\begin{notation}\label{not:splitvar}
According to the previous decomposition, the variables $u$ in ${\mathbb R}^3$ or ${\mathbb T}^3$ will be split into $(\widehat u,\overline u)=u$, where $\overline u$ is $m$-dimensional and $\widehat u$ is $(3-m)$-dimensional.
\end{notation}
\subsection{Averaged systems}\label{ssec:normformepsdep}
\setcounter{paraga}{0}
We consider a $C^\kappa$ Tonelli Hamiltonian $h$ on ${\mathbb R}^3$, $\kappa\geq2$, and set $\omega=\nabla h$. Given $f\in C^\kappa({\mathbb A}^3)$, we set $H=h+f$.
\paraga Fix $r^0\in{\mathbb R}^3$ with ${\varpi}:=\omega(r^0)\neq0$ and let $m=1,2$ be the rank of the resonance module ${\mathcal M}:={\mathcal M}_{\varpi}={\varpi}^\bot\cap{\mathbb Z}^3$, so that the quotient ${\mathbb T}^3/\pi({\mathcal M}^\bot)$ is an $m$--dimensional torus.
We denote by ${\mathcal T}_x\subset{\mathbb T}^3$ the fiber over $x\in{\mathbb T}^3/\pi({\mathcal M}^\bot)$, which is therefore a $(3-m)$--dimensional torus invariant under the flow generated by $h$. \paraga The ${\mathcal M}$--averaged system ${\rm Av}_{r^0}$ at $r^0$ is defined on the cotangent bundle $T^*[{\mathbb T}^3/\pi({\mathcal M}^\bot)]$. The cotangent space at $x$ satisfies the natural identifications $$ \big(T_{x}[{\mathbb T}^3/\pi({\mathcal M}^\bot)]\big)^*\simeq ({\mathbb R}^3/{\mathcal M}^\bot)^*\simeq \langle{\mathcal M}\rangle, $$ where $\langle{\mathcal M}\rangle\subset{\mathbb R}^3$ is the vector subspace generated by ${\mathcal M}$. The ${\mathcal M}$--averaged perturbation is the function $U_{r^0}:{\mathbb T}^3/\pi({\mathcal M}^\bot)\to {\mathbb R}$ defined by $$ U_{r^0}(x)=\int_{{\mathcal T}_x}f(\varphi,r^0)\,d\mu_x(\varphi), $$ where $\mu_x$ is the induced Haar measure on ${\mathcal T}_x$. We are thus led to set $$ {\rm Av}_{r^0}(x,y)={\tfrac{1}{2}}\,D^2h(r^0)[y,y]+U_{r^0}(x),\qquad (x,y)\in \big({\mathbb T}^3/\pi({\mathcal M}^\bot)\big)\times \langle{\mathcal M}\rangle. $$ Averaged systems are therefore classical systems on ${\mathbb A}^m$. We say that ${\rm Av}_{r^0}$ is an {\em $s$--averaged system} when $m=1$ and a {\em $d$--averaged} system when $m=2$. \paraga Fix now an adapted coordinate system $({\theta},r)$ at $r^0$. Following Notation~\ref{not:splitvar}, observe that $\overline {\theta}$ and $\widehat{\theta}$ define coordinates on the quotient ${\mathbb T}^3/\pi({\mathcal M}^\bot_{\varpi})$ and on its fibers respectively, and that $(\overline{\theta},\overline r)$ are canonically conjugated coordinates on $T^*[{\mathbb T}^3/\pi({\mathcal M}^\bot)]$. In these coordinates, the averaged system reads \begin{equation}\label{eq:averagedsyst} {\rm Av}_{r^0}(\overline{\theta},\overline r)={\tfrac{1}{2}} T_{r^0}(\overline r)+U_{r^0}(\overline{\theta}), \end{equation} where $T$ is the restriction of the Hessian $D^2h(r^0)$ to the $\overline r$--space ${\mathbb R}^m$ and $U:{\mathbb T}^m\to{\mathbb R}$ reads \begin{equation}\label{eq:averagedpot} U_{r^0}(\overline {\theta})=\int_{{\mathbb T}^m}f\big((\widehat{\theta},\overline{\theta}),r^0\big)\,d\widehat{\theta}. \end{equation} Clearly, averaged systems associated to different adapted coordinates are linearly symplectically conjugated. \subsection{The control parameter for double resonance points on a resonance circle} We consider now a $C^\kappa$ Tonelli Hamiltonian $h$ on ${\mathbb R}^3$ and its frequency map $\omega=\nabla h$, together with $f\in C_b^\kappa({\mathbb A}^3)$, with \begin{equation}\label{eq:diffmin} \kappa\geq 6, \qquad \norm{f}_\kappa\leq 1. \end{equation} We fix ${\bf e}>{\rm Min\,} h$ and an indivisible vector $k\in{\mathbb Z}^3$, and we set ${\Gamma}=\omega^{-1}(k^\bot)\cap h^{-1}({\bf e})$. We fix a coordinate system $({\theta},r)$ adapted to ${\mathcal M}={\mathbb Z} k$. We still denote by $f$ the expression of the initial function $f$ in the coordinates $({\theta},r)$, so that now $\norm{f}_\kappa\leq M$, where $M$ depends only on $k$. The aim of this section is to discriminate between strong and weak double resonance points on ${\Gamma}$ for the system $H=h+f$. \paraga {\bf The decay of Fourier coefficients of $f$.} For $k=(k_j)\in{\mathbb Z}^d$, we use the notation \begin{equation} \norm{k}={\rm Max\,}_{1\leq j\leq d}\abs{k_j}, \qquad \abs{k}=\sum_{1\leq j\leq d}\abs{k_j}. \end{equation} We adopt the usual convention for multiindices and partial derivatives. 
Let us denote by $$ [f]_k(r)=\int_{{\mathbb T}^3}f({\theta},r)\,^{-2i\pi\,k\cdot{\theta}}d{\theta} $$ the Fourier coefficient of $f(\,.\,,r)$ of index $k\in{\mathbb Z}^3$ and set $g_k({\theta},r)=[f]_k(r) e^{2i\pi k\cdot{\theta}}$. Usual estimates yield, for $k\neq0$ and any multiindices $j,\ell\in{\mathbb Z}^3$ such that $\abs{j}+\abs{\ell}<\kappa$: \begin{equation}\label{eq:Fouriercoef} \abs{\partial_{\theta}^j\partial_r^\ell g({\theta},r)}\leq\frac{M}{(2\pi)^{\kappa-(\abs{j}+\abs{\ell})}\norm{k}^{\kappa-(\abs{j}+\abs{\ell})}}. \end{equation} and in particular the Fourier expansion $ f({\theta},r) =\sum_{k\in{\mathbb Z}^3}[f]_k(r)\,e^{2i\pi\, k\cdot {\theta}} $ is normally convergent since $\kappa>3$. Hence \begin{equation} f({\theta},r)=\sum_{\widehat k\in{\mathbb Z}^2}\phi_{\widehat k}({\theta}_3,r)e^{2i\pi\,\widehat k\cdot \widehat{\theta}},\quad \textrm{with}\quad \phi_{\widehat k}({\theta}_3,r)=\sum_{k_3\in{\mathbb Z}}[f]_{(\widehat k,k_3)}(r)e^{2i\pi\,k_3\cdot {\theta}_3}. \end{equation} Given $K\geq1$ we set \begin{equation}\label{eq:function} f_{> K}({\theta},r)=\sum_{\widehat k\in{\mathbb Z}^2,\norm{\widehat k}> K}\phi_{\widehat k}({\theta}_3,r)\,e^{2i\pi\, \widehat k\cdot \widehat{\theta}} \end{equation} \begin{lemma}\label{lem:choseK} Fix an integer $p\in\{2,\ldots,\kappa-4\}$ and fix $\delta>0$. Then there exists an integer $K:=K(\delta)$ such that the function $f_{> K}$ is in $C^2({\mathbb A}^3)$ and satisfies \begin{equation}\label{eq:truncest} \norm{f_{> K}}_{C^p({\mathbb A}^3)}\leq \delta. \end{equation} \end{lemma} \begin{proof} Since $\norm{f}_{C^\kappa}\leq M$ with $\kappa\geq6$, by (\ref{eq:Fouriercoef}): \begin{equation}\label{eq:upperbound} \abs{\partial_{\theta}^j\partial_r^\ell g({\theta},r)}\leq \frac{M}{(2\pi)^{4}\norm{k}^{4}} \end{equation} as soon as $\abs{j}+\abs{\ell}\leq p$. Let $K(\delta)$ be the smallest integer such that \begin{equation} \sum_{k\in{\mathbb Z}^3,\abs{k}> K(\delta)}\frac{M}{(2\pi)^4\norm{k}^{4}}\leq\delta. \end{equation} Hence $f_{> K(\delta)}$ is $C^p$ and satisfies (\ref{eq:truncest}) (we do not try to give optimal estimates). \end{proof} \paraga {\bf The $\delta$-strong double resonance points.} Since the coordinates $({\theta},r)$ are ${\mathcal M}$-adapted: $\omega(r):=\nabla h(r)=(\widehat \omega(r),0)\in{\mathbb R}^2\times{\mathbb R}$. For $K\in{\mathbb N}$, we set \begin{equation} B^*(K)=\big\{\widehat k\in{\mathbb Z}^2\setminus\{0\}\mid \norm{\widehat k}\leq K\big\}. \end{equation} \begin{Def}\label{def:control} Given a {\em control parameter} $\delta>0$, we introduce the set of $\delta$-strong double resonance points: \begin{equation}\label{eq:doubres} D(\delta)=\Big\{r\in {\Gamma}\mid \exists\, \widehat k\in B^*\big( K(\delta)\big),\ \widehat k \cdot \widehat \omega (r)=0\Big\}, \end{equation} where $K(\delta)$ was defined in {\rm Lemma~\ref{lem:choseK}}. \end{Def} Observe that $D(\delta)$ is finite. Indeed, if ${\Gamma}_{\widehat k}=h^{-1}({\bf e})\cap\omega^{-1}((\widehat k,0)^\bot)$ is the simple resonance at energy ${\bf e}$ associated with $(\widehat k,0)$, then $$ D(\delta)=\bigcup_{\widehat k\inB^*_\Z\big( K(\delta)\big)} {\Gamma}\cap {\Gamma}_{\widehat k} $$ and each ${\Gamma}\cap {\Gamma}_{\widehat k}$ contains exactly two points, which proves our claim. Note that $D(\delta)$ increases when $\delta$ decreases. \subsection{The nondegeneracy conditions {\bf (S)}} We consider a $C^\kappa$ Tonelli Hamiltonian $h$ on ${\mathbb R}^3$, $\kappa\geq2$, and set $\omega=\nabla h$. We fix ${\bf e}>\mathop{\rm Min\,}\limits h$. 
Let $k\in{\mathbb Z}^3\setminus\{0\}$ be an indivisible vector and set ${\Gamma}_k=\omega^{-1}(k^\bot)\cap h^{-1}({\bf e})$. Given $f\in C_b^\kappa({\mathbb A}^3)$ satisfying~(\ref{eq:diffmin}), we now set out a list of nondegeneracy conditions involving the averaged systems attached to $H=h+f$ at the points of ${\Gamma}_k$. \begin{itemize} \item {\bf ($\bf S_1$)}\ {\em There exists a finite subset $B\subset {\Gamma}_k$ such that for $r^0\in{\Gamma}_k\setminus B$ the $s$-averaged potential function $V_{r^0}:{\mathbb T}\to{\mathbb R}$ admits a single global maximum, which is nondegenerate, and for $r^0\in B$ the function $V_{r^0}$ admits exactly two global maximums, which are nondegenerate.} \end{itemize} The nondegeneracy condition on $V_{r^0}$ is to be understood in the Morse sense, that is, the second derivative of $V_{r^0}$ at a nondegenerate point is nonzero. The elements of $B$ will be called {\em bifurcation points}. To state the next condition, note that each point $r^0$ in $B$ admits a neighborhood $I(r^0)$ in ${\Gamma}_k$ such that when $r\in I(r^0)$, the averaged potential $V_{r}$ admits two (differentiably varying) nondegenerate local maximums $m^*(r)$ and $m^{**}(r)$. The second condition is a transversal crossing property at a bifurcation point. \begin{itemize} \item {\bf ($\bf S_2$)}\ {\em For any $r^0\in B$, the derivative $\tfrac{d}{dr}\big(m^*(r)-m^{**}(r)\big)$ does not vanish at $r^0$.} \end{itemize} The next condition focuses on the double resonance points contained in ${\Gamma}_k$. Given such an $r^0$, let \begin{equation}\label{eq:davsyst} {\rm Av}_{r^0}(\overline r,\overline{\theta})={\tfrac{1}{2}} T_{r^0}(\overline r)+U_{r^0}(\overline{\theta}), \end{equation} be the $d$--averaged system at $r^0$ in an adapted coordinate system for the resonance module of $\omega(r^0)$. \begin{itemize} \item {\bf($\bf S_3$)}\ {\em For every double resonance point $r^0\in{\Gamma}_k$, the potential $U_{r^0}$ belongs to the residual set ${\mathscr U}(T_{r^0})$ of {\rm Theorem II}.} \end{itemize} Condition {\bf($\bf S_3$)} is independent of the choice of the adapted system at $r^0$, by symplectic conjugacy. We say that $H$ satisfies conditions {\bf(S)} on ${\Gamma}_k$ when it satisfies the previous three conditions. \section{The cylinders}\label{sec:cylinders} This section contains definitions and statements only, the proofs are postponed to the next one. We fix once and for all a Tonelli Hamiltonian $h\in C^\kappa({\mathbb R}^3)$, and an energy ${\bf e}>\mathop{\rm Min\,}\limits h$, together with a resonance circle ${\Gamma}\subset h^{-1}({\bf e})$. We denote by $\Pi:{\mathbb A}^3\to{\mathbb R}^3$ the natural projection and by ${\bf d}$ the Hausdorff distance between compact subsets of ${\mathbb R}^3$. The main result of this section is the following. \begin{prop}\label{prop:existcylinders} Fix $f\in C_b^\kappa({\mathbb A}^3)$ and set $H_{\varepsilon}=h+{\varepsilon} f$. Assume that $H:=H_1$ satisfies {\bf (S)}\ along ${\Gamma}$. Then for $\kappa\geq\kappa_0$ large enough, there exists ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$, there is a finite sequence $\big({\mathscr C}_k({\varepsilon})\big)_{0\leq k\leq k_*}$ of normally hyperbolic invariant cylinders and singular cylinders at energy ${\bf e}$ for $H_{\varepsilon}$, whose projection by $\Pi$ satisfies $$ {\bf d}\Big(\bigcup_{1\leq k\leq k_*}{\mathscr C}_k({\varepsilon}),{\Gamma}\Big)=O(\sqrt{\varepsilon}). 
$$ \end{prop} The cylinders in fact enjoy more stringent ``graph properties'' which will enable us to prove that they form {\em chains} in Section~\ref{sec:chains}. In the rest of this section we describe the intermediate steps to prove the previous proposition. We start with the ``$d$-cylinders'' in the neighborhood of the points of the set $D(\delta)$ of $\delta$-strong double resonance points introduced in Definition~\ref{def:control}, where $\delta$ has to be suitably chosen, and we ``interpolate between them'' with ``$s$-cylinders'' along the complementary arcs of ${\Gamma}$, taking the bifurcation points into account. \subsection{The $d$-cylinders at a double resonance point}\label{sec:mainresd} In this section we fix an {\em arbitrary} double resonance point $r^0\in{\Gamma}$, with $d$-averaged system $C$, and we set out precise definitions for the {\em $d$-cylinders} in the neighborhood of $r^0$. We will introduce three different families of such $d$-cylinders, according to the way they are constructed. Let $({\theta},r)$ be adapted coordinates at $r^0$, so that $\omega(r^0)=(\omega_1,0,0)$ with $\omega_1\neq0$. \vskip1mm ${\bullet}$ The notion of $2$-dimensional annulus for a classical system was introduced in Definition~\ref{def:singann}. Given a compact annulus ${\mathsf A}$ for $C$, the product of ${\mathsf A}$ with ``the circle of ${\theta}_1$'' is a normally hyperbolic compact cylinder, which is invariant for a suitable truncation of $H_{\varepsilon}$. We will prove that it remains invariant for $H_{\varepsilon}$, provided ${\varepsilon}$ is small enough. The family of such normally hyperbolic cylinders, attached with all compact annuli of $C$, constitutes our first family of $d$-cylinders. \vskip1mm ${\bullet}$ The construction of the second family is similar to the previous one, but the starting point is a singular annulus (see Definition~\ref{def:singann}) rather than a compact annulus. The normally hyperbolic objects obtained this way are singular cylinders. \vskip1mm ${\bullet}$ The third family is formed by suitable continuations of the cylinders attached to the annuli of $C$ which are defined over intervals of the form $[e_P,+\infty[$, we call them {\em extremal cylinders}. They will enable us to define the ``$s$-cylinders'' between to two consecutive distinct points of $D(\delta)$, as cylinders containing two suitable extremal cylinders, located in the neighborhood of both points of $D(\delta)$ (see Section~\ref{ssec:scyldef}). \vskip1mm Three corresponding existence results are stated, which will be proved in the next section. \subsubsection{The cylinders attached to a compact $2$-annulus of the $d$-averaged system} We consider the system $H_{\varepsilon}=h+{\varepsilon} f$ and set $H:=H_1$. We perform a translation in action so that $r^0=0$, without loss of generality. \paraga We will have to use several coordinate transformations. To avoid confusion, we fix an initial coordinate system $(x,y)$ adapted to the double resonance point $0$. Hence, relatively to these coordinates, $ \nabla h(0)=(\widehat\omega,0)\in({\mathbb R}\setminus\{0\})\times{\mathbb R}^2, $ (where $\widehat\omega=\omega_1$). 
With the usual notational convention, the $d$-averaged system associated with $H$ at $0$ reads \begin{equation}\label{eq:classpec} C(\overline x,\overline y)={\tfrac{1}{2}} T(\overline y)+U(\overline x), \qquad (\overline x,\overline y)\in{\mathbb T}^2\times{\mathbb R}^2, \end{equation} where $T$ is the restriction of the Hessian $D^2h(0)$ to the $\overline y$-plane and \begin{equation}\label{eq:quadpot} U(\overline x)=\int_{{\mathbb T}} f\big((\widehat x,\overline x),0\big)\,d\widehat x. \end{equation} We also introduce the {\em complementary part} of the Hessian $D^2h(0)$: \begin{equation}\label{eq:comppart} Q(y)={\tfrac{1}{2}}\big(D^2h(0)y^2-\partial^2_{\overline y}h(0)\overline y^2\big):=y_1L(y), \end{equation} so that $L$ is a linear form on ${\mathbb R}^3$. \paraga The $d$-cylinders will be conveniently defined relatively to appropriate normalized coordinates, that we now introduce. Let us set, for ${\varepsilon}>0$ $$ {\sigma}_{\varepsilon}({\theta},{\mathsf r})=({\theta},\sqrt{\varepsilon}{\mathsf r}),\qquad ({\theta},{\mathsf r})\in{\mathbb A}^3. $$ \begin{Def} Fix $d^*>0$, ${\sigma}\in\,]{\frac{1}{2}},1[$ and two integers $p,\ell\geq2$. Given ${\varepsilon}>0$, a {\em normalizing diffeomorphism with parameters $(d^*,{\sigma},p,\ell)$} is an analytic embedding \begin{equation} \Phi_{\varepsilon}=\Psi_{\varepsilon}\circ{\sigma}_{\varepsilon}: {\mathbb T}^3\times B^3(0,d^*)\to {\mathbb T}^3\times B^3(0,2d^*\sqrt{\varepsilon}) \end{equation} where $\Psi_{\varepsilon}: {\mathbb T}^3\times B^3(0,d^*\sqrt{\varepsilon})\to {\mathbb T}^3\times B^3(0,2d^*\sqrt{\varepsilon})$ is symplectic and satisfies $\norm{\Phi_{\varepsilon}-{\rm Id}}_{C^0}\leq{\varepsilon}^{\sigma}$, such that for $({\theta},{\mathsf r})\in {\mathbb T}^3\times B^3(0,d^*)$: \begin{equation}\label{eq:scaleham} {\mathsf N}_{\varepsilon}({\theta},{\mathsf r}):= \frac{1}{{\varepsilon}}\, \Big(H_{\varepsilon}\circ\Psi_{\varepsilon}({\theta},{\mathsf r})-{\bf e}\Big) =\frac{\widehat\omega}{\sqrt{\varepsilon}}\,\widehat{\mathsf r}+Q({\mathsf r})+C(\overline{\theta},\overline{\mathsf r}) +{\mathsf R}^0_{\varepsilon}(\overline{\theta},{\mathsf r})+{\mathsf R}_{\varepsilon}({\theta},{\mathsf r}). \end{equation} The functions $C$ and $Q$ are defined in (\ref{eq:classpec}) and (\ref{eq:comppart}), and ${\mathsf R}^0_{\varepsilon}$ and ${\mathsf R}_{\varepsilon}$ are $C^p$ functions on ${\mathbb T}^2\times B^3(0,d^*)$ and ${\mathbb T}^3\times B^3(0,d^*)$ respectively, which satisfy \begin{equation}\label{eq:estimrem} \norm{{\mathsf R}^0_{\varepsilon}}_{C^p }\leq C \sqrt {\varepsilon},\qquad \norm{{\mathsf R}_{\varepsilon}}_{C^p }\leq {\varepsilon}^\ell. \end{equation} for a suitable $C>0$. \end{Def} We will adopt the notation $({\theta},r)$ for the symplectic coordinates such that $\Psi_{\varepsilon}({\theta},r)=(x,y)$, so that the nonsymplectic rescaling reads ${\sigma}_{\varepsilon}({\theta},{\mathsf r})=({\theta},r)$. The evolution time for the normal form ${\mathsf N}_{\varepsilon}$ has also to be rescaled, which will is here innocuous since we are interested only in geometric objects. \paraga We are now in a position to define the $d$-cylinder attached to a compact $2$-annulus of $C$. Fix such an annulus ${\mathsf A}$, defined over a compact interval $J$. 
Since the periodic orbits in ${\mathsf A}$ are hyperbolic in their energy level, ${\mathsf A}$ can be continued to a slightly larger family of hyperbolic orbits, the union of which we denote by ${\mathsf A}_*$, and one can moreover assume that their period satisfy the same mononicity assumption as for ${\mathsf A}$. Then, basic angle-action transformations prove the existence of an open interval $J^*$ containing $J$ and a symplectic embedding \begin{equation}\label{eq:embedj} {\rm j}: {\mathbb T}\times J^*\to {\mathbb A}^2,\qquad {\rm j}({\mathbb T}\times J)={\mathsf A}, \qquad {\rm j}({\mathbb T}\times J_*)={\mathsf A}_*, \end{equation} such that, if $(\varphi,\rho)\in{\mathbb T}\times J^*$ are the standard symplectic coordinates: \begin{equation} C\circ {\rm j}(\varphi,\rho)=\rho. \end{equation} We say that $(J\subset J^*,{\rm j})$ is a normalizing system for ${\mathsf A}$. \begin{Def}\label{def:scyl} Fix an annulus ${\mathsf A}$ of $C$ with normalizing system $(J\subset J^*,{\rm j})$ and contained in ${\mathbb T}^2\times B^2(0,d^*)$ for some $d^*>0$ \begin{itemize} \item A {\em $4$-annulus of class $C^p$ attached to ${\mathsf A}$} for $H_{\varepsilon}$ is a $C^p$ invariant normally hyperbolic $4$-annulus ${\mathscr A}_{\varepsilon}\subset{\mathbb A}^3$ for the vector field $X_{H_{\varepsilon}}$, such that there exists a $d$-normalizing diffeomorphism $\Phi_{\varepsilon}$ and a neighborhood $J'\subset{\mathbb R}$ of $J$ in $J^*$ for which $\Phi_{\varepsilon}^{-1}({\mathscr A}_{\varepsilon})$ contains a graph over the domain \begin{equation}\label{eq:domain} {\theta}_1\in{\mathbb T},\ {\mathsf r}_1\in ]-d^*\sqrt{\varepsilon},d^*\sqrt{\varepsilon}[,\ \varphi\in{\mathbb T},\ \rho\in J', \end{equation} of the form \begin{equation}\label{eq:graph} u=U_{\varepsilon}({\theta}_1,{\mathsf r}_1,\varphi,\rho), \ s=S_{\varepsilon}({\theta}_1,{\mathsf r}_1,\varphi,\rho), \end{equation} where $U_{\varepsilon}$ and $S_{\varepsilon}$ are $C^p$ functions which tend to $0$ in the $C^p$-topology when ${\varepsilon}\to0$. \item A $d$-cylinder at energy ${\bf e}$ attached to ${\mathsf A}$ for $H_{\varepsilon}$ is a (compact and normally hyperbolic) cylinder ${\mathscr C}_{\varepsilon}$ invariant for the vector field $X_{H_{\varepsilon}}$, such that there exists a $4$-annulus attached to ${\mathsf A}$ with ${\mathscr C}_{\varepsilon}\subset{\mathscr A}_{\varepsilon}\cap H_{\varepsilon}^{-1}({\bf e})$, and such that {\em the projection $\Pi_\rho\big(\Phi_{\varepsilon}^{-1}({\mathscr C}_{\varepsilon})\big)$ on the $\rho$-axis contains the interval $J$}. \item A twist section for such a $d$-cylinder is a global $2$-dimensional transverse section ${\Sigma}\subset {\mathscr C}_{\varepsilon}$, image of a symplectic embedding ${\rm j}_{\Sigma}: {\mathbb T}\times [a,b]$, such that the associated Poincar\'e return map is a twist map in the ${\rm j}_{\Sigma}$-induced coordinates on ${\mathbb T}\times [a,b]$. \end{itemize} \end{Def} We refer to Section~\ref{sec:normhyp} for the definition of normally hyperbolic cylinders and associated $4$-annuli. Note that the constraint ${\mathsf r}_1\in ]-d^*\sqrt{\varepsilon},d^*\sqrt{\varepsilon}[$ yields the localization $r_1\in ]-d^*{\varepsilon},d^*{\varepsilon}[$, which is very stringent. \paraga Our first existence result is the following. \begin{lemma}\label{lem:dcyl} Assume $\kappa\geq\kappa_0$ large enough. 
Then for each compact annulus ${\mathsf A}$ of $C$, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$ there exists a $d$-cylinder ${\mathscr C}_{\varepsilon}$ at energy ${\bf e}$ attached to ${\mathsf A}$ for $H_{\varepsilon}$, which admits a twist section. \end{lemma} The proof of Lemma~\ref{lem:dcyl} is in Section~\ref{sec:prooflemdcyl}. \subsubsection{The singular $d$-cylinders at a double resonance point}\label{sec:singularcyl} \setcounter{paraga}{0} The definition and existence result for the singular cylinder is very similar to the previous ones. \begin{Def}\label{def:singann2} Let ${\mathsf A}_{\bullet}$ be a singular annulus for $C$. \begin{itemize} \item A {\em singular annulus}Ê attached to ${\mathsf A}_{\bullet}$ for $H_{\varepsilon}$ is a normally hyperbolic singular $4$-annulus ${\mathscr A}_{\bullet}({\varepsilon})\subset{\mathbb A}^3$ for the vector field $X_{H_{\varepsilon}}$, such that there exists a $d$-normalizing diffeomorphism $\Phi_{\varepsilon}$ for which $\Phi_{\varepsilon}^{-1}({\mathscr A}_{\bullet}({\varepsilon}))$ tends to the product \begin{equation} {\bf A}_{\varepsilon}:=\big({\mathbb T}\times\,]-d^*\sqrt{\varepsilon},d^*\sqrt{\varepsilon}[\big)\times {\mathsf A}_{\bullet} \end{equation} in the $C^1$ topology when ${\varepsilon}\to0$. More precisely, there exists ${\sigma}\in\,]{\tfrac{1}{2}},1]$ and a $C^1$-embedding $\chi_{\varepsilon}$ defined on ${\bf A}_{\varepsilon}$, with image $\Phi_{\varepsilon}^{-1}({\mathscr A}_{\bullet}({\varepsilon}))$, which satisfies \begin{equation} \norm{\chi_{\varepsilon}-\chi}_{C^1}\leq {\varepsilon}^{\sigma} \end{equation} where $\chi$ is the canonical embedding ${\bf A}_{\varepsilon}\hookrightarrow{\mathbb A}^3$. \item A singular $d$-cylinder at energy ${\bf e}$ attached to ${\mathsf A}_{\bullet}$ for $H_{\varepsilon}$ is a singular cylinder ${\mathscr C}_{\bullet}({\varepsilon})$ for the vector field $X_{H_{\varepsilon}}$, such that there is a singular annulus ${\mathscr A}_{\bullet}({\varepsilon})$ with ${\mathscr C}_{\bullet}({\varepsilon})\subset{\mathscr A}_{\bullet}({\varepsilon})\cap H_{\varepsilon}^{-1}({\bf e})$. \item A generalized twist section for such a singular $d$-cylinder is a singular $2$-annulus which admits a continuation to a $2$-annulus, on which the Poincar\'e return map continues to a twist map. \end{itemize} \end{Def} We refer to Section~\ref{sec:normhyp} for the definition of normally hyperbolic singular cylinders and associated singular $4$-annuli. Again, note that the definition of ${\bf A}_{\varepsilon}$ and the convergence property yields a very precise localization for the singular annuli. \begin{lemma}\label{lem:singcyl} Assume $\kappa\geq\kappa_0$ large enough. Then given a singular 2-annulus ${\mathsf A}_{\bullet}$ of $C$, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$ there exists a singular $d$-cylinder ${\mathscr C}_{\varepsilon}$ at energy ${\bf e}$ attached to ${\mathsf A}_{\bullet}$ for $H_{\varepsilon}$, which admits a generalized twist section. \end{lemma} The proof of Lemma~\ref{lem:singcyl} is in Section~\ref{sec:prooflemsingcyl}. \subsection{The extremal $d$ cylinders and the interpolating $s$-cylinders}\label{ssec:scyldef} \setcounter{paraga}{0} In this section we come back to the initial assumptions of Proposition~\ref{prop:existcylinders}. 
We endow ${\Gamma}$ with an arbitrary orientation and, given $\delta>0$, we fix two consecutive elements $m^0$ and $m^1$ of $D(\delta)$ on ${\Gamma}$ according to that orientation. Let $[m^0,m^1]$ be the segment of ${\Gamma}$ they delimit (also according to that orientation). \paraga We introduce adapted coordinate systems $(x^i,y^i)$ at $m^i$ relatively to which $\omega(m^i)=(\omega^i_1,0,0)$ with $\omega^i_1\neq0$, and which both satisfy $$ {\Gamma}=\{m\in h^{-1}({\bf e})\mid \omega^i_3(m)=0\}. $$ For $m\in\, ]m^0,m^1[$, set $$ {\sigma}^i=\frac{\omega^i_1(m)}{\abs{\omega^i_1(m)}}\in\{-1,1\}. $$ Let $C_i={\tfrac{1}{2}} T_{m^i}+U_{m^i}$ be the $d$-averaged systems at $m^i$ relatively to the previous coordinate systems. Identify $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$ relatively to the same systems, and let $$ c^i=({\sigma}^i,0)\in H_1({\mathbb T}^2,{\mathbb Z}). $$ \paraga We can now define the extremal $d$-cylinders at the point $m\in\{m^0,m^1\}$ relatively to the resonance circle ${\Gamma}$. We fix the corresponding integer homology class $c\in \{c^0,c^1\}$ and consider a compact $2$-annulus ${\mathsf A}$ of $C$ defined over the interval $J=[e_P,e_\ell]$, where $e_P$ is the Poincar\'e energy for $c$ (see Theorem II). We want to continue the $d$-cylinders attached to ${\mathsf A}$ (and the corresponding annuli containing them) ``away from the double resonance and along ${\Gamma}$'', to a distance $O({\varepsilon}^\nu)$ where $\nu\in\,]0,{\tfrac{1}{2}}[$ can be arbitrarily chosen (provided that the regularity $\kappa$ is large enough). To state our result properly, we need to distinguish between the two components of the boundary of the $d$-cylinders ${\mathscr C}_{\varepsilon}$ attached to ${\mathsf A}$, introduced in Definition~\ref{def:scyl}. With the notation of Lemma~\ref{lem:dcyl}, let $(J\subset J^*,{\rm j})$ be a normalizing system for ${\mathsf A}$ and set $J^*=\,]e_P^*,e_\ell^*[$, so that $e_P^*<e_P$ and $e_\ell^*>e_\ell$. Since the projection $\Pi_\rho\big(\Phi_{\varepsilon}^{-1}({\mathscr C}_{\varepsilon})\big)$ on the $\rho$-axis contains the interval $J$, one can define the {\em inner component} $\partial_{in} {\mathscr C}_{\varepsilon}$ of $\partial {\mathscr C}_{\varepsilon}$ as the one whose corresponding projection intersects $]e_P^*,e_P]$, and the {\em outer component} $\partial_{out} {\mathscr C}_{\varepsilon}$ as the one whose corresponding projection intersects $[e_\ell,e_\ell^*[$. \begin{figure} \caption{An extremal cylinder} \end{figure} Let $(x,y)$ be the adapted coordinates at $m$. The resonance surface $\omega^{-1}(k^\bot)$ admits the graph representation $$ y_3=y_3(\widehat y) $$ and we assume that the $s$-averaged potential $$ U(\cdot ,y)=\int_{{\mathbb T}^2}f\big((\widehat x,\cdot),r)d\widehat x\quad :{\mathbb T}\to{\mathbb R} $$ admits in the neighborhood of $y(m)$ a unique and nondegenerate maximum at $x_3^*(y)$. \begin{lemma}\label{lem:extcyl} Fix $\nu\in\,]0,{\tfrac{1}{2}}[$ and constants $b>a>0$, $\mu>0$. 
Then for $\kappa\geq\kappa_0$ large enough, there exist ${\varepsilon}_0>0$ such that for $0<{\varepsilon}_0<{\varepsilon}$, there exists a cylinder ${\mathscr C}^{\rm ext}_{\varepsilon}$ which continues ${\mathscr C}_{\varepsilon}$ in the sense that: \vskip1mm ${\bullet}$ ${\mathscr C}_{\varepsilon}\subset {\mathscr C}^{\rm ext}_{\varepsilon}$, \vskip1mm ${\bullet}$ one component of the boundary $\partial{\mathscr C}^{\rm ext}_{\varepsilon}$ coincide with the inner component $\partial_{inn}{\mathscr C}_{\varepsilon}$ points \vskip 1mm ${\bullet}$ the other component of $\partial{\mathscr C}^{\rm ext}_{\varepsilon}$ is also a component of the boundary of an {\em invariant} cylinder contained in ${\mathscr C}^{\rm ext}_{\varepsilon}$ and located in the domain \begin{equation} \widehat x\in{\mathbb T}^2,\quad a{\varepsilon}^\nu\leq \norm{\widehat y-\widehat y^0}\leq b{\varepsilon}^\nu,\quad \abs{x_3-x^*_3(m)}\leq \mu,\quad \abs{y_3-y_3^*(m)}\leq\mu\sqrt{\varepsilon}. \end{equation} \end{lemma} The proof of Lemma~\ref{lem:extcyl} is in Section~\ref{sec:prooflemextcyl}. \paraga We can now define in a simple way the $s$-cylinder which ``interpolates'' between the previous extremal cylinders ${\mathscr C}^{\rm ext}_{\varepsilon}(m^0)$ and ${\mathscr C}^{\rm ext}_{\varepsilon}(m^1)$. We say that a cylinder ${\mathscr C}$ is {\em oriented} when an order on its boundary components has been fixed, we denote by $\partial_{\bullet}{\mathscr C}$ the first one and by ${\mathscr C}^{\bullet}$ the second one. We say that two cylinders ${\mathscr C}_0$, ${\mathscr C}_1$ contained in a cylinder are {\em consecutive} when $\partial^{\bullet}{\mathscr C}_0=\partial^{\bullet}{\mathscr C}^1$. \begin{Def} An $s$-cylinder at energy ${\bf e}$ ``connecting $m^0$ and $m^1$ along ${\Gamma}$'' is a normally cylinder ${\mathscr C}_{\varepsilon}$ at energy ${\bf e}$ for $H_{\varepsilon}$ which contains both extremal cylinders ${\mathscr C}^{\rm ext}_{\varepsilon}(m^0)$ and ${\mathscr C}^{\rm ext}_{\varepsilon}(m^0)$, and whose projection in action is located in a tubular neighborhood of radius $O(\sqrt{\varepsilon})$ of ${\Gamma}$. A twist section for an invariant cylinder $\widehat{\mathscr C}\subset{\mathscr C}_{\varepsilon}$ is a $2$-dimensional global section ${\Sigma}\subset \widehat{\mathscr C}$, transverse to $X_H$ in ${\mathscr C}$, which is the image of some exact-symplectic embedding ${\rm j}_{{\Sigma}}:{\mathbb T}\times [a,b]\to {\Sigma}$, such that the Poincar\'e return map associated with ${\Sigma}$ is a twist map in the ${\rm j}_{{\Sigma}}$-induced coordinates on ${\mathbb T}\times [a,b]$. We say that ${\mathscr C}_{\varepsilon}$ satifies the twist property when it admits a finite covering $\widehat{\mathscr C}_1,\ldots,\widehat {\mathscr C}_{\ell({\varepsilon})}$ by consecutive subcylinders, each of which admits a twist section in the previous sense, and whose boundaries are dynamically minimal. \end{Def} \begin{figure} \caption{An $s$-cylinder} \end{figure} \vskip-3mm \paraga The corresponding existence result is the following. \begin{lemma}\label{lem:scyl} Assume that $\kappa\geq\kappa_0$ large enough. Then there is an ${\varepsilon}_0$ such that there exists a family $({\mathscr C}_{\varepsilon})_{0<{\varepsilon}<{\varepsilon}_0}$ of $s$-cylinders at energy~${\bf e}$ connecting $m^0$ and $m^1$, which satsifies the twist property. 
\end{lemma} The proof of Lemma~\ref{lem:scyl} is in Section~\ref{sec:prooflemscyl} \section{Proof of the results of Section~\ref{sec:cylinders}}\label{sec:proofscyl} We successively prove Lemma~\ref{lem:dcyl}, Lemma~\ref{lem:singcyl} which rely on the ${\varepsilon}$-dependent normal forms of Appendix~\ref{app:normformepsdep}. We then deduce Lemma~\ref{lem:scyl} and Lemma~\ref{lem:extcyl} from the global normal form of Appendix~\ref{App:globnormforms}, for the former, and from the previous ${\varepsilon}$-dependent normal forms for the latter. \subsection{Proof of Lemma~\ref{lem:dcyl}}\label{sec:prooflemdcyl} We first prove the existence of normalizing diffeomorphisms. We then prove the existence of $d$--annuli by the persistence theorem of Appendix~\ref{app:normhyp}, applied to the normal form ${\mathsf N}_{\varepsilon}$ of equation (\ref{eq:scaleham}). The intersection of a $d$--annulus with ${\mathsf N}_{\varepsilon}^{-1}(0)$ is a pseudo invariant cylinder which admits a ``twist section'' (which will be naturally defined even in this pseudo invariant context). The invariant curve theorem (in the version by Herman) applied to the twist section proves the existence of a large family of isotropic $2$--dimensional invariant tori inside the pseudo invariant cylinders. The zone limited by two of them is a compact invariant cylinder and one can choose these tori close enough to the ``ends'' of the pseudo invariant cylinders to prove our statement. \subsubsection{Existence of the $d$--normalizing diffeomorphisms}\label{sec:dnormdiff} In this section we prove the following lemma. \begin{lemma}\label{lem:dnormalizing} Fix a set of parameters $(d^*,{\sigma},p,\ell)$. Then there exists $\kappa_0$ such then for $\kappa\geq\kappa_0$, there exists a $d$--normalizing diffeomorphism with parameters $(d^*,{\sigma},p,\ell)$. \end{lemma} \begin{proof} We apply Proposition \ref{prop:normal2} to our Hamiltonian $H_{\varepsilon}$ at $0$, with $\ell+1$ in place of $\ell$. We arbitrarily fix $d<{\tfrac{1}{2}}$ and ${\sigma}<1-d$ (we do not try to get optimal results here). Therefore one can choose $\kappa\in{\mathbb N}^*$ large enough such that, given any $d^*>0$, there is an ${\varepsilon}_0>0$ so that, for $0\leq{\varepsilon}\leq {\varepsilon}_0$, there exists an analytic symplectic embedding $$ \Phi_{\varepsilon}: {\mathbb T}^3\times B^3(0,d^*\sqrt{\varepsilon})\to {\mathbb T}^3\times B^3(0,2d^*\sqrt{\varepsilon}) $$ such that for $({\theta},r)\in {\mathbb T}^3\times B^3(0,d^*\sqrt{\varepsilon})$, $$ N_{\varepsilon}({\theta},r)=H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)=h(r)+g_{\varepsilon}(\overline {\theta},r)+R_{\varepsilon}({\theta},r), $$ where $g_{\varepsilon}$ is analytic on ${\mathbb T}^2\times B(0,d^*\sqrt{\varepsilon})$ and $R_{\varepsilon}$ is $C^\kappa$ on ${\mathbb T}^3\times B^3(0,d^*\sqrt{\varepsilon})$, with \begin{equation}\label{eq:estim2} \norm{g_{\varepsilon}-{\varepsilon}[f]}_{C^p\big( {\mathbb T}^2\times B^3(0,d^*\sqrt{\varepsilon})\big)}\leq {\varepsilon}^{2-{\sigma}},\qquad \norm{R_{\varepsilon}}_{C^p\big( {\mathbb T}^3\times B^3(0,d^*\sqrt{\varepsilon})\big)}\leq {\varepsilon}^{\ell+1}. \end{equation} The rescaling $r =\sqrt{\varepsilon}\, {\mathsf r}$, $t=\frac{1}{\sqrt{\varepsilon}}\,{{\mathsf t}}$ yields the new Hamiltonian \begin{equation} {\mathsf N}_{\varepsilon}({\theta},{\mathsf r})=\frac{1}{{\varepsilon}}\, N_{\varepsilon}({\theta},{\sqrt{\varepsilon}}\,{\mathsf r}). 
\end{equation} Then, performing a Taylor expansion of $h$ and $[f]$ at $r^0=0$ in $N_{\varepsilon}$, one gets the normal form: \begin{equation} {\mathsf N}_{\varepsilon}({\theta},{\mathsf r})=\frac{1}{{\varepsilon}}N_{\varepsilon}({\theta},\sqrt{\varepsilon}{\mathsf r})= \frac{\widehat \omega}{\sqrt{\varepsilon}}\,{\mathsf r}_1+{\tfrac{1}{2}} D^2h(0)\,{\mathsf r}^2+ U(\overline{\theta}) +{\mathsf R}^0_{\varepsilon}(\overline{\theta},{\mathsf r})+{\mathsf R}_{\varepsilon}({\theta},{\mathsf r}) \end{equation} where ${\mathsf R}_{\varepsilon}({\theta},{\mathsf r})=\frac{1}{{\varepsilon}}R_{\varepsilon}({\theta},\sqrt{\varepsilon}{\mathsf r})$ is a $C^\kappa$ function on ${\mathbb T}^3\times B^3(0,d^*)$ such that $$ \norm{{\mathsf R}_{\varepsilon}}_{C^p({\mathbb T}^n\times B(0,d^*))}\leq {\varepsilon}^\ell. $$ Moreover $$ {\mathsf R}^0_{\varepsilon}(\overline{\theta},{\mathsf r})= \frac{1}{{\varepsilon}}\big(g_{\varepsilon}(\overline{\theta},\sqrt{\varepsilon}{\mathsf r})-{\varepsilon} U(\overline{\theta})\big)+\frac{{\varepsilon}^{1/2}}{2}\int_0^1(1-s)^2D^3h(s\sqrt{\varepsilon}{\mathsf r})({\mathsf r}^3)\,ds. $$ Note that $$ \frac{1}{{\varepsilon}}\Big(g_{\varepsilon}(\overline{\theta},\sqrt{\varepsilon}{\mathsf r})-{\varepsilon} U(\overline{\theta})\big)= \frac{1}{{\varepsilon}}\big(g_{\varepsilon}(\overline{\theta},\sqrt{\varepsilon}{\mathsf r})-{\varepsilon} [f](\overline{\theta},\sqrt{\varepsilon}{\mathsf r})\Big) -\Big([f](\overline{\theta},\sqrt{\varepsilon}{\mathsf r})-[f](\overline{\theta},0)\Big), $$ so that, for a suitable constant $a>0$: $$ \norm{{\mathsf R}^0_{\varepsilon}}_{C^p({\mathbb T}^2\times B^3(0,d^*))}\leq a\sqrt {\varepsilon}. $$ This concludes the proof. \end{proof} \subsubsection{Existence of the annuli} \setcounter{paraga}{0} In this section we deal with the normal form ${\mathsf N}_{\varepsilon}$ of (\ref{eq:scaleham}). We fix a compact annulus ${\mathsf A}$ for $C$, defined over $J$. By hyperbolic continuation of periodic orbits together with the torsion assumptions, ${\mathsf A}$ can be continued to an annulus ${\mathsf A}^*$ (satisfying the same torsion properties as ${\mathsf A}$), defined over a slightly larger open interval $J^*$. Using the Moser isotopy argument, one then proves the existence of a neighborhood ${\mathscr O}_{\mathsf A}$ of ${\mathsf A}$ in ${\mathbb A}^2$ and an $\alpha>0$ such that, setting \begin{equation} {\mathscr O}(J^*,\alpha):={\mathbb T}\times J^*\times\,]-\alpha,\alpha[^2 \end{equation} there exists a ``straightening'' symplectic diffeomorphism \begin{equation} \left\vert \begin{array}{lll} \phi: {\mathscr O}(J^*,\alpha)\longrightarrow {\mathscr O}_A,\\[3pt] \phi(\varphi,\rho,0,0)={\rm j}(\varphi,\rho),\qquad \forall (\varphi,\rho)\in{\mathbb T}\times J^*,\\[3pt] \phi^{-1}\big(W_{loc}^s({\mathsf A}^*)\big)=\{u=0\},\qquad \phi^{-1}\big(W_{loc}^u({\mathsf A}^*)\big)=\{s=0\},\qquad (u,s)\in \,]-\alpha,\alpha[^2.\\ \end{array} \right. \end{equation} so that, in particular, $\phi\big({\mathbb T}\times J^*\times\{(0,0)\}\big)={\mathsf A}^*$. We say that the symplectic embedding $\phi$ and the coordinates $(\varphi,\rho,u,s)$ are adapted to ${\mathsf A}$ (or ${\mathsf A}^*$). The main result of this section is the following, where we set ${\mathbb T}_{\varepsilon}:={\mathbb R}/\sqrt{\varepsilon}{\mathbb Z}$. \begin{lemma}\label{lem:existann} Assume that ${\mathsf N}_{\varepsilon}$ satisfies (\ref{eq:scaleham}) and (\ref{eq:estimrem}), and let $q\geq1$ be a fixed integer and $a$ be a positive constant. 
Then if $\ell$ is large enough, for any open interval $J^{\bullet}$ such that $J\subset J^{\bullet}\subset J^*$, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$ there is a symplectic $C^p$ embedding \begin{equation}\label{eq:embed} \left\vert \begin{array}{ll} \Psi_{\varepsilon}: {\mathbb T}_{\varepsilon}\times [-1,1]\times {\mathbb T}\times J^{\bullet}\longrightarrow{\mathbb A}^3\\ \Psi_{\varepsilon}(\xi,\eta,\varphi,\rho)=\Big(\tfrac{1}{\sqrt{\varepsilon}}\xi,\sqrt{\varepsilon}\,\eta,\, j_{\varepsilon}\big(\xi,\eta,\varphi,\rho\big)\Big), \qquad \norm{j_{\varepsilon}-j}_{C^p({\mathbb R}\times [-1,1]\times{\mathbb T}\times J)}\leq a\sqrt{\varepsilon}, \end{array} \right. \end{equation} whose image is a pseudo invariant hyperbolic annulus for ${\mathsf N}_{\varepsilon}$. Here ${\mathbb T}_{\varepsilon}\times [-1,1]\times {\mathbb T}\times J$ is equipped with the symplectic form $d\,\eta\wedge d\xi +d\rho\wedge d\varphi$. The vector field $\Psi_{\varepsilon}^*X^{{\mathsf N}_{\varepsilon}}$ is generated by the Hamiltonian \begin{equation} {\mathsf M}_{\varepsilon}(\xi,\eta,\varphi,\rho) =\widehat\omega\,\eta + {\mathsf C}_0(\rho) + \Lambda^0_{\varepsilon}(\eta,\rho)+ \Lambda_{\varepsilon}\Big(\tfrac{1}{\sqrt{\varepsilon}}\xi,\eta,\varphi,\rho\Big), \end{equation} where $\Lambda_{\varepsilon}$ is $1$--periodic in its first variable and in $\varphi$, and \begin{equation} \norm{\Lambda^0_{\varepsilon}}_{C^p}\leq a \sqrt{\varepsilon}, \qquad \norm{\Lambda_{\varepsilon}}_{C^p}\leq a {\varepsilon}^q. \end{equation} \end{lemma} \begin{proof} We will proceed in three steps to take into account the partial integrability of the system ${\mathsf N}_{\varepsilon}$ up to the extremely small term ${\mathsf R}_{\varepsilon}$. \vskip3mm \noindent${\bullet}$ {\bf First step.} Using a suitable rescaling in $(\widehat{\theta},\widehat{\mathsf r})$, we will first prove the existence of a hyperbolic annulus for the {\em truncated} normal form ${\mathsf N}_{\varepsilon}^0={\mathsf N}_{\varepsilon}-{\mathsf R}_{\varepsilon}$ (see (\ref{eq:scaleham})). \paraga From the definition of the adapted embedding and coordinates, it is easy to check that the composed Hamiltonian ${\mathsf C}=C\circ \phi$ takes the form \begin{equation}\label{eq:hambC} {\mathsf C}(\varphi,\rho,u,s)={\mathsf C}_0(\rho)+\lambda(\varphi,\rho)\,us+{\mathsf C}_3(\varphi,\rho,u,s) \end{equation} with $ \partial^{i+j}_{u^is^j}{\mathsf C}_3(\varphi,\rho,0,0)=0$ for $ 0 \leq i+j\leq 2 $ and $ \lambda(\varphi,\rho)\geq\lambda_0>0 $ by compactness of the closure $\overline {J^*}$. As a consequence, the vector field $X^{\mathsf C}=\phi^*X^C_{\vert {\mathscr O}}$ reads: \begin{equation}\label{eq:avsys} \left\vert \begin{array}{lll} \varphi'=\varpi(\rho)+K^\varphi(\varphi,\rho,u,s)\\ \rho'=K^\rho(\varphi,\rho,u,s)\\ u'=\lambda(\varphi,\rho)\,u+K^u(\varphi,\rho,u,s)\\ s'=-\lambda(\varphi,\rho)\,s+K^s(\varphi,\rho,u,s),\\ \end{array} \right. \end{equation} where $K:=(K^\varphi,K^\rho,K^u,K^s)$ satisfies $ K(\varphi,\rho,0,0)=0 $, $ \partial_uK(\varphi,\rho,0,0)=\partial_sK(\varphi,\rho,0,0)=0 $ for $(\varphi,\rho)\in {\mathbb T}\times J^*$. \paraga We see all the functions and systems as defined over the universal cover of their domains, so that in particular $\widehat{\theta}\in{\mathbb R}$ and $\varphi\in{\mathbb R}$. We use the same notation ${\mathcal O}$ and ${\mathscr O}$ for the initial domains and their covers. 
We introduce the symplectic transformation \begin{equation}\label{eq:chi} \left\vert \begin{array}{lll} \chi:{\mathbb R}\times\,]-2,2[\,\times {\mathcal O}\longrightarrow {\mathbb R}\times\,]-2\sqrt{\varepsilon},2\sqrt{\varepsilon}[\,\times {\mathscr O}\\[4pt] \chi(\xi,\eta,\varphi,\rho,u,s)=\Big(\widehat{\theta}=\tfrac{1}{\sqrt{\varepsilon}}\, \xi,\,\widehat{\mathsf r}=\sqrt{\varepsilon}\,\eta,\,(\overline{\theta},\overline{\mathsf r})=\phi\big(\varphi, \rho,u,s\big)\Big). \end{array} \right. \end{equation} We consider the restriction of ${\mathsf N}^0_{\varepsilon}$ to the range ${\mathbb R}\times\,]-2\sqrt{\varepsilon},2\sqrt{\varepsilon}[\,\times {\mathscr O}$ and use bold letters to denote the functions ${\mathsf N}_{\varepsilon}^0\circ\chi,Q\circ\chi,{\mathsf R}^0_{\varepsilon}\circ\chi$, so that: \begin{equation} {\mathsf N}_{\varepsilon}^0(z^0)=\widehat\omega\,\eta+{\bf Q}(z^0)+{\mathsf C}({\zeta})+{\bf R}_{\varepsilon}^0(z^0), \end{equation} where ${\zeta}=(\varphi,\rho,u,s)$ and $z^0=(\eta,\varphi,\rho,u,s)$. The symplectic character of $\chi$ yields the following form for the Hamiltonian vector field $X^{{\mathsf N}_{\varepsilon}^0}$ in the coordinate system $(\xi,\eta,\varphi, \rho,u,s)$ \begin{equation}\label{eq:mainsyst} \left\vert \begin{array}{lllll} \xi'=\widehat\omega& &\!\!\!+\ \partial_{\eta}{\bf Q}(z^0)& \!\!\!+\ \partial_{\eta}{\bf R}^0_{\varepsilon}(z^0)&\!\!\! \\ \eta'=0& &&&\!\!\! \\ \varphi'=\varpi(\rho)&\!\!\!+\ K^\varphi({\zeta}) &\!\!\!+\ \partial_{\rho}{\bf Q}(z^0)& \!\!\!+\ \partial_{\rho}{\bf R}^0_{\varepsilon}(z^0)&\!\!\! \\ \rho'=0&\!\!\!+\ K^\rho({\zeta}) &\!\!\!-\ \partial_{\varphi}{\bf Q}(z^0)&\!\!\!-\ \partial_{\varphi}{\bf R}^0_{\varepsilon}(z^0)&\!\!\! \\ u'=\lambda(\varphi,\rho)\,u&\!\!\! +\ K^u({\zeta})&\!\!\!+\ \partial_{s}{\bf Q}(z^0)&\!\!\!+\ \partial_{s}{\bf R}^0_{\varepsilon}(z^0)&\!\!\! \\ s'=-\lambda(\varphi,\rho)\,s&\!\!\! +\ K^s({\zeta})&\!\!\!-\ \partial_{u}{\bf Q}(z^0)&\!\!\!-\ \partial_{u}{\bf R}^0_{\varepsilon}(z^0)&\!\!\! \\ \end{array} \right. \end{equation} Note that $Q({\mathsf r})=\widehat{\mathsf r}\cdot L({\mathsf r})$, where $L:{\mathbb R}^3\to{\mathbb R}$ is a linear map. We set ${\bf L}=L\circ\chi$, so, by (\ref{eq:chi}): \begin{equation}\label{eq:estimnew1} \partial_y{\bf Q}(z^0)=2\sqrt{\varepsilon}\, \eta \cdot \partial_y{\bf L}(z^0)\quad\textrm{and}\quad \partial_{\eta}{\bf Q}(z^0)=2\sqrt{\varepsilon}\,{\bf L}(z^0)+2\sqrt{\varepsilon}\, \eta \cdot \partial_{\eta}{\bf L}(z^0) \end{equation} where $y$ stands for any variable in the set $\{\varphi,\rho,u,s\}$, and where the derivatives $\partial_y{\bf L}$ and $\partial_{\eta}{\bf L}$ are bounded in the $C^p$ topology, by periodicity and compactness. Moreover, clearly $ \norm{{\bf R}^0_{\varepsilon}}_{C^p}\leq c_0\sqrt{\varepsilon}, $ for a suitable constant $c_0>0$. \paraga To get the assumptions of the persistence theorem, we fix an interval $J$ strictly contained in $J^*$ and we introduce the following rescaling, where $\gamma_0,\gamma_1\in\,]0,1[$ will be chosen below: \begin{equation}\label{eq:tilchi} \left\vert \begin{array}{lll} \widetilde\chi: {\mathcal D}\longrightarrow {\mathcal D}_{\varepsilon}\\[4pt] \widetilde\chi(\xi,\eta,\widetilde\varphi,\widetilde\rho,\widetilde u,\widetilde s)=\Big(\xi,\eta,\varphi=\widetilde\varphi/\gamma_0,\eta=\widetilde\eta,u={\sqrt{\varepsilon}}\,\widetilde u/{\gamma_1} ,\ s={\sqrt{\varepsilon}}\,\widetilde s/{\gamma_1}\Big). \end{array} \right. 
\end{equation} where ${\mathcal D}_{\varepsilon}$ and ${\mathcal D}$ are the subddomains of ${\mathbb R}\times\,]-2,2[\,\times{\mathcal O}$ defined by $\norm{(u,s)}\leq {2\sqrt{\varepsilon}/\gamma_1}$ and $\norm{(\widetilde u,\widetilde s)}\leq {2/\gamma_1}$ respectively. We will restrict the system (\ref{eq:mainsyst}) to the domain ${\mathcal D}_{\varepsilon}$. On the domain ${\mathcal D}$, the vector field $\widetilde\chi^*X^{{\mathsf N}^0_{\varepsilon}}$ takes the following form \begin{equation}\label{eq:avsys20} \left\vert \begin{array}{llll} \xi'=\widehat\omega + F^{\theta}_{\varepsilon}(\widetilde z^0) \\ \eta'=0 \\ \widetilde\varphi'=\gamma_0\,\varpi(\widetilde \rho) +F^\varphi_{\varepsilon}(\widetilde z^0) \\ \widetilde\rho'=F^\rho_{\varepsilon}(\widetilde z^0) \\ \widetilde u'=\widetilde\lambda(\widetilde\varphi,\widetilde\rho)\,\widetilde u+ F_{\varepsilon}^{u}(\widetilde z^0)\\ \widetilde s'=-\widetilde\lambda(\widetilde\varphi,\widetilde\rho)\,\widetilde s+ F_{\varepsilon}^{s}(\widetilde z^0)\\ \end{array} \right. \end{equation} with, by direct computation and assuming ${\varepsilon}<1$: \begin{equation}\label{eq:finalest} \norm{F_{\varepsilon}}_{C^p}\leq 3 \mathop{\rm Max\,}\limits\Big(\frac{{\sigma}\sqrt{\varepsilon}}{\gamma_0^{p}\gamma_1^2},\frac{{\sigma}\gamma_1}{\gamma_0^{p}}\Big). \end{equation} \paraga Using a bump flat-top function, one can continue the function $F$ to a function defined over ${\mathbb R}^6$, still denoted by $F$, which coincide with the initial one over the domain $\widetilde{\mathcal D}/2$ defined by the inequality $ \norm{(\widetilde u,\widetilde s)}\leq 1, $ and which moreover vanish outside $\widetilde{\mathcal D}$ and have the same periodicity properties (relatively to $\xi$ and $\widetilde \varphi$) as the initial one. We can also assume that its $C^p$ norm over ${\mathbb R}^6$ satisfies (\ref{eq:finalest}) up to the choice of a larger constant ${\sigma}$. This continuation yields a new vector field $Y_{\varepsilon}$ on ${\mathbb R}^{6}$. The persistence theorem applies to $Y_{\varepsilon}$: there is a constant $\nu<\!\!<1$ such that, if $\varpi,\gamma_0,\gamma_1,{\varepsilon}_0$ satisfy \begin{equation} \mathop{\rm Max\,}\limits(\gamma_0\norm{\varpi}_{C^p},\gamma_1/\gamma_0^p,\sqrt{\varepsilon}_0/\gamma_0^{p}\gamma^2_1)<\nu \end{equation} then for $0\leq{\varepsilon}\leq{\varepsilon}_0$ the vector field $Y_{\varepsilon}$ admits a normally hyperbolic invariant manifold, of the form \begin{equation} \widetilde u=\widetilde U_{\varepsilon}(\eta,\widetilde\varphi,\widetilde\rho),\quad \widetilde s= \widetilde S_{\varepsilon}(\eta,\widetilde\varphi,\widetilde\rho), \qquad \norm{(\widetilde U_{\varepsilon},\widetilde S_{\varepsilon})}_{C^p}\leq \overline a<1. \end{equation} where $\widetilde U_{\varepsilon}$ and $\widetilde S_{\varepsilon}$ are $C^p$ functions and $\overline a$ is an arbitrary positive constant. Note that $\widetilde U_{\varepsilon}$ and $\widetilde S_{\varepsilon}$ are independent of $\xi$. Clearly, this invariant manifold is contained in $\widetilde{\mathcal D}/2$, so that the initial system (\ref{eq:avsys20}) also admits a normally hyperbolic invariant manifold, with the same equation. Moreover, since the system (\ref{eq:avsys20}) is $\gamma_0$ periodic in $\widetilde\varphi$, by hyperbolic uniqueness, the functions $\widetilde U_{\varepsilon}$ and $\widetilde S_{\varepsilon}$ are $\gamma_0$ periodic in $\widetilde\varphi$ too (and independent of $\xi$), see\cite{B10}. 
\paraga As a consequence, the initial system (\ref{eq:mainsyst}) admits a $4$--dimensional pseudo invariant normally hyperbolic annulus $\widehat{\mathcal A}_{\varepsilon}^0$ with equation, in the coordinates $(\xi,\eta,\varphi,\rho,u,s)$: \begin{equation} \xi\in {\mathbb T}_{\varepsilon},\ \eta\in\,]-1,1[ ,\ \varphi\in{\mathbb T},\ u=\sqrt{\varepsilon}\, U_{\varepsilon}(\eta,\varphi,\rho),\ s= \sqrt{\varepsilon}\, S_{\varepsilon}(\eta,\varphi,\rho), \qquad \norm{(U_{\varepsilon},S_{\varepsilon})}_{C^p}\leq \overline a, \end{equation} where ${\mathbb T}_{\varepsilon}:={\mathbb R}/\sqrt{\varepsilon}{\mathbb Z}$ and $U_{\varepsilon}(\eta,\varphi,\rho):=\widetilde U_{\varepsilon}(\eta,\gamma_0\varphi,\gamma_1\rho)$ and an analogous definition for $S_{\varepsilon}$. We will see $\widehat{\mathcal A}_{\varepsilon}^0$ as the image of the embedding \begin{equation} \left\vert \begin{array}{ll} \widehat\Psi_{\varepsilon}^0 : {\mathbb T}_{\varepsilon}\times \,]-1,1[\,\times {\mathbb T}\times J\longrightarrow {\mathbb A}^3\\[4pt] \widehat\Psi_{\varepsilon}^0(\xi,\eta,\varphi,\rho)=\big(\xi,\eta,\,j_{\varepsilon}^0\big(\eta,\varphi,\rho\big)\big),\\ \end{array} \right. \end{equation} with \begin{equation} j_{\varepsilon}^0(\eta,\varphi,\rho)=\phi\big(\varphi,\rho,\sqrt{\varepsilon}\, U_{\varepsilon}(\eta,\varphi,\rho), \sqrt{\varepsilon}\, S_{\varepsilon}(\eta,\varphi,\rho)\big). \end{equation} \vskip3mm \setcounter{paraga}{0} \noindent${\bullet}$ {\bf Second step.} We now take advantage of the integrable structure of $X^{{\mathsf N}_{\varepsilon}^0}$ restricted to $\widehat{\mathcal A}_{\varepsilon}^0$, stemming both from the fact that, relatively to the $(\xi,\eta,\varphi,\rho)$ coordinates, it is independent of $\xi$ and that the ``complementary part'' is an $\eta$--family of one-degree-of-freedom systems in the variables $(\varphi,\rho)$. \paraga One immediately checks that the annulus $\widehat{\mathcal A}^0_{\varepsilon}$ is controllable, so that it is a symplectic submanifold of ${\mathbb A}^3$. The vector field $(\widehat\Psi_{\varepsilon}^0)^*X^{{\mathsf N}_{\varepsilon}}$ is generated by the Hamiltonian $\widehat{\bf M}^0_{\varepsilon}={\mathsf N}^0_{\varepsilon}\circ \widehat\Psi_{\varepsilon}^0$ relatively to the symplectic form \begin{equation}\label{eq:inducedform} \Omega_{\varepsilon}:=(\widehat\Psi_{\varepsilon}^0)^*\Omega=d\eta\wedge d\xi+d\rho\wedge d\varphi +{\varepsilon}\, d U_{\varepsilon}\wedge d S_{\varepsilon}. \end{equation} Clearly \begin{equation}\label{eq:hambm} \begin{array}{lll} \widehat{\bf M}^0_{\varepsilon}(\xi,\eta,\varphi,\rho)=\widehat\omega\,\eta &+& \sqrt{\varepsilon}\,\eta\,{\bf L}\big(\sqrt{\varepsilon}\,\eta,\varphi,\rho,\sqrt{\varepsilon}\, U_{\varepsilon}(\eta,\varphi,\rho), \sqrt{\varepsilon}\, S_{\varepsilon}(\eta,\varphi,\rho)\big)\\ &+& {\mathsf C}\big(\varphi,\rho,\sqrt{\varepsilon}\, U_{\varepsilon}(\eta,\varphi,\rho), \sqrt{\varepsilon}\, S_{\varepsilon}(\eta,\varphi,\rho)\big)\\ &+ &{\bf R}_{\varepsilon}^0\big(\sqrt{\varepsilon}\,\eta,\varphi,\rho,\sqrt{\varepsilon}\, U_{\varepsilon}(\eta,\varphi,\rho), \sqrt{\varepsilon}\, S_{\varepsilon}(\eta,\varphi,\rho)\big). \end{array} \end{equation} Observe that $\widehat{\bf M}^0_{\varepsilon}$ is independent of $\xi$. Moreover, one deduces from (\ref{eq:avsys20}) that the function $\eta$ is a first integral for $\widehat{\bf M}^0_{\varepsilon}$ relatively to $\Omega_{\varepsilon}$. 
Each level set $\widehat{\bf M}^0_{\varepsilon}={\bf e}$ (for $\abs{{\bf e}}$ small enough) is regular since the Hamiltonian vector field does not vanish, and is moreover foliated by the invariant subsets \begin{equation}\label{eq:leaves} {\mathscr T}(\eta^0,{\bf e})=\big({\mathbb T}\times\{\eta^0\}\big)\times \big\{(\varphi,\rho)\in{\mathbb T}\times J\mid \widehat{\bf M}^0_{\varepsilon}(\eta^0,\varphi,\rho)={\bf e}\big\},\qquad \abs{\eta^0}<1. \end{equation} Thanks to (\ref{eq:hambm}), one immediately checks that \begin{equation} \partial_\rho\widehat{\bf M}^0_{\varepsilon}(\eta,\varphi,\rho)={\mathsf C}'_0(\rho)+O(\sqrt{\varepsilon}). \end{equation} Hence $\partial_\rho\widehat{\bf M}^0_{\varepsilon}(\eta,\varphi,\rho)\neq0$ fo ${\varepsilon}$ small enough, by our monotonicity assumption on the periods of the periodic orbits of ${\mathsf A}$ (see Section 3 of the Introduction and Section 1 of Part II). Therefore the functions $\widehat{\bf M}^0_{\varepsilon}$ and $\eta$ are independent, and ${\mathscr T}(\eta^0,{\bf e})\subset (\widehat{\bf M}^0_{\varepsilon})^{-1}({\bf e})$ is a $2$--dimensional Liouville torus for $\widehat {\bf M}_{\varepsilon}^0$, Lagrangian for $\Omega_{\varepsilon}$. \paraga By the Liouville-Arnold theorem, there exist angle-action coordinates adapted to the Lagrangian foliation $({\mathscr T}(\eta^0,{\bf e}))$, relatively to which the Hamiltonian is independent of the angles. The actions are the periods $ A_j=\int_{\nu_j}\lambda_{\varepsilon} $, $j=1,2$, of the Liouville form $$ \lambda_{\varepsilon}=(\widehat\Psi_{\varepsilon}^0)^*\lambda=\eta \,d\xi + \rho\,d\varphi + {\varepsilon} S_{\varepsilon} \, dU_{\varepsilon} $$ (where $\lambda=\eta\,d\xi+\rho\,d\varphi+s\,du$ is the standard Liouville form of ${\mathbb A}^3$) over a basis $(\nu_j)$ of the homology of the tori ${\mathscr T}(\eta^0,{\bf e})$. For $\nu_1$, one chooses the canonical cycle generated by the angle $\xi$, which obviously yields $A_1=\eta$. As for $\nu_{2}$, one chooses the cycle generated by the angle $\varphi$, that is: $\{(0,\eta^0)\}\times\{(\varphi,\rho)\in{\mathbb T}\times J\mid \widehat{\bf M}^0_{\varepsilon}(\eta^0,\varphi,\rho)={\bf e}\}$. Therefore by immediate computation taking (\ref{eq:hambm}) into account to get $\rho$ as an implicit function of $\eta,\varphi$: $$ A_{2}=\int_{\mathbb T}\rho(\varphi)\,d\varphi+{\varepsilon}\int_{\nu_2} S_{\varepsilon} \, dU_{\varepsilon}=\rho+\sqrt{\varepsilon} \,a^\rho(\eta,\rho,{\varepsilon}), $$ where $a^\rho$ is a $C^p$ function. The associated angles $\alpha_i$ are computed using a generating function, which easily yields $$ \alpha_1=\xi,\qquad \alpha_{2}=\varphi+\sqrt{\varepsilon}\,a^\varphi(\eta,\rho), $$ where $a^\varphi$ is a $C^p$ function. Given $J^{\bullet}$ such that $J\subset J^{\bullet}\subset J^*$ be a slightly smaller interval, the angle-action embedding ${\sigma}_{\varepsilon}(\alpha_1,A_1,\alpha_2,A_2)=(\xi,\eta,\varphi,\rho)$ is well defined over ${\mathbb T}_{\varepsilon}\times \,]-1,1[\,\times {\mathbb T}\times J^{\bullet}$ for ${\varepsilon}$ small enough, with values in ${\mathbb T}_{\varepsilon}\times \,]-1,1[\,\times {\mathbb T}\times J^{*}$ According to the previous product decomposition, ${\sigma}_{\varepsilon}={\rm Id}\times {\sigma}_{\varepsilon}^{(2)}$ , where ${\sigma}_{\varepsilon}^{(2)}$ is $\sqrt{\varepsilon}$--close to the identity in the $C^p$ topology. 
Moreover, by construction $$ {\sigma}_{\varepsilon}^*\Omega_{\varepsilon}=dA_1\wedge d\alpha_1+dA_2\wedge d\alpha_2, $$ and the transformed Hamiltonian ${\bf M}_{\varepsilon}^0=\widehat{\bf M}_{\varepsilon}^0\circ {\sigma}_{\varepsilon}$ depends only on $(A_1,A_2)$. In the following we set $\Psi_{\varepsilon}^0=\widehat\Psi_{\varepsilon}^0\circ{\sigma}_{\varepsilon}$, so that ${\bf M}_{\varepsilon}^0={\mathsf N}_{\varepsilon}^0\circ\Psi_{\varepsilon}^0$. Clearly, using (\ref{eq:hambm}): \begin{equation}\label{eq:interham} {\bf M}_{\varepsilon}^0(\alpha_1,A_1,\alpha_2,A_2)=\widehat\omega\cdot A_1+{\mathsf C}_0(A_2) +\Lambda_{\varepsilon}^0(A_1,A_2),\qquad \norm{\Lambda_{\varepsilon}^0}_{C^p}\leq a^{**}\sqrt{\varepsilon} \end{equation} where $a^{**}$ is a large enough constant. \vskip3mm \setcounter{paraga}{0} \noindent${\bullet}$ {\bf Third step.} We finally use the straightening lemma again to get a normal form in the neighborhood of ${\mathcal A}_{\varepsilon}^0$, to which we only have to add the remainder ${\mathsf R}_{\varepsilon}$ to get the final annulus ${\mathcal A}$ and the corresponding estimates by the persistence theorem. The method is very similar to that of the first step and we will skip the details. \paraga By the symplectic normally hyperbolic persistence theorem (Appendix~\ref{app:normhyp}), one can continue the immersion $\Psi_{\varepsilon}^0$ to a straightening symplectic embedding $$ \Phi_{\varepsilon}^0:\frac{{\mathbb R}}{\sqrt{\varepsilon}{\mathbb Z}}\times \,]-1,1[\,\times {\mathbb T}\times J\times ]-\alpha,\alpha[^2\to{\mathscr U} $$ where ${\mathscr U}$ is an open neighborhood of the annulus ${\mathcal A}_{\varepsilon}^0$ in ${\mathbb A}^3$, such that $$ \Phi_{\varepsilon}^0(\xi,\eta,\varphi,\rho,0,0)=\Psi_{\varepsilon}^0(\xi,\eta,\varphi,\rho), $$ and such that the stable and unstable manifolds of the points of ${\mathcal A}_{\varepsilon}^0$ are straightened. We introduce the composed Hamiltonian $$ {\mathscr N}_{\varepsilon}^0={\mathsf N}_{\varepsilon}^0\circ\Phi_{\varepsilon}^0={\mathsf N}_{\varepsilon}^0\circ\chi\circ\Phi_{\varepsilon}^0 $$ where $\chi$ was defined in (\ref{eq:chi}). One easily proves that ${\mathscr N}_{\varepsilon}^0$ admits the following expansion with respect to the hyperbolic variables $(u,s)$: $$ {\mathscr N}_{\varepsilon}^0(x,u,s)={\bf M}_{\varepsilon}^0(x)+{\boldsymbol \lambda}(x)us+({\mathscr N}_{\varepsilon}^0)_3(x,u,s,{\varepsilon}), $$ where $x=(\xi,\eta,\varphi,\rho)$, ${\boldsymbol \lambda}(x)\geq{\boldsymbol \lambda}_0>0$ and \begin{equation}\label{eq:vanishbN} ({\mathscr N}_{\varepsilon}^0)_3(x,0,0)=0,\qquad D({\mathscr N}_{\varepsilon}^0)_3(x,0,0)=0,\qquad D^2({\mathscr N}_{\varepsilon}^0)_3(x,0,0)=0. \end{equation} In the following we abbreviate $({\mathscr N}_{\varepsilon}^0)_3$ in ${\mathscr N}_3$. \paraga We now go back to the initial Hamiltonian ${\mathsf N}_{\varepsilon}$ in the new set of variables, and set $$ {\mathscr N}_{\varepsilon}={\mathsf N}_{\varepsilon}\circ\chi\circ\Phi_{\varepsilon}^0={\mathscr N}_{\varepsilon}^0+{\mathscr R}_{\varepsilon},\qquad {\mathscr R}_{\varepsilon}={\mathsf R}_{\varepsilon}\circ\chi\circ\Phi_{\varepsilon}^0. $$ As above, we perform a cutoff and limit ourselves to the domain ${\mathscr D}_{\varepsilon}$ $$ {\mathscr D}_{\varepsilon}:\qquad \xi\in{\mathbb R},\quad \abs{\eta}\leq{\tfrac{1}{2}},\quad \varphi\in{\mathbb R},\quad \rho\in J_{\bullet},\quad \norm{(u,s)}\leq2{\varepsilon}^q, $$ where again $J_{\bullet}\subset J$ is arbitrarily close to $J$ and $q\geq 1$ is an arbitrary integer. 
We introduce the rescaled variables $$ \xi=\frac{\widetilde\xi}{\gamma_0},\quad \eta=\widetilde\eta,\quad \varphi=\frac{\widetilde\varphi}{\gamma_0}, \quad\rho=\widetilde\rho,\quad u={\varepsilon}^q\widetilde u,\quad s={\varepsilon}^q\widetilde s, $$ where $\gamma_0$ is to be chosen below, and $\norm{(\widetilde u,\widetilde s)}\leq 2$. We again use flat-topped bump functions to extend the functions to ${\mathbb R}^6$. The system associated with $X^{{\mathscr N}_{\varepsilon}}$ on ${\mathbb R}^6$ then reads: \begin{equation}\label{eq:mainsyst2} \left\vert \begin{array}{lllll} \widetilde\xi'=\gamma_0\big[\partial_\eta {\bf M}_{\varepsilon}^0 &\!\!\!+\ {\varepsilon}^{2q}\widetilde{\partial_{\eta}{\boldsymbol \lambda}}\,\widetilde u\widetilde s& \!\!\!+\ \widetilde{\partial_{\eta}{\mathscr N}_3}&\!\!\! +\ \widetilde{\partial_{\eta} {\mathscr R}_{\varepsilon}}\big] \\[5pt] \widetilde\eta'=\hskip8mm0& &\!\!\!-\ \widetilde{\partial_{\xi}{\mathscr N}_3}&\!\!\! -\ \widetilde{\partial_{\xi} {\mathscr R}_{\varepsilon}} \\[5pt] \widetilde\varphi'=\gamma_0\big[\partial_\rho {\bf M}_{\varepsilon}^0 &\!\!\!+\ {\varepsilon}^{2q}\widetilde{\partial_{\rho }{\boldsymbol \lambda}}\,\widetilde u\widetilde s& \!\!\!+\ \widetilde{\partial_{\rho }{\mathscr N}_3}&\!\!\! +\ \widetilde{\partial_{\rho} {\mathscr R}_{\varepsilon}}\big] \\[5pt] \widetilde\rho'=\hskip8mm0& &\!\!\!-\ \widetilde{\partial_{\varphi}{\mathscr N}_3}&\!\!\! -\ \widetilde{\partial_{\varphi} {\mathscr R}_{\varepsilon}} \\[5pt] \widetilde u'=&\hskip10mm\widetilde{{\boldsymbol \lambda}}\,\widetilde u &\!\!\!+\ {{\varepsilon}^{-q}}\widetilde{\partial_{s}{\mathscr N}_3}&\!\!\! +\ {{\varepsilon}^{-q}} \widetilde{\partial_{s} {\mathscr R}_{\varepsilon}} \\[5pt] \widetilde s'=&\hskip8mm-\widetilde{{\boldsymbol \lambda}}\,\widetilde s &\!\!\!-\ {{\varepsilon}^{-q}}\widetilde{\partial_{u}{\mathscr N}_3}&\!\!\! -\ {{\varepsilon}^{-q}}\widetilde{\partial_{u} {\mathscr R}_{\varepsilon}} \\ \end{array} \right. \end{equation} where the $\widetilde{\phantom{A}}$ stands for the composition by the rescaled variables. Given $v\in\{\xi,\eta,\rho,\varphi,u,s\}$, the following (not optimal) estimates are immediate: $$ \norm{\widetilde{\partial_v{\mathscr N}_3}}_{C^j}\leq {\sigma}_j\frac{{\varepsilon}^{2q}}{\gamma_0^p}, \qquad \norm{\widetilde{\partial_v{\mathscr R}_{\varepsilon}}}_{C^j}\leq {\sigma}_j\frac{{\varepsilon}^{\ell-j/2}}{\gamma_0^p}, $$ for suitable constants ${\sigma}_j>0$, independent of $\gamma_0,\gamma_1$ and ${\varepsilon}$. We will assume $$ \ell>q+p/2, $$ a condition which holds as soon as $\kappa$ is large enough. As for the first step of the proof, one readily sees that it is possible to choose $\gamma_0>0$ and ${\varepsilon}_0>0$ small enough so that for $0<{\varepsilon}<{\varepsilon}_0$, the system (\ref{eq:mainsyst2}) admits a normally hyperbolic invariant annulus with equation $$ \widetilde\xi\in{\mathbb R},\quad \abs{\widetilde\eta}<{\tfrac{1}{2}},\quad \widetilde\varphi\in{\mathbb R},\quad \widetilde\rho\in J_{\bullet},\quad \widetilde u =\widetilde U_{\varepsilon}(\widetilde x),\quad \widetilde s=\widetilde S_{\varepsilon}(\widetilde x), $$ where $\widetilde U_{\varepsilon}$ and $\widetilde S_{\varepsilon}$ are $C^p$ functions with $\norm{(\widetilde U_{\varepsilon},\widetilde S_{\varepsilon})}_{C^p}\leq 1$.
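Let us also make explicit, as a rough check only, the role of the condition $\ell>q+p/2$: combining the factor ${\varepsilon}^{-q}$ appearing in the last two equations of (\ref{eq:mainsyst2}) with the previous estimate on $\widetilde{\partial_v{\mathscr R}_{\varepsilon}}$, one gets, for $0\leq j\leq p$ and $0<{\varepsilon}\leq1$,
$$
{\varepsilon}^{-q}\,\norm{\widetilde{\partial_{u}{\mathscr R}_{\varepsilon}}}_{C^j}
+{\varepsilon}^{-q}\,\norm{\widetilde{\partial_{s}{\mathscr R}_{\varepsilon}}}_{C^j}
\leq 2{\sigma}_j\,\frac{{\varepsilon}^{\ell-q-j/2}}{\gamma_0^p}
\leq 2{\sigma}_j\,\frac{{\varepsilon}^{\ell-q-p/2}}{\gamma_0^p},
$$
so that these terms tend to $0$ with ${\varepsilon}$ in the $C^p$ topology as soon as $\ell>q+p/2$, which is what the persistence argument requires.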
As a consequence, the system $X^{{\mathscr N}_{\varepsilon}}$ possesses a pseudo invariant annulus ${\mathcal A}_{\varepsilon}$ contained in the domain ${\mathscr D}_{\varepsilon}$, with equation $$ \xi\in{\mathbb R},\quad \abs{\eta}<{\tfrac{1}{2}},\quad \varphi\in{\mathbb R},\quad \rho\in J_{\bullet},\quad u ={\varepsilon}^q U_{\varepsilon}(x),\quad s={\varepsilon}^qS_{\varepsilon}(x), $$ where $U_{\varepsilon}(\xi,\eta,\varphi,\rho)=\widetilde U_{\varepsilon}(\gamma_0\xi,\eta,\gamma_0\varphi,\rho)$ and a similar definition for $S_{\varepsilon}$. Going back to the initial variables: \begin{equation} {\mathcal A}_{\varepsilon}=\Big\{\Big(\frac{1}{\sqrt{{\varepsilon}}}\xi,\sqrt{\varepsilon}\,\eta,\,j_{\varepsilon}\big(\xi,\eta,\varphi,\rho\big)\Big) \mid \xi\in\frac{{\mathbb R}}{\sqrt{\varepsilon}{\mathbb Z}},\,\abs{\eta}<{\tfrac{1}{2}},\,\varphi\in{\mathbb T},\,\rho\in J_{\bullet}\Big\}, \end{equation} with now $$ \norm{j_{\varepsilon}-j^0_{\varepsilon}}_{C^p}\leq {\sigma} {\varepsilon}^q $$ for a suitable constant ${\sigma}>0$. The annulus ${\mathcal A}_{\varepsilon}$ is symplectic, so the Hamiltonian vector field $\Psi_{\varepsilon}^*X^{{\mathsf N}_{\varepsilon}}$ is generated by ${\mathsf N}_{\varepsilon}\circ\Psi_{\varepsilon}$ relatively to the symplectic form $$ \Omega_{\varepsilon}=d\eta\wedge d\xi+d\rho\wedge d\varphi+{\varepsilon}^{2q}dU_{\varepsilon}\wedge dS_{\varepsilon}. $$ This immediately yields the final Hamiltonian ${\mathsf M}_{\varepsilon}$. \end{proof} \subsubsection{Twist sections} We now examine the intersection of the previous pseudo invariant annulus with constant energy levels of the initial Hamiltonian ${\mathsf N}_{\varepsilon}$. \begin{lemma}\label{lem:section} We keep the notation and assumptions of Lemma~\ref{lem:existann}. Then ${\mathcal C}_{\varepsilon}={\mathcal A}_{\varepsilon}\cap {\mathsf N}_{\varepsilon}^{-1}(0)$ is a pseudo invariant hyperbolic cylinder for ${\mathsf N}_{\varepsilon}$, on which $(\xi,\varphi,\rho)\in{\mathbb T}_{\varepsilon}\times{\mathbb T}\times J_{\bullet}$ form a chart. The set ${\Sigma}_{\varepsilon}$ defined in this chart by the equation $\xi=0$ is a $2$--dimensional transverse section for the Hamiltonian vector field $X^{{\mathsf M}_{\varepsilon}}$ on ${\mathcal C}_{\varepsilon}$, on which the coordinates $(\varphi,\rho)\in{\mathbb T}\times J$ form an exact symplectic chart. Relatively to these coordinates, given $J^{\bullet}$ with $\overline J\subset J^{\bullet}\subset J_{\bullet}$, the Poincar\'e map induced by the Hamiltonian flow inside ${\mathcal C}_{\varepsilon}$ is well-defined over ${\mathbb T}\times J^{\bullet}$, with values in ${\mathbb T}\times J^{\bullet}$, and reads \begin{equation}\label{eq:formPoinc} {\mathscr P}_{\varepsilon}(\varphi,\rho)=\Big(\varphi+{\varepsilon}\varpi(\rho)+\Delta_{\varepsilon}^\varphi(\varphi,\rho),\ \rho+\Delta_{\varepsilon}^\rho(\varphi,\rho)\Big), \end{equation} where $\varpi$, $\Delta_{\varepsilon}^\varphi$, $\Delta_{\varepsilon}^\rho$ are $C^p$ functions such that: \begin{equation} \varpi'(\rho)\geq {\sigma},\qquad \norm{\Delta_{\varepsilon}^\varphi}_{C^p}\leq {\varepsilon}^q,\qquad \norm{\Delta_{\varepsilon}^\rho}_{C^p}\leq {\varepsilon}^q \end{equation} for a suitable ${\sigma}>0$.
\end{lemma} \begin{proof} In the chart $(\xi,\eta,\varphi,\rho)\in{\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times{\mathbb T}\times J^{\bullet}$ of ${\mathcal A}_{\varepsilon}$ associated with the embedding $\Psi_{\varepsilon}$, the intersection ${\mathcal C}_{\varepsilon}$ admits the equation ${\mathsf M}_{\varepsilon}=0$, that is: $$ \widehat\omega\,\eta + {\mathsf C}_0(\rho) + \Lambda^0_{\varepsilon}(\eta,\rho)+\Lambda_{\varepsilon}\Big(\tfrac{1}{\sqrt{\varepsilon}}\xi,\eta,\varphi,\rho\Big)=0. $$ For ${\varepsilon}$ small enough, from this latter equation one gets the variable $\eta$ as an implicit function of $\xi,\varphi,\rho$, so that ${\mathcal C}_{\varepsilon}$ is a $3$--dimensional submanifold of ${\mathcal A}_{\varepsilon}$, diffeomorphic to ${\mathbb T}^2\times\,]0,1[$. Moreover ${\mathcal C}_{\varepsilon}$ is pseudo invariant since ${\mathsf N}_{\varepsilon}^{-1}(0)$ is invariant and ${\mathcal A}_{\varepsilon}$ is pseudo invariant. Since ${\mathcal A}_{\varepsilon}$ is normally hyperbolic, ${\mathcal C}_{\varepsilon}$ is normally hyperbolic too (see the introduction). Since $\dot\xi=\widehat\omega+O(\sqrt{\varepsilon})$, ${\Sigma}_{\varepsilon}$ is a transverse section for $X^{{\mathsf M}_{\varepsilon}}$ on ${\mathcal C}_{\varepsilon}$ when ${\varepsilon}$ is small enough. The statement on ${\mathscr P}_{\varepsilon}$ is immediate from the expression of ${\mathsf M}_{\varepsilon}$. \end{proof} \subsubsection{Invariant tori and the boundaries of $d$--cylinders}\label{sec:applicKAM} It only remains now to apply to ${\mathscr P}_{\varepsilon}$ the invariant curve theorem deduced from Herman's presentation, stated in Proposition~\ref{prop:KAM}, which is possible by choosing $\ell$ large enough. This yields the existence of ${\varepsilon}_0>0$ such that, for $0<{\varepsilon}<{\varepsilon}_0$, the map ${\mathscr P}_{\varepsilon}$ admits an essential invariant curve in each connected component of ${\mathbb T}\times (J^{{\bullet}}\setminus\overline J)$. As a consequence, the Hamiltonian flow in ${\mathcal C}_{\varepsilon}$ admits an invariant torus in each connected component of ${\mathbb T}_{\varepsilon}\times{\mathbb T}\times (J^{{\bullet}}\setminus\overline J)$, and these tori bound in ${\mathcal C}_{\varepsilon}$ a compact normally hyperbolic invariant cylinder. This concludes the proof of Lemma~\ref{lem:dcyl}. \subsection{Proof of Lemma~\ref{lem:singcyl}}\label{sec:prooflemsingcyl} Recall that given a singular $2$-annulus ${\mathsf A}_{\bullet}$ of $C$, there exists a neighborhood $O$ of ${\mathsf A}_{\bullet}$ in ${\mathbb A}^2$ and a vector field $X_\circ$ on an open set of ${\mathbb A}^2$ such that $X_\circ\equiv X_C$ in $O$ and $X_\circ$ admits a $C^1$ normally hyperbolic $2$-annulus (see Part II and Appendix~\ref{app:normhyp}) ${\mathsf A}_\circ$ which satisfies the properties: \vskip1mm ${\bullet}$ the components of the boundary of ${\mathsf A}_{\bullet}$ formed by the opposite periodic orbits coincide with the components of the boundary of ${\mathsf A}_\circ$, \vskip1mm ${\bullet}$ the time-one map of $X_\circ$ is a twist map relatively to adapted coordinates, \vskip1mm ${\bullet}$ $X_\circ$ is the Hamiltonian vector field generated by a $C^2$ Hamiltonian $H_\circ$ on ${\mathbb A}^2$. \vskip1mm As a consequence, the previous section applied to $H_\circ$ proves the existence of a cylinder attached to ${\mathsf A}_\circ$.
Now the same argument as above proves the existence of three regular $2$-dimensional annuli of section at energy ${\bf e}$ on which the return map has the form (\ref{eq:formPoinc}) and so admits invariant circles arbitrarily close to the boundaries. One deduces the existence of three invariant $2$-dimensional tori which form the boundary of a singular cylinder, which moreover admits a generalized twist section in the sense of Definition~\ref{def:singann2}. This concludes the proof. \subsection{Proof of Lemma~\ref{lem:scyl}}\label{sec:prooflemscyl} We will first prove the existence of ``the main part of'' the $s$--cylinder using the global normal form of Proposition~\ref{prop:globnorm} and hyperbolic persistence. We then prove the existence of the extremal cylinders at the double resonance points. Finally, we get the $s$--cylinders by gluing the extremal cylinders to each end of the previous main part. \subsubsection{Global normal form: the ``main part'' of the $s$--cylinders}\label{ssec:normformhyppersist} Let ${\Gamma}=\omega^{-1}(k^\bot)\cap h^{-1}({\bf e})$. We fix adapted coordinates $({\theta},r)$ and write $r_3=r_3^*(\widehat r)$ for the equation of the resonance surface $\omega^{-1}(k^\bot)$. We set $$ \ell(\widehat r)=\big(\widehat r, r_3^*(\widehat r)\big). $$ As in Appendix~\ref{App:globnormforms}, we fix two consecutive points $r'$ and $r''$ in $D(\delta)$, fix $\rho\ll{\rm dist\,}_{{\Gamma}}(r',r'')$ and set $$ {\Gamma}_\rho:=[r^*,r^{**}]_{\Gamma}\subset [r',r'']_{\Gamma}, $$ where $r^*,r^{**}$ are defined by the equalities $$ {\rm dist\,}_{\Gamma}(r^*,r')={\rm dist\,}_{\Gamma}(r^{**},r'')=\rho. $$ We set $\widehat{\Gamma}_\rho=\Pi({\Gamma}_\rho)$ where $\Pi$ is the projection $r\mapsto\widehat r$. For $c>0$, we then set \begin{equation} {\mathscr U}_{c,\rho}=\{r\in{\mathbb R}^3\mid {\rm dist\,}_\infty(r,{\Gamma}_\rho)<c\rho\}, \qquad {\mathscr D}_{c,\rho}=\Pi\big({\mathscr U}_{c,\rho}\big), \qquad {\mathscr W}_{c,\rho}={\mathbb T}^3\times {\mathscr U}_{c,\rho}. \end{equation} Our starting point is the following consequence of the global normal form in Proposition~\ref{prop:globnorm}. \begin{lemma}\label{lem:mainpart} Consider the system \begin{equation} N_{\varepsilon}({\theta},r)=h(r)+{\varepsilon} V({\theta}_3,r)+{\varepsilon} W_0({\theta},r)+{\varepsilon} W_1({\theta},r)+{\varepsilon}^2 W_2({\theta},r), \end{equation} where \begin{equation} V({\theta}_3,r)=\int_{{\mathbb T}^2}f({\theta},r)\,d{\theta}_1d{\theta}_2, \end{equation} and where the functions $W_0\in C^p({\mathbb A}^3)$, $W_1\in C^{\kappa-1}({\mathscr W}_{c\rho})$, $W_2\in C^\kappa({\mathscr W}_{c\rho})$ satisfy \begin{equation} \begin{array}{lll} \norm{W_0}_{C^p({\mathscr W}_{c\rho})}\leq \delta,\\[5pt] \norm{W_1}_{C^2({\mathscr W}_{c\rho})}\leq c_1\, \rho^{-3}, \\[5pt] \norm{W_2}_{C^2({\mathscr W}_{c\rho})}\leq c_2\,\rho^{-6}, \end{array} \end{equation} for suitable constants $c_1,c_2>0$. Assume that for $\widehat r\in {\mathscr D}_{c\rho}$ the function \begin{equation} V\big(\cdot,\ell(\widehat r)\big): {\mathbb T}\to{\mathbb R} \end{equation} admits a single maximum at ${\theta}_3^*(\widehat r)$, which is nondegenerate.
Then there exists ${\varepsilon}_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$, the system $N_{\varepsilon}$ admits a pseudo invariant cylinder ${\mathcal C}_{\varepsilon}$ of the form \begin{equation} {\mathcal C}_{\varepsilon}=\Big\{\big(\widehat{\theta},\widehat r, {\theta}_3=\Theta_3(\widehat{\theta},\widehat r), r_3=R_3(\widehat{\theta},\widehat r)\big) \mid \widehat{\theta}\in{\mathbb T}^2,\ \widehat r\in {\mathscr D}_{c\rho}\Big\}\cap N_{\varepsilon}^{-1}({\bf e}), \end{equation} where $c>0$ is small enough, and where \begin{equation} \norm{\Theta_3-{\theta}_3^*(\widehat r)}_{C^0}\leq C\sqrt\delta,\qquad \norm{R_3-r_3^*(\widehat r)}_{C^0}\leq C\sqrt\delta\sqrt{\varepsilon}, \end{equation} for a suitable $C>0$ independent of $\rho$. Moreover, there exists $\mu>0$ such that any invariant set which is contained in a domain of the form \begin{equation}\label{eq:uniqueloc} \Big\{(\widehat{\theta},\widehat r,{\theta}_3,r_3)\mid \widehat{\theta}\in{\mathbb T}^2,\ \widehat r\in{\mathscr D}_{c\rho},\ \abs{{\theta}_3-{\theta}_3^*(\widehat r)}\leq\mu,\ \abs{r_3-r_3^*(\widehat r)}\leq\mu\sqrt{\varepsilon}\Big\}\cap N_{\varepsilon}^{-1}({\bf e}) \end{equation} is contained in ${\mathcal C}_{\varepsilon}$. \end{lemma} \begin{proof} We first work in the universal covering ${\mathbb R}^3\times{\mathbb R}^3$ of ${\mathbb A}^3$ and use the same notation for the elements of ${\mathbb A}^3$ and their lifts. \vskip2mm $\bullet$ The differential system associated with $X^{N_{\varepsilon}}$ reads \begin{equation}\label{eq:vectfield0} \left\vert \begin{array}{lllll} \widehat {\theta}'= \widehat\omega(r)&\!\!\! +\ {\varepsilon}\partial_{\widehat r}V({\theta}_3,r)&\!\!\! +\ {\varepsilon}\,\partial_{\widehat r}W_0({\theta},r)&\!\!\! +\ {\varepsilon}\,\partial_{\widehat r}W_1({\theta},r)&\!\!\! +\ {\varepsilon}^2\,\partial_{\widehat r}W_2({\theta},r)\\[5pt] \widehat r'= &&\!\!\! -\ {\varepsilon}\,\partial_{\widehat {\theta}}W_0({\theta},r)&\!\!\! -\ {\varepsilon}\,\partial_{\widehat {\theta}}W_1({\theta},r)&\!\!\! -\ {\varepsilon}^2\,\partial_{\widehat {\theta}}W_2({\theta},r)\\[5pt] {\theta}_3'= \omega_3(r)&\!\!\! +\ {\varepsilon}\partial_{r_3}V({\theta}_3,r)&\!\!\! +\ {\varepsilon}\,\partial_{r_3}W_0({\theta},r)&\!\!\! +\ {\varepsilon}\,\partial_{r_3}W_1({\theta},r)&\!\!\! +\ {\varepsilon}^2\,\partial_{r_3}W_2({\theta},r)\\[5pt] r_3'= &\!\!\! -\ {\varepsilon}\partial_{{\theta}_3}V({\theta}_3,r)&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}W_0({\theta},r)&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}W_1({\theta},r)&\!\!\! -\ {\varepsilon}^2\,\partial_{{\theta}_3}W_2({\theta},r).\\ \end{array} \right. \end{equation} \vskip2mm $\bullet$ We will first estimate the various terms of the previous system in the domain ${\mathsf D}_{\varepsilon}$ defined by \begin{equation}\label{eq:domainwork} {\mathsf D}_{\varepsilon}=\Big\{ ({\theta},r)\in{\mathbb R}^3\times{\mathbb R}^3\mid \widehat{\theta}\in {\mathbb R}^2,\ \widehat r\in {\mathscr D}_{\rho},\ \abs{{\theta}_3-{\theta}_3^*\big(\ell(\widehat r)\big)}\leq \sqrt\delta, \ \abs{r_3-r^*_3(\widehat r)}\leq\sqrt{\varepsilon} \Big\}.
\end{equation} We first set \begin{equation} {\boldsymbol \th}_3={\theta}_3-{\theta}^*_3\big(\ell(\widehat r)\big),\qquad {\bf r}_3=r_3-r^*_3(\widehat r), \end{equation} so that \begin{equation}\label{eq:expansion} \left\vert \begin{array}{rll} \omega_3(r)&=&a(\widehat r)\,{\bf r}_3+\chi(\widehat r,{\bf r}_3)\,{\bf r}_3^2\\[5pt] -\partial_{{\theta}_3}V({\theta}_3,r)&=&b(\widehat r)\,{\boldsymbol \th}_3+\chi(\widehat r,{\boldsymbol \th}_3,{\bf r}_3)\,{\boldsymbol \th}_3^2+\chi(\widehat r,{\boldsymbol \th}_3,{\bf r}_3)\,{\boldsymbol \th}_3\,{\bf r}_3+\chi(\widehat r,{\boldsymbol \th}_3,{\bf r}_3)\,{\bf r}_3^2.\\ \end{array} \right. \end{equation} Observe that $a(\widehat r)\geq a>0$ and $b(\widehat r)\geq b>0$. To diagonalize the hyperbolic part, we set \begin{equation} {\zeta}(\widehat r)=a(\widehat r)^{\frac{1}{4}}b(\widehat r)^{-\frac{1}{4}} \end{equation} and \begin{equation} u={\zeta}(\widehat r)\,{\bf r}_3+\sqrt{\varepsilon}\, {\zeta}(\widehat r)^{-1}\,{\boldsymbol \th}_3,\qquad s={\zeta}(\widehat r)\,{\bf r}_3-\sqrt{\varepsilon}\,{\zeta}(\widehat r)^{-1}\,{\boldsymbol \th}_3. \end{equation} The inverse transformation reads \begin{equation}\label{eq:inverse} {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big):={\theta}_3^*(I)+{\zeta}(I)\frac{u-s}{2\sqrt{\varepsilon}},\qquad r_3\big(I,(u+s)\big):=r_3^*(I)+\frac{u+s}{2{\zeta}(I)}. \end{equation} We finally complete the change of variables and time by setting \begin{equation} \varphi= \frac{1}{\gamma\sqrt{\varepsilon}}\,\widehat{\theta},\qquad I=\widehat r,\qquad \dot{{u}}=\frac{1}{\sqrt{\varepsilon}} u'. \end{equation} In the following we denote by $M$ a universal constant, independent of ${\varepsilon}$ and $\delta$. \vskip2mm ${\bullet}$ {\bf Estimates for $\dot\varphi$.} Observe that \begin{equation} \begin{array}{lll} \dot\varphi=\gamma\widehat{\theta}'=\gamma\, \Omega(I,u,s)&+&{\varepsilon} \partial_{\widehat r}V\Big({\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),I,r_3\big(I,(u+s)\big)\Big)\\ &+&{\varepsilon}\partial_{\widehat r}W_0\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big)\\ &+&{\varepsilon}\partial_{\widehat r}W_1\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big)\\ &+&{\varepsilon}^2\partial_{\widehat r}W_2\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big) \end{array} \end{equation} where \begin{equation} \Omega(I,u,s)=\widehat\omega\big(I,\, r_3^*(I)+{\tfrac{1}{2}} (u+s)({\zeta}(I))^{-1}\big). \end{equation} Forgetting about the variables to avoid cumbersome notation, and using the estimates on the various functions, one gets: \begin{equation} \begin{array}{lll} \norm{{\varepsilon} \partial_{\widehat r}V}_{C^0}\leq M{\varepsilon},&\norm{{\varepsilon} \partial_{\widehat r}V}_{C^1}\leq M\sqrt{\varepsilon},\\[5pt] \norm{{\varepsilon}\partial_{\widehat r}W_0}_{C^0}\leq {\varepsilon}\delta,& \norm{{\varepsilon}\partial_{\widehat r}W_0}_{C^1}\leq \sqrt{\varepsilon}\delta,\\[5pt] \norm{{\varepsilon}\partial_{\widehat r}W_1}_{C^0}\leq M{\varepsilon}^{3/2}\rho^{-2},& \norm{{\varepsilon}\partial_{\widehat r}W_1}_{C^1}\leq M{\varepsilon}\rho^{-3},\\[5pt] \norm{{\varepsilon}^2\partial_{\widehat r}W_2}_{C^0}\leq M{\varepsilon}^2\rho^{-5},&\norm{{\varepsilon}^2\partial_{\widehat r}W_2}_{C^1}\leq M{\varepsilon}^{3/2}\rho^{-6}.\\ \end{array} \end{equation} \vskip2mm ${\bullet}$ {\bf Estimates for $\dot I$.} In the same way \begin{equation}
\begin{array}{lll} \dot I= &-&\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_0\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big)\\[5pt] &-&\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_1\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big)\\[5pt] &-&{\varepsilon}^{3/2}\partial_{\widehat {\theta}}W_2\Big(\varphi/(\gamma\sqrt{\varepsilon}), I, {\theta}_3\big(I,(u-s)/\sqrt{\varepsilon}\big),r_3\big(I,(u+s)\big)\Big), \end{array} \end{equation} which yields \begin{equation} \begin{array}{lll} \norm{\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_0}_{C^0}\leq \sqrt{\varepsilon}\delta,& \norm{\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_0}_{C^1}\leq \delta,\\[5pt] \norm{\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_1}_{C^0}\leq M{\varepsilon}\rho^{-2},& \norm{\sqrt{\varepsilon}\partial_{\widehat {\theta}}W_1}_{C^1}\leq M\sqrt{\varepsilon}\rho^{-3},\\[5pt] \norm{{\varepsilon}^{3/2}\partial_{\widehat {\theta}}W_2}_{C^0}\leq M{\varepsilon}^{3/2}\rho^{-5},&\norm{{\varepsilon}^{3/2}\partial_{\widehat {\theta}}W_2}_{C^1}\leq M{\varepsilon}\rho^{-6}.\\ \end{array} \end{equation} \vskip2mm ${\bullet}$ {\bf Estimates for $\dot u$ and $\dot s$.} We will give the details for $\dot u$ only, the case of $\dot s$ being exactly similar. First note that \begin{equation} \dot u=\frac{1}{\sqrt{\varepsilon}}\Bigg[{\zeta}(I)r_3'+\frac{\sqrt{\varepsilon}}{{\zeta}(I)}{\theta}_3' +I'\Big({\zeta}'(I)\big({\bf r}_3-\frac{\sqrt{\varepsilon}}{{\zeta}(I)^2}{\zeta}'(I){\boldsymbol \th}_3\big) +{\zeta}(I)(r_3^*)'(I)-\frac{\sqrt{\varepsilon}}{{\zeta}(I)}({\theta}_3^*)'(I)\Big)\Bigg] \end{equation} (where $'$ stands both for the initial time derivative and the usual derivative of functions). We first focus on the part of \begin{equation} \frac{1}{\sqrt{\varepsilon}}\Big({\zeta}(I)r_3'+\frac{\sqrt{\varepsilon}}{{\zeta}(I)}{\theta}_3'\Big) \end{equation} involving the functions $\omega_3$, $\partial_{{\theta}_3}V$ and $\partial_{r_3}V$ only. A straightforward computation, using in particular (\ref{eq:expansion}), proves that \begin{equation} \frac{1}{\sqrt{\varepsilon}}\Big(-{\varepsilon}{\zeta}\partial_{{\theta}_3}V+\frac{\sqrt{\varepsilon}}{{\zeta}}\big(\omega_3+\partial_{r_3}V\big)\Big)=\lambda(I)u+\chi(I,u,s,{\varepsilon}) \end{equation} with \begin{equation} \lambda(I)=\sqrt{a(I)b(I)}, \end{equation} and, assuming ${\varepsilon}\leq \delta$: \begin{equation} \norm{\chi}_{C^0}\leq M\sqrt{\varepsilon}\delta,\qquad \norm{\chi}_{C^1}\leq M\delta.
\end{equation} As for the contribution of the functions $W_0,W_1,W_2$, one gets \begin{equation} \begin{array}{ll} \chi_0:=-{\zeta}\sqrt{\varepsilon}\partial_{{\theta}_3}W_0+\displaystyle\frac{{\varepsilon}}{{\zeta}}\partial_{r_3}W_0,\\[7pt] \chi_1:=-{\zeta}\sqrt{\varepsilon}\partial_{{\theta}_3}W_1+\displaystyle\frac{{\varepsilon}}{{\zeta}}\partial_{r_3}W_1,\\[7pt] \chi_2:=-{\zeta}{\varepsilon}^{3/2}\partial_{{\theta}_3}W_2+\displaystyle\frac{{\varepsilon}^2}{{\zeta}}\partial_{r_3}W_2,\\ \end{array} \end{equation} so that \begin{equation} \begin{array}{ll} \norm{\chi_0}_{C^0}\leq \delta\sqrt{\varepsilon},&\qquad \norm{\chi_0}_{C^1}\leq \delta,\\[5pt] \norm{\chi_1}_{C^0}\leq M\Big(\displaystyle\frac{{\varepsilon}}{\rho}+\frac{{\varepsilon}^{3/2}}{\rho^2}\Big), &\qquad \norm{\chi_1}_{C^1}\leq M\Big(\displaystyle\frac{\sqrt{\varepsilon}}{\rho^2}+\frac{{\varepsilon}}{\rho^3}\Big),\\[5pt] \norm{\chi_2}_{C^0}\leq \displaystyle \frac{{\varepsilon}^{3/2}}{\rho^5},&\qquad \norm{\chi_2}_{C^1}\leq \displaystyle\frac{{\varepsilon}}{\rho^6}. \end{array} \end{equation} Finally, the estimates for the remaining term \begin{equation} \frac{1}{\sqrt{\varepsilon}}I'\Big({\zeta}'(I)\big({\bf r}_3-\frac{\sqrt{\varepsilon}}{{\zeta}(I)^2}{\zeta}'(I){\boldsymbol \th}_3\big) +{\zeta}(I)(r_3^*)'(I)-\frac{\sqrt{\varepsilon}}{{\zeta}(I)}({\theta}_3^*)'(I)\Big) \end{equation} are clearly the same as those of $\dot I$. \vskip2mm ${\bullet}$ Once this preliminary work is done in the domain (\ref{eq:domainwork}) one easily extends the system to ${\mathbb R}^6$ by using bump functions. Let $\eta:{\mathbb R}\to[0,1]$ be a $C^\infty$ function with support in $[-2,2]$, which is equal to $1$ in $[-1,1]$. Then the functions \begin{equation} \mu_{\varepsilon}(x)=\eta(x/\sqrt{\varepsilon}),\qquad \nu_\delta(x)=\eta(x/\sqrt\delta) \end{equation} satisfy \begin{equation} \norm{\mu_{\varepsilon}}_{C^0}=1,\qquad \norm{\mu_{\varepsilon}}_{C^1}=1/\sqrt{\varepsilon}, \qquad \norm{\nu_\delta}_{C^0}=1,\qquad \norm{\nu_\delta}_{C^1}=1/\sqrt\delta. \end{equation} The new system obtained by replacing the various factors in (\ref{eq:vectfield0}) with their product by \begin{equation} \mu_{\varepsilon}({\bf r}_3)\nu_\delta({\boldsymbol \th}_3) \end{equation} admits the same estimates as the previous ones, at the cost of changing $M$ into a larger constant. \vskip2mm ${\bullet}$ As a consequence, with respect to the new variables and time, the new system in ${\mathbb R}^6$ reads \begin{equation}\label{eq:presystem} \left\vert \begin{array}{lllll} \dot \varphi= \gamma\, \Omega(I,u,s)&+&F_\varphi(\varphi,I,u,s,{\varepsilon})\\ \dot I=0&+&F_I(\varphi,I,u,s,{\varepsilon})\\ \dot u=\lambda(I)\,u&+&F_u(\varphi,I,u,s,{\varepsilon})\\ \dot s=-\lambda(I)\,s&+&F_s(\varphi,I,u,s,{\varepsilon}),\\ \end{array} \right. \end{equation} with, setting $F=(F_\varphi,F_I,F_u,F_s)$ and assuming \begin{equation} {\varepsilon}=\rho^7,\qquad {\varepsilon}^{1/7}<\delta, \end{equation} \begin{equation} \norm{F}_{C^0}\leq M\sqrt\delta\sqrt{\varepsilon},\qquad \norm{F}_{C^1}\leq M\sqrt\delta. \end{equation} \vskip2mm $\bullet$ The function $\lambda(I)$ is bounded from below by a positive constant $\lambda$, so the persistence theorem applies when the constant $\delta$ is small enough.
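To make this last smallness condition slightly more explicit (in heuristic form only, the precise requirements being those of the persistence theorem of Appendix~\ref{app:normhyp}): the normal rates of the unperturbed part of (\ref{eq:presystem}) are $\pm\lambda(I)$ with $\lambda(I)\geq\lambda>0$, while the perturbation $F$ has $C^1$ norm at most $M\sqrt\delta$, so that a sufficient requirement is of the form
$$
M\sqrt\delta\leq\tfrac{1}{2}\,\lambda,
$$
which indeed holds when $\delta$ is small enough.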
This yields the existence of a normally hyperbolic invariant manifold ${\mathscr A}^*_{\varepsilon}$ for (\ref{eq:presystem}), of the form $$ \Big\{\big(\varphi,I,U(\varphi,I),S(\varphi,I)\big)\mid (\varphi,I)\in{\mathbb R}^4\Big\}, $$ where $(U,S):{\mathbb R}^4\to{\mathbb R}^2$ is a $C^p$ map which satisfies \begin{equation}\label{eq:finest} \norm{(U,S)}_{C^0({\mathbb R}^4)}\leq \frac{2M}{\lambda}\,\delta\,\sqrt{\varepsilon},\qquad \norm{(U,S)}_{C^1({\mathbb R}^4)}\leq C\,\delta. \end{equation} One immediately checks these estimates are consistent with the definition of the domain (\ref{eq:domainwork}) so that ${\mathscr A}^*_{\varepsilon}\subset {\mathsf D}_{\varepsilon}$. \vskip2mm $\bullet$ To go back to the initial coordinates, one applies the inverse change (\ref{eq:inverse}); one checks that the resulting manifold is contained in the domain $$ \abs{{\boldsymbol \th}_3}\leq c\sqrt\delta,\qquad \abs{{\bf r}_3}\leq c\sqrt\delta\sqrt{\varepsilon}, $$ for a suitable $c>0$. One also gets the periodicity in the angular variables by the same uniqueness argument as in \cite{B10}. The local maximality property (\ref{eq:uniqueloc}) is also an immediate consequence of the normally hyperbolic persistence theorem. This concludes the proof of Lemma~\ref{lem:mainpart}. \end{proof} \vskip2mm It remains now to prove the existence of a section and the twist property. In this section we will content ourselves with a non-connected section relatively to which the return map admits the twist property; the existence of a covering of ${\mathcal C}_{\varepsilon}$ by consecutive cylinders with connected twist sections will be proved in the next section. We begin with a lemma which is a direct consequence of the normal forms in Lemma~\ref{lem:mainpart} and describes the Hamiltonian vector field in restriction to the annuli ${\mathcal A}_{\varepsilon}$. \begin{lemma}\label{lem:normformvectfield} With the notation and assumptions of {\rm Lemma~\ref{lem:mainpart}}, the vector field on the invariant compact $s$-annulus ${\mathcal A}_{\varepsilon}$ admits the following normal form relatively to the coordinates $(\widehat{\theta},\widehat r)$: \begin{equation} \left\vert \begin{array}{llcll} \widehat {\theta}'&=& \widehat\omega\big(\widehat r, r_3^*(\widehat r)+R_3(\widehat r)\big)& +&\chi_{\widehat{\theta}}(\widehat{\theta},\widehat r)\\[5pt] \widehat r'&=& 0 & +&\chi_{\widehat r}(\widehat{\theta},\widehat r)\\[5pt] \end{array} \right. \end{equation} where \begin{equation} \begin{array}{lll} \norm{R_3}_{C^0}\leq M\delta\sqrt{\varepsilon},&\qquad \norm{R_3}_{C^1}\leq M\delta\\ \norm{\chi_{\widehat{\theta}}}_{C^0}\leq M\sqrt\delta \sqrt{\varepsilon},&\qquad \norm{\chi_{\widehat{\theta}}}_{C^1}\leq M\sqrt\delta \\ \norm{\chi_{\widehat r}}_{C^0}\leq M\sqrt\delta\,{\varepsilon},&\qquad \norm{\chi_{\widehat r}}_{C^1}\leq M\sqrt\delta\sqrt{\varepsilon}. \end{array} \end{equation} \end{lemma} One can cover the annulus ${\mathcal A}_{\varepsilon}$ with a finite number of (overlapping) open subsets over which $\omega_1\big(\ell(\widehat r)\big)\neq0$ or $\omega_2\big(\ell(\widehat r)\big)\neq0$. To simplify the following, we will assume that $\omega_1\big(\ell(\widehat r)\big)\neq0$ in the neighborhood of ${\mathcal A}_{\varepsilon}$, the general case being easily deduced from the latter (since we allow for non-connected sections).
\begin{lemma} With the notation and assumptions of {\rm Lemma~\ref{lem:normformvectfield}}, and assuming moreover that \begin{equation} \omega_1\big (\ell(\widehat r)\big)\geq\varpi_0>0 \end{equation} in the neighborhood of ${\mathcal A}_{\varepsilon}$, then the submanifold \begin{equation} {\Sigma}=\Big\{(\widehat {\theta},\widehat r)\in{\mathbb T}^2\times{\mathscr D}_{c{\varepsilon}^{1/7}}\mid {\theta}_1=0\Big\} \end{equation} is a transverse section for the vector field on ${\mathcal A}_{\varepsilon}$. The intersection ${\Sigma}\cap {\mathcal C}_{\varepsilon}$ is a global section for the flow on ${\mathcal C}_{\varepsilon}$ which admits $({\theta}_2,r_2)$ as a global exact-symplectic chart. Relatively to these coordinates, the flow-induced return map attached to ${\Sigma}\cap {\mathcal C}_{\varepsilon}$ is a twist map. \end{lemma} \begin{proof} Recall that $h$ is a Tonelli Hamiltonian on ${\mathbb R}^3$ and that $({\theta},r)$ are adapted coordinates, relatively to which ${\Gamma}=\{\omega_3=0\}\cap h^{-1}({\bf e})$. Let $\Pi:{\mathbb R}^3\to{\mathbb R}^2$ be the projection on the $\widehat r$--plane. The submanifold $\{\omega_3=0\}$ is a graph over its projection $\Pi(\{\omega_3=0\})$ since $\partial_{r_3}\omega_3=\partial^2_{r_3}h>0$. Observe also that ${\Gamma}$ is the ``apparent contour'' of the level $h^{-1}({\bf e})$ with respect to the direction $r_3$, so that $\Pi({\Gamma})$ bounds the projection $C_{\bf e}:=\Pi\big(h^{-1}(]-\infty,{\bf e}])\big)$, which is strictly convex. By the implicit function theorem, this also proves that $\Pi\big(\{\omega_3=0\}\cap h^{-1}(]-\infty,{\bf e}])\big)=C_{\bf e}$ and therefore that $\{\omega_3=0\}$ is a graph over the whole $\widehat r$--plane. Fix ${\theta}_3^*\in{\mathbb T}$ and set $S:=\{{\theta}_3={\theta}_3^*\}\times\{\omega_3=0\}\subset {\mathbb T}^3\times{\mathbb R}^3$. Then clearly $S$ is invariant under the Hamiltonian flow generated by $h$, and is moreover symplectic. Taking $(\widehat {\theta},\widehat r)$ as a chart on $S$, the induced Liouville form reads $r_1d{\theta}_1+r_2d{\theta}_2$. The restriction of the Hamiltonian flow to $S$ is generated by the restriction $\widehat h$ of $h$ to $S$. The sublevels of this restriction are the sets $C_{\bf e}$, so that $\widehat h$ is quasi-convex. Given ${\bf e}>\mathop{\rm Min\,}\limits h$, relatively to the coordinates $(\widehat{\theta},\widehat r)$: $$ \widehat h^{-1}({\bf e})={\mathbb T}^2\times \widehat{\Gamma}. $$ Each torus ${\mathbb T}^2\times\{\widehat r\}$ on this level is invariant under the flow, with rotation vector $\varpi(\widehat r)\in{\mathbb R}^2$. Since $\widehat h$ is quasi-convex, the map $\widehat{\Gamma}\to P{\mathbb R}^2$ which associates to $\widehat r\in\widehat{\Gamma}$ the projective line generated by $\varpi(\widehat r)$ is a local diffeomorphism. The submanifold ${\Sigma}$ is clearly a transverse section for the vector field on ${\mathcal A}_{\varepsilon}$. The previous property shows that the unperturbed return map associated with the vector field \begin{equation} \left\vert \begin{array}{llcll} \widehat {\theta}'&=& \widehat\omega\big(\widehat r, r_3^*(\widehat r)+R_3(\widehat r)\big)& &\\[5pt] \widehat r'&=& 0 & &\\[5pt] \end{array} \right. \end{equation} is a twist map relatively to the coordinates $({\theta}_2,r_2)$. Since the complete map is a $\sqrt\delta$ perturbation in the $C^1$ topology of the unperturbed one, it still admits the twist property when ${\varepsilon}$ is small enough.
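Let us sketch the underlying computation (with the $O(\delta\sqrt{\varepsilon})$ correction $R_3$ ignored, as a heuristic only). The time needed by the unperturbed flow to return to $\{{\theta}_1=0\}$ is $1/\omega_1\big(\ell(\widehat r)\big)$, so the unperturbed return map reads
$$
({\theta}_2,r_2)\longmapsto\Big({\theta}_2+\frac{\omega_2}{\omega_1}\big(\ell(\widehat r)\big),\ r_2\Big);
$$
along the energy level, $\widehat r$ is a function of $r_2$, and the twist property amounts to the nonvanishing of the derivative of the rotation number $\omega_2/\omega_1$ with respect to $r_2$, which is a rephrasing of the local diffeomorphism property of $\widehat r\mapsto\big[\varpi(\widehat r)\big]\in P{\mathbb R}^2$ established above.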
\end{proof} We finally go back to the initial system by applying the inverse normalization introduced in Appendix~\ref{App:globnormforms}, Proposition~\ref{prop:globnorm}. \begin{lemma} Given $\mu>0$, there exists ${\varepsilon}_0$ such that for $0<{\varepsilon}<{\varepsilon}_0$ the Hamiltonian system $H_{\varepsilon}$ admits a pseudo invariant and pseudo normally hyperbolic cylinder ${\mathscr C}_{\varepsilon}$ at energy ${\bf e}$, which contains any invariant set contained in the domains $$ \begin{array}{ll} D^0=\Big\{(\widehat{\theta},\widehat r,{\theta}_3,r_3)\mid \widehat{\theta}\in{\mathbb T}^2,\ a{\varepsilon}^{1/7}\leq \norm{\widehat r-\widehat r^0}\leq b{\varepsilon}^{1/7},\ \abs{{\theta}_3-{\theta}_3^*(\widehat r)}\leq \mu,\ \abs{r_3-r_3^*(\widehat r)}\leq \mu\sqrt{\varepsilon}\Big\},\\[5pt] D^1=\Big\{(\widehat{\theta},\widehat r,{\theta}_3,r_3)\mid \widehat{\theta}\in{\mathbb T}^2,\ a{\varepsilon}^{1/7}\leq \norm{\widehat r-\widehat r^1}\leq b{\varepsilon}^{1/7},\ \abs{{\theta}_3-{\theta}_3^*(\widehat r)}\leq \mu,\ \abs{r_3-r_3^*(\widehat r)}\leq \mu\sqrt{\varepsilon}\Big\}. \end{array} $$ The cylinder ${\mathscr C}_{\varepsilon}$ admits a (non-connected) twist section. \end{lemma} \begin{proof} Recall that, by Proposition~\ref{prop:globnorm}: $$ H_{\varepsilon}=N_{\varepsilon}\circ \Phi_{\varepsilon}^{-1}, $$ where, setting $\Phi_{\varepsilon}=(\Phi_{\varepsilon}^{{\theta}},\Phi_{\varepsilon}^{r})$: \begin{equation} \norm{\Phi_{\varepsilon}^{{\theta}}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-2}\leq c_\Phi{\varepsilon}^{5/7},\qquad \norm{\Phi_{\varepsilon}^{r}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-1}\leq c_\Phi{\varepsilon}^{6/7}. \end{equation} The inverse image ${\mathscr C}_{\varepsilon}=\Phi_{\varepsilon}^{-1}({\mathcal C}_{\varepsilon})$ is therefore a pseudo invariant and normally hyperbolic cylinder for $H_{\varepsilon}$, which contains any invariant set contained in $D^0\cup D^1$ (up to the choice of suitable constants). The inverse image $\Phi_{\varepsilon}^{-1}({\Sigma})$ is a section for the Hamiltonian flow in ${\mathscr C}_{\varepsilon}$, which satisfies the twist condition since its return map is a small $C^1$ perturbation of that of ${\Sigma}$, which admits a nondegenerate torsion. \end{proof} \subsection{Proof of Lemma~\ref{lem:extcyl}}\label{sec:prooflemextcyl} \setcounter{paraga}{0} We assume without loss of generality that $m^0=0$ and ${\bf e}=0$. We introduce adapted coordinates $({\theta},r)$ at $0$, in which the equation of the resonance ${\Gamma}$ moreover reads $\omega_3=0$. We set \begin{equation} [f](\overline {\theta},r)=\int_{\mathbb T} f\big(({\theta}_1,\overline {\theta}),r\big)d{\theta}_1. \end{equation} \paraga Our starting point is the normal form of Proposition~\ref{prop:normal2}. We set, for $({\theta},r)\in{\mathbb T}^3\times B(0,{\varepsilon}^d)$: \begin{equation}\label{eq:rednormform} N_{\varepsilon}({\theta},r):=H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)=h(r)+ g_{\varepsilon}(\overline {\theta},r)+R_{\varepsilon}({\theta},r), \end{equation} where $g_{\varepsilon}$ and $R_{\varepsilon}$ are $C^p$ functions such that \begin{equation} \label{eq:estimg} \norm{g_{\varepsilon}-{\varepsilon}[f]}_{C^p\big( {\mathbb T}^2\times B(0,{\varepsilon}^d)\big)}\leq {\varepsilon}^{1+{\sigma}},\qquad \norm{R_{\varepsilon}}_{C^p\big( {\mathbb T}^3\times B(0,{\varepsilon}^d)\big)}\leq {\varepsilon}^\ell, \end{equation} and $\Phi_{\varepsilon}$ is ${\varepsilon}^{{\sigma}}$--close to the identity in the $C^p$--topology.
We set $$ V_{\varepsilon}({\theta}_3,r)=\frac{1}{{\varepsilon}}\int_{{\mathbb T}}g_{\varepsilon}\big(({\theta}_2,{\theta}_3),r\big)\,d{\theta}_2. $$ \paraga We assumed that for each point $r$ of ${\Gamma}$ in the neighborhood of $0$, the $s$--averaged potential \begin{equation} \langle f\rangle({\theta}_3,r)=\int_{\mathbb T}[f](\overline {\theta},r)d{\theta}_2 \end{equation} admits a single and nondegenerate maximum at some point ${\theta}_3^*(r)$, which in turn yields the following result. \begin{lemma} For each point $r\in{\Gamma}$ in the neighborhood of $0$, the function $V_{\varepsilon}(\cdot,r)$ admits a unique and nondegenerate maximum on ${\mathbb T}$ at some point ${\theta}_3^{**}(r)$. Moreover, there is a constant $c>0$ such that \begin{equation}\label{eq:locmax} \abs{{\theta}_3^{**}(r)-{\theta}_3^{*}(r)}<c\,{\varepsilon}^{{\sigma}/2}. \end{equation} \end{lemma} \begin{proof} By (\ref{eq:estimg}): $$ \norm{V_{\varepsilon}-\langle f\rangle}_{C^p\big( {\mathbb T}\times B(0,{\varepsilon}^d)\big)}\leq {\varepsilon}^{{\sigma}} $$ with $p\geq 2$. The claim then immediately follows from the nondegeneracy of the maximum ${\theta}_3^*(r)$. \end{proof} \paraga We introduce now the truncated normal form $$ N_{\varepsilon}^0({\theta},r):=h(r)+ g_{\varepsilon}(\overline {\theta},r). $$ The main observation now is that $N_{\varepsilon}^0$ is independent of ${\theta}_1$, so that $r_1$ is a first integral and the system can be reduced to its level sets. The total system is then recovered by taking the product with the circle ${\mathbb T}$ of ${\theta}_1$. So we fix $r_1$ and set $$ {\bf h}(\overline r)=h(r_1,\overline r),\qquad {\bf g}_{\varepsilon}(\overline {\theta},\overline r)=g_{\varepsilon}\big(\overline {\theta},(r_1,\overline r)\big),\qquad {\mathsf N}_{\varepsilon}^0(\overline{\theta},\overline r)={\bf h}(\overline r)+{\bf g}_{\varepsilon}(\overline{\theta},\overline r). $$ Observe that the function ${\bf h}$ is convex and superlinear over ${\mathbb R}^2$. Therefore, setting ${\boldsymbol \omega}=\nabla{\bf h}$, the resonance ${\boldsymbol \Gamma}$ defined by ${\boldsymbol \omega}_3(\overline r)=0$ is a graph over the $r_2$--axis, with equation $$ {\boldsymbol \Gamma}:\quad r_3=r_3^{**}(r_2;r_1). $$ \paraga We set ${\bf V}_{\varepsilon}({\theta}_3,\overline r)=V_{\varepsilon}\big({\theta}_3,(r_1,\overline r)\big)$. Fix a point $$ \overline r^0\in{\boldsymbol \Gamma}\cap B(0,{\varepsilon}^d). $$ Let $S_{\varepsilon}:{\mathbb T}^2\to{\mathbb R}$ be the solution of the homological equation $$ {\boldsymbol \omega}(\overline r^0)\partial_{\overline{\theta}}S_{\varepsilon}(\overline{\theta})={\bf g}_{\varepsilon}(\overline{\theta},\overline r^0)-{\varepsilon}{\bf V}_{\varepsilon}({\theta}_3,\overline r^0) $$ that is, since ${\boldsymbol \omega}_3(\overline r^0)=0$: $$ S_{\varepsilon}(\overline{\theta})=\frac{1}{{\boldsymbol \omega}_2(\overline r^0)}\int_0^{{\theta}_2}\big({\bf g}_{\varepsilon}(\overline{\theta},\overline r^0)-{\varepsilon}{\bf V}_{\varepsilon}({\theta}_3,\overline r^0)\big)\,d{\theta}_2. $$ In particular, there is a $\mu>0$ such that $$ \norm{S_{\varepsilon}}_{C^p}\leq \frac{\mu}{\norm{{\ov r}^0}}.
$$ We introduce the symplectic change $$ \phi(\overline{\theta},\overline r)=\big(\overline{\theta},\overline r^0+\overline r-{\varepsilon}\partial_{\overline{\theta}}S_{\varepsilon}(\overline{\theta})\big). $$ \paraga By immediate computation, one checks that $$ {\mathsf N}_{\varepsilon}^0\circ\phi(\overline{\theta},\overline r)={\bf h}(\overline r^0+\overline r) +{\varepsilon}{\bf V}_{\varepsilon}({\theta}_3,\overline r^0) +{\bf W}_0({\ov\th},{\ov r}) +{\varepsilon}{\bf W}_1({\ov\th},{\ov r}) +{\varepsilon}^2{\bf W}_2({\ov\th},{\ov r}), $$ with $$ {\bf W}_0({\ov\th},{\ov r})=\big({\bf g}_{\varepsilon}({\ov\th},{\ov r}^0+{\ov r})-{\bf g}_{\varepsilon}({\ov\th},{\ov r}^0)\big), $$ $$ {\bf W}_1({\ov\th},{\ov r})=\big({\boldsymbol \omega}({\ov r}^0+{\ov r})-{\boldsymbol \omega}({\ov r}^0)\big)\partial_{\ov\th} S - \int_0^1\partial_{\ov r}{\bf g}_{\varepsilon}({\ov\th},{\ov r}^0+{\ov r}-{\sigma}{\varepsilon}\partial_{\ov\th} S)(\partial_{\ov\th} S)\,d{\sigma}, $$ $$ {\bf W}_2({\ov\th},{\ov r})=\int_0^1(1-{\sigma})D^2{\bf h}({\ov r}^0+{\ov r}-{\sigma}{\varepsilon}\partial_{\ov\th} S)(\partial_{\ov\th} S)^2\,d{\sigma}. $$ Hence, there is an $M>0$ such that $$ \begin{array}{lll} \norm{{\bf W}_0}_{C^2}\leq M{\varepsilon},\qquad \norm{{\bf W}_0(\cdot,\overline r)}_{C^2}\leq M{\varepsilon}\norm{\overline r},\\[6pt] \norm{{\varepsilon}{\bf W}_1}_{C^2}\leq \displaystyle M\frac{{\varepsilon}}{\norm{{\ov r}^0}},\\[6pt] \norm{{\varepsilon}^2{\bf W}_2}_{C^2}\leq \displaystyle M\frac{{\varepsilon}^2}{\norm{{\ov r}^0}^2}. \end{array} $$ \paraga The resulting Hamiltonian differential equations read \begin{equation} \left\vert \begin{array}{lllll} {\theta}_2'= {\boldsymbol \omega}_2(\overline r)& &\!\!\! +\ \partial_{r_2}{\bf W}_0&\!\!\! +\ {\varepsilon}\,\partial_{r_2}{\bf W}_1&\!\!\! +\ {\varepsilon}^2\,\partial_{r_2}{\bf W}_2\\[5pt] r_2'= &&\!\!\! -\ \partial_{{\theta}_2}{\bf W}_0&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_2}{\bf W}_1&\!\!\! -\ {\varepsilon}^2\,\partial_{{\theta}_2}{\bf W}_2\\[5pt] {\theta}_3'= {\boldsymbol \omega}_3(\overline r)& &\!\!\! +\ \partial_{r_3}{\bf W}_0&\!\!\! +\ {\varepsilon}\,\partial_{r_3}{\bf W}_1&\!\!\! +\ {\varepsilon}^2\,\partial_{r_3}{\bf W}_2\\[5pt] r_3'= &\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}{\bf V}_{\varepsilon}({\theta}_3,\overline r^0)&\!\!\! -\ \partial_{{\theta}_3}{\bf W}_0&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}{\bf W}_1&\!\!\! -\ {\varepsilon}^2\,\partial_{{\theta}_3}{\bf W}_2.\\ \end{array} \right. \end{equation} \paraga We now follow exactly the same lines as in the proof of Lemma~\ref{lem:mainpart}, in the much simpler present framework. The previous estimates prove the existence in the system ${\mathsf N}_{\varepsilon}^0$ of a ``local'' annulus of hyperbolic periodic orbits ${\mathsf A}({\ov r}^0)$ ``centered at ${\ov r}^0$'', which is a graph of the form \begin{equation}\label{eq:graphform} {\theta}_3=\Theta_3({\theta}_2,r_2),\quad r_3=R_3({\theta}_2,r_2),\qquad ({\theta}_2,r_2)\in {\mathbb T}\times \,\Big]r^0_2-c\norm{{\ov r}^0},r^0_2+c\norm{{\ov r}^0}\Big[, \end{equation} provided that $$ \norm{\overline r^0}\geq C\sqrt{\varepsilon} $$ where $C$ is large enough, with $c<C$ small enough. \paraga Now, by immediate hyperbolic maximality for periodic orbits, the union of all these ``local annuli'' is an annulus ${\mathsf A}^{ext}$, which is a graph of the previous form with $$ ({\theta}_2,r_2)\in {\mathbb T}\times\,]c''\sqrt{\varepsilon}, c'''{\varepsilon}^\nu[ $$ for suitable constants $c'',c'''>0$.
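Let us indicate, in heuristic form only, where the threshold $\sqrt{\varepsilon}$ comes from. For frozen $\widehat r$-variables, the $({\theta}_3,r_3)$ dynamics generated by ${\bf h}+{\varepsilon}{\bf V}_{\varepsilon}$ is pendulum-like, with hyperbolic exponents of order
$$
\pm\sqrt{{\varepsilon}\,\abs{\partial^2_{{\theta}_3}{\bf V}_{\varepsilon}}\ \partial^2_{r_3}{\bf h}}\ \sim\ \sqrt{\varepsilon},
$$
while the coupling terms ${\varepsilon}{\bf W}_1$ and ${\varepsilon}^2{\bf W}_2$ have $C^2$ norms of order ${\varepsilon}/\norm{{\ov r}^0}$ and ${\varepsilon}^2/\norm{{\ov r}^0}^2$. The persistence argument therefore requires, roughly, ${\varepsilon}/\norm{{\ov r}^0}\ll\sqrt{\varepsilon}$, that is $\norm{{\ov r}^0}\gg\sqrt{\varepsilon}$, which accounts for the condition $\norm{\overline r^0}\geq C\sqrt{\varepsilon}$ above and for the inner radius $c''\sqrt{\varepsilon}$ in the description of ${\mathsf A}^{ext}$.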
Moreover, for any prescribed constant $\mu>0$, one can choose $c''$ large enough and $c'''$ small enough so that \begin{equation}\label{eq:localization} \abs{\Theta_3({\theta}_2,r_2)-{\theta}_3^{**}\big(r_1,r_2,r_3^{**}(r_1,r_2)\big)}\leq \mu,\quad \abs{R_3({\theta}_2,r_2)-r^{**}_3(r_1,r_2)}\leq \mu\sqrt{\varepsilon}, \end{equation} (recall that the value of $r_1$ was fixed at the beginning of this part). \paraga Going back to the total truncated normal form $N_{\varepsilon}^0$ in (\ref{eq:rednormform}) and varying the variable $r_1$, the previous family of annuli gives rise to a $4$--dimensional invariant hyperbolic annulus ${\mathcal A}^0_{\varepsilon}$, which is now a graph of the form $$ {\theta}_3=\Theta_3(\widehat{\theta},\widehat r),\quad r_3=R_3(\widehat{\theta},\widehat r),\qquad ({\theta}_1,{\theta}_2)\in {\mathbb T}^2,\ r_1\in \,]-\widehat c{\varepsilon}^\nu,\widehat c{\varepsilon}^\nu[,\ r_2\in\,]\widehat c\sqrt{\varepsilon},\widehat c{\varepsilon}^\nu[, $$ for a suitable $\widehat c>0$ and for $0<{\varepsilon}<{\varepsilon}_0$ small enough. Note that the restriction to ${\mathcal A}^0_{\varepsilon}$ of the Hamiltonian flow generated by $N_{\varepsilon}^0$ is completely integrable, since ${\mathcal A}^0_{\varepsilon}$ is foliated by the invariant $2$--tori obtained by taking the product of the circle of ${\theta}_1$ with the periodic orbits foliating the annulus ${\mathsf A}^{ext}$. \paraga We now follow the same process as for the $d$--cylinders (Lemma~\ref{lem:existann}). The previous integrability property yields symplectic angle-action coordinates on ${\mathcal A}^0_{\varepsilon}$. This enables us to use the smallness of the complementary term $R_{\varepsilon}$ in (\ref{eq:rednormform}) and get a perturbed $4$--dimensional pseudo invariant annulus ${\mathcal A}_{\varepsilon}$ and $3$--dimensional pseudo invariant cylinder ${\mathcal C}^*_{\varepsilon}={\mathcal A}_{\varepsilon}\cap N_{\varepsilon}^{-1}(0)$ at energy $0$ equipped with a global section ${\Sigma}$ with coordinates $({\theta}_1,r_1)$. Relatively to these coordinates, the Poincar\'e return map takes the form described in Lemma~\ref{lem:section}, and we deduce as in Section~\ref{sec:applicKAM} the existence of Birkhoff-Herman tori arbitrarily close to the boundary of the cylinder. \paraga It remains now to prove that the pullback ${\mathscr C}^*_{\varepsilon}=\Phi^{-1}_{\varepsilon}({\mathcal C}^*_{\varepsilon})$ of the previous cylinder ``continues'' the cylinder ${\mathscr C}_{\varepsilon}$ attached to the annulus ${\mathsf A}_\ell$ of the averaged system $C$. This will be done by proving that one (well-chosen) end of the cylinder ${\mathscr C}^*_{\varepsilon}$ is contained in ${\mathscr C}_{\varepsilon}$. Recall that the cylinder ${\mathscr C}_{\varepsilon}$ is maximal in some neighborhood of the form $$ {\mathscr U}({\mathscr C}_{\varepsilon}):\quad \abs{{\theta}_3-{\theta}_3^*}\leq a,\quad \abs{r_3-r_3^*}\leq a\sqrt{\varepsilon}, $$ where $a$ is an arbitrary constant. It is therefore enough to prove that at least two Birkhoff-Herman tori contained in the end of ${\mathscr C}^*_{\varepsilon}$ are contained in ${\mathscr U}({\mathscr C}_{\varepsilon})$. We will state the problem in the variables of $N_{\varepsilon}$.
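Before doing so, let us record an elementary observation, stated only to explain why the localization estimates (\ref{eq:locmax}) and (\ref{eq:localization}) are sufficient: the neighborhood ${\mathscr U}({\mathscr C}_{\varepsilon})$ has width $a\sqrt{\varepsilon}$ in the $r_3$ direction, so in the rescaled variable ${\mathsf r}_3=r_3/\sqrt{\varepsilon}$ introduced just below it becomes a domain of width $a$, independent of ${\varepsilon}$:
$$
\abs{{\theta}_3-{\theta}_3^*}\leq a,\qquad \abs{{\mathsf r}_3-r_3^*/\sqrt{\varepsilon}}\leq a.
$$
It is therefore enough to locate the relevant tori with a precision $O(\sqrt{\varepsilon})$ in the $r_3$ variable, which is what (\ref{eq:localization}) provides.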
Just as in the proof of Lemma~\ref{lem:dnormalizing}, the rescaling $r =\sqrt{\varepsilon}\, {\mathsf r}$, $t=\frac{1}{\sqrt{\varepsilon}}\,{{\mathsf t}}$ yields the new Hamiltonian \begin{equation} {\mathsf N}_{\varepsilon}({\theta},{\mathsf r})=\frac{1}{{\varepsilon}}\, N_{\varepsilon}({\theta},{\sqrt{\varepsilon}}\,{\mathsf r}). \end{equation} Then, performing a Taylor expansion of $h$ and $[f]$ at $r^0=0$ in $N_{\varepsilon}$, one gets the normal form: \begin{equation} {\mathsf N}_{\varepsilon}({\theta},{\mathsf r})=\frac{1}{{\varepsilon}}N_{\varepsilon}({\theta},\sqrt{\varepsilon}{\mathsf r})= \frac{\widehat \omega}{\sqrt{\varepsilon}}\,{\mathsf r}_1+{\tfrac{1}{2}} D^2h(0)\,{\mathsf r}^2+ U(\overline{\theta}) +{\mathsf R}^0_{\varepsilon}(\overline{\theta},{\mathsf r})+{\mathsf R}_{\varepsilon}({\theta},{\mathsf r}) \end{equation} where $$ \norm{{\mathsf R}^0_{\varepsilon}}_{C^p({\mathbb T}^2\times B^3(0,d^*))}\leq a\sqrt {\varepsilon},\qquad \norm{{\mathsf R}_{\varepsilon}}_{C^p({\mathbb T}^3\times B^3(0,d^*))}\leq {\varepsilon}^\ell. $$ Our statement will be an immediate consequence of (\ref{eq:locmax}) and (\ref{eq:localization}), taking into account the following expansion: \begin{equation} \begin{array}{lll} \frac{1}{{\varepsilon}} {\mathsf N}({\theta},{\mathsf r})&=&{\frac{1}{2}} T({\mathsf r})+U(\overline{\theta},0)+\big[\frac{1}{{\varepsilon}}g_{\varepsilon}(\overline{\theta},\sqrt{\varepsilon}\,{\mathsf r})-U(\overline{\theta},\sqrt{\varepsilon}\,{\mathsf r})\big] +\big[U(\overline{\theta},\sqrt{\varepsilon}\,{\mathsf r})-U(\overline{\theta},0)\big]+\sqrt{\varepsilon}\widehat h({\mathsf r})\\ &=&C(\overline{\theta},{\mathsf r})+\sqrt{\varepsilon} \widehat C(\overline{\theta},{\mathsf r}), \end{array} \end{equation} where $\widehat h$ stands for the third order term in the Taylor expansion of $h$ at $0$. \paraga Finally, as for the other boundary torus of ${\mathscr C}^{ext}_{\varepsilon}$, one only has to apply the localization statement on the periodic orbits of the extremal annulus ${\mathcal A}^0_{\varepsilon}$ together with the results of Lemma~\ref{lem:section} and Section~\ref{sec:applicKAM}. This concludes the proof of Lemma~\ref{lem:extcyl}. \subsubsection{Bifurcation points}\label{Sec:bifurcation1} In this part we examine the case where bifurcation points may exist between the resonant points $m^0$ and $m^1$. Three types of intervals of ${\Gamma}$ have to be considered: $[m^0,b]$, $[b^0,b^1]$ and $[b,m^1]$. We will limit ourselves to the first one, the other two being essentially equivalent (and simpler for the second one). \begin{lemma}\label{lem:bifurcation1} Assume that the interval $[m^0,b]$ contains no other bifurcation points than $b$. Then for ${\varepsilon}_0>0$ small enough there exists a family $({\mathscr C}_{\varepsilon})_{0<{\varepsilon}<{\varepsilon}_0}$ of (invariant and normally hyperbolic) $s$-cylinders along $[m^0,b]$, in the sense that: \begin{itemize} \item ${\mathscr C}_{\varepsilon}$ contains the extremal cylinder ${\mathscr C}_{\varepsilon}^{ext}(m^0)$ of {\rm Lemma~\ref{lem:extcyl}}, \item the projection in action of the other boundary of ${\mathscr C}_{\varepsilon}$ is located in a ball $B(b_+,\sqrt{\varepsilon})$, where $b_+$ is such that $[m^0,b]\subset [m^0,b_+[$, \item the projection in action of ${\mathscr C}_{\varepsilon}$ is located in an $O(\sqrt{\varepsilon})$ tubular neighborhood of ${\Gamma}$.
\end{itemize} \end{lemma} \begin{proof} The global normal form used in the proof of Lemma~\ref{lem:mainpart} is still valid here and results in the existence of a pseudo invariant cylinder whose ``left extremity'' contains the extremal cylinder ${\mathscr C}_{\varepsilon}^{ext}(m^0)$, and which moreover admits a twist section. It remains to study the behavior of the cylinder in the neighborhood of the point $b$. We will use the ${\varepsilon}$-dependent normal form of Appendix~\ref{app:normformepsdep}. For this, we fix two actions $b_1,b_2$ on the resonant circle ${\Gamma}$, very close to one another, such that the bifurcation point lies in the small interval they delimit. We moreover assume that the frequency vectors $\omega(b_1)$, $\omega(b_2)$ are $2$-Diophantine and that the $s$-averaged potential still admits a nondegenerate maximum in a neighborhood of $b_1$ and $b_2$. We will prove the statement for the first cylinder, the other one being similar. We set $$ [f]({\theta}_3,r)=\int_{{\mathbb T}^{2}}f\big((\widehat{\theta},{\theta}_3),r\big)\,d\widehat{\theta}. $$ Given two integers $p,\ell\geq 2$ and two constants $d>0$ and $\delta<1$ with $1-\delta>d$, if $\kappa$ is large enough, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$, there exists an analytic symplectic embedding $$ \Phi_{\varepsilon}: {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\to {\mathbb T}^3\times B(b_1,2{\varepsilon}^d) $$ such that $$ N_{\varepsilon}({\theta},r)=H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)=h(r)+ g_{\varepsilon}({\theta}_3,r)+R_{\varepsilon}({\theta},r), $$ where $g_{\varepsilon}$ and $R_{\varepsilon}$ are $C^p$ functions such that \begin{equation} \norm{g_{\varepsilon}-{\varepsilon}[f]}_{C^p\big( {\mathbb T}\times B(b_1,{\varepsilon}^d)\big)}\leq {\varepsilon}^{2-\delta},\qquad \norm{R_{\varepsilon}}_{C^p\big( {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\big)}\leq {\varepsilon}^\ell. \end{equation} Moreover, $\Phi_{\varepsilon}$ is close to the identity, in the sense that \begin{equation} \norm{\Phi_{\varepsilon}-{\rm Id}}_{C^p\big( {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\big)}\leq {\varepsilon}^{1-\delta}. \end{equation} One proves as in the previous section that the function $g_{\varepsilon}(\cdot,r):{\mathbb T}\to{\mathbb R}$ admits for $r$ close to $b_1$ a unique and nondegenerate maximum at some ${\theta}_3(r)$. Now the differential system generated by the truncated system $N_{\varepsilon}^0({\theta},r)=h(r)+ g_{\varepsilon}({\theta}_3,r)$ reads $$ \left\vert \begin{array}{lllll} \widehat {\theta}'= \widehat\omega(r)&\!\!\! +\ {\varepsilon}\partial_{\widehat r}g_{\varepsilon}({\theta}_3,r)\\[5pt] \widehat r'= 0 &\\[5pt] {\theta}_3'= \omega_3(r)&\!\!\! +\ {\varepsilon}\partial_{r_3}g_{\varepsilon}({\theta}_3,r)\\[5pt] r_3'= &\!\!\! -\ {\varepsilon}\partial_{{\theta}_3}g_{\varepsilon}({\theta}_3,r).\\ \end{array} \right. $$ One proves in the same way as in the previous sections the existence of a pseudo invariant normally hyperbolic $4$-annulus of the form $$ \widehat{\theta}\in{\mathbb T}^2,\quad \norm{\widehat r-\widehat b_1}\leq d\sqrt{\varepsilon},\quad \abs{{\theta}_3-{\theta}_3(r)}\leq \delta,\quad \abs{r_3-r_3^*(\widehat r)}\leq\delta\sqrt{\varepsilon}. $$ Moreover, this annulus carries a completely integrable Hamiltonian flow and is foliated by the invariant tori $\widehat r= {\rm const}$.
Then we proceed as in the case of the $d$-cylinders to prove the existence of a section ${\Sigma}$ for this flow, whose attached return map admits nondegenerate torsion. Finally, the existence of invariant circles for this return map is proved by Herman's theorem. \end{proof} \section{Intersection conditions and chains}\label{sec:chains} This section is devoted to the precise description of homoclinic and heteroclinic properties of the cylinders obtained in the previous one. These properties will be of crucial use in \cite{M} in order to prove the generic existence of orbits shadowing the chains of cylinders. \setcounter{paraga}{0} \subsection{\bf Intersection conditions, gluing condition, and admissible chains} Let $H$ be a proper $C^2$ Hamiltonian function on ${\mathbb A}^3$ and fix a regular value~${\bf e}$. \paraga{\bf Oriented cylinders.} We say that a cylinder ${\mathscr C}$ is {\em oriented} when an order is prescribed on the two components of its boundary. We denote the first one by $\partial_{\bullet}{\mathscr C}$ and the second one by $\partial^{\bullet}{\mathscr C}$. \paraga {\bf The homoclinic condition {\rm(FS1)}.} A compact invariant cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ and associated invariant symplectic $4$-annulus ${\mathscr A}$ satisfies condition {\rm(FS1)}\ when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$, such that: \begin{itemize} \item there exist $4$-dimensional submanifolds ${\mathscr A}^\pm\subset W^\pm({\mathscr A})\cap\Delta$ such that the restrictions to ${\mathscr A}^\pm$ of the characteristic projections $\Pi^\pm:W^\pm({\mathscr A})\to{\mathscr A}$ are diffeomorphisms onto ${\mathscr A}$, whose inverses we denote by $j^\pm:{\mathscr A}\to{\mathscr A}^\pm$; \item there exists a continuation ${\mathscr C}_*$ of ${\mathscr C}$ such that the $3$-dimensional manifolds ${\mathscr C}^\pm_*=j^\pm({\mathscr C}_*)$ have a nonempty intersection ${\mathscr I}$, transverse in the $4$-dimensional manifold $\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e})$; \item the projections $\Pi^\pm({\mathscr I})\subset{\mathscr C}$ are $2$-dimensional transverse sections of the vector field $X_H$ restricted to ${\mathscr C}$, and the associated Poincar\'e maps $P^\pm: \Pi^\pm({\mathscr I})\to{\Sigma}$ are diffeomorphisms. \end{itemize} Note that ${\mathscr C}_*^+\cap{\mathscr C}_*^-$ is a $2$-dimensional submanifold of $\Delta_{\bf e}$, whose role will be crucial in \cite{M}.
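Let us also record the elementary dimension count behind this last observation (stated only for convenience): $\dim{\mathscr C}^\pm_*=3$ while $\dim\Delta_{\bf e}=4$, so transversality of the intersection inside $\Delta_{\bf e}$ yields
$$
\dim\big({\mathscr C}^+_*\cap{\mathscr C}^-_*\big)=3+3-4=2.
$$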
\paraga {\bf The heteroclinic condition {\rm(FS2)}.} A pair $({\mathscr C}_0,{\mathscr C}_1)$ of compact invariant oriented cylinders with twist sections ${\Sigma}_0$, ${\Sigma}_1$ and associated invariant symplectic $4$-annuli $({\mathscr A}_0,{\mathscr A}_1)$ satisfies condition {\rm(FS2)}\ when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$, such that: \begin{itemize} \item there exist $4$-dimensional submanifolds $\widetilde {\mathscr A}_0^-\subset W^-({\mathscr A}_0)\cap\Delta$ and $\widetilde {\mathscr A}_1^+\subset W^+({\mathscr A}_1)\cap\Delta$ such that $\Pi_0^-{\vert \widetilde{\mathscr A}_0^-}$ and $\Pi_1^+{\vert \widetilde{\mathscr A}_1^+}$ are diffeomorphisms on their images $\widetilde {\mathscr A}_0$, $\widetilde {\mathscr A}_1$, which we require to be neighborhoods of the boundaries $\partial^{\bullet}{\mathscr C}_0$ and $\partial_{\bullet}{\mathscr C}_1$ in ${\mathscr A}_0$ and ${\mathscr A}_1$ respectively; we denote their inverses by $j_0^-$ and $j_1^+$; \item there exist neighborhoods $\widetilde{\mathscr C}_0$ and $\widetilde{\mathscr C}_1$ of $\partial^{\bullet}{\mathscr C}_0$ and $\partial_{\bullet}{\mathscr C}_1$ in continuations of the initial cylinders, such that $\widetilde {\mathscr C}_0^-=j_0^-(\widetilde{\mathscr C}_0)$ and $\widetilde{\mathscr C}_1^+=j_1^+(\widetilde{\mathscr C}_1)$ intersect transversely in the $4$-dimensional manifold $\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e})$; let ${\mathscr I}_*$ be this intersection; \item the projections $\Pi_0^-({\mathscr I}_*)\subset\widetilde{\mathscr C}_0$ and $\Pi_1^+({\mathscr I}_*)\subset\widetilde{\mathscr C}_1$ are $2$-dimensional transverse sections of the vector field $X_H$ restricted to $\widetilde {\mathscr C}_0$ and $\widetilde{\mathscr C}_1$, and the Poincar\'e maps $P_0: \Pi_0^-({\mathscr I}_*)\to{\Sigma}_0$ and $P_1: \Pi_1^+({\mathscr I}_*)\to{\Sigma}_1$ are diffeomorphisms (where ${\Sigma}_i$ stands for Poincar\'e sections in the neighborhoods $\widetilde{\mathscr C}_i$). \end{itemize} \paraga {\bf The homoclinic condition {\rm(PS1)}.} Consider an invariant cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ and attached Poincar\'e return map $\varphi$, so that ${\Sigma}=j_{\Sigma}({\mathbb T}\times[a,b])$, where $j_{\Sigma}$ is exact-symplectic. Define ${\rm Tess}({\mathscr C})$ as the set of all invariant tori generated by the essential invariant circles of $\varphi$ under the action of the Hamiltonian flow (so each element of ${\rm Tess}({\mathscr C})$ is a Lipschitz Lagrangian torus contained in ${\mathscr C}$). The elements of ${\rm Tess}({\mathscr C})$ are said to be {\em essential tori}.
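In other words (with $\Phi^t_H$ denoting the Hamiltonian flow of $H$, a notation introduced only for this remark), each essential invariant circle $\gamma\subset{\mathbb T}\times[a,b]$ of the return map $\varphi$ gives rise to the invariant torus
$$
{\mathscr T}_\gamma:=\bigcup_{t\in{\mathbb R}}\Phi^t_H\big(j_{\Sigma}(\gamma)\big)\subset{\mathscr C},
$$
and ${\rm Tess}({\mathscr C})$ is the set of all tori obtained in this way.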
\vskip2mm We say that an invariant cylinder ${\mathscr C}$ with associated invariant symplectic $4$-annulus ${\mathscr A}$ satisfies the {\em partial section property~{\rm(PS1)}} when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$ such that: \begin{itemize} \item there exist $4$-dimensional submanifolds ${\mathscr A}^\pm\subset W^\pm({\mathscr A})\cap\Delta$ such that the restrictions to ${\mathscr A}^\pm$ of the characteristic projections $\Pi^\pm:W^\pm({\mathscr A})\to{\mathscr A}$ are diffeomorphisms, whose inverses we denote by $j^\pm:{\mathscr A}\to{\mathscr A}^\pm$; \item there exist conformal exact-symplectic diffeomorphisms \begin{equation} \Psi^{\rm ann}:{\mathscr O}^{\rm ann}\to {\mathscr A},\qquad \Psi^{\rm sec}:{\mathscr O}^{\rm sec}\to\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e}) \end{equation} where ${\mathscr O}^{\rm ann}$ and ${\mathscr O}^{\rm sec}$ are neighborhoods of the zero section in $T^*{\mathbb T}^2$ endowed with the conformal Liouville form $a\lambda$ for a suitable $a>0$; \item each torus ${\mathscr T}\in{\rm Tess}({\mathscr C})$ is contained in ${\mathscr C}$ and its preimage $(\Psi^{\rm ann})^{-1}({\mathscr T})$ is a Lipschitz graph over the base ${\mathbb T}^2$; \item for each such torus ${\mathscr T}$, setting ${\mathscr T}^\pm:=j^\pm({\mathscr T})\subset \Delta_{\bf e}$, the preimages $(\Psi^{\rm sec})^{-1}({\mathscr T}^\pm)$ are Lipschitz graphs over the base ${\mathbb T}^2$. \end{itemize} \paraga {\bf Bifurcation condition.} For bifurcation points we just rephrase our nondegeneracy condition {\bf ($\bf S_2$)}: for any $r^0\in B$, the derivative $\tfrac{d}{dr}\big(m^*(r)-m^{**}(r)\big)$ does not vanish. This will immediately yield transverse heteroclinic intersections between the corresponding cylinders. \paraga {\bf The gluing condition {\rm(G)}.} A pair $({\mathscr C}_0,{\mathscr C}_1)$ of compact invariant oriented cylinders satisfies condition {\rm(G)}\ when they are contained in an invariant cylinder and satisfy \begin{itemize} \item $\partial^{\bullet}{\mathscr C}_0=\partial_{\bullet}{\mathscr C}_1$ is a dynamically minimal invariant torus that we denote by ${\mathscr T}$, \item $W^-({\mathscr T})$ and $W^+({\mathscr T})$ intersect transversely in $H^{-1}({\bf e})$. \end{itemize} \paraga {\bf Admissible chains.} A finite family of compact invariant oriented cylinders $({\mathscr C}_k)_{1\leq k\leq k_*}$ is an {\em admissible chain} when each cylinder satisfies either ${\rm(FS1)}$ or ${\rm(PS1)}$ and, for $k\in\{1,\ldots,k_*-1\}$, the pair $({\mathscr C}_k,{\mathscr C}_{k+1})$ satisfies either ${\rm(FS2)}$ or ${\rm(G)}$, or corresponds to a bifurcation point. \paraga The main result of this section is the following. Recall that $\Pi:{\mathbb A}^3\to{\mathbb R}^3$ is the second projection and ${\bf d}$ the Hausdorff distance between compact subsets of ${\mathbb R}^3$. \begin{prop}\label{prop:intersection} Fix a $C^\kappa$ Tonelli Hamiltonian $h$, let $f\in C_b^\kappa({\mathbb A}^3)$ and set $H_{\varepsilon}=h+{\varepsilon} f$. Assume that $H_{\varepsilon}$ satisfies {\bf (S)}\ along ${\Gamma}$. Then there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$, there exists an admissible chain $\big({\mathscr C}_k({\varepsilon})\big)_{1\leq k\leq k_*}$ of cylinders and singular cylinders at energy ${\bf e}$ for $H_{\varepsilon}$, whose projection by $\Pi$ satisfies $$ {\bf d}\Big(\bigcup_{1\leq k\leq k_*}\Pi\big({\mathscr C}_k({\varepsilon})\big),\,{\Gamma}\Big)=O(\sqrt{\varepsilon}).
$$ \end{prop} Note that $k_*$ is {\em independent} of ${\varepsilon}$. The rest of this section is dedicated to the proof of the previous proposition. \subsection{Proof of Proposition~\ref{prop:intersection}} \subsubsection{Conditions {\rm(FS1)}\ and {\rm(FS2)}\ for $d$-cylinders} In this section we prove the following lemma. \begin{lemma}\label{lem:dcondFS} Let $r^0$ be a double resonance point of $D$, with $d$-averaged system $C$. \begin{itemize} \item Fix a compact annulus ${\mathsf A}$ of $C$, defined over $I$, with continuation ${\mathsf A}^*$ defined over $I^*$. We assume that for each energy $e\in I^*$, there exists a homoclinic solution ${\zeta}_e$ for $\gamma_e$, continuously depending on $e$, such that the stable and unstable manifolds of $\gamma_e$ intersect transversely along ${\zeta}_e$ in $C^{-1}(e)$. Then there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$ the cylinder ${\mathscr C}_{\varepsilon}$ satisfies the homoclinic condition {\rm(FS1)}. \item Fix two annuli ${\mathsf A}$ and ${\mathsf A}'$ of $C$ defined over adjacent intervals $I$ and $I'$ and such that the boundary orbits at $e$ admit transverse heteroclinic connections. Then there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}\leq{\varepsilon}_0$ the associated cylinders ${\mathscr C}_{\varepsilon}$ and ${\mathscr C}'_{\varepsilon}$ satisfy the heteroclinic condition {\rm(FS2)}. \end{itemize} \end{lemma} \begin{proof} We begin with the case of a single annulus ${\mathsf A}$, with continuation ${\mathsf A}^*$. \paraga Let $\pi^\pm:W^\pm({\mathsf A}^*)\to {\mathsf A}^*$ be the characteristic projections. Fix a $C^\kappa$ arc $w:I^*\to {\mathbb A}^2$ such that $w(e)\in {\zeta}_e\setminus\gamma_e$ for $e\in I^*$. Since $W^+(\gamma_e)\pitchfork W^-(\gamma_e)$ in $C^{-1}(e)$ and since the vector field $X^C$ is transverse to the stable and unstable foliations of $W^\pm(\gamma_e)$ inside these manifolds, the tangent vectors to the stable and unstable leaves at $w(e)$ together with $X^C(w(e))$ form a basis of $T_{w(e)}C^{-1}(e)$. The $C^{\kappa-1}$ curves \begin{equation} {\sigma}^\pm=\pi^\pm\big(w(I^*)\big)\subset {\mathsf A}^* \end{equation} are clearly global sections of the Hamiltonian flow of $X^C$ on the annulus ${\mathsf A}$. Finally, since $w(I^*)$ is contractible, one can find a $3$--dimensional $C^\kappa$ submanifold $S\subset{\mathbb A}^2$ containing $w(I^*)$ and transverse to $X^C$. \paraga Consider now the Hamiltonian ${\mathsf N}_{\varepsilon}$ on ${\mathbb A}^3$. To get rid of the artificial singular term $\widehat\omega\,\widehat {\mathsf r}/\sqrt{\varepsilon}$, we perform the symplectic change $\boldsymbol{\chi}=\chi\times{\rm Id}:{\mathbb A}_{\varepsilon}\times{\mathbb A}^2\to{\mathbb A}^3$, where $\chi(\xi,\eta)=(\widehat{\theta}=\xi/\sqrt{\varepsilon},\widehat{\mathsf r}=\sqrt{\varepsilon}\eta)$. Hence ${\mathsf N}_{\varepsilon}\circ\boldsymbol{\chi}$ is now defined on ${\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times{\mathbb T}^2\times B^2(0,d^*)$, with explicit expression \begin{equation}\label{eq:snchi} {\mathsf N}_{\varepsilon}\circ\boldsymbol{\chi}(\xi,\eta,\overline {\theta},\overline{\mathsf r})=\widehat\omega\,\eta+Q(\sqrt{\varepsilon}\eta,\widehat{\mathsf r})+C(\overline{\theta},\overline{\mathsf r})+{\mathsf R}_{\varepsilon}^0(\sqrt{\varepsilon}\eta,\widehat{\theta},\widehat{\mathsf r}) +{\mathsf R}_{\varepsilon}(\tfrac{1}{\sqrt{\varepsilon}}\xi,\sqrt{\varepsilon}\eta,\widehat{\theta},\widehat{\mathsf r}).
\end{equation} \paraga The system (\ref{eq:snchi}) is an $O(\sqrt{\varepsilon})$--perturbation in the $C^p$ topology of the truncated form $$ {\rm Av}(\xi,\eta,\overline {\theta},\overline{\mathsf r})=\widehat\omega\,\eta+C(\overline{\theta},\overline{\mathsf r}), $$ which admits the product ${\Sigma}:={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times{\mathsf A}^*$ as an invariant annulus, with stable and unstable manifolds $W^\pm({\Sigma})={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times W^\pm({\mathsf A}^*)$. The characteristic projections also inherit the same product structure: for any $(\xi,\eta,\overline {\theta},\overline{\mathsf r})\in W^\pm({\Sigma})$ $$ \Pi^\pm(\xi,\eta,\overline {\theta},\overline{\mathsf r})=\big(\xi,\eta,\pi^\pm(\overline {\theta},\overline{\mathsf r})\big). $$ Moreover, $W^\pm({\Sigma})$ transversely intersect one another along ${\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times \big(W^+({\mathsf A}^*)\cap W^-({\mathsf A}^*)\big)$ in ${\mathbb A}^3$, and both manifolds transversely intersect the section ${\mathcal S}:={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times S$ in ${\mathbb A}^3$. Let $$ \Lambda:={\mathcal S}\cap W_{loc}^+({\Sigma})\cap W_{loc}^-({\Sigma}), $$ then $\Pi^\pm(\Lambda)={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times {\sigma}^\pm$ are global transverse sections for the unperturbed flow on ${\Sigma}$. \paraga We proved in the previous section that the annulus ${\Sigma}$ persists in the system ${\mathsf N}_{\varepsilon}\circ\boldsymbol{\chi}$ and gives rise to an annulus ${\mathcal A}_{\varepsilon}$ which is $O(\sqrt{\varepsilon})$ close to ${\Sigma}$ in the $C^p$ topology. By the normally hyperbolic persistence theorem, the local parts of the stable and unstable manifolds limited by ${\Sigma}$ and ${\mathcal S}$ also persist in the perturbed system. The corresponding parts $W_{loc}^\pm({\mathcal A}_{\varepsilon})$ are $O(\sqrt{\varepsilon})$ close to the unperturbed ones in the $C^p$ topology. As a consequence, for ${\varepsilon}$ small enough, these manifolds transversely intersect in ${\mathbb A}^3$ and they both transversely intersect ${\mathcal S}$ in ${\mathbb A}^3$. We set $$ \Lambda_{\varepsilon}:={\mathcal S}\cap W_{loc}^+({\mathcal A}_{\varepsilon})\cap W_{loc}^-({\mathcal A}_{\varepsilon}). $$ The characteristic foliations on $W_{loc}^\pm({\mathcal A}_{\varepsilon})$ are $O(\sqrt{\varepsilon})$--perturbations of those of $W_{loc}^\pm({\Sigma})$ in the $C^{p-1}$ topology. As a consequence the characteristic projections $\Pi^\pm_{\varepsilon}$ are also $O(\sqrt{\varepsilon})$--perturbations of $\Pi^\pm$ and their restrictions to $\Lambda_{\varepsilon}$ are embeddings into ${\mathcal A}_{\varepsilon}$. Finally their images ${\Sigma}_{\varepsilon}^\pm$ are clearly transverse sections for the restriction to ${\mathcal A}_{\varepsilon}$ of the Hamiltonian vector field generated by ${\mathsf N}_{\varepsilon}\circ\boldsymbol{\chi}$ for ${\varepsilon}$ small enough. This concludes the proof of the first part of the lemma. \vskip3mm \setcounter{paraga}{0} We now fix two compact annuli ${\mathsf A}$ and ${\mathsf A}'$ of $C$, defined over $I,I'$ such that $I\cap I'\neq\emptyset$ and such that there exists $e_0\in I\cap I'$ with $W^-(\gamma_{e_0})\pitchfork W^+(\gamma'_{e_0})$ inside $C^{-1}(e_0)$. We fix an interval $I_\circ\subset I\cap I'$ over which the periodic solutions $\gamma_e$ and $\gamma'_e$ admit a heteroclinic solution ${\zeta}_e$ continuously depending on $e$.
We let ${\mathsf A}_\circ$ and ${\mathsf A}'_\circ$ be the corresponding annuli for $C$. We finally fix coherent families (${\mathscr C}_{\circ{\varepsilon}}$, ${\mathscr A}_{\circ{\varepsilon}}$), (${\mathscr C}'_{\circ{\varepsilon}}$, ${\mathscr A}'_{\circ{\varepsilon}}$) attached to ${\mathsf A}_\circ$ and ${\mathsf A}'_\circ$. The proof essentially follows the same lines as the previous one. \paraga Let $\pi^-:W^-({\mathsf A})\to {\mathsf A}$ and $({\pi'})^+:W^+({\mathsf A}')\to {\mathsf A}'$ be the characteristic projections. Fix a $C^\kappa$ arc $w:I_\circ\to {\mathbb A}^2$ such that $w(e)\in {\zeta}_e\setminus(\gamma_e\cup \gamma'_e)$ for $e\in I_\circ$. The tangent vectors to the stable and unstable leaves at $w(e)$ together with $X^C(w(e))$ form a basis of $T_{w(e)}C^{-1}(e)$. The $C^{\kappa-1}$ curves \begin{equation} {\sigma}^-=\pi^-\big(w(I_\circ)\big),\quad ({\sigma}')^+=({\pi'})^+\big(w(I_\circ)\big) \end{equation} are global sections of the Hamiltonian flow of $X^C$ on the annuli ${\mathsf A}_\circ$ and ${\mathsf A}'_\circ$ respectively. Let $S\subset{\mathbb A}^2$ be a transverse section containing $w(I_\circ)$. We keep the same convention as above for ${\mathsf N}_{\varepsilon}$ and ${\mathsf N}_{\varepsilon}\circ\boldsymbol{\chi}$. \paraga The system $$ {\rm Av}(\xi,\eta,\overline {\theta},\overline{\mathsf r})=\widehat\omega\,\eta+C(\overline{\theta},\overline{\mathsf r}), $$ admits the products ${\Sigma}:={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times{\mathsf A}$ and ${\Sigma}':={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times{\mathsf A}'$ as invariant annuli, whose stable and unstable manifolds and characteristic projections have the same product structure as in the previous section. Now, $W^-({\Sigma})$ and $W^+({\Sigma}')$ transversely intersect one another along ${\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times \big(W^-({\mathsf A})\cap W^+({\mathsf A}')\big)$ in ${\mathbb A}^3$ and both manifolds transversely intersect the section ${\mathcal S}:={\mathbb T}_{\varepsilon}\times\,]-1,1[\,\times S$ in ${\mathbb A}^3$. We set $$ \Lambda:={\mathcal S}\cap W_{loc}^-({\Sigma})\cap W_{loc}^+({\Sigma}'), $$ then $\Pi^-(\Lambda)$ and $(\Pi')^+(\Lambda)$ are global transverse sections for the unperturbed flows on ${\Sigma}$ and ${\Sigma}'$. \paraga By the same perturbative argument as above, $$ \Lambda_{\varepsilon}:={\mathcal S}\cap W_{loc}^-({\mathcal A}_{\varepsilon})\cap W_{loc}^+({\mathcal A}'_{\varepsilon}) $$ satisfies the conditions of our claim. This concludes the proof of Lemma~\ref{lem:dcondFS}. \end{proof} \subsubsection{Condition {\rm(PS1)}\ for $s$-cylinders} \setcounter{paraga}{0} We end the proof by establishing the existence of a covering of an $s$-cylinder by subcylinders which admit a twist section, and by the same token we prove the graph properties of Condition {\rm(PS1)}. \paraga {\bf The section $\Delta$ and the transition diffeomorphisms.} We now want to go back to the normal forms of Lemma~\ref{lem:mainpart}, localized over domains of diameter $\sqrt{\varepsilon}$ in the $r$ variable; this will enable us to get a covering of a neighborhood of the annulus ${\mathcal A}_{\varepsilon}$ by domains in which we can control the behavior of its center-stable and center-unstable foliations (when properly defined). The main preliminary observation is the following well-known one, whose proof can be easily deduced from \cite{Po93}.
\begin{lemma} With the assumptions of {\rm Lemma~\ref{lem:mainpart}}, there exists a constant $a>0$ such that the annulus ${\mathcal A}_{\varepsilon}$ admits a covering by {\em invariant} subannuli $({\mathcal A}_{\varepsilon}^{m})_{1\leq m\leq m_*({\varepsilon})}$, with diameter $\leq a\sqrt{\varepsilon}$. \end{lemma} The main result of this part is the following. \begin{lemma} With the assumptions of {\rm Lemma~\ref{lem:mainpart}}, assume moreover that $\abs{{\theta}_3^*(\widehat r)}\leq 1/4$. Fix $\delta>0$. Then the section \begin{equation} \Delta=\Big\{({\theta},r)\in{\mathbb A}^3\mid {\theta}_3={\frac{1}{2}}\Big\} \end{equation} is transverse to the Hamiltonian vector field and intersects the manifolds $W^\pm({\mathcal A}_{\varepsilon})$ transversely in ${\mathbb A}^3$. Moreover, there exist conformal exact-symplectic diffeomorphisms \begin{equation} \Psi^{\rm ann}:{\mathscr O}^{\rm ann}\to {\mathscr A},\qquad \Psi^{\rm sec}:{\mathscr O}^{\rm sec}\to\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e}) \end{equation} where ${\mathscr O}^{\rm ann}$ and ${\mathscr O}^{\rm sec}$ are neighborhoods of the zero section in $T^*{\mathbb T}^2$ endowed with the conformal structure $\sqrt{\varepsilon} \sum r_id{\theta}_ i$ in suitable coordinates. Relatively to the induced coordinates, the characteristic transition diffeomorphisms $j^\pm$ are $\delta$-Lipschitz when ${\varepsilon}$ is small enough. \end{lemma} \begin{proof} The differential system associated with $X^{N_{\varepsilon}}$ reads \begin{equation}\label{eq:vectfield2} \left\vert \begin{array}{lllll} \widehat {\theta}'= \widehat\omega(r)&\!\!\! +\ {\varepsilon}\partial_{\widehat r}V({\theta}_3,r)&\!\!\! +\ {\varepsilon}\,\partial_{\widehat r}W_0({\theta},r)&\!\!\! +\ {\varepsilon}\,\partial_{\widehat r}W_1({\theta},r)&\!\!\! +\ {\varepsilon}^2\,\partial_{\widehat r}W_2({\theta},r)\\[5pt] \widehat r'= &&\!\!\! -\ {\varepsilon}\,\partial_{\widehat {\theta}}W_0({\theta},r)&\!\!\! -\ {\varepsilon}\,\partial_{\widehat {\theta}}W_1({\theta},r)&\!\!\! -\ {\varepsilon}^2\,\partial_{\widehat {\theta}}W_2({\theta},r)\\[5pt] {\theta}_3'= \omega_3(r)&\!\!\! +\ {\varepsilon}\partial_{r_3}V({\theta}_3,r)&\!\!\! +\ {\varepsilon}\,\partial_{r_3}W_0({\theta},r)&\!\!\! +\ {\varepsilon}\,\partial_{r_3}W_1({\theta},r)&\!\!\! +\ {\varepsilon}^2\,\partial_{r_3}W_2({\theta},r)\\[5pt] r_3'= &\!\!\! -\ {\varepsilon}\partial_{{\theta}_3}V({\theta}_3,r)&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}W_0({\theta},r)&\!\!\! -\ {\varepsilon}\,\partial_{{\theta}_3}W_1({\theta},r)&\!\!\! -\ {\varepsilon}^2\,\partial_{{\theta}_3}W_2({\theta},r).\\ \end{array} \right. \end{equation} We fix $r^0\in {\Gamma}^*$ and we localize the study to the domain \begin{equation} {\mathsf D}_{\varepsilon}=\Big\{({\theta},r)\in{\mathbb A}^3\mid \norm{r-r^0}\leq \alpha\sqrt{\varepsilon},\ \abs{{\theta}_3-{\theta}_3^*(\widehat r^0)}\leq\sqrt\delta\Big\} \end{equation} where $\alpha>0$ is a fixed constant. We perform the conformally symplectic change of variables \begin{equation} \varphi=\gamma{\theta},\qquad r-r^0=\gamma^{-1}\sqrt{\varepsilon}{\bf r}, \end{equation} and we set \begin{equation} {\boldsymbol \ph}_3=\varphi_3-\varphi^*_3(\widehat I). 
\end{equation} This yields the new system \begin{equation}\label{eq:vectfield3} \left\vert \begin{array}{lccll} \dot{\widehat \varphi}= \gamma\displaystyle\frac{\widehat\omega(r^0)}{\sqrt{\varepsilon}}+ \gamma D\widehat\omega(r^0)({\bf r})&+& \sqrt{\varepsilon}\, G_{\widehat {\theta}}(\varphi,{\bf r})\\[5pt] \dot{\widehat{\bf r}}=0 &+& \delta\, G_{\widehat r}(\varphi,{\bf r})\\[5pt] \dot{\boldsymbol \ph}_3= a(\widehat r ){\bf r}_3&+& \sqrt{\varepsilon}\, G_{\widehat {\theta}}(\varphi,{\bf r})\\[5pt] \dot {\bf r}_3=b(\widehat r){\boldsymbol \ph}_3&+& \delta\, G_{\widehat r}(\varphi,{\bf r}).\\[5pt] \end{array} \right. \end{equation} with \begin{equation} a(\widehat r)=\gamma\partial_{r_3}\omega_3\big(\widehat r,r^*_3(\widehat r)\big),\qquad b(\widehat r)=-\gamma^{-1}\partial^2_{{\theta}_3^2}V({\theta}_3,\ell(\widehat r)). \end{equation} To diagonalize the hyperbolic part, we set \begin{equation} u={\zeta}(\widehat r){\bf r}_3+{\zeta}(\widehat r)^{-1}{\boldsymbol \ph}_3,\qquad s={\zeta}(\widehat r){\bf r}_3-{\zeta}(\widehat r)^{-1}{\boldsymbol \ph}_3,\qquad {\zeta}(\widehat r)=a(\widehat r)^{1/4}b(\widehat r)^{-1/4}. \end{equation} This in turn yields the system \begin{equation}\label{eq:vectfield1} \left\vert \begin{array}{lccll} \dot{\widehat \varphi}= \gamma\displaystyle\frac{\widehat\omega(r^0)}{\sqrt{\varepsilon}}+ \gamma D\widehat\omega(r^0)({\bf r})&+& F_{\widehat \varphi}(\varphi,{\bf r})\\[5pt] \dot{\widehat{\bf r}}=0 &+& F_{\widehat r}(\varphi,{\bf r})\\[5pt] \dot u= \lambda({\bf r}) u&+& F_{u}(\varphi,{\bf r})\\[5pt] \dot s=-\lambda({\bf r})s&+& F_{s}(\varphi,{\bf r}).\\[5pt] \end{array} \right. \end{equation} where $ \lambda({\bf r})=\sqrt{a({\bf r})b({\bf r})} $ and \begin{equation} \norm{\big(F_{\widehat \varphi},F_{\widehat {\bf r}},F_{u},F_{s}\big)}_{C^2}\leq \delta. \end{equation} The main interest of the previous change is that now the time-one map of the unperturbed flow is $C^0$ and $C^1$ bounded by a constant which is {\em independent of ${\varepsilon}$}. By the persistence theorem and the covering argument, this proves that the stable and unstable manifolds of the annulus ${\mathcal A}_{\varepsilon}\cap {\mathsf D}_{\varepsilon}$ admit the equations \begin{equation} {\bf r}_3=R_3^\pm(\widehat\varphi,\widehat {\bf r},\widehat{\boldsymbol \ph}_3),\qquad \norm{R^\pm_3}_{C^2}\leq M\delta. \end{equation} Their characteristic vector fields are therefore $\delta$-small in the $C^1$ topology, which proves the existence of a small constant $\ell$ such that the section \begin{equation} \overline\Delta=\{\abs{{\theta}_3-{\theta}_3^*(\widehat r)} =\ell\} \end{equation} intersects $W^\pm({\mathcal A}_{\varepsilon})$ transversely, and such that the transition diffeomorphisms induced by the characteristic flow are $\delta$-close to the identity in the $C^1$ topology. Setting finally \begin{equation} \Delta=\{{\theta}_3={\tfrac{1}{2}}\} \end{equation} one immediately sees from the system (\ref{eq:vectfield2}) that the transition diffeomorphisms induced by the characteristic flows on $W^\pm({\mathscr A}_{\varepsilon})$ between ${\mathscr A}_{\varepsilon}$ and the section $\Delta$ are also $\delta$-close to the identity in the $C^1$ topology, relatively to the coordinates $(\widehat{\theta},\widehat{\bf r})$, when ${\varepsilon}$ is small enough.
This proves in particular that the image by the transition diffeomorphisms of an essential torus contained in ${\mathscr A}_{\varepsilon}$, which is a graph in the coordinates $(\widehat{\theta},\widehat{\bf r})$, remains a graph over the base ${\mathbb T}^2$ in the previous coordinates, provided that its Lipschitz constant is small enough. But one can choose this constant arbitrarily small by assuming ${\varepsilon}$ small enough, by the usual theorems on twist maps, since the return map on the section ${\Sigma}$ is a perturbation of the integrable twist. \end{proof} \section{From nondegeneracy to cusp-genericity}\label{sec:cuspgen} \setcounter{paraga}{0} \paraga We keep the assumptions and notation of Theorem~I. We select a finite set of simple resonance circles ${\Gamma}_1,\ldots,{\Gamma}_\ell$ at energy ${\bf e}$, whose union contains a broken line of resonance arcs ${\boldsymbol \Gamma}$ with intersections in the open sets $O_i$, as depicted in Section~\ref{sec:TheoremI} of the Introduction. We first prove that Conditions {\bf (S)} hold for each resonance ${\Gamma}_i$ for a residual set of functions $f$ in ${\mathscr S}^\kappa$. \paraga We introduce the averaging operator ${\mathscr I}_k$ along the resonance curve ${\Gamma}_k$. We arbitrarily choose a $C^\infty$ parametrization $\tau:{\mathbb T}\to{\Gamma}_k$ and we fix coordinates $({\theta},r)$ adapted to ${\Gamma}_k$. We define $$ \begin{array}{rll} {\mathscr I}_k : C_b^\kappa({\mathbb A}^3,{\mathbb R})&\longrightarrow& C^\kappa({\mathbb T}\times {\mathbb T},{\mathbb R})\\ f&\longmapsto&g(\varphi,s)=\displaystyle\int_{{\mathbb T}^2} f\big((\widehat {\theta}, \varphi),\tau(s)\big)\,d\widehat {\theta}. \end{array} $$ Then ${\mathscr I}_k$ is clearly linear, continuous and surjective, so it is an open mapping. \paraga By classical Morse theory, the subset ${\mathscr V}\subset C^\kappa({\mathbb T}\times{\mathbb T},{\mathbb R})$ of all functions $V(\varphi,s)$ such that $V(\cdot,s)$ admits a single and nondegenerate global maximum for $s\in{\mathbb T}\setminus B$, where $B$ is a finite subset of ${\mathbb T}$, and exactly two nondegenerate global maxima with ``transverse crossing'' at the points of $B$, is open dense in $C^\kappa({\mathbb T}\times{\mathbb T},{\mathbb R})$. \paraga The inverse image of ${\mathscr V}$ by ${\mathscr I}_k$ is therefore open dense in $C^\kappa({\mathbb A}^3)$ and so is its intersection with the unit sphere ${\mathscr S}^\kappa$, by linearity. This is precisely the set of functions $f\in{\mathscr S}^\kappa$ which satisfy the nondegeneracy assumptions ${\bf (S_1)}$ and $\bf (S_2)$ for ${\Gamma}_k$. One gets the openness and density of the same conditions for $1\leq k\leq \ell$ by finite intersection. \paraga As for condition ${\bf (S_3)}$, we follow a similar method and introduce the averaging operator attached to a given double resonance point $r^0\in {\Gamma}_{k}$, that is, the function $$ \begin{array}{rll} {\mathscr J}: C^\kappa({\mathbb A}^3,{\mathbb R})&\longrightarrow& C^\kappa({\mathbb T}^2,{\mathbb R})\\ f&\longmapsto&g(\overline{\theta})=\displaystyle\int_{\mathbb T} f\big(({\theta}_1,\overline {\theta}),r^0\big)\,d{\theta}_1. \end{array} $$ As above, this is a linear, surjective and open mapping. So, if $T$ is the quadratic part of the $d$--averaged system at $r^0$, the set of functions $f$ such that ${\mathscr J}(f)$ is in the open dense set ${\mathscr U}(T)$ of Theorem~II is open and dense in $C^\kappa({\mathbb A}^3,{\mathbb R})$, and so also in ${\mathscr S}^\kappa$.
Since the set of double resonance points in the broken line which have to be taken into account is finite, Condition ${\bf (S_3)}$ for ${\Gamma}_k$, $1\leq k\leq \ell$, is open and dense in ${\mathscr S}^\kappa$. \paraga We denote by ${\mathscr O}$ the open dense subset of ${\mathscr S}^\kappa$ for which Conditions $\bf(S)$ are satisfied for ${\Gamma}_k$, $1\leq k\leq \ell$. Given $f\in{\mathscr O}$, there exists a largest ${\varepsilon}_0(f)$ which satisfies all the threshold conditions involved in all the constructions of the proof of existence of admissible chains. For $0<{\varepsilon}<{\varepsilon}_0(f)$, the system $h+{\varepsilon} f$ admits an admissible chain located in the $O(\sqrt{\varepsilon})$ neighborhood of ${\mathbb T}^3\times{\boldsymbol \Gamma}$. Given $\delta>0$ small enough (independent of $f$), there exists a largest ${\boldsymbol \varepsilon}_0(f)<{\varepsilon}_0(f)$ such that for $0<{\varepsilon}<{\boldsymbol \varepsilon}_0(f)$ each $O_i$ contains the $\delta$-neighborhood of some essential torus in the chain. \paraga Recall that, for $f$ in the previous open dense subset of ${\mathscr S}^\kappa$, ${\varepsilon}_0(f)$ was defined as the largest positive real number which satisfies all the threshold conditions involved in the constructions of the proof of existence of admissible chains. Since all the thresholds can be chosen lower semicontinuous by construction, ${\varepsilon}_0$ is itself lower semicontinuous. \setcounter{section}{0} \begin{center} {\LARGE B. Hyperbolic properties of classical systems on ${\mathbb A}^2$} \end{center} The proof of Theorem II will necessitate the introduction of a set of explicit nondegeneracy conditions (Conditions $(D)$), under which our result holds true. We will gather these conditions in Section 1, together with some basic definitions and results. The rest of the paper is organized as follows. \begin{itemize} \item In Section 2 we first recall basic facts on the Jacobi-Maupertuis correspondence between classical systems and geodesic flows, and then very classical results on the dynamics of geodesic flows on the torus ${\mathbb T}^2$ (mainly those of Morse and Hedlund), as well as the hyperbolicity theorem of Poincar\'e on globally minimizing orbits on surfaces. Together with Conditions $(D)$, this enables us to prove the existence of (possibly infinite) chains of annuli, starting from the critical energy $e=\overline e$ and asymptotic to $e=+\infty$. \item In Section 3 we prove that under Conditions $(D)$ the previous chains admit a {\em single} annulus asymptotic to $+\infty$. \item In Section 4 we analyze the behavior of the system in the neighborhood of the critical energy $\overline e$, assuming the existence of suitable homoclinic orbits for the hyperbolic fixed point at the critical energy. A Hamiltonian version of the Birkhoff-Smale theorem proves that the previous chains are indeed {\em finite}, with a single annulus asymptotic to the critical energy. Moreover, the same theorem proves the existence of (at least) one singular annulus, together with the existence of a family of heteroclinic orbits between the various objects. At this point, Theorem II will be proved except for the existence of homoclinic orbits for the hyperbolic fixed point and the genericity of the constraints induced by Conditions $(D)$. \item Section 5 is devoted to the proof of the genericity of Conditions $(D)$ and its consequences.
\item In Appendix~\ref{sec:hom} we prove the existence of the homoclinic orbits for the hyperbolic fixed point of (generic) classical systems. \item In Appendix~\ref{sec:proofBS}, we give an extensive proof of the Hamiltonian Birkhoff-Smale theorem. \item Finally we recall in Appendix~\ref{sec:horses} some results on horseshoes in the plane, in a simple and well-adapted form due to Moser. \end{itemize} We tried to make all proofs basic and self-contained, so we certainly do not give the shortest possible (nor the most elegant) ones. Some results are rather well-known, in particular the existence of homoclinic orbits, since the works of Bolotin \cite{Bo78,Bo83,BB03,BR02} (see also \cite{Be00,Mat10} and the results of weak KAM theory). However we need here to make them more precise and, in particular, to analyze the convergence of periodic orbits to the (poly)homoclinic orbits. This is also the case for the Hamiltonian Birkhoff-Smale theorem, for which we need a very accurate formulation. We could have more fully exploited the genericity results on the boundary of the unit ball of the stable norms for classical systems (see \cite{C}) but we chose to limit ourselves to what is strictly necessary in view of giving a complete geometrical description of the diffusion mechanism. \section{Basic definitions and the nondegeneracy conditions}\label{sec:nondeg2} \subsection{Basic definitions} In this section we fix a classical system $C$ of the form (\ref{eq:classham}). \paraga We denote by $\rho:{\mathbb A}^2\to{\mathbb A}^2$ the involution defined by \begin{equation}\label{eq:natinv} \rho({\theta},r)=({\theta},-r), \end{equation} which reverses the symplectic form $\Omega$. Since $C$ is invariant under $\rho$, its solutions arise in opposite pairs $\gamma^*$ and $\gamma^{**}$, which satisfy $\gamma^*(t)=\rho\circ\gamma^{**}(-t)$. \paraga We set $\overline e=\mathop{\rm Max\,}\limits U$ and for $e>\overline e$ we introduce the so-called Jacobi metric induced by $C$ at energy $e$, defined for $v\in T_{\theta}{\mathbb T}^2$ by \begin{equation}\label{eq:riemmet} \abs{v}_e=\big(2(e-U({\theta}))\big)^{{\tfrac{1}{2}}}\norm{v}, \end{equation} where $\norm{\ }$ stands for the norm on ${\mathbb R}^2$ associated with the dual of $T$. \paraga Given a $\tau$--periodic solution of a vector field $X$ on a manifold, the associated {\em characteristic exponents} are the eigenvalues of the derivative of the time-$\tau$ flow $\Phi^{\tau X}$ at any point $m$ of the orbit. In the case where $X$ is Hamiltonian and the periodic solution is nontrivial, $D_m\Phi^{\tau X}$ always admits~$1$ as an eigenvalue, with multiplicity at least $2$ (due to the invariance by time shifts and the preservation of energy). We will say that a nontrivial periodic solution of a Hamiltonian system is {\em nondegenerate} when the multiplicity of the eigenvalue~$1$ is exactly $2$. This amounts to saying that the Poincar\'e map associated with any section does not admit $1$ as an eigenvalue. A periodic solution of a Hamiltonian vector field $X$ is said to be hyperbolic when it is nondegenerate and when moreover the eigenvalues of $D_m\Phi^{\tau X}$ which are different from $1$ are not located on the unit circle (which is equivalent to the usual hyperbolicity of the Poincar\'e return map). \paraga Let $\abs{\ }$ be a Riemannian metric on ${\mathbb T}^2$. The length of a piecewise $C^1$ arc ${\zeta}:[a,b]\to {\mathbb T}^2$ is defined as the integral $ \ell({\zeta})=\int_a^b \abs{{\zeta}'(t)}\,dt.
$ If an arc ${\zeta}$ satisfies ${\zeta}'(t)\neq 0$ for $t\in[a,b]$, one defines a new arc $\xi$ (its arc-length parametrization), defined by $ \xi={\zeta}\circ{\sigma}^{-1} $, where $ {\sigma}(t)=\int_a^t\abs{{\zeta}'(\tau)}\,d\tau. $ Consequently, the domain of $\xi$ is $[0,\ell({\zeta})]$, where $\ell({\zeta})$ is the Riemannian length of the curve~${\zeta}$, and $\abs{\xi'(s)}=1$ for $s\in[0,\ell({\zeta})]$. \paraga Let us recall the so-called Jacobi-Maupertuis principle in the setting of our classical system $C$. We denote by ${\mathscr L}_C:T{\mathbb T}^2\to T^*{\mathbb T}^2$ the Legendre diffeomorphism associated with $C$. We say that a trajectory has energy $e$ when the corresponding orbit is contained in $C^{-1}(e)$. \vskip2mm \noindent{\bf Lemma (Jacobi-Maupertuis).} {\em Let $e>\overline e$ be fixed. Fix a $C^1$ curve ${\zeta}$ on ${\mathbb T}^2$ such that ${\mathscr L}_C({\zeta},{\zeta}')$ takes its values in $C^{-1}(e)$. Then ${\zeta}$ is a trajectory with energy $e$ of $C$ if and only if the arc-length parametrization of ${\zeta}$ relative to the Jacobi metric $\abs{\ }_e$ is a geodesic of this metric. } \vskip2mm So, up to reparametrization, the solutions of the vector field $X^C$ in $C^{-1} (e)$ and those of the geodesic vector field $X_e$ induced by $\abs{\ }_e$ in the unit tangent bundle are in one-to-one correspondence. In particular, the reparametrization of a periodic solution of $X^C$ is a periodic solution of $X_e$ and the characteristic exponents of both solutions are related by the reparametrization. Consequently, both are simultaneously nondegenerate or hyperbolic. \paraga Let $\abs{\ }$ be a Riemannian metric on ${\mathbb T}^2$. We say that a closed curve on ${\mathbb T}^2$ is length-minimizing in some class $c\in H_1({\mathbb T}^2,{\mathbb Z})$ when it belongs to this class and when moreover its length is minimal among the lengths of all closed piecewise $C^1$ curves belonging to $c$. Turning back to our classical system $C$, we say that a periodic solution of $X^C$ with energy $e>\overline e$ is minimizing when its projection on ${\mathbb T}^2$ is length-minimizing in its homology class for the Jacobi metric $\abs{\ }_e$. These definitions can be related to the minimization of the Lagrangian action, see the next section and \cite{DC95,Mat10}. \paraga We canonically identify $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$, so that a primary class $c$ is an indivisible pair of integers (that is, $c=(c_1,c_2)\in{\mathbb Z}^2$ with $c_1\wedge c_2=1$). We define the $c$--averaged potential associated with $U$ as the function \begin{equation}\label{eq:avpot2} U_c(\varphi)=\int_0^1 U\big(\varphi+s\,(c_1,c_2)\big)\,ds \end{equation} where $\varphi$ belongs to the circle ${\mathbb T}^2/T_c$, where $T_c=\{\lambda(c_1,c_2)\ [{\mathbb Z}^2]\mid \lambda\in{\mathbb R}\}$ (note that $T_c$ is also a circle). \paraga One checks that if ${\theta}^0$ is a nondegenerate maximum of $U$, then its lift $O=({\theta}^0,0)$ to the zero section of ${\mathbb A}^2$ is a hyperbolic fixed point for $X^C$, with real eigenvalues. We say that $O$ admits a {\em proper conjugacy neighborhood} when there exists a symplectic coordinate system $(u_1,u_2,s_1,s_2)$ in a neighborhood of $O$, relatively to which $O=(0,0,0,0)$ and the Hamiltonian $C$ takes the normal form \begin{equation} \lambda_1 u_1s_1+\lambda_2 u_2s_2+R(u_1s_1,u_2s_2), \end{equation} with $\lambda_1>\lambda_2>0$, $R(0,0)=0$ and $D_{(0,0)}R=0$.
We moreover require the coordinates to satisfy the equivariance condition \begin{equation}\label{eq:equivariance} \rho(u,s)=(-s,u) \end{equation} where $\rho$ was defined in (\ref{eq:natinv}). We denote by $W_\ell^s=\{u=0\}$ and $W_\ell^u=\{s=0\}$ the local stable and unstable manifolds of $O$. The strongly stable and unstable manifolds of $O$ read, in the previous coordinates, $W^{ss}=\{u=0,\, s_2=0\}$, $W^{uu}=\{s=0,\, u_2=0\}$. We also introduce the subsets $W^{sc}=\{u=0,\, s_1=0\}$, $W^{uc}=\{s=0,\, u_1=0\}$. We will see that these subsets may be intrinsically defined. We define the {\em exceptional set} as the union \begin{equation} {\mathscr E}=W^{ss}\cup W^{sc} \cup W^{uu} \cup W^{uc}. \end{equation} \paraga We introduce the amended Lagrangian $\widetilde L$ associated with $C$: \begin{equation} \widetilde L({\theta},v)={\tfrac{1}{2}} \norm{v}^2-\big(U({\theta})-\mathop{\rm Max\,}\limits U\big),\qquad \forall ({\theta},v)\in T{\mathbb T}^2. \end{equation} Given a solution $\omega$ of $C$ homoclinic to the fixed point $O$, we define its amended action as the integral \begin{equation} \int_{-\infty}^{+\infty} \widetilde L({\zeta}(t),{\zeta}'(t))\,dt \end{equation} where ${\zeta}=\pi\circ\omega$. This integral is immediately proved to be convergent, due to the vanishing of $\widetilde L$ at $O$ and the exponential convergence rate in the neighborhood of $O$. \subsection{The nondegeneracy conditions} In this section we fix a positive definite quadratic form $T$ and, for $U\in C^\kappa({\mathbb T}^2)$, we denote the associated classical system by $C_U({\theta},r)={\tfrac{1}{2}} T(r)+U({\theta})$. We now introduce our main nondegeneracy conditions, {\em which are to be understood as conditions on $U$ only, even if the dynamical behavior of the system $C_U$ is involved}. \paraga {\bf Conditions on the fixed point and its homoclinic orbits.} \begin{itemize} {\em \item {$(D_1)$} The potential function $U$ admits a single global maximum at some ${\theta}^0\in{\mathbb T}^2$, which is nondegenerate. \item $(D_2)$ The fixed point $O=({\theta}^0,0)$ of $C_U$ admits a proper conjugacy neighborhood. \item $(D_3)$ No orbit homoclinic to $O$ intersects the exceptional set ${\mathscr E}$. \item $(D_{4})$ There exists a pair of opposite solutions homoclinic to $O$ whose amended action is smaller than the amended action of the other homoclinic solutions. }\end{itemize} \paraga {\bf Conditions on the periodic solutions.} Here we fix $c\in H_1({\mathbb T}^2,{\mathbb Z})$. \begin{itemize} {\em \item $(D_5(c))$ Each periodic solution of $X^{C_U}$ with energy $>\overline e$ is nondegenerate. \item $(D_6(c))$ There exists a subset $B(c)$ of $]\bar e,+\infty[$ such that \begin{itemize} \item for $e\in \, ]\bar e,+\infty[\,\setminus B(c)$, there exists exactly one length-minimizing periodic trajectory in the class $c$ for the Jacobi metric $\abs{\ }_e$, \item for $e_0\in B(c)$, there exist exactly two length-minimizing periodic trajectories in $c$ for $\abs{\ }_{e_0}$. \end{itemize} \item $(D_7(c))$ Given $e_0\in B(c)$, the lengths $\ell^*(e)$ and $\ell^{**}(e)$ of the continuations of the two length-minimizing periodic orbits at $e_0$ satisfy $ (\ell^*-\ell^{**})'(e_0)\neq0. $ \item $(D_8(c))$ The stable and unstable manifolds of two minimizing periodic solutions of $X^{C_U}$ with the same energy transversely intersect inside their energy level. }\end{itemize} \paraga {\bf Conditions on the averaged potential.} \begin{itemize} {\em \item $(D_9(c))$ The averaged potential $U_c$ admits a single maximum, which is nondegenerate.
}\end{itemize} In the rest of the paper we say for short that $U$ satisfies Conditions $(D_5)-(D_9)$ when $U$ satisfies $(D_5(c))-(D_9(c))$ for each $c\in H_1({\mathbb T}^2,{\mathbb Z})$. \setcounter{paraga}{0} \section{Conditions $(D_5)-(D_8)$ and the chains of annuli}\label{sec:annuli''} In this section we fix a classical system $C$ of class $C^2$ of the form (\ref{eq:classham}), and we prove the existence of {\em possibly infinite} chains of annuli realizing each primary homology class as soon as Conditions $(D_5)-(D_8)$ are satisfied. Only minimizing periodic orbits will be involved in our construction. \paraga Given a discrete subset $B$ of $]\overline e,+\infty[$, the subset $]\overline e,+\infty[\setminus B$ admits a finite or countable family of connected components $(I_k)_{k\in Z}$, where $Z$ is an interval of ${\mathbb Z}$, which can be ordered so that $\mathop{\rm Sup\,}\limits {I_k}=\mathop{\rm Inf\,}\limits I_{k+1}$ when $k,k+1\in Z$. Since $Z$ is obviously unique up to translation, we call $(I_k)_{k\in Z}$ {\em the} family of connected components of $]\overline e,+\infty[\setminus B$. \begin{prop}\label{mainprop} Let $c\in H_1({\mathbb T}^2,{\mathbb Z})$ be fixed and assume that $U$ satisfies Conditions $(D_5(c))-(D_8(c))$. Then the set $B(c)$ is discrete and the system $C$ possesses a chain ${\mathscr A}(c)$ of annuli realizing $c$, defined over the ordered family $(\overline I_{k})_{k\in Z}$, where $(I_k)_{k\in Z}$ is the ordered family of connected components of $]\overline e,+\infty[\setminus B(c)$ and where $B(c)$ was introduced in Condition $(D_6(c))$. Moreover, each annulus satisfies the twist property and each periodic orbit in it admits homoclinic orbits. \end{prop} The closure $\overline I_k$ is to be understood relatively to the space $]\overline e,+\infty[$. \paraga The proof relies on some classical results that we now recall. Let $\abs{\ }$ be a Riemannian metric on ${\mathbb T}^2$ and let us keep the same notation for its lift to ${\mathbb R}^2$. We say that a piecewise $C^1$ curve ${\zeta}:[a,b]\to {\mathbb R}^2$ is {\em length-minimizing} between $a$ and $b$ if its length is minimal among those of all piecewise $C^1$ curves defined on $[a,b]$ and taking the same values as ${\zeta}$ at $a$ and~$b$. We then say that a piecewise $C^1$ curve ${\zeta}:{\mathbb R}\to {\mathbb R}^2$ is {\em fully length-minimizing} when, for any compact interval $[a,b]$, the restriction ${\zeta}_{\vert[a,b]}$ is length-minimizing in the previous sense. Of course length-minimizing curves are geodesics of the metric. \vskip1mm We say that ${\zeta}:{\mathbb R}\to{\mathbb R}^2$ is ${\mathbb Z}^2$--periodic with period $T>0$ when there exists $m\in{\mathbb Z}^2$ such that $$ {\zeta}(t+T)= \tau_m\circ {\zeta}(t),\quad \forall t\in{\mathbb R}, $$ where $\tau_m$ is the translation of the vector $m$ in ${\mathbb R}^2$. In this case we say that $m$ is a rotation vector for ${\zeta}$. The period and rotation vector are not unique, but one obviously has ``minimal'' ones. \vskip2mm A classical theorem of Morse proves the existence of fully minimizing ${\mathbb Z}^2$--periodic geodesics for any (minimal) rotation vector $m\neq0$. It is not difficult to prove that such a geodesic is a graph over the line ${\mathbb R}.m$ (relatively to a suitable coordinate system). Moreover, one proves that two fully minimizing ${\mathbb Z}^2$--periodic geodesics with the same rotation vector are either disjoint or equal.
Therefore, two such disjoint geodesics form the boundary of a well-defined {\em strip} in ${\mathbb R}^2$. The following result will be crucial in our construction. \vskip3mm \noindent{\bf Theorem (Hedlund).} {\em Assume that the strip defined by two fully minimizing ${\mathbb Z}^2$--periodic geodesics ${\zeta}_1$ and ${\zeta}_2$ with the same minimal rotation vector does not contain any other ${\mathbb Z}^2$--periodic fully minimizing geodesic. Then there exist a fully minimizing geodesic which is $\alpha$--asymptotic to ${\zeta}_1$ and $\omega$--asymptotic to ${\zeta}_2$, and a fully minimizing geodesic which is $\alpha$--asymptotic to ${\zeta}_2$ and $\omega$--asymptotic to ${\zeta}_1$. } \vskip3mm See for instance \cite{Ba} for a complete proof. The following result for geodesic systems on ${\mathbb T}^2$ is also classical and easily deduced from \cite{Ba}. \vskip3mm \noindent{\bf Lemma.} \label{prop:equivmin} {\em Let ${\zeta}:{\mathbb R}\to{\mathbb R}^2$ be a piecewise $C^1$ ${\mathbb Z}^2$--periodic curve with period $T$ and let $\pi:{\mathbb R}^2\to{\mathbb T}^2$ be the canonical projection. Then the following three properties are equivalent. \begin{itemize} \item The restriction ${\zeta}_{\vert[0,T]}$ is length-minimizing between $0$ and $T$. \item The curve ${\zeta}$ is fully minimizing. \item The projection $\pi\circ{\zeta}$ is length-minimizing in its homology class (see the previous section). \end{itemize} } Let us finally state in our setting the hyperbolicity theorem of Poincar\'e, whose proof relies on the previous statement (see \cite{P} (vol. 3) for the original one). Recall that we say that a periodic solution of $X^C$ with energy $e>\overline e$ is minimizing when its projection on ${\mathbb T}^2$ is length-minimizing in its homology class. \vskip3mm \noindent{\bf Theorem (Poincar\'e).} {\em Let $C$ be a classical system of the form (\ref{eq:classham}). A minimizing periodic solution with energy $e>\overline e$ which is nondegenerate is hyperbolic.} \vskip3mm \paraga {\bf Proof of Proposition \ref{mainprop}.} We first prove the existence of the chain, and we then study the twist property of the annuli. The fact that $B(c)$ is discrete is an immediate consequence of $(D_6(c))$ and $(D_7(c))$. Let $I=\,]e_0,e_1[$ be a component of $]\overline e,+\infty[\,\setminus B(c)$, so that $e_1\in B(c)$ and either $e_0\in B(c)$ or $e_0=\overline e$. \vskip1mm Assume first that $e_0 >\overline e$. For each energy $e\in I$, the Jacobi metric $\abs{\ }_e$ admits a single length-minimizing periodic geodesic $\xi_e$ in the class $c$. The Jacobi-Maupertuis lemma associates with this geodesic a unique reparametrized periodic solution $\gamma_e$ of $X^C$, with energy $e$, which is minimizing by definition. Since it is nondegenerate by Condition $(D_5(c))$, it is hyperbolic by the Poincar\'e theorem. This and the uniqueness of $\xi_e$ prove that $(\gamma_e)_{e\in I}$ is a differentiable family. The Poincar\'e theorem still applies to each of the two minimizing solutions which exist at the limit point $e_1$. The same is true for $e_0$ since $e_0\neq \overline e$. By uniqueness and hyperbolicity, this proves that the previous family $(\gamma_e)_{e\in I}$ can be differentiably continued over a slightly larger interval $\widehat I\supset \overline I=[e_0,e_1]$. This proves that the union ${\mathsf A}$ of the orbits of the solutions $\gamma_e$, $e\in \overline I$, is an annulus defined over $\overline I$ and realizing $c$.
\vskip1mm In the case where $e_0=\overline e$, the proof is even simpler since $e_0$ no longer belongs to the interval over which the annulus is defined. So the above reasoning proves that ${\mathsf A}$ is an annulus over $]\overline e,e_1]$. \vskip1mm This way one gets a family $({\mathsf A}_k)_{k\in Z}$ of annuli defined over the ordered intervals $(\overline I_k)_{k\in Z}$. Consider two intervals $I_k$ and $I_{k+1}$ of $]\overline e,+\infty[\setminus B(c)$, with $e_0=\mathop{\rm Sup\,}\limits I_k=\mathop{\rm Inf\,}\limits I_{k+1}$, so $e_0\in B(c)$. Let ${\Gamma}_{e_0}^k$ and ${\Gamma}_{e_0}^{k+1}$ be the minimizing periodic orbits at $e_0$. By the Hedlund theorem applied to the associated geodesic solutions as above, there exist in the level $C^{-1} (e_0)$ a heteroclinic orbit connecting ${\Gamma}^k_{e_0}$ and ${\Gamma}^{k+1}_{e_0}$ and another one connecting ${\Gamma}^{k+1}_{e_0}$ and ${\Gamma}^k_{e_0}$. By Condition $(D_8(c))$ the invariant manifolds of ${\Gamma}^k_{e_0}$ and ${\Gamma}^{k+1}_{e_0}$ transversely intersect along these solutions. This proves that the family ${\mathscr A}(c):=({\mathsf A}_k)_{k\in Z}$ is a chain of annuli. \vskip1mm Hedlund's theorem applied at energies in the interior of $I_k$ also proves that the corresponding periodic solutions admit at least two homoclinic solutions. \vskip1mm It only remains to prove that each ${\mathsf A}_k$ satisfies the twist property, and for this we will use some basic facts from Mather's variational theory. By Mather's graph property, one easily sees that when $e\in I_k^\circ$, the unique minimizing periodic orbit ${\Gamma}(e)$ realizing $c$ coincides with the Mather set ${\mathscr M}_\omega$, where $\omega\in H^1({\mathbb T}^2)$ belongs to the subderivative of the Mather $\beta$ function at $\rho=c/T(e)$, where $T(e)$ is the period of ${\Gamma}(e)$. Moreover, $e=\alpha(\omega)$, where $\alpha$ stands for the Mather $\alpha$ function (see \cite{DC95,Fa09,Mat10}). For each $T\in \{T(e)\mid e\in I_k^\circ\}$, fix $\omega(T)$ in the subderivative of $\beta$ at $\rho=c/T$. Then $$ \alpha\big(\omega(T(e))\big)=e, $$ which shows that $e\mapsto T(e)$ is injective. Since we already know that $T$ is continuous, this proves that $T$ is monotone, which concludes the proof. $\square$ \vskip3mm Note that the Hedlund theorem in fact provides us with more homoclinic or heteroclinic connections than what is required in our definitions. \section{Condition $(D_9)$ and the high energy annuli}\label{sec:highann} In this section we prove that, under the additional condition $(D_9(c))$, the set $B(c)$ of Condition $(D_6(c))$ is bounded above. \begin{prop}\label{prop:highen} Let $C$ be a classical system of the form (\ref{eq:classham}), with $U\in C^2({\mathbb T}^2)$. Fix $c\in H_1({\mathbb T}^2,{\mathbb Z})$. Assume that Condition $(D_9(c))$ is satisfied. Then there is $e(c)\geq\overline e$ such that for $e > e(c)$, there exists a unique length-minimizing geodesic in the class $c$ for the Jacobi metric $\abs{\ }_e$. Moreover, the corresponding solution of $X^C$ is hyperbolic in $C^{-1}(e)$. \end{prop} Note that we do not assume that Conditions $(D_4(c))-(D_8(c))$ are satisfied in the previous proposition. Our proof here is perturbative rather than based on minimizing arguments. Indeed, we will crucially use the fact that when the energy $e$ is large enough, the system $C$ can be viewed as a perturbation of the integrable system ${\tfrac{1}{2}} T(r)$.
We in fact essentially reprove the Poincar\'e theorem on the destruction of resonant tori and the birth of hyperbolic periodic orbits, paying moreover attention to their minimizing properties. \begin{proof} We canonically identify $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$. Using a standard linear symplectic change of variable, one can assume that $c=(1,0)$. \vskip2mm $\bullet$ Let $M=(m_{ij})$ be the matrix of $T$, so that the frequency map associated with ${\tfrac{1}{2}} T(r)$ is $\varpi(r)=M r$. We will examine the properties of $C$ in the neighborhood of the resonance ${\mathscr R}$ of equation $\varpi_2(r)=0$, that is, the line $$ m_{12}r_1+m_{22}r_2=0 $$ in the action space. Note that $m_{22}\neq0$, so that $r_1$ is a natural parameter on ${\mathscr R}$ and \begin{equation}\label{eq:energy} C({\theta},r)\sim_{\abs{r_1}\to\infty} \alpha r_1^2,\qquad \alpha>0, \end{equation} in a small enough neighborhood of ${\mathscr R}$. The averaged potential at a point $r^0\in{\mathscr R}$ is the function \begin{equation}\label{eq:avclasspot} U_c({\theta}_2)=\int_{\mathbb T} U({\theta}_1,{\theta}_2)\,d{\theta}_1. \end{equation} Note that $U_c$ is independent of $r^0$. By $(D_9(c))$, $U_c$ admits a nondegenerate maximum at some point ${\theta}_2^*\in{\mathbb T}$. \vskip2mm $\bullet$ We fix $r^0=(r_1^0,r_2^0)\in{\mathscr R}$ and introduce the homological equation $$ \varpi(r^0)\,\partial_{\theta} S({\theta})=\big(U({\theta})-U_c({\theta}_2)\big), $$ whose solution is immediate: up to constants $$ S({\theta})=\frac{1}{\varpi_1(r^0)}\int_0^{{\theta}_1}\big(U(s,{\theta}_2)-U_c({\theta}_2)\big)\,ds. $$ Therefore $$ \norm{S}_{C^{\kappa}({\mathbb T}^2)}\leq \frac{2\norm{U}_{C^\kappa({\mathbb T}^2)}}{\abs{\varpi_1(r^0)}}\leq \frac{\mu_0}{\abs{r^0_1}}, $$ for a suitable constant $\mu_0>0$. \vskip2mm $\bullet$ We perform the symplectic change $$ \Phi({\theta},r)=\big({\theta},\ r^0+r-\partial_{{\theta}} S({\theta})\big), $$ so that: $$ C\circ\Phi({\theta},r)=e^0+\varpi_1(r^0)\, r_1+{\tfrac{1}{2}} T(r) +U_c({\theta}_2)+ R({\theta},r), $$ with $e^0={\tfrac{1}{2}} T(r^0)$ and $$ R({\theta},r)={\tfrac{1}{2}} T\big(\partial_{\theta} S({\theta})\big) -\varpi(r)\partial_{\theta} S({\theta}). $$ Given $\rho>0$ large enough, to be chosen below, one therefore gets \begin{equation}\label{eq:remainder} \norm{R}_{C^1({\mathbb T}^2\times \overline B(0,\rho))}\leq \frac{\mu_1}{\abs{r_1^0}}, \end{equation} for a suitable $\mu_1>0$ (which depends on $\rho$ but not on $r^0$). \vskip2mm $\bullet$ The Hamiltonian vector field generated by $C\circ\Phi$ reads \begin{equation}\label{eq:vectfield1'} \left\vert \begin{array}{llll} \dot {\theta}_1=\varpi_1(r^0) \ \ + & [m_{11}r_1+m_{12}r_2] &+ &\partial_{r_1} R({\theta},r) \\ \dot {\theta}_2= &[m_{12}r_1+m_{22}r_2] &+&\partial_{r_2} R({\theta},r) \\ \dot r_1= & &-& \partial_{{\theta}_1} R({\theta},r) \\ \dot r_2= -U'_c({\theta}_2)& &-&\partial_{{\theta}_2} R({\theta},r).\\ \end{array} \right. \end{equation} Let $\ell$ be the integer part of $\varpi_1(r^0)$. To avoid the classical degeneracy problem when $\abs{r_1}\to\infty$, we will consider this system as defined on the covering $\widehat {\mathbb A}^2$ of ${\mathbb A}^2$ induced by the covering ${\mathbb R}/(\ell{\mathbb Z})\to{\mathbb R}/{\mathbb Z}$ for the first factor, so that we consider the various functions as being $\ell$-periodic with respect to ${\theta}_1$. We denote by $\widehat {\mathbb T}^2$ the coresponding covering of ${\mathbb T}^2$. 
\vskip2mm $\bullet$ Let $\Phi_0$ denote the flow of the unperturbed vector field obtained by setting $R\equiv0$. Setting $r_2^*=\frac{m_{12}}{m_{22}} r_1+r_2$, the unperturbed vector field reads \begin{equation}\label{eq:vectfield2'} \left\vert \begin{array}{llll} \dot {\theta}_1=\varpi_1(r^0) + [m^*_{11}r_1+m_{12}r^*_2]\\ \dot r_1= 0\\ \dot {\theta}_2=m_{22}r_2^*\\ \dot r_2^*= -U'_c({\theta}_2),\\ \end{array} \right. \end{equation} with $m_{11}^*=m_{11}-\frac{m^2_{12}}{m_{22}}$. The last two equations are induced by the ``pendulum-like'' Hamiltonian $$ H({\theta}_2,r_2^*)={\tfrac{1}{2}} m_{22} (r_2^*)^2+U_c({\theta}_2) $$ for the usual symplectic structure, and therefore immediately integrated. The complete integration easily follows. In particular, the hyperbolic fixed point $({\theta}_2^*,0)$ for $X^H$ gives rise to a family of hyperbolic periodic solutions of (\ref{eq:vectfield2'}), parametrized by the energy. \vskip2mm $\bullet$ On the compact set $[0,2]\times {\mathbb T}^2\times \overline B(0,\rho)$, by (\ref{eq:remainder}), the flow $\Phi$ of the system (\ref{eq:vectfield1'}) clearly satisfies \begin{equation}\label{eq:approxflow} \norm{\Phi-\Phi_0}_{C^1}\leq \frac{\mu_2}{\abs{r_1^0}} \end{equation} for a suitable $\mu_2>0$. We assume that $\rho$ is small enough so that the derivative $\dot{\theta}_1$ in~(\ref{eq:vectfield1'}) satisfies $\mabs{\dot {\theta}_1} \geq {\tfrac{1}{2}} \abs{\varpi_1(r^0)}>0$ on the set $\widehat {\mathbb T}^2\times \overline B(0,\rho)$. Therefore the $2$-dimensional submanifold $$ {\Sigma}=\{{\theta}_1=0\}\cap (C\circ\Phi)^{-1}(\{e^0\}) $$ is a symplectic section for the flow of (\ref{eq:vectfield1'}), on which the coordinates $({\theta}_2,r^*_2)$ define a chart. By (\ref{eq:approxflow}), it is easy to check that the associated return map to ${\Sigma}$ takes the form $$ P({\theta}_2,r^*_2)=P_0({\theta}_2,r^*_2)+\overline P({\theta}_2,r^*_2) $$ where $P_0$ is the time--$(\ell/\abs{\varpi_1(r^0)})$ flow of the system $H$, and where the remainder term $\overline P$ tends to $0$ in the $C^1$ topology when $\abs{r_1^0}\to\infty$. Now $$ 1-\frac{1}{\abs{\varpi_1(r^0)}}\leq \frac{\ell}{\abs{\varpi_1(r^0)}} \leq 1 $$ so for $r^0_1$ large enough $P$ has a hyperbolic fixed point arbitrarily close to $({\theta}_2^*,0)$. Moreover, this point admits stable and unstable manifolds that are graphs over a subset of the form $\abs{{\theta}_2-{\theta}_2^*}<1$ in the covering ${\mathbb R}^2$ of ${\mathbb A}$. \vskip2mm $\bullet$ Now, as a consequence of the ${\mathbb Z}^2$--periodicity of (\ref{eq:vectfield1'}) and by local uniqueness of the hyperbolic solutions, the periodic orbit $\widehat{\Gamma}$ of (\ref{eq:vectfield1'}) on $\widehat {\mathbb A}^2$ through this hyperbolic fixed point is an $\ell$--covering of a unique periodic orbit ${\Gamma}$ for the system on ${\mathbb A}^2$, and this is also the case for the previous stable and unstable manifolds. From their graph property, we deduce that suitable parts of them are graphs of weak KAM solutions of the system $C$ and, as a consequence, that ${\Gamma}$ is the corresponding Mather set. So the evenly distributed measure on ${\Gamma}$ minimizes the Lagrangian action and by \cite{DC95}, its time reparametrization minimizes the action of the Jacobi metric at energy $e^0$. This proves that the projection of ${\Gamma}$ on ${\mathbb T}^2$ minimizes the $\abs{\ }_{e^0}$ length. The definition of a Mather set also proves the uniqueness of the minimizing solution. \vskip2mm $\bullet$ The previous results hold true as soon as $\abs{r^0}$ is large enough.
Our claim then easily follows from (\ref{eq:energy}). \end{proof} \begin{cor}\label{cor:highen} With the same assumptions as in Proposition~\ref{prop:highen}, suppose now that Conditions $(D_4(c))-(D_8(c))$ are satisfied. Then $B(c)$ is bounded above. \end{cor} \begin{proof} This is an immediate consequence of Proposition~\ref{prop:highen}, since $e(c)$ is an upper bound for $B(c)$. \end{proof} Note that in general $e(c)>\mathop{\rm Sup\,}\limits B(c)$. \section{The low-energy annuli}\label{sec:lowann} In this section we take for granted two technical results on the existence of polyhomoclinic orbits and horseshoes, to be proved in Appendix~\ref{sec:hom} and Appendix~\ref{sec:proofBS}. We fix a potential $U$ satisfying Conditions $(D)$. \vskip1mm Without loss of generality, we assume $\overline e=\mathop{\rm Max\,}\limits U=0$. \subsection{The horseshoes in the neighborhood of the critical energy} In this section we state two results to be proved in Appendix~\ref{sec:proofBS}, which constitute our version of the Hamiltonian Birkhoff-Smale theorem. By Condition $(D_2)$, there exists a symplectic coordinate system $(u_1,u_2,s_1,s_2)$ in a neighborhood of $O$, in which the Hamiltonian $C$ takes the normal form \begin{equation}\label{eq:normform4} \lambda_1 u_1s_1+\lambda_2 u_2s_2+R(u_1s_1,u_2s_2), \end{equation} where the remainder $R$ is flat at order $1$ at $0$. The coordinates moreover satisfy the equivariance condition (\ref{eq:equivariance}). We still denote by $C$ the classical system in the normalizing coordinates. \paraga We first have to introduce particular sections in the neighborhood of $O$. We refer to Appendix~\ref{sec:proofBS} for more information on their definition and construction. We denote by $B({\varepsilon})$ the ball of ${\mathbb R}^4$ centered at $0$ with radius ${\varepsilon}$ for the Max norm. For ${\varepsilon}>0$ small enough and for ${\sigma}\in\{-1,+1\}$, we introduce the (pieces of) hyperplanes \begin{equation}\label{eq:sections1} {\Sigma}_{{\sigma}}^u[{\varepsilon}]=\{(u,s)\in \overline B({\varepsilon})\mid u_2={\sigma}{\varepsilon}\},\qquad {\Sigma}_{{\sigma}}^s[{\varepsilon}]=\{(u,s)\in \overline B({\varepsilon})\mid s_2={\sigma}{\varepsilon}\}, \end{equation} which obviously are transverse sections for the flow, due to (\ref{eq:normform4}). We say that ${\sigma}$ is the {\em sign} of the section ${\Sigma}_{{\sigma}}^{u,s}[{\varepsilon}]$. Moreover, the subsets \begin{equation}\label{eq:sections2} {\Sigma}_{{\sigma}}^u[{\varepsilon},e]={\Sigma}_{{\sigma}}^u[{\varepsilon}]\cap C^{-1}(e),\qquad {\Sigma}_{{\sigma}}^s[{\varepsilon},e]={\Sigma}_{{\sigma}}^s[{\varepsilon}]\cap C^{-1}(e) \end{equation} are (for $\abs{e}$ small enough) two--dimensional submanifolds of $C^{-1}(e)$ which are transverse to the flow inside $C^{-1}(e)$. \paraga A {\em polyhomoclinic orbit} for $O$ is a finite ordered family $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ of orbits $\Omega_i$ of $X^C$ which are homoclinic to $O$. Polyhomoclinic solutions are defined in the same way. The order of a polyhomoclinic orbit is to be understood as a cyclic order, so we consider any shifted sequence $(\Omega_k,\ldots,\Omega_\ell,\Omega_1,\ldots,\Omega_{k-1})$ as defining the same polyhomoclinic orbit as $\Omega$, the context being always clear in the following. Note finally that the energy of each homoclinic orbit is $0$.
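\vskip2mm
As a simple illustration, which only uses the symmetry described in Section~\ref{sec:nondeg2}: since $C$ is invariant under the involution $\rho$ of (\ref{eq:natinv}) and $\rho(O)=O$, every orbit $\Omega_1$ homoclinic to $O$ comes with its opposite orbit $\rho(\Omega_1)$ (obtained by reversing time), which is also homoclinic to $O$, so that the pair $\big(\Omega_1,\rho(\Omega_1)\big)$ is a polyhomoclinic orbit. This is in particular the situation singled out by Condition $(D_4)$.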
Given a polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_\ell)$, we will see in Section \ref{sec:proofBS} that the (first and last) intersections $a_i$ and $b_i$ of each $\Omega_i$ with the sections ${\Sigma}^u[{\varepsilon}]$ and ${\Sigma}^s[{\varepsilon}]$ respectively are well defined when ${\varepsilon}$ is small enough. Consequently, with each $\Omega_i$ are associated the entrance sign ${\sigma}_{ent}(\Omega_i)$ (that is, the sign ${\sigma}$ such that $a_i\in{\Sigma}^u_{\sigma}[{\varepsilon}]$) and the exit sign ${\sigma}_{ex}(\Omega_i)$ (that is, the sign ${\sigma}$ such that $b_i\in{\Sigma}^s_{\sigma}[{\varepsilon}]$). Such an ${\varepsilon}$ will be said to be {\em adapted} to $\Omega$. Let us now introduce a central definition. \begin{Def}\label{def:compa} We say that a polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ is {\em compatible} when, for $1\leq i\leq\ell$: $$ {\sigma}_{ent}(\Omega_i)={\sigma}_{ex}(\Omega_{i+1}), $$ with the usual convention $\ell+1=1$. \end{Def} Given such a compatible $\Omega$, the associated ordered sequence of signs $({\sigma}_{ex}(\Omega_i))_{1\leq i\leq \ell}$ therefore completely characterizes the entrance and exit data of the sequence $(\Omega_i)$. \paraga Given a polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_{\ell})$, the Birkhoff-Smale theorem states the existence of a family of horseshoes for the Poincar\'e map induced by the flow. The basic rectangles of the horseshoes are contained in the sections ${\Sigma}_{{\sigma}}^u[{\varepsilon},e]$ (and so are two-dimensional) and the family is therefore parametrized by the energy. The rectangles are located around the exit points of the homoclinic orbits, so that each homoclinic orbit gives rise to such a rectangle. It turns out that the behavior of the horseshoes crucially depends on the sign of the energy; we will therefore state two different results, for which we refer to the Appendix for the basic notions. \begin{thm}\label{thm:hypdyn1} Fix a compatible polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_{\ell})$, a {\em small enough} adapted ${\varepsilon}>0$, and denote by $a_i$ the exit point of $\Omega_i$ relatively to the section ${\Sigma}^u[{\varepsilon}]$. Then, the following properties hold true. \begin{enumerate} \item There exists $e_0>0$ and, for $1\leq i\leq \ell$, there exist neighborhoods ${\mathcal R}_{i}$ in ${\Sigma}^u[{\varepsilon}]$ of the exit points $a_{i}$, such that for $\abs{e}\leq e_0$, the intersections $$ R_{i}(e)={\mathcal R}_{i}\cap C^{-1}(e) $$ are rectangles in the section ${\Sigma}^u[{\varepsilon},e]$ (relatively to suitable coordinates). \item For $e\in\,]0,e_0[$, the family $\big(R_{i}(e)\big)_{1\leq i\leq \ell}$ is a horseshoe for the Poincar\'e map $\Phi$ associated with the section ${\Sigma}^u[{\varepsilon},e]$ in $C^{-1}(e)$, whose transition matrix $A=\big(\alpha(i,j)\big)$ satisfies \begin{equation}\label{eq:transmat} \alpha(i,j)=1\quad\textit{when}\quad {\sigma}_{ent}(\Omega_i)={\sigma}_{ex}(\Omega_{j}). \end{equation} In particular, since $\Omega$ is compatible: \begin{equation}\label{eq:transmat1} \alpha(i,i+1)=1\quad\textit{for}\quad1\leq i\leq {\ell}. \end{equation} \item As a consequence, for $e\in\,]0,e_0[$, the periodic coding sequence $$ \cdots(1,2,\ldots,\ell)\cdots $$ is admissible and yields a hyperbolic periodic point $m(e)$ in $R_1(e)$ for the Poincar\'e return map. Let $\Phi_{out}$ be the Poincar\'e map between ${\Sigma}^u$ and ${\Sigma}^s$ along the homoclinic orbit $\Omega_1$.
Then, when $e\to 0$: \begin{itemize} \item the unstable manifold of $m(e)$ converges to ${\Sigma}^u[{\varepsilon},0]\cap W^u(O)$, \item the stable manifold of $m(e)$ converges to $\Phi_{out}^{-1}\big({\Sigma}^s[{\varepsilon},0]\cap W^s(O)\big)$, \end{itemize} in the $C^1$ compact-open topology. \end{enumerate} \end{thm} Again, we used the convention $\ell+1=1$ in the previous statement on the transition matrix. Our second result focuses on the case where the polyhomoclinic orbit is the concatenation of two compatible homoclinic orbits with different exit signs, so that it is no longer compatible. \begin{thm}\label{thm:hypdyn2} Fix two compatible homoclinic orbits $\Omega_0$ and $\Omega_1$ and a {\em small enough} adapted ${\varepsilon}$, and assume that \begin{equation}\label{eq:condsign} {\sigma}_{ex}(\Omega_0)\neq {\sigma}_{ex}(\Omega_1). \end{equation} Then, denoting by $a_\nu$ the exit point of the homoclinic orbit $\Omega_\nu$, there exist $e_0<0$ and two neighborhoods ${\mathcal R}_0,{\mathcal R}_1$ in ${\Sigma}^u[{\varepsilon}]$ of $a_0$ and $a_1$ respectively, such that for $e_0\leq e<0$, the intersections $$ R_0(e)={\mathcal R}_0\cap C^{-1}(e) \quad\textit{and}\quad R_1(e)={\mathcal R}_1\cap C^{-1}(e) $$ are rectangles in the section ${\Sigma}^u[{\varepsilon},e]$ (relatively to suitable coordinates). Moreover, the pair $\big(R_0(e),R_1(e)\big)$ is a horseshoe for the Poincar\'e map $\Phi$ associated with the section ${\Sigma}^u[{\varepsilon},e]$ in $C^{-1}(e)$, with transition matrix $$ A=\left[ \begin{array}{lll} 0&1\\ 1&0\\ \end{array} \right]. $$ The periodic coding sequence $$ \cdots(0,1)\cdots $$ defines a unique hyperbolic periodic point $m(e)$ in the rectangle $R_0(e)$ for $e_0<e<0$. Let $\Phi_{out}$ be the Poincar\'e map induced by the flow between ${\Sigma}^u$ and ${\Sigma}^s$ along the homoclinic orbit $\Omega_0$. Then, when $e\to 0$: \begin{itemize} \item the unstable manifold of $m(e)$ converges to ${\Sigma}^u[{\varepsilon},0]\cap W^u(O)$ \item the stable manifold of $m(e)$ converges to $\Phi_{out}^{-1}\big({\Sigma}^s[{\varepsilon},0]\cap W^s(O)\big)$ \end{itemize} in the $C^1$ topology. \end{thm} The proofs of these theorems are postponed to Section \ref{sec:proofBS}, to which we refer for the coordinates on the sections ${\Sigma}^u$ and ${\Sigma}^s$. The constraints on the size of ${\varepsilon}$ will be made explicit in Section \ref{sec:proofBS}. \subsection{Convergence of periodic orbits to polyhomoclinic orbits}\label{ssec:convergence} We now introduce a specific definition for the convergence of periodic orbits. \begin{Def} \label{def:conv} Let $\Omega=(\Omega_1,\ldots,\Omega_p)$ be a polyhomoclinic orbit and fix ${\varepsilon}>0$ adapted to $\Omega$ as above. We say that a sequence $({\Gamma}_n)_{n\in{{\mathbb N}^*}}$ of periodic orbits of $X^C$ {\em converges to $\Omega$} when \begin{itemize} \item there exists $n_0$ such that, for $n\geq n_0$, $ {\Gamma}_n\cap{\Sigma}^u[{\varepsilon}]=\{a_1^{n},\ldots,a_p^{n}\} \ \ \textit{and}\ \ {\Gamma}_n\cap{\Sigma}^s[{\varepsilon}]=\{b_1^{n},\ldots,b_p^{n}\}, $ with the following cyclic order $$ a_1^{n}<b_1^{n}<a_2^{n}<b_2^{n}<\cdots<a_p^{n}<b_p^{n}, $$ according to the orientation on ${\Gamma}_n$ induced by the flow; \item $\lim_{n\to\infty} a_i^{n}=a_i$ and $\lim_{n\to\infty} b_i^{n}=b_i$, where $a_i$ and $b_i$ are the exit and entrance points of $\Omega_i$. \end{itemize} \end{Def} One easily sees that this definition makes sense since the convergence property is clearly independent of the choice of ${\varepsilon}$ (small enough).
\begin{Def}\label{def:positive} We say that a polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ is {\em positive} when there exists a sequence of {\em minimizing} periodic orbits of the system $C$, with {\em positive energy}, which converges to $\Omega$. \end{Def} One of the main interests of this notion comes from the following result, which will be proved in Section~\ref{sec:proofBS}. \begin{lemma}\label{lem:poscomp} A positive polyhomoclinic orbit is compatible. \end{lemma} We will also need the following result for opposite polyhomoclinic orbits. Recall that given a solution $\gamma$ of $C$, the function $\widehat \gamma : t\mapsto \rho\circ\gamma(-t)$, where $\rho({\theta},r)=({\theta},-r)$, is another solution of $C$, which we call opposite to $\gamma$. We adopt the same terminology and notation for the orbits. \begin{lemma}\label{lem:opp} Let $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ be a positive polyhomoclinic orbit of the system $C$. Then $\widehat\Omega=(\widehat\Omega_1,\ldots,\widehat\Omega_\ell)$ is also a positive polyhomoclinic orbit of $C$. Moreover, for $1\leq i\leq\ell$: $$ {\sigma}_{ex}(\Omega_i)=-{\sigma}_{ex}(\widehat\Omega_{i}). $$ \end{lemma} \begin{proof} This is an immediate consequence of the definition of the sections, the invariance of $C$ and the equivariance property (\ref{eq:equivariance}). \end{proof} The whole construction of the initial annuli in the next section will be based on the previous two theorems and the following results, which will be proved in Section \ref{sec:hom}. \begin{prop}\label{prop:polyhom} Let $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$. Then there exists a {\em positive} polyhomoclinic solution $\omega=(\omega_1,\ldots,\omega_\ell)$ such that the concatenation $$ (\pi\circ\omega_\ell)*\cdots*(\pi\circ\omega_1) $$ realizes the class $c$, where $\pi:{\mathbb A}^2\to {\mathbb T}^2$ is the canonical projection. For each primitive class $c$ we choose once and for all such a polyhomoclinic solution, which we denote by $\omega(c)$, and we write $\Omega(c)$ for the corresponding polyhomoclinic orbit. \end{prop} We conclude this part with a last lemma, whose proof is postponed to Section \ref{sec:hom} and which will be crucial for proving the existence of singular cylinders. \begin{lemma}\label{lem:simppos} Assume that Condition $(D_4)$ is satisfied. Then there exists a simple homoclinic orbit to $O$ which is positive. \end{lemma} Of course, by simple homoclinic orbit we mean here a polyhomoclinic orbit containing a single element. \subsection{Existence of annuli asymptotic to polyhomoclinic orbits} In this part, we prove the existence of ``Birkhoff-Smale'' annuli, which are asymptotic to the polyhomoclinic orbits $\Omega(c)$. \begin{lemma}\label{lem:anasympt} Fix a classical system of the form (\ref{eq:classham}) and assume that Conditions $(D)$ are satisfied. Fix a primitive homology class $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$. Then there exists an annulus ${\mathsf A}_{BS}(c)$ defined over an interval of the form $]0,e_1(c)]$ (recall that $\overline e=0$), which is ``asymptotic'' to the polyhomoclinic orbit $\Omega(c)$ when $e\to 0$, in the sense that $\Omega(c)\subset\overline{{\mathsf A}_{BS}(c)}$. Moreover, ${\mathsf A}_{BS}(c)$ satisfies the transverse homoclinic property and the twist property, and the period of the orbit at energy $e$ on ${\mathsf A}_{BS}(c)$ tends to $+\infty$ when $e\to0$. \end{lemma} Note that the periodic orbits in ${\mathsf A}_{BS}(c)$ need {\em not} be minimizing.
\vskip1mm \begin{proof} We will of course apply Theorem~\ref{thm:hypdyn1} to $\Omega(c):=\big(\Omega_1(c),\ldots,\Omega_{\ell^0}(c)\big)$, but, in order to prove that this annulus admits the transverse homoclinic property, we will need to introduce another polyhomoclinic orbit $\Omega(\widehat c\,):=\big(\Omega_1(\widehat c\,),\ldots,\Omega_{\ell^1}(\widehat c)\big)$, with $\widehat c \in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$, $\widehat c\neq c$, and to apply Theorem~\ref{thm:hypdyn1} to the concatenation $\Omega(c)*\Omega(\widehat c\,)$. For this we need the sign condition \begin{equation}\label{eq:signcond} {\sigma}_{ex}(\Omega_1(c))={\sigma}_{ex}(\Omega_1(\widehat c\,)) \end{equation} to be satisfied. The existence of such an $\omega(\widehat c\,)$ is immediate by Proposition~\ref{prop:polyhom} and Lemma~\ref{lem:opp}. Note that $\widehat c\neq-c$. \vskip1mm${\bullet}$ We set $\Omega(c):=\Omega^0=(\Omega^0_1,\ldots,\Omega^0_{\ell^0})$ and $\Omega(\widehat c\,):=\Omega^1=(\Omega^1_1,\ldots,\Omega^1_{\ell^1})$ and we write $\{1^0,2^0,\ldots,\ell^0,1^1,2^1,\ldots,\ell^1\}$ for the associated set of indices. \vskip1mm${\bullet}$ Note that, by (\ref{eq:signcond}), the polyhomoclinic orbit $$ \Omega^*=\big(\Omega^0_1,\ldots,\Omega^0_{\ell^0},\Omega^1_1,\ldots,\Omega^1_{\ell^1}\big) $$ is compatible, so that one can apply Theorem \ref{thm:hypdyn1}. There exists an energy $e_1(c)>0$ and, for $1^\nu\leq i^\nu\leq \ell^\nu$, there exist neighborhoods ${\mathcal R}^\nu_{i^\nu}$ in ${\Sigma}^u[{\varepsilon}]$ of the exit points $a_{i^\nu}$, such that for $\abs{e}\leq e_1(c)$, the intersections $ R_{i^\nu}^\nu(e)={\mathcal R}_{i^\nu}^\nu\cap C^{-1}(e) $ are rectangles in the section ${\Sigma}^u[{\varepsilon},e]$. These rectangles form a horseshoe for the Poincar\'e map $\Phi$ associated with the section ${\Sigma}^u[{\varepsilon},e]$ in $C^{-1}(e)$, whose transition matrix $A=\big(\alpha(i,j)\big)$ satisfies (\ref{eq:transmat}). \vskip1mm${\bullet}$ For $e\in \,]0,e_1(c)]$ we denote by $m(e)\in R^0_{1^0}(e)$ the periodic point associated with the periodic coding \begin{equation}\label{eq:percoding1} \cdots\,(1^0,2^0,\ldots,\ell^0)\,\cdots \end{equation} which is admissible relatively to $A$ since $\Omega^0$ is compatible. Let ${\Gamma}_e$ be the corresponding periodic orbit for the Hamiltonian flow. We will prove that the union $$ {\mathsf A}_{BS}(c)=\bigcup_{e\in\,]0,e_1(c)]}{\Gamma}_e $$ is an annulus defined over $]0,e_1(c)]$. \vskip1mm${\bullet}$ Note first that ${\Gamma}_e$ is hyperbolic, as $m(e)$ is. One then has to prove that the projection on ${\mathbb T}^2$ of the corresponding solution realizes $c$. For this, the crucial point is that ${\mathcal R}_{i^0}^0$ is a neighborhood of $a_{i^0}^0$ {\em in ${\Sigma}^u$}. Since $\Omega(c)$ is positive, there exists a sequence $(\overline {\Gamma}(e_n))$ of minimizing periodic orbits, with $\overline {\Gamma}(e_n)\subset C^{-1}(e_n)$, which converges to the polyhomoclinic orbit $\Omega^0=\Omega(c)$ (so $e_n\to0$ when $n\to\infty$). So, for $n$ large enough the orbits $\overline{\Gamma}(e_n)$ intersect the section ${\Sigma}^u$ at points $m_i^n\in {\mathcal R}_{i^0}^0$, which are ordered in the following (cyclic) way $$ m_1^n\prec m_2^n\prec\cdots\prec m_{\ell^0}^n. $$ By periodicity, the point $m_1^n$ is in the maximal invariant set defined by the horseshoe and admits the coding (\ref{eq:percoding1}). It therefore coincides with $m(e_n)$ by uniqueness. As a consequence the orbits ${\Gamma}_{e_n}$ and $\overline{\Gamma}(e_n)$ coincide.
Now, since all the orbits in ${\mathsf A}_{BS}(c)$ are homotopic in ${\mathbb A}^2$ and the orbits $\overline{\Gamma}(e_n)$ realize $c$, this proves in particular that the annulus ${\mathsf A}_{BS}(c)$ realizes $c$. Note moreover that each orbit ${\Gamma}(e)$ is homotopic in ${\mathbb A}^2$ to the concatenation $\Omega^0_1*\cdots*\Omega^0_{\ell^0}$. \vskip1mm${\bullet}$ Let us prove the existence of transverse homoclinic orbits for each ${\Gamma}(e)$. In fact, there exists an infinite set of such orbits, which come from the application of Theorem~\ref{thm:hypdyn1} to the polyhomoclinic orbit $\Omega^*$. Given any finite sequence $[a_1,\ldots,a_p]$ which is not a concatenation of the sequence $1^0,2^0,\ldots,\ell^0$ and which is admissible according to the transition matrix $A$, each coding of the form $$ \ldots,1^0,2^0,\ldots,\ell^0, [a_1,\ldots,a_p],1^0,2^0,\ldots,\ell^0,\ldots $$ gives rise to a nontrivial orbit homoclinic to the periodic point $m(e)$. Now sequences such as $[a_1,\ldots,a_p]$ exist due to the presence of symbols from the second polyhomoclinic orbit $\Omega^1$. The resulting homoclinic orbits are obviously transverse inside their energy level by construction of the horseshoe. \vskip1mm${\bullet}$ It only remains to prove the twist property. For this, first remark that the period of ${\Gamma}(e)$ is equivalent to $\ell^0\tau(e)$, where $\tau(e)$ is the transition time between the entrance and exit sections near the fixed point introduced in Theorem~\ref{thm:hypdyn1}. Now, by Lemma~\ref{lem:phiin} one immediately checks that $$ \tau(e)=-\frac{1}{\lambda_2}{\rm Log\,}(e)+\tau_r(e), $$ where $\tau_r$ is $C^1$ bounded. This proves that $\tau'(e)<0$ and that moreover $\tau(e)\to+\infty$ when $e\to0$. Reducing $e_1(c)$ if necessary, this proves our claim for the restricted annulus. \end{proof} \subsection{The singular annulus} In this section we in fact prove the existence of a singular annulus attached to each pair of opposite {\em simple} positive homoclinic orbits. \begin{lemma}\label{lem:singann} Fix a classical system $C$ of the form (\ref{eq:classham}) and assume that Conditions $(D)$ are satisfied. Then there exists a singular annulus ${\mathsf A}^{\bullet}$ for $C$, which admits heteroclinic connections with each initial annulus of the chains of Lemma~\ref{lem:inannhetconn}, which are transverse in their energy levels. Moreover, ${\mathsf A}^{\bullet}$ admits a neighborhood $O$ in ${\mathbb A}^2$ such that there exists a Hamiltonian $C_\circ$, defined on an open set ${\mathscr O}\subset {\mathbb A}^2$ containing $O$, whose Hamiltonian vector field coincides with $X_C$ on $O$, and which admits a normally hyperbolic $2$-dimensional annulus on which its time-one map is a twist map in suitable symplectic coordinates. \end{lemma} \begin{proof} We know by Lemma \ref{lem:simppos} that there exists a simple positive homoclinic orbit $\Omega$, and, by Lemma~\ref{lem:opp}, its opposite orbit $\widehat\Omega$ is also positive and satisfies ${\sigma}_{ex}(\widehat\Omega\,)=-{\sigma}_{ex}(\Omega)$. We denote by $c$ and $-c$ the (necessarily primitive) homology classes corresponding to $\Omega$ and $\widehat \Omega$ respectively. We will apply Theorem~\ref{thm:hypdyn1} (and, more precisely, Lemma~\ref{lem:anasympt}) to {\em each} homoclinic orbit $\Omega$ and $\widehat \Omega$, and Theorem~\ref{thm:hypdyn2} to the pair $(\Omega,\widehat\Omega)$, which obviously satisfies the sign condition of this theorem.
\vskip1mm${\bullet}$ By Lemma~\ref{lem:anasympt}, there exist an interval $I^*=\,]0,e^*]$ and two annuli $$ {\mathsf A}^\pm={\mathsf A}_{BS}(\pm c) $$ realizing $\pm c$ and defined over $I^*$, such that $ \Omega\subset\overline{{\mathsf A}^+},\qquad \widehat\Omega\subset\overline{{\mathsf A}^-}. $ By Theorem~\ref{thm:hypdyn2}, there exists $e^0<0$ such that for $e\in\,]e^0,0[$, the periodic sequence $\cdots(0,1)\cdots$ defines a hyperbolic periodic point $m^0(e)$ for the Poincar\'e map associated with ${\Sigma}[{\varepsilon},e]$. Let ${\Gamma}^0(e)$ be the associated hyperbolic periodic orbit for the flow; then, as in Lemma~\ref{lem:anasympt}, the union $$ {\mathsf A}^0=\bigcup_{e\in\,]e^0,0[} {\Gamma}^0(e) $$ is an annulus, which, by construction, contains the union $\Omega\cup\widehat\Omega$ in its closure. Now we define our singular annulus as the union $$ {\mathsf A}^{\bullet}={\mathsf A}^+\cup {\mathsf A}^-\cup {\mathsf A}^0\cup \Omega\cup\widehat\Omega\cup\{O\}. $$ \vskip1mm${\bullet}$ Let us now prove that ${\mathsf A}^{\bullet}$ is a $C^1$ normally hyperbolic submanifold of ${\mathbb A}^2$. There exist several ways of doing this, the most elegant one being to use the ``block theory'' of \cite{C08}. We only need to exhibit a neighborhood of ${\mathsf A}^{\bullet}$ which satisfies the expansion and contraction conditions of \cite{C08}. For this we can use the (singular) foliation $$ {\mathsf A}^{\bullet}=\bigcup_{e\in[e^0,e^*]}{\mathsf A}^{\bullet}\cap C^{-1}(e) $$ and construct a suitable $3$--dimensional block around each leaf of this foliation, contained in the corresponding energy level and continuously varying with the energy. This amounts to finding stable and unstable bundles for the $C^0$ manifold ${\mathsf A}^{\bullet}$, fibered by the energy, and proving that these bundles are $C^0$. This is obvious for the bundles over the union ${\mathsf A}^+\cup {\mathsf A}^-\cup {\mathsf A}^0$, which is (regularly) foliated by hyperbolic periodic orbits: the stable and unstable bundles are just the unions of the stable and unstable bundles over each orbit, these latter ones being defined as those of the map $\Phi^{TC}$, where $T$ is the period of the corresponding orbit. Now, going back to Theorem~\ref{thm:hypdyn1} applied to the simple homoclinic orbit $\Omega$, denote by $m^+(e)$ the periodic point corresponding to the coding sequence $\cdots (0,0) \cdots$. Therefore $$ m^+(e)={\Sigma}^u[{\varepsilon},e]\cap {\mathsf A}^+. $$ On the other hand $$ m^0(e)={\Sigma}^u[{\varepsilon},e]\cap {\mathsf A}^0. $$ Now $m^+(e)$ and $m^0(e)$ lie at the intersection of their stable and unstable manifolds, and by Theorem~\ref{thm:hypdyn1} and Theorem~\ref{thm:hypdyn2}, when $e\to 0$: $$ W^u(m^+(e))\to {\Sigma}[{\varepsilon},0]\cap W_{loc}^u(O),\qquad W^u(m^0(e))\to {\Sigma}[{\varepsilon},0]\cap W_{loc}^u(O) $$ and, {\em if $\Phi_{out}$ is the Poincar\'e map along $\Omega$ between ${\Sigma}^u[{\varepsilon}]$ and ${\Sigma}^s[{\varepsilon}]$}, then $$ W^s(m^+(e))\to \Phi_{out}^{-1}\big({\Sigma}[{\varepsilon},0]\cap W_{loc}^s(O)\big),\qquad W^s(m^0(e))\to \Phi_{out}^{-1}\big({\Sigma}[{\varepsilon},0]\cap W_{loc}^s(O)\big), $$ (the convergence being understood in the $C^1$ compact open topology). This proves that one can define the stable and unstable bundles along $\Omega$ by continuously extending those of the orbits in ${\mathsf A}^+$ and ${\mathsf A}^0$. The same argument holds for $\widehat \Omega$. As for $O$, of course the energy level becomes singular.
However, observe that, due to the form of the flow on $W^u(O)$ and $W^s(O)$ (see Section \ref{sec:proofBS}), the ``transverse space'' at $O$ (in $C^{-1}(0)$) is necessarily the plane $W$ of equation $$ u_1=0,\ s_1=0, $$ that is, the plane generated by the weak directions. In this plane, the stable direction is of course $u_2=0$, while the unstable one is $s_2=0$. Finally, the strong $\lambda$--lemma (see for instance \cite{D89}) proves that the stable and unstable bundles are continuous at $O$. Now, considering normalized generating vectors for the stable and unstable bundles, one can construct a tubular neighborhood $N_e$ (with small radius $\delta>0$) of each leaf in its energy level. The union of these blocks satisfies the expansion and contraction condition of \cite{C08} for a large enough iterate $\Phi^{\tau C}$, which proves that ${\mathsf A}^{\bullet}$ is a Lipschitz manifold. Finally, one sees from Section~\ref{sec:proofBS} that the ratio between the outer Lipschitz constants and the inner ones is lower bounded by $\lambda_1/\lambda_2-\rho$, where $\rho$ can be made arbitrarily small by taking $\abs{e^0}$ and $e^*$ small enough, which proves by \cite{C08} that ${\mathsf A}^{\bullet}$ is in fact of class $C^{1+\delta}$ for some suitable $\delta>0$. The last assertion on the continuation of ${\mathsf A}^{\bullet}$ comes directly from the possibility of gluing symplectically a disc whose boundary is a periodic orbit with zero homology, and continuing the vector field $X_C$ to this disc in such a way that it is foliated by periodic orbits surrounding an elliptic point. Using the relation energy/period in the neighborhood of the hyperbolic fixed point on the annulus ${\mathsf A}^{\bullet}$, one can moreover control the continuation in such a way that the time-one map of the flow satisfies a twist condition (as in the case of the standard pendulum ${\tfrac{1}{2}} r^2+ a\cos{\theta}$ when $a$ is small enough). Finally, normal hyperbolicity is obtained by taking a trivial product of the disc with a hyperbolic point and smoothing in the neighborhood of the gluing zone. \end{proof} \subsection{Initial annuli and heteroclinic connections} In this section we show how to modify the chains ${\mathscr A}(c)$ obtained in Proposition~\ref{mainprop} in order for them to be finite, with initial annuli admitting heteroclinic connections with the singular annulus. We also show how to continue them to obtain the third statement of Theorem~II. \begin{lemma}\label{lem:inannhetconn} Fix a classical system of the form (\ref{eq:classham}) and assume that Conditions $(D)$ are satisfied. Then: \begin{enumerate} \item for each $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$ there exists a chain ${\bf A}(c)=\big({\mathsf A}_1(c),\ldots,{\mathsf A}_\ell(c)\big)$, where ${\mathsf A}_1(c)$ is defined over $]0,e_1(c)]$ and ${\mathsf A}_\ell(c)$ is defined over $[e_\ell,+\infty[$; \item given $c,c'\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$, there exists ${\sigma}\in\{-1,+1\}$ such that ${\mathsf A}_1(c)$ and ${\mathsf A}_1({\sigma} c')$ satisfy $$ W^u({\mathsf A}_1(c))\cap W^s({\mathsf A}_1({\sigma} c'))\neq \emptyset,\qquad W^s({\mathsf A}_1(c))\cap W^u({\mathsf A}_1({\sigma} c'))\neq \emptyset $$ both intersections being transverse in ${\mathbb A}^2$; \item moreover, for each $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$, ${\mathsf A}_1(c)$ admits transverse heteroclinic connections as above with ${\mathsf A}^{\bullet}$.
\end{enumerate} \end{lemma} \begin{proof} Recall that, given a class $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$, we proved in Proposition~\ref{mainprop} and Corollary~\ref{cor:highen} the existence of a chain ${\mathscr A}(c)=({\mathsf A}_k)_{k\in Z}$ of annuli realizing $c$ and defined over a sequence of consecutive intervals of the form $(I_k)_{k\in Z}$, where $Z$ is an upper bounded interval of ${\mathbb Z}$. \vskip1mm${\bullet}$ If $Z$ is finite we choose ${\bf A}(c)={\mathscr A}(c)$. However, in order to prove our claim on the heteroclinic connections, we have to make precise the relation between the first annulus ${\mathsf A}_1$ of this chain and the ``Birkhoff-Smale'' annulus ${\mathsf A}_{BS}(c)$ of Lemma~\ref{lem:anasympt}. Let $I_1$ and $I_{BS}$ be the intervals associated with ${\mathsf A}_1$ and ${\mathsf A}_{BS}$ respectively. By construction and Condition $(D_6(c))$, there exists a unique minimizing periodic orbit in the class $c$ for each energy $e$ in $I_1$, which is precisely the intersection $\overline{\Gamma}(e)={\mathsf A}_1\cap C^{-1}(e)$. Now, since the polyhomoclinic orbit $\Omega(c)$ is positive, there exists a sequence $(e_n)$ in $I_1$, with $e_n\to 0$ when $n\to\infty$, such that the associated orbits $\overline{\Gamma}(e_n)$ converge to $\Omega(c)$. Therefore, for $n$ large enough, for the same reason as in Lemma~\ref{lem:anasympt} (hyperbolic maximality), the orbit $\overline{\Gamma}(e_n)$ necessarily coincides with the orbit ${\Gamma}(e_n)={\mathsf A}_{BS}(c)\cap C^{-1}(e_n)$. As a consequence ${\mathsf A}_{BS}(c)\cap {\mathsf A}_1\neq\emptyset$. But this intersection is closed in $C^{-1}({\mathbb R}^{*+})$ and it is also open by uniqueness of the continuation of hyperbolic periodic orbits. Therefore both annuli coincide over the intersection $I_1\cap I_{BS}$. \vskip1mm${\bullet}$ Assume now that $Z$ is infinite. In this case, by the same arguments as above, there exists a sequence $n_k\to-\infty$, such that each annulus ${\mathsf A}_{n_k}$ contains a minimizing orbit ${\Gamma}(e_{n_k})$ and the sequence $\big({\Gamma}(e_{n_k})\big)$ converges to $\Omega(c)$. Again, the same arguments as above prove that ${\Gamma}(e_{n_k})\subset{\mathsf A}_{BS}(c)$ for $k\geq k_0$ large enough, and that ${\mathsf A}_{n_k}$ is contained in ${\mathsf A}_{BS}(c)$ for $k\geq k_0$. In this case, we set ${\bf A}(c)=({\mathsf A}'_1,\ldots,{\mathsf A}'_\ell)$, with $$ {\mathsf A}'_1={\mathsf A}_{BS}(c)\cap C^{-1}(]0,{\rm Max\,} I_{n_{k_0}}]), $$ and $$ {\mathsf A}'_2:={\mathsf A}_{k_0+1},\ldots,\ {\mathsf A}'_\ell:={\mathsf A}_{{\rm Max\,} Z}. $$ \vskip1mm${\bullet}$ It remains to prove the existence of heteroclinic connections. Note first that by Lemma \ref{lem:opp}, given two primitive classes $c$ and $c'$, there exists ${\sigma}\in\{-1,+1\}$ such that $$ {\sigma}_{ex}\big(\Omega^0_1\big)={\sigma}_{ex}\big(\Omega^1_1\big), $$ where, as usual, $\Omega(c)=(\Omega^0_1,\ldots,\Omega^0_{\ell^0})$ and $\Omega({\sigma} c')=(\Omega^1_1,\ldots,\Omega^1_{\ell^1})$. We will prove that the initial annuli of ${\bf A}(c)$ and ${\bf A}({\sigma} c')$ admit heteroclinic connections. For this we will apply Theorem \ref{thm:hypdyn1} to the polyhomoclinic orbit $\Omega(c)*\Omega({\sigma} c')$, which is compatible by our choice of ${\sigma}$.
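Indeed, here is the one-line check (a sketch, using only Definition~\ref{def:compa}, Lemma~\ref{lem:poscomp} and the cyclic convention): since $\Omega(c)$ and $\Omega({\sigma} c')$ are positive, each of them is compatible, so that
$$
{\sigma}_{ent}\big(\Omega^0_{\ell^0}\big)={\sigma}_{ex}\big(\Omega^0_1\big),\qquad
{\sigma}_{ent}\big(\Omega^1_{\ell^1}\big)={\sigma}_{ex}\big(\Omega^1_1\big),
$$
and the only conditions required for the concatenation $\big(\Omega^0_1,\ldots,\Omega^0_{\ell^0},\Omega^1_1,\ldots,\Omega^1_{\ell^1}\big)$ to be compatible which are not already contained in the compatibility of each factor are the two junction conditions
$$
{\sigma}_{ent}\big(\Omega^0_{\ell^0}\big)={\sigma}_{ex}\big(\Omega^1_1\big),\qquad
{\sigma}_{ent}\big(\Omega^1_{\ell^1}\big)={\sigma}_{ex}\big(\Omega^0_1\big),
$$
both of which reduce to the equality ${\sigma}_{ex}\big(\Omega^0_1\big)={\sigma}_{ex}\big(\Omega^1_1\big)$ above.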
For $0<e<e_0$, this yields the existence of orbits for the Poincar\'e map with coding sequences of the form \begin{equation}\label{eq:hetcoding} \ldots,(1^0,2^0,\ldots,\ell^0),[a_1,\ldots,a_p],(1^1,2^1,\ldots,\ell^1),\ldots \end{equation} where $[\ell^0,a_1,\ldots,a_p,1^1]$ is any finite sequence admissible relatively to the transition matrix. Such sequences obviously exist (for instance $[\ell^0,1^0,\ldots,\ell^0,1^1]$, thanks to (\ref{eq:transmat1})). Now, for each energy $e\in\,]0,e_0]$, the coding (\ref{eq:hetcoding}) induces a heteroclinic orbit for the Poincar\'e map between the periodic point $m(e)$ with periodic coding $(1^0,2^0,\ldots,\ell^0)$ and the periodic point $m'(e)$ with periodic coding $(1^1,2^1,\ldots,\ell^1)$. Therefore the associated orbits ${\Gamma}(e)$ and ${\Gamma}'(e)$ admit heteroclinic connections at energy $e$. These connections are transverse in their energy level, by construction of the horseshoe. This immediately proves that the initial annuli of the chains ${\bf A}(c)$ and ${\bf A}({\sigma} c')$ admit transverse heteroclinic connections, by our previous construction of these annuli. \vskip1mm${\bullet}$ Our last statement is then obvious, by construction of the singular annulus ${\mathsf A}^{\bullet}$, since it contains ${\mathsf A}_{BS}(\pm c^{\bullet})$ for the corresponding $c^{\bullet}$. \end{proof} \section{The set ${\mathscr U}$ is residual}\label{sec:gen} We fix as usual a positive definite quadratic form $T$ on ${\mathbb R}^2$ and for each $U\in C^\kappa({\mathbb T}^2)$, $\kappa\geq 2$, we denote by $C_U$ the associated classical system on ${\mathbb A}^2$. In this section we complete the proof of Theorem II, that is, we show that the set ${\mathscr U}$ of potentials $U\in C^\kappa({\mathbb T}^2)$ such that $C_U$ satisfies the three items of this theorem is residual in $C^\kappa({\mathbb T}^2)$. The main ingredient of our proof is the parametrized genericity theorem of Abraham, which we recall here in an adapted form for the convenience of the reader. \vskip2mm \noindent {\bf Theorem (\cite{AR}).} {\em Fix $1\leq k<+\infty$. Let ${\mathscr A}$ be a $C^k$ and second-countable Banach manifold. Let $X$ and $Y$ be finite dimensional $C^k$ manifolds. Let $\chi : {\mathscr A}\to C^k(X,Y)$ be a map such that the associated evaluation $$ {\bf ev\,}_\chi: {\mathscr A}\times X\to Y,\qquad {\bf ev\,}_\chi(A,x)=\big(\chi(A)\big)(x) $$ is $C^k$ for the natural structures. Fix a submanifold $\Delta$ of $Y$ such that $$ k>\dim X-{\rm codim\,} \Delta $$ and assume that ${\bf ev\,}_\chi$ is transverse to $\Delta$. Then the set ${\mathscr A}_\Delta$ of $A\in{\mathscr A}$ such that $\chi(A)$ is transverse to $\Delta$ is residual in ${\mathscr A}$.} \vskip2mm As always, the spaces $C^\kappa({\mathbb T}^m)$ are endowed with their usual $C^\kappa$ norms for $1\leq \kappa <\infty$, which make them Banach spaces, and $C^\infty({\mathbb T}^m)$ is equipped with its usual Fr\'echet structure. We will try to use abstract arguments everywhere; however, direct proofs based on explicit constructions would often also be possible. \subsection{Large energies} Let us first recall an easy result. \begin{lemma}\label{lem:dens1} Given $m\geq 1$, the set ${\mathscr M}^\kappa({\mathbb T}^m)$ of functions of $C^\kappa({\mathbb T}^m)$ which admit a unique maximum, which is nondegenerate, is open and dense in $C^\kappa({\mathbb T}^m)$ for $2\leq \kappa\leq +\infty$. \end{lemma} \begin{proof} This is a standard result in Morse theory.
The fact that ${\mathscr M}^\kappa({\mathbb T}^m)$ is open is obvious and its density can easily be proved by adding a suitable small enough $C^\infty$ bump function with a unique nondegenerate maximum to any given function in $C^\kappa({\mathbb T}^m)$. \end{proof} \begin{lemma} Fix $2\leq \kappa\leq +\infty$. Given $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$, the set ${\mathscr U}_9(c)$ of potentials in $C^\kappa({\mathbb T}^2)$ such that Condition $(D_9(c))$ is satisfied is open and dense in $C^\kappa({\mathbb T}^2)$. As a consequence, the set ${\mathscr U}_9$ of potentials in $C^\kappa({\mathbb T}^2)$ such that Condition $(D_9)$ is satisfied is residual in $C^\kappa({\mathbb T}^2)$. \end{lemma} \begin{proof} Up to the standard coordinate change, one can assume that $c\sim (1,0)$ in ${\mathbb Z}^2$. The averaged potential then reads $$ U_c({\theta}_2)=\int_{\mathbb T} U({\theta}_1,{\theta}_2)\,d{\theta}_1. $$ Consider the (obviously well-defined) map ${\mathscr I}_c:C^\kappa({\mathbb T}^2)\to C^\kappa({\mathbb T})$ such that $$ {\mathscr I}_c(U)=U_c. $$ Clearly ${\mathscr I}_c$ is linear and continuous ($\norm{{\mathscr I}_c}\leq 1$). It is also clearly surjective (a given function $U_c$ admits the function $U({\theta}_1,{\theta}_2)=U_c({\theta}_2)$ as a preimage). So by the usual Open Mapping Theorem for Banach spaces the map ${\mathscr I}_c$ is open for $2\leq \kappa <+\infty$. This is still an open mapping for $\kappa=+\infty$, by the Fr\'echet version of the previous theorem (see for instance \cite{Ru}). Now by Lemma \ref{lem:dens1} the subset ${\mathscr M}^\kappa({\mathbb T})\subset C^\kappa({\mathbb T})$ is open and dense, so its inverse image ${\mathscr U}_9(c)={\mathscr I}_c^{-1}({\mathscr M}^\kappa({\mathbb T}))$ is open and dense in $C^\kappa({\mathbb T}^2)$. The second claim is immediate by countable intersection, since $C^\kappa({\mathbb T}^m)$ is complete for the usual norm. \end{proof} \subsection{The neighborhood of the critical energy} In this part we prove that our conditions $(D_1)-(D_4)$ are generic in $C^\infty({\mathbb T}^2)$ and postpone the study of the $C^\kappa$ regularity to the last section, where the global structure of the systems will be used explicitly. \paraga One can even be more general in the case of $(D_1)$. By Lemma \ref{lem:dens1}, the set ${\mathscr U}_1^\kappa:={\mathscr M}^\kappa({\mathbb T}^2)$ of all potentials $U$ such that $(D_1)$ is satisfied is open and dense in $C^\kappa({\mathbb T}^2)$, for $2\leq \kappa\leq +\infty$. \paraga We will now have to use the Sternberg conjugacy theorem and we limit ourselves to $C^\infty$ potentials. Given $U\in {\mathscr U}_1^\infty$, we denote by $O_U$ the hyperbolic fixed point of $C_U$ associated with the maximum ${\theta}^0_U$ of $U$. \begin{lemma} The set ${\mathscr U}_2$ of potentials $U\in{\mathscr U}_1^\infty$ such that $O_U$ admits a proper conjugacy neighborhood is residual in $C^\infty({\mathbb T}^2)$. \end{lemma} \begin{proof} Following \cite{B10}, recall that if $A$ is the matrix of $T$, if $B_U=-\partial^2U({\theta}^0_U)$ and if $$ L_U=\big(A^{-1/2}(A^{1/2}B_U A^{1/2})^{1/2}A^{-1/2}\big)^{1/2}, $$ then the change of variables $\overline u=\frac{1}{\sqrt 2}(Lx+L^{-1} y)$, $\overline s=\frac{1}{\sqrt 2}(Lx-L^{-1} y)$, is symplectic and reduces the quadratic part of the system $C_U$ to the form ${\tfrac{1}{2}} \langle D(U)\overline u, \overline s\rangle$, with \begin{equation}\label{eq:mapD} D(U)=L_UAL_U \in \S.
\end{equation} Here $\langle\,\,,\,\rangle$ stands for the Euclidean scalar product and $\S\subset M_2({\mathbb R})$ is the cone of positive definite symmetric matrices. This shows that the positive eigenvalues of $O_U$ are the eigenvalues of $D(U)$. Finally, one easily checks that the map $D:{\mathscr U}_1^\infty\to \S$ defined by~(\ref{eq:mapD}) is continuous and open. \vskip1mm To apply the Sternberg theorem and get our conjugacy result, we need the system $C_U$ to be formally conjugated to a normal form $$ N(u,s)=\lambda_1u_1s_1+\lambda_2u_2s_2+R(u_1s_1,u_2s_2), $$ where $R$ is $C^1$ flat at $(0,0)$. For this, a sufficient condition is that the positive eigenvalues $\lambda_i$ of $O_U$ satisfy the nonresonance conditions \begin{equation}\label{eq:nonres} \lambda_1k_1+\lambda_2k_2\neq 0,\qquad \forall (k_1,k_2)\in{\mathbb Z}^2\setminus\{(0,0)\}. \end{equation} The subset $\S^*$ of positive symmetric matrices whose eigenvalues satisfy (\ref{eq:nonres}) is clearly residual in $\S$, so that the inverse image ${\mathscr U}_2:=D^{-1}(\S^*)\subset {\mathscr U}_1^\infty$ is also residual in ${\mathscr U}_1^\infty$, by the previous property. \vskip1mm Now, the equivariant symplectic Sternberg theorem (see \cite{C85} for a general exposition and \cite{BK02} for a recent proof in our setting) applies in a neighborhood of $O_U$ for every $U\in {\mathscr U}_2$, and yields a proper conjugacy neighborhood. This proves our statement. \end{proof} Note that the previous result does not hold in finitely differentiable classes, since the minimal regularity of the system in order to get a conjugacy tends to $+\infty$ when the ratio $\lambda_1/\lambda_2$ tends to $1$. \paraga Recall that given $U\in{\mathscr U}_2$, the exceptional set ${\mathscr E}_U\subset W^s(O_U)\cup W^u(O_U)$ attached to $O_U$ is intrinsically defined and continuously depends on $U$. \begin{lemma} The set ${\mathscr U}_3$ of potentials in ${\mathscr U}_2$ such that condition $(D_3)$ is satisfied is residual in $C^\infty({\mathbb T}^2)$. \end{lemma} \begin{proof} We will prove that for each integer $N$, the set ${\mathscr U}_3(N)$ of all $U\in {\mathscr B}^\infty(0,N)\cap {\mathscr U}_2$ such that $(D_3)$ is satisfied for $C_U$ is residual in ${\mathscr B}^\infty(0,N)\cap {\mathscr U}_2$. Our claim easily follows, since $$ \bigcap_{N\in{\mathbb N}}\Big({\mathscr U}_3(N)\cup \big({\mathscr U}_2\setminus \overline {\mathscr B}^\infty(0,N)\big)\Big) $$ is residual in ${\mathscr U}_2$ and $(D_3)$ is satisfied for any $U$ in this subset. \vskip1mm Observe first that given $U\in {\mathscr B}^\infty(0,N)$, there exists a Euclidean disk $B_{\delta(N)}\subset {\mathbb T}^2$ centered at ${\theta}^0_{U}$ with radius $\delta(N)>0$, such that the local stable and unstable manifolds $W_{loc}^\pm(O_U)$ of $O_U$ are graphs over $B_{\delta(N)}$. From now on we fix $N$ and drop it from the notation. \vskip1mm We denote by $W^\pm(U,\delta)$ the parts of $W_{loc}^\pm(O_U)$ located above $B_\delta$. For $n\in{\mathbb N}$, we set $$ W_n^-(U)=\Phi^{C_U}([0,n],W^u(U,\delta)). $$ We will first prove that the set of $U\in {\mathscr B}^\infty(0,N)$ such that $W_n^-(U)$ transversely intersects $W^s(U,\delta)$ and satisfies $W_n^-\cap {\mathscr E}_U =\emptyset$ is open and dense in ${\mathscr B}^\infty(0,N)$. Since the proof of the transversality property is classical, we will only give a sketch of proof; a more detailed version can be found in [LM]. \vskip2mm The following lemma can be easily proved following the lines of [CP].
Here $B(0,r)$ will be the ball in ${\mathbb R}^2$ centered at $0$ with radius $r$ relatively to the Max norm. \begin{lemma}\label{lem:pertlag} Let $(M,\omega)$ be a $4$--dimensional symplectic manifold, let $H\in C^2(M,{\mathbb R})$ and let $L$ be a Lagrangian submanifold contained in some level $H^{-1}(e)$ (not necessarily regular). Assume that $z\in L$ satisfies $X^H(z)\neq0$. Then one can find a neighborhood $N$ of $z$ in $M$ and a symplectic coordinate system $(x,y)\in B(0,2)\times B(0,{\varepsilon})$ on $N$ such that $$ L\cap N=\{y=0\},\qquad z=\big((0,1),(0,0)\big), \qquad X_{\vert L}^H=\frac{\partial}{\partial x_1}. $$ Assume that $L'$ is another Lagrangian manifold contained in $H^{-1}(e)$ such that $z\in L\cap L'$. Let ${\Sigma}=\{x_1=1\}\cap H^{-1}(e)$, so that ${\Sigma}$ is a symplectic section for $X^H$ (assuming $N$ small enough). Then there exists a neighborhood $\widetilde{\Sigma}$ of $z$ in ${\Sigma}$ such that given ${\zeta}\in \widetilde{\Sigma}\cap L'$, there exists a Lagrangian submanifold ${\mathscr L}$ of $M$ such that \begin{itemize} \item ${\mathscr L}\cap N$ is the graph of some function $y=\phi(x)$ over $B(0,2)$; \item ${\mathscr L}\cap L' \cap {\Sigma}=\{{\zeta}\}$; \item ${\mathscr L}\cap {\Sigma}$ and $L'\cap {\Sigma}$ transversely intersect at ${\zeta}$; \item the part of ${\mathscr L}$ over $B(0,2)\setminus B(0,1)$ is contained in $H^{-1}(e)$; \item the part of ${\mathscr L}$ over $\{\abs{x_2}\geq 1\}\cup \{x_1\leq -1\}$ coincides with $L$; \item for $1\leq k\leq +\infty$, given ${\varepsilon}>0$, one can choose $\widetilde {\Sigma}$ small enough so that the previous conditions are realized with $d_{C^k}(0,\phi)<{\varepsilon}$. \end{itemize} \end{lemma} \vskip2mm Let now $\partial W^s(U,\delta)$ be the part of $W^s_{loc}(O_U)$ located over the circle $\partial B_\delta$. By compactness of $\overline{W_n^-}$, for every $z\in \partial W^s(U,\delta)\cap W_n^-$, one easily proves the existence of an arbitrarily small $\tau(z)>0$, continuously depending on $z$, such that, if $z'=\Phi^{C_U}(z,\tau(z))$, $$ (*) \hskip2cm \pi^{-1}\big(\pi(z')\big)\cap \Phi^{C_U}({\mathbb R}^-,z)=\emptyset\hskip2cm $$ (that is, the projection of the semiorbit $\Phi^{C_U}({\mathbb R}^-,z)$ has no self-intersection at $z'$). Moreover, using the classical twist property for Lagrangian spaces along an orbit of $C_U$, one can assume that: \vskip1mm \hskip3cm $(**)$ \hskip1cm the restriction of $\pi$ to $W_n^-$ is regular at $z'$. \vskip2mm Applying this process to each point of $\partial W^s(U,\delta)\cap W_n^-$, one gets a continuous curve ${\sigma}_U\subset W^s(U,\delta)$, surrounding $O_U$, such that every point $z\in{\sigma}_U$ satisfies both properties $(*)$ and $(**)$. \vskip1mm The previous lemma will be applied to the case where $L=W_n^-$, $L'=W^s(U,\delta)$ and where $z$ is a point of ${\sigma}_U\cap W_n^-$. We fix ${\varepsilon}>0$. By compactness of the semiorbit $\Phi^{C_U}({\mathbb R}^-,z)$ and the previous assumption, one can assume the neighborhood $N$ of Lemma~\ref{lem:pertlag} to be small enough so that the restriction of $\pi$ to the subset $\Phi^{C_U}({\mathbb R}^-,W_n^-\cap N)$ is injective and its restriction to $L\cap N$ is regular.
With the previous notation, one immediately checks that one can find a point ${\zeta}\in\widetilde{\Sigma}\cap W^s(U,\delta)$ {\em not contained in the exceptional set ${\mathscr E}_U$} such that the conclusions of the lemma hold true for ${\zeta}$ and the new manifold ${\mathscr L}$, and moreover such that ${\mathscr L}$ is the graph of some function $\psi$ over $\pi(L\cap N)$ with $d_{C^\infty}(0,\psi)<{\varepsilon}$. \vskip1mm We now define a new potential function $\widetilde U$, which coincides with $U$ outside the projection $\pi(B(0,1)\times\{0\})$ (with the notation of the lemma for the coordinates in $N$), and such that for ${\theta}\in\pi\big(B(0,1)\times\{0\}\big)$: $$ \widetilde U({\theta})=\overline e_U -{\tfrac{1}{2}} T(\psi(x)), $$ where $\overline e_U=\mathop{\rm Max\,}\limits U$ is the critical energy for $C_U$ (so that the associated level contains $W^\pm(O_U)$). By construction, ${\mathscr L}\subset C_{\widetilde U}^{-1}(\overline e_U)$, and one immediately checks that the fixed point $O_{\widetilde U}$ of the new system $C_{\widetilde U}$ coincides with $O_U$ and has the same energy. Moreover, the new manifolds $\widetilde W_n^-$ and $\widetilde W_{loc}^+(O_U)$, defined in the same way as above for the new system $C_{\widetilde U}$ and $O_{\widetilde U}$, now transversely intersect at ${\zeta}$ in $C_{\widetilde U}^{-1}(\overline e_U)$. \vskip1mm Taking Lemma \ref{lem:pertlag} into account, this yields a neighborhood $N_z$ of $z$ inside which the intersection of $\widetilde W_n^-$ and $\widetilde W_{loc}^+(O_U)$ is transverse. The main remark is that the size of $N_z$ is by construction independent of ${\varepsilon}$. Moreover, clearly $\widetilde U\to U$ in the $C^\infty$ topology when ${\varepsilon}\to 0$. \vskip1mm Observe now that the subset ${\sigma}_U\cap W_n^-$ is compact (this is an easy consequence of the fact that ${\sigma}_U$ is a transverse section for the flow of $C_U$ on $W^s_{loc}$). It is therefore possible to cover ${\sigma}_U\cap W_n^-$ with a finite number of neighborhoods $(N_i)$ of the previous form. Using the fact that transversality is an open property, one can then choose a (sufficiently decreasing) finite sequence $({\varepsilon}_i)$ in such a way that the above process applied to each $N_i$ with the perturbation parameter ${\varepsilon}_i$ yields a new potential $\widetilde U$ for which $\widetilde W_n^-$ and $\widetilde W_{loc}^+(O_{\widetilde U})$ transversely intersect in their energy level, and moreover such that the intersection points do not belong to the exceptional set ${\mathscr E}_{\widetilde U}$. \vskip1mm As a consequence, the subset ${\mathscr U}_3(n,N)$ of all $U\in {\mathscr B}^\infty(0,N)\cap {\mathscr U}_2$ such that $W_n^-(U)$ and $W^s(U,\delta)$ transversely intersect in their energy level outside the exceptional set is dense in ${\mathscr B}^\infty(0,N)\cap {\mathscr U}_2$. Since the exceptional set and the transverse intersections continuously vary with $U$, this is also an open subset. Therefore the union $$ \bigcup_{n\in{\mathbb N}} {\mathscr U}_3(n,N) $$ is residual in ${\mathscr B}^\infty(0,N)\cap {\mathscr U}_2$, which concludes the proof, according to our first remark. \end{proof} \paraga Recall now that $O_U$ admits a set of homoclinic orbits whose projections on ${\mathbb T}^2$ generate $\pi_1({\mathbb T}^2,{\theta}^0_U)$. We defined the amended potential $U^*=U-{\rm Max\,}_{{\mathbb T}^2}U$, together with the associated amended Hamiltonian, Lagrangian and action.
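For the reader's convenience, we make these objects explicit (a sketch, with the convention — consistent with the formula for $\widetilde U$ above — that the classical system reads $C_U({\theta},r)={\tfrac{1}{2}} T(r)+U({\theta})$, and writing $T^*$ for the Legendre-dual quadratic form on the velocities): the amended Hamiltonian and Lagrangian are
$$
C_{U^*}({\theta},r)={\tfrac{1}{2}} T(r)+U^*({\theta}),\qquad
L_{U^*}({\theta},v)={\tfrac{1}{2}} T^*(v)-U^*({\theta}),
$$
and the amended action of a curve $\gamma$ is $\int L_{U^*}(\gamma,\dot\gamma)\,dt$. In particular $C_{U^*}$ and $C_U$ differ by the constant ${\rm Max\,}_{{\mathbb T}^2}U$, so that the critical energy of $C_{U^*}$ is $0$.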
The amended action of a homoclinic orbit is by definition the amended action of the two opposite solutions associated with it. \begin{lemma} The set ${\mathscr U}_4$ of potentials in ${\mathscr U}_3$ such that condition $(D_4)$ is satisfied is residual in $C^\infty({\mathbb T}^2)$. \end{lemma} \begin{proof} We fix a lift of ${\mathbb T}^2$ to ${\mathbb R}^2$ and given $V\in{\mathscr V}(U)$, we write $x_V$ for the corresponding lift of ${\theta}^0_V$. We will first prove that there is $M>0$, independent of $V$, such that any trajectory homoclinic to ${\theta}^0_V$ with minimal amended action lifts to a trajectory contained in the ball $B(x_V,M)\subset{\mathbb R}^2$. Observe first that in the zero energy level of the amended Hamiltonian $C_{V^*}$, the velocity $\norm{\dot {\theta}}$ is bounded above by some constant $\mu_1>0$ (independent of $V\in{\mathscr V}$). Moreover, $$ \mathop{\rm Max\,}\limits_{{\mathbb T}^2\setminus B} V^*= -\mu_2 <0 $$ and, assuming $\delta$ small enough, for any $m\in{\mathbb Z}^2\setminus\{0\}$ any curve from $x_V$ to $x_V+m$ intersects ${\mathbb R}^2\setminus\Pi^{-1}(B)$ along segments of curves of total length at least $\norm{m}/2$. Hence if the lift $\eta$ of a homoclinic trajectory (starting from $x_V$) is not contained in $B_\infty(x_V,M)$, it lies in ${\mathbb R}^2\setminus\Pi^{-1}(B)$ for a time at least $M/(2\mu_1)$ and, taking the conservation of energy into account, its action satisfies $$ \int L^*(\eta,\dot\eta)\geq \frac{M\mu_2}{\mu_1}, $$ which proves our claim. \vskip1mm The previous remark proves that any minimizing homoclinic orbit lies in the intersection $$ W^u(O_V,M)\cap W^s(O_V,M) $$ where $$ W^\pm(O_V,M)=\Pi \Big[W^\pm(X_V)\cap (B_\infty (x_V,M)\times{\mathbb R}^2)\Big]. $$ One easily checks that $W^\pm(O_V,M)$ are compact. Therefore, using standard arguments of transversality and the graph property of the stable and unstable manifolds over $B$ (see for instance \cite{O08} and references therein), one proves that the subset $\widetilde {\mathscr V}$ of potentials $V$ such that $W^\pm(O_V,M)$ intersect transversely in $C_{V^*}^{-1}(0)$ is open and dense in ${\mathscr V}$. In particular, for $V\in\widetilde{\mathscr V}$, the number of corresponding homoclinic orbits is finite. \vskip1mm Fix now $V\in \widetilde {\mathscr V}$ and select arbitrarily one homoclinic orbit with minimal amended action. Pick some point ${\theta}$ on its trajectory and fix a bump function $\eta:{\mathbb T}^2\to {\mathbb R}$ whose support is centered at ${\theta}$ and does not intersect any other minimizing homoclinic trajectory. Then, by transversality, for $\mu$ small enough the system $C_{V+\mu\eta}$ still admits a homoclinic orbit in the neighborhood of the initial one, whose amended action is smaller than that of the initial one. This proves that this orbit is strictly minimizing, and so the subset of potentials with this property is dense. The openness easily follows from the finiteness of the number of initial minimizing homoclinic orbits. \vskip1mm It only remains to prove that one can perturb the system so that the strictly minimizing orbit does not intersect the exceptional orbits, which can be easily proved by using adapted bump functions. \end{proof} \subsection{Intermediate energies} We will now use the parametric transversality theorem after introducing adapted representations, for which we need to work again in the $C^\kappa$ topology with $\kappa<+\infty$. \paraga Let us begin with the nondegeneracy of periodic orbits and prove the following result.
\begin{lemma} The set ${\mathscr U}_5$ of potentials $U\in{\mathscr U}_2$ such that each periodic solution of $C_U$ contained in $C_U^{-1}(]\mathop{\rm Max\,}\limits U,+\infty[)$ is nondegenerate is residual in $C^\kappa({\mathbb T}^2)$. \end{lemma} A similar result is proved in \cite{O08} for the restriction of classical systems to regular energy levels. The proof here is slightly more difficult since we have to deal with one parameter families of energy levels. Again, more general results with full details will appear in [LM]. \begin{proof} Let us set $M={\mathbb A}^2$ for the sake of clarity. Let $X=M\times [0,T]$, where $T>0$ is a fixed parameter and let $Y=M^2\times {\mathbb R}$. From now on we fix $k\geq 2$. Given $N\in {\mathbb N}^*$, we set $$ {\mathscr B}_N={\mathscr U}_1\cap B_{C^k({\mathbb T}^2)}(0,N). $$ Fix $E>0$ (large). It is not difficult to prove that there exists $\tau(N,E)>0$ such that for each $U\in {\mathscr B}_N$, any nontrivial periodic solution of $C_U$ with energy in $[\mathop{\rm Max\,}\limits U, E]$ has a (minimal) period larger than $\tau(N,E)$. Obviously $\tau(N,E)$ tends to $0$ when $(N,E)\to\infty$. \vskip2mm We introduce the map $\chi: {\mathscr B}_N\to C^{k-1}(X,Y)$ defined by $$ \chi_U(z,t)=\Big(z,\Phi^{C_U}_t(z),C_U(z)-\mathop{\rm Max\,}\limits U\Big) $$ (where $\chi_U$ stands for $\chi(U)$). Finally, we set $$ \Delta=\Big\{(z,z,e)\mid z\in M,\ e\in\,]\alpha,\beta[\Big\} $$ where $0<\alpha<\beta$ are two fixed parameters. Therefore the preimage $\chi_U^{-1}(\Delta)$ is the set of $(z,t)$ such that $z$ is $t$--periodic (note that $t$ need not be the minimal period of $z$) with energy $C_U(z)-\mathop{\rm Max\,}\limits U\in\,]\alpha,\beta[$. \vskip2mm One easily checks that if $\chi_U\pitchfork_{(z,t)} \Delta$, then $z$ is a nondegenerate $t$--periodic point, in the sense that the Poincar\'e return map relatively to a transverse section inside the energy level of $z$ does not admit $1$ as an eigenvalue. \vskip2mm Now one deduces from Takens' perturbability theorem (see \cite{Ta83}) that there exists an open dense subset ${\mathscr O}_N(\alpha,\beta)\subset {\mathscr B}_N$ such that ${\bf ev\,}_\chi:{\mathscr O}_N(\alpha,\beta)\times X\to Y$ is transverse to $\Delta$. Indeed, since there exists a uniform lower bound $\tau$ for the minimal periods of all the periodic orbits arising in $\chi_U^{-1}(\Delta)$, it is enough to ensure that the Poincar\'e maps of all periodic points do not admit any root of unity of order $\leq T/\tau$ as an eigenvalue, which is an immediate consequence of Takens' results. \vskip2mm Finally Abraham's transversality theorem applies in our setting and yields the existence of a residual subset \begin{equation}\label{eq:defR} {\mathscr R}_N(\alpha,\beta)\subset{\mathscr O}_N(\alpha,\beta) \end{equation} such that for $U\in {\mathscr R}:={\mathscr R}_N(\alpha,\beta)$, any periodic solution of $C_U$ contained in $$ C_U^{-1}(]\alpha+\mathop{\rm Max\,}\limits U,\beta+\mathop{\rm Max\,}\limits U[) $$ is nondegenerate. Since this is clearly an open property, the set ${\mathscr R}$ is in fact open and dense, which will be used in our subsequent constructions. Now our claim is easily obtained by considering sequences $\alpha_n\to0$, $\beta_n\to+\infty$ and the corresponding intersections of subsets, and finally considering the intersections over $N$. \end{proof} \paraga Conditions $(D_6)$ and $(D_7)$ are to be examined simultaneously. Here again we assume that $2\leq \kappa<+\infty$ is fixed.
\begin{lemma} The set ${\mathscr U}_{6,7}$ of potentials $U\in{\mathscr U}_1^\kappa$ such that conditions $(D_6)$ and $(D_7)$ hold true is residual in $C^\kappa({\mathbb T}^2)$. \end{lemma} \begin{proof} We will first prove that the set of potentials for which two periodic orbits of the same length and the same energy satisfy Condition $(D_7)$ is generic, and then that there generically exist at most two orbits with the same length at the same energy, from which Condition $(D_6)$ will follow. \vskip2mm $\bullet$ Here we use the same techniques as above and start with the open subset ${\mathscr R}_N(\alpha,\beta)$ defined in (\ref{eq:defR}). We fix a parameter ${\tau_m}>0$ and introduce the set $$ D({\tau_m})=\Big\{(z_1,z_2)\in M^2\mid z_2\notin \Phi^{C_U}\big([0,{\tau_m}],z_1\big)\Big\}. $$ So $D({\tau_m})$ is clearly open, and when $(z_1,z_2)\in D({\tau_m})$ are periodic points with period $\leq {\tau_m}$, their orbits are disjoint. Now we set $$ X=D({\tau_m})\times [0,{\tau_m}]^2\times ({\mathbb R}^+)^2,\qquad Y=M^2\times M^2\times {\mathbb R}\times{\mathbb R}. $$ Given $z\in M$ such that $C_U(z)>\mathop{\rm Max\,}\limits U$ and $t\in{\mathbb R}^+$, we denote by $L_U(z,t)$ the length of the projection $$ \pi\big(\Phi^{C_U}([0,t],z)\big) $$ relatively to the Jacobi-Maupertuis metric at energy $C_U(z)$. We also denote by $\Psi^{C_U}$ the gradient flow of $C_U$ relatively to the Euclidean metric on $M$. We can now introduce the map $$ \chi:{\mathscr R}_N(\alpha,\beta)\to C^{k-1}(X,Y) $$ such that for $(z_1,z_2,t_1,t_2,s_1,s_2)\in X$, $$ \chi_U(z_1,z_2,t_1,t_2,s_1,s_2)= $$ $$ \Big( \Psi^{C_U}_{s_1}(z_1),\Phi^{C_U}_{t_1}(z_1),\Psi^{C_U}_{s_2}(z_2),\Phi^{C_U}_{t_2}(z_2), C_U(z_1)-C_U(z_2),L_U(z_1,t_1)-L_U(z_2,t_2) \Big). $$ We finally set $$ \Delta=\Big\{(z_1,z_1,z_2,z_2,0,0)\mid z_1\in M,\ z_2\in M\Big\}. $$ Using the fact that the gradient vector field does not vanish on $\{C_U>\mathop{\rm Max\,}\limits U\}$, one easily checks that the preimage $\chi_U^{-1}(\Delta)$ is the set of points $x=(z_1,z_2,t_1,t_2,s_1,s_2)\in X$ such that \begin{itemize} \item $(z_1,z_2)\in D({\tau_m})$, $z_1$ is $t_1$--periodic and $z_2$ is $t_2$--periodic, \item $s_1=s_2=0$, \item $C_U(z_1)=C_U(z_2)$ and $L_U(z_1,t_1)=L_U(z_2,t_2)$. \end{itemize} Note that $\chi_U^{-1}(\Delta)$ is invariant under the diagonal action of $\Phi^{C_U}$ on the first two factors. As a consequence, $$ \dim \Big(T_x\chi_U(T_xX) \cap T_{\chi_U(x)}\Delta\Big)\geq 2. $$ Now $$ \dim X = 2 \dim M + 4 = {\rm codim\,} \Delta +2. $$ Note also that by construction of ${\mathscr R}_N(\alpha,\beta)$, the points $z_i$ are nondegenerate. Therefore they can be continued in one-parameter families $(z_i(e))$ of periodic points when the energy varies (in an essentially unique way) for $e$ in a neighborhood of $e_0=C_U(z_1)=C_U(z_2)$. Assume that $$ \frac{dL_U(z_1(e))}{de}_{\vert e=e_0}=\frac{dL_U(z_2(e))}{de}_{\vert e=e_0}, $$ then clearly $$ \dim \Big(T_x\chi_U(T_xX) \cap T_{\chi_U(x)}\Delta\Big)\geq 3 $$ and $\chi_U$ cannot be transverse to $\Delta$ at $x$. This proves that if $\chi_U\pitchfork_x \Delta$, then the lengths of the corresponding periodic orbits have a transverse crossing at the energy $e_0$. \vskip2mm Therefore, to prove that Condition $(D_7)$ is generic, one only has to prove that $\chi_U$ is generically transverse to $\Delta$, for which we will use Abraham's theorem. We are therefore reduced to proving the transversality of the map $\chi$ with $\Delta$.
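For the reader's convenience, let us recall the criterion to be checked (a standard reformulation of the transversality of the evaluation map, in the notation of Abraham's theorem above): ${\bf ev\,}_\chi$ is transverse to $\Delta$ when, for every $(U,x)\in{\mathscr R}_N(\alpha,\beta)\times X$ with $\chi_U(x)\in\Delta$,
$$
T_{\chi_U(x)}Y={\rm Im\,}\big(D_{(U,x)}{\bf ev\,}_\chi\big)+T_{\chi_U(x)}\Delta .
$$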
We will use the decomposition $$ T_{\chi_U(z)}Y=T_{(z_1,z_1)}M^2\times T_{(z_2,z_2)}M^2\times{\mathbb R}\times{\mathbb R}. $$ Since the points $z_i$ are nondegenerate, and for the same reason as above, for a fixed $U\in{\mathscr R}_N(\alpha,\beta)$ the projection of the image $ T_x\chi_U(T_xX) $ on the factor $T_{(z_1,z_1)}M^2\times T_{(z_2,z_2)}M^2$ is transverse to the space $T_{(z_1,z_1)}\Delta_M\times T_{(z_2,z_2)}\Delta_M$, where $\Delta_M=\{(z,z)\mid z\in M\}$. It only remains to prove that varying $U$ enables one to control independently the last two terms in the decomposition. \vskip2mm Using standard straightening theorems, one sees that given a point $w\neq z_1$ on the orbit of $z_1$ under $\Phi^{C_U}$, one can add an arbitrarily small $C^\infty$ bump function (with controlled support around $w$) $\eta$, chosen so that $z_1$ is still $t_\eta$--periodic for $C_{U+\eta}$, with the same energy, and such that $L_{U+\eta}(z_1,t_\eta)>L_U(z_1,t_1)$. This way one easily proves that the image $ T_x\chi_U(T_xX) $ contains vectors of the form $(0,0,0,0,0,1)$, according to the previous decomposition. \vskip2mm Now the same ideas, applied with bump functions $\eta$ centered at the point $z_1$, enable one to vary the energy of the point $z_1$ alone, so that the image $ T_x\chi_U(T_xX) $ also contains vectors of the form $(0,0,0,0,1,u)$. This proves that $\chi$ is transverse to $\Delta$. \vskip2mm Applying Abraham's theorem now proves that the set of potentials $U\in {\mathscr R}_N(\alpha,\beta)$ such that $(D_7)$ holds true is residual, which concludes the first part of the proof. \vskip2mm $\bullet$ As for Condition $(D_6)$, we now introduce the open subset $$ D({\tau_m},U)=\Big\{(z_1,z_2,z_3)\in M^3\mid z_i\notin \Phi^{C_U}([0,{\tau_m}],z_j),\ i\neq j\Big\} $$ and the manifolds $$ X=D({\tau_m},U)\times [0,{\tau_m}]^3\times ({\mathbb R}^+)^3,\qquad Y=M^2\times M^2\times M^2\times {\mathbb R}^3\times{\mathbb R}^3, $$ together with the map $\chi:{\mathscr R}_N(\alpha,\beta)\to C^{k-1}(X,Y)$ defined by $$ \chi_U(z_1,z_2,z_3,t_1,t_2,t_3,s_1,s_2,s_3)=\Big(\big(\Psi^{C_U}_{s_i}(z_i),\Phi^{C_U}_{t_i}(z_i)\big)_{i},\big(C_U(z_i)\big)_{i},\big(L_U(z_i,t_i)\big)_{i}\Big),\qquad i\in\{1,2,3\}. $$ Exactly as above, one proves that $\chi$ is transverse to the submanifold $$ \Delta=\Big\{\big((z_i,z_i)_{i},(e,e,e),(\ell,\ell,\ell)\big)\mid z_1,z_2,z_3\in M,\ e\in{\mathbb R},\ \ell\in{\mathbb R}\Big\}. $$ Again, $$ \dim X=3\dim M+6={\rm codim\,} \Delta+2 $$ but now, if $x\in \chi_U^{-1}(\Delta)$, $$ \dim \Big(T_x\chi_U(T_xX) \cap T_{\chi_U(x)}\Delta\Big)\geq 3. $$ Therefore, if $\chi_U\pitchfork_x \Delta$, then $\chi_U(x)\notin \Delta$. Hence, if $\chi_U\pitchfork \Delta$, there exist at most two periodic orbits with the same length on the same energy level. Since $\chi$ is transverse to $\Delta$, this property is residual for the potentials in ${\mathscr R}_N(\alpha,\beta)$. Finally, the transverse crossing condition $(D_7)$ proves that Condition $(D_6)$ holds over a residual subset in ${\mathscr R}_N(\alpha,\beta)$, so also in $C^\kappa({\mathbb T}^2)$ by countable intersection. \end{proof} \paraga One cannot directly apply Oliveira's work to get the Kupka-Smale theorem in the complete phase space, due to the presence of homoclinic tangencies. Nevertheless it is enough to obtain the transversality of heteroclinic intersections between annuli at bifurcation points and suitable coverings of the various annuli.
\begin{lemma} {The set ${\mathscr U}_8$ of $U\in{\mathscr U}_{6,7}$ such that two distinct minimizing orbits in the same energy level of $C_U$ admit transverse heteroclinic connections is residual in $C^\infty({\mathbb T}^2)$. Morever, each annulus admits a covering by subannuli for which there exist a continuous family of transverse homoclinic orbits attached to each periodic orbit.} \end{lemma} \begin{proof} This is an immediate consequence of Oliveira's work \cite{O08} by slightly perturbing the homoclinic tangencies in order to create new homoclinic orbits in a neighborhood, with transverse homoclinic intersections. \end{proof} \subsection{End of proof of the Theorem II} We now have to glue the previous result together, taking into account that the required regularity for the nearly critical energies is $\kappa=+\infty$. \paraga {\bf Existence of the singular annulus.} Given a potential $U$ in ${\mathscr U}_4\subset C^\infty({\mathbb T}^2)$, one can apply the results of Section 4 to the associated classical system $C_U$. In particular, Proposition 5.4 proves the existence of a singular ${\mathsf A}^{\bullet}$ annulus defined over $]-e_0,e_*[$, with $\overline e\in \,]-e_0,e_*[$, which realizes some suitable homology classes. This is a $C^1$ normally hyperbolic manifold, to which the usual persistence results under $C^1$ perturbations apply. Therefore, given a small $\delta>0$, for $V$ close enough to $U$ in the $C^\kappa$ topology with $\kappa\geq 2$, the system $C_V$ also admits a singular annulus ${\mathsf A}^{\bullet}_V$ defined over $]-e_0+\delta,e_*-\delta[$ and realizing the same homology classes as ${\mathsf A}^{\bullet}$. Moreover, for each $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$, $C_V$ also admits an annulus ${\mathsf A}_{BC}(c)$, as constructed in Lemma~\ref{lem:anasympt}. \paraga {\bf Chains and heteroclinic connections with the singular annulus.} Fix now $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$. Consider a potential $U_0\in C^\kappa({\mathbb T}^2)$, with $\kappa\geq \kappa_0$, and fix $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$. Given ${\varepsilon}>0$, there exists $U\in {\mathscr U}_4$ with $\norm{U-U_0}_{C^\kappa({\mathbb T}^2)}<{\varepsilon}$ such that $C_U$ admits a singular annulus as above. Moreover, given a small $\delta>0$, there exists a small neighborhood ${\mathscr O}$ of $U$ in the $C^\kappa$ topology such that for $V\in{\mathscr O}$, in addition to the singular annulus described above, the system $C_V$ admits a chain ${\mathsf A}_1,\ldots,{\mathsf A}_m$ of annuli, defined over $I_1=\,]\overline e+\delta, e_*],\ldots, I_m=[e_\infty,+\infty[$ and realizing $c$, such that ${\mathsf A}_1$ admits transverse heteroclinic connections with ${\mathsf A}^{\bullet}$. The annuli moreover satisfy the transverse homoclinic property as well as the twist property. \vskip1mm From this, one easily deduce that, given $N\geq N_0$ and $c\in {\bf H}_1({\mathbb T}^2,{\mathbb Z})$, the subset of $C^\kappa({\mathbb T}^2)$ of all potentials for which there exists a chain ${\mathsf A}_1,\ldots,{\mathsf A}_m$ of annuli, defined over $I_1=\,]\overline e+1/N, e_*],\ldots, I_m=[e_\infty,+\infty[$, with a heteroclinic connection between ${\mathsf A}_1(U)$ and ${\mathsf A}^{\bullet}(U)$, is dense in $C^\kappa({\mathbb T}^2)$. Again, easy persistence results for hyperbolic orbits together with our construction of the high-energy annulus prove that this set is open in $C^\kappa({\mathbb T}^2)$. 
Therefore the set of potentials for which exists a chain realizing $c$ is residual in $C^\kappa({\mathbb T}^2)$ and our claim follows by countable intersection over $c$. \vskip1mm The same type of arguments also prove the existence of connections between annuli ${\mathsf A}_1(c)$ and ${\mathsf A}_1({\sigma} c')$. \vskip1mm The estimates of 4) in Theorem II are straightforward computations, using the fact that a classical system at high energy is a perturbation of a flat metric on ${\mathbb T}^2$. \vskip1mm Finally, the existence of a chain being an open property, the set of ${\mathscr U}$ for which the conclusions of Theorem II hold is open and dense. \appendix \section{Normal hyperbolicity and symplectic geometry}\label{app:normhyp} \setcounter{paraga}{0} We refer to \cite{Berg10,BB13,C04,C08,HPS} for the references on normal hyperbolicity, see also \cite{Y} for a setting close to ours, in the spirit of Fenichel's approach. Here we limit ourselves to a very simple class of systems which admit a normally hyperbolic invariant (non compact) submanifold, which serves us as a model from which all other definitions and properties will be deduced. \paraga The following statement is a simple version of the persistence theorem for normally hyperbolic manifolds well-adapted to our setting, whose germ can be found in \cite{B10} and whose proof can be deduced from the previous references. \vskip3mm \noindent {\bf The normally hyperbolic persistence theorem.} {\em Fix $m\geq 1$ and consider a vector field on ${\mathbb R}^{m+2}$ of the form ${\mathscr V}={\mathscr V}_0+{\mathscr F}$, with ${\mathscr V}_0$ and ${\mathscr F}$ of class $C^1$ and reads \begin{equation}\label{eq:formV0} \dot x=X(x,u,s),\qquad \dot u=\lambda_u(x)\, u,\qquad \dot s =-\lambda_s(x)\,s, \end{equation} for $(x,u,s)\in {\mathbb R}^{m+2}$. Assume moreover that there exists $\lambda>0$ such that the inequalities \begin{equation}\label{eq:ineg} \lambda_u(x)\geq \lambda,\quad{\it and}\quad \lambda_s(x)\geq \lambda,\qquad x\in{\mathbb R}^m. \end{equation} hold. Fix a constant $\mu>0$. Then there exists a constant $\delta_*>0$ such that if \begin{equation}\label{eq:condder} \norm{\partial _xX}_{C^0({\mathbb R}^{m+2})}\leq \delta_*,\qquad \norm{{\mathscr F}}_{C^1({\mathbb R}^{m+2})}\leq \delta_*, \end{equation} the following assertions hold. \begin{itemize} \item The maximal invariant set for ${\mathscr V}$ contained in $O=\big\{(x,u,s)\in{\mathbb R}^{m+2}\mid \norm{(u,s)}\leq \mu\big\}$ is an $m$-dimensional manifold ${\rm A}({\mathscr V})$ which admits the graph representation: $$ {\rm A}({\mathscr V})=\big\{\big(x,u=U(x), s=S(x)\big)\mid x\in{\mathbb R}^m\big\}, $$ where $U$ and $S$ are $C^1$ maps ${\mathbb R}^m\to{\mathbb R}$ such that \begin{equation}\label{eq:loc} \norm{(U,S)}_{C^0({\mathbb R}^m)}\leq \frac{2}{\lambda}\,\norm{{\mathscr F}}_{C^0}. \end{equation} \item The maximal positively invariant set for ${\mathscr V}$ contained in $O$ is an $(m+1)$-dimensional manifold $W^+\big({\rm A}({\mathscr V})\big)$ which admits the graph representation: $$ W^+\big({\rm A}({\mathscr V})\big)=\big\{\big(x,u=U^+(x,s), s\big)\mid x\in{\mathbb R}^m\ s\in[-\mu,\mu]\big\}, $$ where $U^+$ is a $C^1$ map ${\mathbb R}^m\times[-1,1]\to{\mathbb R}$ such that \begin{equation}\label{eq:loc2} \norm{U^+}_{C^0({\mathbb R}^m)}\leq c_+\,\norm{{\mathscr F}}_{C^0}. \end{equation} for a suitable $c_+>0$. 
Moreover, there exists $C>0$ such that for $w\in W^+\big({\rm A}({\mathscr V})\big)$, \begin{equation} {\rm dist\,}\big(\Phi^t(w),{\rm A}({\mathscr V})\big)\leq C\exp(-\lambda t),\qquad t\geq0. \end{equation} \item The maximal negatively invariant set for ${\mathscr V}$ contained in $O$ is an $(m+1)$-dimensional manifold $W^-\big({\rm A}({\mathscr V})\big)$ which admits the graph representation: $$ W^-\big({\rm A}({\mathscr V})\big)=\big\{\big(x,u, s=S^-(x,u)\big)\mid x\in{\mathbb R}^m,\ u\in[-\mu,\mu]\big\}, $$ where $S^-$ is a $C^1$ map ${\mathbb R}^m\times[-1,1]\to{\mathbb R}$ such that \begin{equation}\label{eq:loc3} \norm{S^-}_{C^0({\mathbb R}^m)}\leq c_-\,\norm{{\mathscr F}}_{C^0}. \end{equation} for a suitable $c_->0$. Moreover, there exists $C>0$ such that for $w\in W^-\big({\rm A}({\mathscr V})\big)$, \begin{equation} {\rm dist\,}\big(\Phi^t(w),{\rm A}({\mathscr V})\big)\leq C\exp(\lambda t),\qquad t\leq0. \end{equation} \item The manifolds $W^\pm\big({\rm A}({\mathscr V})\big)$ admit $C^0$ foliations $\big(W^\pm(x)\big)_{x\in {\rm A}({\mathscr V})}$ such that for $w\in W^\pm(x)$ \begin{equation} {\rm dist\,}\big(\Phi^t(w),\Phi^t(x)\big)\leq C\exp(\pm\lambda t),\qquad t\geq0. \end{equation} \item If moreover ${\mathscr V}_0$ and ${\mathscr F}$ are of class $C^p$, $p\geq1$, and if in addition of the previous conditions the domination inequality, the condition \begin{equation}\label{eq:addcond} p\,\norm{\partial_x X}_{C^0({\mathbb R}^m)}\leq \lambda \end{equation} holds, then the functions $U$, $S$, $U^+$, $S^-$ are of class $C^p$ and \begin{equation} \norm{(U,S)}_{C^p({\mathbb R}^m)}\leq C_p \norm{{\mathscr F}}_{C^p({\mathbb R}^{m+2})}. \end{equation} for a suitable constant $C_p>0$. \item Assume moreover that the vector fields ${\mathscr V}_0,{\mathscr V}$ are $R$-periodic in $x$, where $R$ is a lattice in ${\mathbb R}^m$. Then their flows and the manifolds ${\rm A}({\mathscr V})$ and $W^\pm\big({\rm A}({\mathscr V})\big)$ pass to the quotient $({\mathbb R}^m/R)\times {\mathbb R}^2$ Assume that the time-one map of ${\mathscr V}_0$ on ${\mathbb R}^m/R\times\{0\}$ is $C^0$ bounded by a constant $M$. Then, with the previous assumptions, the constant $C_p$ depends only on $p$, $\lambda$ and $M$. \end{itemize} } The last statement will be applied in the case where $m=2\ell$ and $R=c{\mathbb Z}^\ell\times\{0\}$, where $c$ is a positive constant, so that the quotient ${\mathbb R}^{2\ell}/R$ is diffeomorphic to the annulus ${\mathbb A}^\ell$. \paraga The following result describes the symplectic geometry of our system in the case where ${\mathscr V}$ is a Hamiltonian vector field. We keep the notation of the previous theorem. \vskip3mm \noindent {\bf The symplectic normally hyperbolic persistence theorem.} {\it Endow ${\mathbb R}^{2m+2}$ with a symplectic form $\Omega$ such that there exists a constant $C>0$ such that for all $z\in O$ \begin{equation}\label{eq:assumpsymp} \abs{\Omega(v,w)}\leq C\norm{v}\norm{w},\qquad \forall v,w \in T_z M. \end{equation} Let ${\mathscr H}_0$ be a $C^2$ Hamiltonian on ${\mathbb R}^{2m+2}$ whose Hamiltonian vector field ${\mathscr V}_0$ satisfies (\ref{eq:formV0}) with conditions (\ref{eq:ineg}), and consider a Hamiltonian ${\mathscr H}={\mathscr H}_0+{\mathscr P}$. Then there exists a constant $\delta_*>0$ such that if \begin{equation}\label{eq:condder2} \norm{\partial _xX}_{C^0({\mathbb R}^{m+2})}\leq \delta_*,\qquad \norm{{\mathscr P}}_{C^2({\mathbb R}^{m+2})}\leq \delta_*, \end{equation} the following properties hold. 
\begin{itemize} \item The manifold ${\rm A}({\mathscr V})$ is $\Omega$-symplectic. \item The manifolds $W^\pm\big({\rm A}({\mathscr V})\big)$ are coisotropic and the center-stable and center-unstable foliations $\big(W^\pm(x)\big)_{x\in {\rm A}({\mathscr V})}$ coincide with the characteristic foliations of $W^\pm\big({\rm A}({\mathscr V})\big)$. \item If ${\mathscr H}$ is $C^{p+1}$ and condition (\ref{eq:addcond}) is satisfied, then $W^\pm\big({\rm A}({\mathscr V})\big)$ are of class $C^p$ and the foliations $\big(W^\pm(x)\big)_{x\in {\rm A}({\mathscr V})}$ are of class $C^{p-1}$. \item There exists a neighborhood ${\mathscr O}$ of ${\rm A}({\mathscr V})$ and a symplectic straightening symplectic diffeomorphism $\Psi:{\mathscr O}\to O$ such that \begin{equation} \begin{array}{lll} \Psi\big({\rm A}({\mathscr V})\big)={\mathbb A}^{\ell}\times\{(0,0)\};\\[4pt] \Psi\big(W^-\big({\rm A}({\mathscr V})\big)\big)\subset{\mathbb A}^{\ell}\times\big({\mathbb R}\times\{0\}\big),\qquad \Psi\big(W^-\big({\rm A}({\mathscr V})\big)\big)\subset{\mathbb A}^{\ell}\times\big(\{0\}\times{\mathbb R}\big);\\[4pt] \Psi\big(W^-(x)\big)\subset\{\Psi(x)\}\times\big({\mathbb R}\times\{0\}\big),\qquad \Psi\big(W^+(x)\big)\subset\{\Psi(x)\}\times\big(\{0\}\times{\mathbb R}\big).\\ \end{array} \end{equation} \end{itemize} } \begin{proof} Let $f$ be the time-one flow of ${\mathscr V}$. By the domination condition, one can assume $\delta_*$ small enough so that there exist two positive constants $\chi$ and $\mu$ verifiying $\chi\mu<1$, such that \begin{equation} \begin{array}{lll} \forall z\in W^+\big({\rm A}({\mathscr V})\big)\cap O,&\forall v\in T_zW^+\big({\rm A}({\mathscr V})\big), \ \norm{T_zf(v)}\leq \mu \norm{v},\phantom{\frac{\int}{\int}}\\ &\forall v\in T_z W^+(\Pi^+(z)),\ \norm{T_zf(v)}\leq \chi\norm{v},\phantom{\frac{\int}{\int}} \end{array} \end{equation} \begin{equation} \begin{array}{lll} \forall z\in W^-\big({\rm A}({\mathscr V})\big)\cap O,&\forall v\in T_z W^-\big({\rm A}({\mathscr V})\big),\ \norm{T_zf^{-1}(v)}\leq \mu\norm{v},\phantom{\frac{\int}{\int}}\\ &\forall v\in T_z W^-(\Pi^-(z)),\ \norm{T_zf^{-1}(v)}\leq \chi\norm{v}.\phantom{\frac{\int}{\int}}\\ \end{array} \end{equation} Fix $z\in W^+\big({\rm A}({\mathscr V})\big)\cap O$. Then if $v\in T_z\big(W^+\big({\rm A}({\mathscr V})\big)\big)$ and $w\in T_z\big(W^+(\Pi^+(z))\big)$, since $f$ is symplectic $$ \abs{\Omega(v,w)}=\abs{\Omega\big(T_zf^m(v),T_zf^m(w)\big)}\leq C\norm{T_zf^m(v)}\,\norm{T_zf^m(w)} \leq C\,(\chi\mu)^m\norm{v}\,\norm{w} $$ for $m\in O$, so passing to the limit shows that $\Omega(v,w)=0$. Therefore \begin{equation}\label{eq:inclu} T_z\big(W^+(\Pi^+(z),f)\big)\subset\big(T_z(W^+\big({\rm A}({\mathscr V})\big))\big)^{\bot_{\Omega}}. \end{equation} This proves in particular that the manifolds $W^+(x)$, $x\in V$, are isotropic. One obviously gets a similar result for $W^-(x)$. \vskip2mm -- Let $n^0$ be the dimension of $V$, so that by normal hyperbolicity $2n=n_0+n_++n_-$ and $\dim W^\pm\big({\rm A}({\mathscr V})\big)=n_0+n_\pm$. Since $\Omega$ is nondegenerate: $$ \dim \big(T_z(W^+\big({\rm A}({\mathscr V})\big))\big)^{\bot_{\Omega}}=2n-(n^0+n_+)=n_-, $$ and by (\ref{eq:inclu}) $n_+\leq n_-$. By symmetry one gets the equality $n_+=n_-$. \vskip2mm -- Moreover, this equality proves that $$ T_z\big(W^+(\Pi^+(z),f)\big)\subset\big(T_z(W^+\big({\rm A}({\mathscr V})\big))\big)^{\bot_{\Omega}}, $$ and $W^+\big({\rm A}({\mathscr V})\big)$ is coisotropic. 
This also proves that its characteristic foliation is the family $(W^+(x))_{x\in {\rm A}({\mathscr V})}$. The analogous statements for the unstable manifolds are obvious. \vskip2mm -- The manifold ${\rm A}({\mathscr V})$ is therefore the quotient of the manifolds $W^\pm\big({\rm A}({\mathscr V})\big)$ by its characteristic foliation, this immediately implies that the projections $\Pi^\pm$ are symplectic sumbmersion, that is $$ (\Pi^\pm)^*\Omega_{\vert {\rm A}({\mathscr V})}=\Omega_{\vert W^\pm\big({\rm A}({\mathscr V})\big)}. $$ \vskip2mm -- Finally, the manifold ${\rm A}({\mathscr V})$ is the intersection of the two coisotropic manifolds $W^\pm\big({\rm A}({\mathscr V})\big)$, and at each point $x\in {\rm A}({\mathscr V})$ $$ (T_x{\rm A}({\mathscr V}))^{\bot_\Omega}=\big(T_x(W^+\big({\rm A}({\mathscr V})\big))\big)^{\bot_{\Omega}}+\big(T_x(W^-\big({\rm A}({\mathscr V})\big))\big)^{\bot_{\Omega}} =E_x^++E_x^- $$ so by definition $(T_x{\rm A}({\mathscr V}))^{\bot_\Omega}\cap T_x{\rm A}({\mathscr V})=\{0\}$ and ${\rm A}({\mathscr V})$ is symplectic. \vskip2mm -- The last statement is a direct consequence of the symplectic tubular neighborhood theorem and the Moser isotopy argument (see \cite{LM} for more details). \end{proof} \section{Global normal forms along arcs of simple resonances}\label{App:globnormforms} We consider perturbed systems of the form $H_{\varepsilon}({\theta},r)=h(r)+{\varepsilon} f({\theta},r)$, where $h$ is a $C^\kappa$ Tonelli Hamiltonian on ${\mathbb R}^3$ and $f$ an element of the unit ball ${\mathscr B}^\kappa$ of $C^\kappa({\mathbb A}^3)$. We fix a simple resonance ${\Gamma}$ at energy ${\bf e}>\mathop{\rm Min\,}\limits h$ for $h$ and assume the coordinates $({\theta},r)$ to be adapted to ${\Gamma}$, that is, ${\Gamma}=\{r\in h^{-1}({\bf e})\mid \omega_3(r)=0\}$. Relatively to these new coordinates, $\norm{f}_{C^\kappa}\leq M$. We split the variables $x=(x_1,x_2,x_3)$ into the fast part $\widehat x=(x_1,x_2)$ and the slow part $\overline x= x_3$. \subsection{The global normal form} Given a subset ${\Gamma}_\rho$ of ${\Gamma}$, for $\rho>0$, we introduce the tubular neighborhood \begin{equation}\label{eq:tube} {\mathscr W}_{\rho}({\Gamma}_\rho)={\mathbb T}^3\times \{r\in{\mathbb R}^3\mid {\rm dist\,}(r,{\Gamma}_\rho)<\rho\}. \end{equation} Recall that we say that a connected subset of ${\Gamma}$ is an interval. Given a control parameter $\delta>0$, we denote by $D(\delta)$ the set of $\delta$-strong double resonance points of ${\Gamma}$, introduced in Definition~\ref{def:control}. \begin{prop}\label{prop:globnorm} Fix an integer $p\in\{2,\ldots,\kappa-4\}$ and a control parameter $\delta>0$. Fix two consecutive points $r'$ and $r''$ in $D(\delta)$, fix $\rho<\!<{\rm dist\,}_{{\Gamma}}(r',r'')$ and set $$ {\Gamma}_\rho:=[r^*,r^{**}]_{\Gamma}\subset [r',r'']_{\Gamma}, $$ where $r^*,r^{**}$ are defined by the equalities $$ {\rm dist\,}_{\Gamma}(r^*,r')={\rm dist\,}_{\Gamma}(r^{**},r'')=\rho. 
$$ Then there exists $c\in\,]0,1[$ such that for $0<{\varepsilon}<c\rho^4$ there exists a symplectic analytic embedding $\Phi_{{\varepsilon}}: {\mathscr W}_{c\rho}\to{\mathscr W}_\rho$ which satisfies \begin{equation}\label{eq:normform} N({\theta},r)=H\circ \Phi_{{\varepsilon}}({\theta},r)=h(r)+{\varepsilon} V({\theta}_3,r)+{\varepsilon} W_0({\theta},r)+{\varepsilon} W_1({\theta},r)+{\varepsilon}^2 W_2({\theta},r), \end{equation} where \begin{equation} V({\theta}_3,r)=\int_{{\mathbb T}^2}f({\theta},r)\,d{\theta}_1d{\theta}_2, \end{equation} and where the functions $W_0\in C^p({\mathbb A}^3)$, $W_1\in C^{\kappa-1}({\mathscr W}_{c\rho})$, $W_2\in C^\kappa({\mathscr W}_{c\rho})$ satisfy \begin{equation}\label{eq:estimates} \begin{array}{lll} \norm{W_0}_{C^p({\mathscr W}_{c\rho})}\leq \delta,\\[5pt] \norm{W_1}_{C^2({\mathscr W}_{c\rho})}\leq c_1\, \rho^{-3} \\[5pt] \norm{W_2}_{C^2({\mathscr W}_{c\rho})}\leq c_2\,\rho^{-6}, \end{array} \end{equation} for suitable constants $c_1,c_2>0$. Moreover, there exists $c_\Phi>0$ such that, if $\Phi_{\varepsilon}=(\Phi_{\varepsilon}^{{\theta}},\Phi_{\varepsilon}^{r})$, \begin{equation}\label{eq:approxphi} \norm{\Phi_{\varepsilon}^{{\theta}}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-2},\qquad \norm{\Phi_{\varepsilon}^{r}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-1}. \end{equation} The constants $c,c_1,c_2,c_\Phi$ do not depend on $\rho$ and ${\varepsilon}$. \end{prop} \subsection{Proof of Proposition~\ref{prop:globnorm}} Let $p\in\{2,\ldots,\kappa-4\}$ and $\delta>0$ be fixed, and let $K(\delta)$ be as in Lemma~\ref{lem:choseK}. \paraga We begin with a geometric lemma which enables us to control the size of the small denominators which appear in the averaging process. \begin{lemma}\label{lem:estdist} With the notation of {\rm Proposition~\ref{prop:globnorm}}, given $\rho>0$, set \begin{equation} {\mathscr U}_{\rho}({\Gamma}^*)=\{r\in{\mathbb R}^3\mid {\rm dist\,}(r,{\Gamma}_\rho)<\rho\}. \end{equation} Then there exist constants $c_0,C_0>0$ such that for every $r\in{\mathscr U}_{c_0\rho}({\Gamma}^*)$ \begin{equation}\label{eq:controlgeom} \mathop{\rm Min\,}\limits_{\widehat k\in B^*\big( K(\delta)\big)}\abs{\widehat k\cdot \widehat \omega(r)}\geq C_0\rho, \end{equation} where $B^*\big( K(\delta)\big)=\big\{\widehat k\in{\mathbb Z}^2\setminus\{0\}\mid \norm{k}\leq K(\delta)\big\}$ and where $K(\delta)$ was defined in {\rm Lemma~\ref{lem:choseK}}. \end{lemma} \begin{proof} Choose $\widehat k',\widehat k''\inB^*_\Z\big(K(\delta)\big)$ {\em with minimal norm} such that $\widehat k'\cdot \widehat \omega(r')=0$ and $\widehat k''\cdot \widehat \omega(r'')=0$. As a consequence, if $\widehat k\in{\mathbb Z}^2$ satisfies $\widehat k\cdot \widehat \omega(r')=0$ or $\widehat k\cdot \widehat \omega(r'')=0$, then $\widehat k\in{\mathbb Z}\,\widehat k'\cup {\mathbb Z}\,\widehat k''$. \vskip1mm The resonance surfaces $\omega^{-1}\big((\widehat k',0)\big)$ and $\omega^{-1}\big((\widehat k'',0)\big)$ are transverse to ${\Gamma}$ at $r'$ and $r''$ respectively (in ${\mathbb R}^3$). As a consequence, there exist constants $c_0,C_0>0$ such that \begin{equation} \forall r\in {\mathscr U}_{c_0\rho},\qquad \abs{\omega(r)\cdot \widehat k'}\geq C_0\rho,\quad \abs{\omega(r)\cdot \widehat k''}\geq C_0\rho. \end{equation} Hence the previous inequalities also hold for the vectors $\widehat k\in {\mathbb Z}\,\widehat k'\cup {\mathbb Z}\,\widehat k''$. 
Now if $$ \widehat k\in B^*\big( K(\delta)\big)\setminus ({\mathbb Z}\,\widehat k'\cup {\mathbb Z}\,\widehat k''), $$ $\widehat\omega(r)\cdot \widehat k\neq 0$ for $r\in[r',r'']_{\Gamma}$. As a consequence, reducing $c_0$ and $C_0$ if necessary, (\ref{eq:controlgeom}) holds true. \end{proof} \paraga {\bf Averaging and proof of Proposition~\ref{prop:globnorm}.} Recall that \begin{equation} f({\theta},r)=\sum_{\widehat k\in{\mathbb Z}^2}\phi_{\widehat k}({\theta}_3,r)e^{2i\pi\,\widehat k\cdot \widehat{\theta}}\quad \textrm{with}\quad \phi_{\widehat k}({\theta}_3,r)=\sum_{k_3\in{\mathbb Z}}[f]_{(\widehat k,k_3)}(r)e^{2i\pi\,k_3\cdot {\theta}_3} \end{equation} and \begin{equation} f_{> K}({\theta},r)=\sum_{\widehat k\in{\mathbb Z}^2,\norm{\widehat k}> K}\phi_{\widehat k}({\theta}_3,r)\,e^{2i\pi\, k_3\cdot {\theta}_3}. \end{equation} We use the classical Lie transform method to produce a diffeomorphism which cancels the harmonics $\phi_{\widehat k}$ for $1\leq \norm{\widehat k}\leq K:=K(\delta)$. \vskip2mm $\bullet$ We first solve the homological equation \begin{equation}\label{eq:homol} \widehat\omega(r)\cdot\partial_{\widehat{\theta}} S ({\theta},r)=f({\theta},r)-\phi_0({\theta}_3,r)-f_{> K}({\theta},r). \end{equation} Up to constants, the solution of~(\ref{eq:homol}) reads \begin{equation}\label{eq:genfonct} S({\theta},r)=\sum_{\widehat k\in{\mathbb Z}^2\setminus\{0\},\norm{\widehat k}\leq K} \frac{\phi_{\widehat k}({\theta}_3,r)}{2i\pi\,\widehat k\cdot\widehat \omega(r)}e^{2i\pi\,\widehat k\cdot\widehat {\theta}}. \end{equation} By Lemma~\ref{lem:estdist}, it is therefore well-defined an analytic in the domain ${\mathscr W}_{c_0\rho}$, provided that $c_0>0$ is small enough. Moreover, by direct computation for $i,j$ in ${\mathbb N}^3$ and $0\leq \abs{i},\abs{j}\leq \ell$ : \begin{equation}\label{eq:estimS} \norm{\partial ^i_{\theta}\partial ^j_rS}_{C^0({\mathscr W}_{c\rho})}\leq \overline c(\ell)\rho^{-(1+\abs{j})} \end{equation} for a constant $\overline c(\ell)>0$. \vskip2mm $\bullet$ We now consider the time-one diffeomorphism $\Phi_{\varepsilon}:=\Phi^{{\varepsilon} S}$ of the Hamiltonian flow generated by the function ${\varepsilon} S$, defined on the set ${\mathscr W}_{c\rho}$ with $c<c_0$. The Taylor expansion at order 2 of the transformed Hamiltonian $H_{\varepsilon}$ reads \begin{equation} H_{\varepsilon}\circ \Phi^{{\varepsilon} S}({\theta},r)=H({\theta},r)+{\varepsilon}\{H,S\}({\theta},r)+ {\varepsilon} ^2\int_0^1(1-{\sigma})\big\{\{H,S\},S\big\}\big(\Phi^{{\sigma}\,{\varepsilon} S}({\theta},r)\big)\,d{\sigma}, \end{equation} with Poisson bracket $\{u,v\}=\partial_{\theta} u\partial_r v-\partial_r u\partial_{\theta} v$. The new Hamiltonian reads $$ H\circ \Phi^{{\varepsilon} S}({\theta},r)=h(r)+{\varepsilon} V({\theta}_3,r)+{\varepsilon} W_0({\theta},r)+{\varepsilon} W_1({\theta},r)+{\varepsilon}^2 W_2({\theta},r), $$ where \begin{equation}\label{eq:expform} \begin{array}{lll} V({\theta}_3,r)&\!\!=&\!\!\displaystyle\phi_0({\theta}_3,r)\,=\,\int_{{\mathbb T}^2} f({\theta},r)\,d{\theta}_1d{\theta}_2,\\[5pt] W_0({\theta},r)&\!\!=&\!\!\displaystyle f_{> K}({\theta},r),\\[5pt] W_1({\theta},r)&\!\!=&\omega_3(r)\partial_{{\theta}_3}S({\theta},r),\\ W_2({\theta},r)&\!\!=&\!\!\displaystyle \{f,S\}({\theta},r)+\int_0^1(1-{\sigma})\big\{\{H,S\},S\big\}\big(\Phi^{{\sigma}\,{\varepsilon} S}({\theta},r)\big)\,d{\sigma},\\ \end{array} \end{equation} which proves (\ref{eq:normform}), together with the estimate on $W_0$ in (\ref{eq:estimates}) by Lemma~\ref{lem:choseK}. 
\vskip2mm $\bullet$ It remains to estimate the size of the various functions. To prove (\ref{eq:approxphi}), we use the same method as in \cite{Bou10}, based on (\cite{DH09}, Lemma 3.15). We introduce the weighted norm on ${\mathbb R}^3\times{\mathbb R}^3=T_{({\theta},r)}{\mathbb A}^3$: $$ \abs{(u_{\theta},u_r)}=\mathop{\rm Max\,}\limits\big(\rho\norm{u_{\theta}},\norm{u_r}\big) $$ for which, by (\ref{eq:estimS}): $$ \abs{X^{{\varepsilon} S}}_{C^0({\mathscr W}_{c_0\rho})}\leq \overline c(1){\varepsilon}. $$ Therefore, provided that ${\varepsilon}\rho^{-1}$ is small enough to ensure that $\Phi^{{\varepsilon} S}({\mathscr W}_{c\rho})\subset {\mathscr W}_{c_0\rho}$, there exists $c_\Phi>0$ such that $$ \norm{\Phi_{\varepsilon}^{{\theta}}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-2},\qquad \norm{\Phi_{\varepsilon}^{r}-{\rm Id}}_{C^0({\mathscr W}_{c\rho})}\leq c_\Phi\,{\varepsilon}\,\rho^{-1}, $$ which proves (\ref{eq:approxphi}). Finally, for the same reason and provided that ${\varepsilon}\rho^{-4}$ is small enough $$ \norm{\Phi^{{\sigma}\,{\varepsilon} S}}_{C^2({\mathscr W}_{c\rho})}\leq \norm{{\rm Id}}_{C^2({\mathscr W}_{c\rho})}+\norm{\Phi^{{\sigma}\,{\varepsilon} S}-{\rm Id}}_{C^2({\mathscr W}_{c\rho})} \leq 2, $$ Note also that $$ \norm{\{f,S\}}_{C^2({\mathscr W}_\rho)}\leq c_*\,\rho^{-4},\qquad \norm{\big\{\{H,S\},S\big\}}_{C^2({\mathscr W}_\rho)}\leq c_{**}\,\rho^{-6}. $$ for some $c_*,c_{**}>0$. Therefore, using the Faa di Bruno formula as in \cite{Bou10} to estimate the second term of $W_2$, one immediately gets the last estimate in (\ref{eq:estimates}). \section{Normal forms over ${\varepsilon}$--dependent domains}\label{app:normformepsdep} As usual ${\mathbb T}^n={\mathbb R}^n/{\mathbb Z}^n$ and ${\mathbb A}^n=T^*{\mathbb T}^n$. In this section we will exceptionally work with Hamiltonian systems on ${\mathbb A}^n$, $n\geq2$, since our result is in fact easier to state in its full generality. Moreover, we will no longer assume any convexity or superlinearity condition for the unperturbed part $h$. We will construct normal forms for perturbed Hamiltonians $H_{\varepsilon}=h+{\varepsilon} f$ of class $C^\kappa$ on ${\mathbb A}^n$, in ${\varepsilon}$--dependent neighborhoods of partially resonant and partially Diophantine actions. \subsection{Setting and main result} \setcounter{paraga}{0} For $2\leq p\leq \infty$, the $L^p$ norm on ${\mathbb R}^n$ or ${\mathbb C}^n$ will be denoted by $\norm{\cdot}_p$, while we will write $\abs{\,\cdot\,}$ when $p=1$. \paraga Given $\tau>0$ and a submodule ${\mathcal M}$ of ${\mathbb Z}^n$ of rank $m\geq0$, we say that a vector $\omega\in{\mathbb Z}^n$ is {\em ${\mathcal M}$--resonant and $\tau$--Diophantine} if $\omega^\bot\cap{\mathbb Z}^n={\mathcal M}$ and for any submodule ${\mathcal M}'$ of ${\mathbb Z}^n$ such that ${\mathcal M}\oplus{\mathcal M}'={\mathbb Z}^n$, there exists a constant $\gamma>0$ (depending on ${\mathcal M}'$) such that \begin{equation}\label{eq:inegdioph} \forall k\in {\mathcal M}'\setminus\{0\}, \qquad \abs{\omega\cdot k}\geq \frac{\gamma}{\abs{k}^\tau}. \end{equation} Clearly, (\ref{eq:inegdioph}) is satisfied for any complementary submodule ${\mathcal M}'$ if and only if it is satisfied for a single one. We say that $\omega$ is {\em $m$--resonant and $\tau$--Diophantine} when there exists a rank $m$ submodule ${\mathcal M}$ such that $\omega$ is ${\mathcal M}$--resonant and $\tau$--Diophantine. 
The set of $m$--resonant and $\tau$--Diophantine vectors has full measure as soon as $\tau>n-m-1$ (and is residual when $\tau=n-m-1$). When $m=0$ one recovers the usual Diophantine case, and we will assume $m\geq1$ in the following. The case $m=n-1$ is particular since (\ref{eq:inegdioph}) is trivially satisfied for any nonzero $(n-1)$--resonant vector (for a suitable $\gamma$) as soon as $\tau\geq0$. In the following we will not make an explicit distinction between the case $m=n-1$ and the case $1\leq m\leq n-2$, even thought the proofs are slightly different. \paraga Recall that given a submodule ${\mathcal M}$ of ${\mathbb Z}^n$ of rank $m$, there exists a ${\mathbb Z}$--basis of ${\mathbb Z}^n$ whose last $m$ vectors form a ${\mathbb Z}$--basis of ${\mathcal M}$. Given the matrix $P$ in ${\bf Gl}_n({\mathbb Z})$ whose $i^{th}$-column is formed by the components of the $i^{th}$-vector of this basis, one defines a symplectic linear coordinate change in ${\mathbb A}^n$ by setting \begin{equation}\label{eq:adcoord2} {\theta}=\,^tP^{-1} \widetilde{\theta}\ \ [{\rm mod}\ {\mathbb Z}^n],\qquad r=P \,\widetilde r. \end{equation} \paraga Let $h$ be an integrable Hamiltonian on ${\mathbb R}^n$ fix $r^0$ such that $\big((\nabla h)(r^0)\big)^\bot\cap{\mathbb Z}^n={\mathcal M}$. The change (\ref{eq:adcoord2}) transforms $h$ into a new Hamiltonian $\widetilde h$ such that the last $m$ coordinates of the frequency vector $\nabla\widetilde h(\widetilde r^0)$ vanish, while the first $n-m$ ones are nonresonant. Such coordinates $(\widetilde{\theta},\widetilde r)$ will be said {\em adapted to $\widetilde r^0$}. We say that $r^0$ is $m$--resonant and $\tau$--Diophantine for $h$ when its associated frequency vector $\nabla h(r^0)$ is. One easily checks that this is the case if and only if there exists adapted coordinates of the form (\ref{eq:adcoord2}), relatively to which the frequency vector satisfies $$ (\widehat\omega,0)\in{\mathbb R}^{n-m}\times{\mathbb R}^m $$ where the vector $\widehat\omega$ is $\tau$--Diophantine in the usual sense. Once such adapted coordinates are chosen, we accordingly split all variables $x$ into $(\widehat x,\overline x)$, where $\widehat x$ stands for the first $n-m$ components of $x$ and $\overline x$ stands for the last $m$ ones. \paraga We can now state our result. We fix $n\geq 3$ and $1\leq m \leq n-1$. We define the $C^p$ norm of a function on a fixed domain as the upper bound of the partial derivatives of order $\leq k$ on the domain. \begin{prop}\label{prop:normal2} Consider an unperturbed Hamiltonian $h$ of class $C^\kappa$ on ${\mathbb R}^n$, fix a perturbation $f$ in the unit ball of $C^\kappa({\mathbb A}^n)$ and set as usual $H_{\varepsilon}=h+{\varepsilon} f$. Fix two integers $p,\ell\geq 2$ and two constants $d>0$ and $\delta<1$ with $1-\delta>d$. Fix an $m$--resonant and $\tau$--Diophantine action $r^0$ for $h$ and assume the coordinates $({\theta},r)$ to be adapted to $r^0$. Set $$ [f](\overline{\theta},r)=\int_{{\mathbb T}^{n-m}}f\big((\widehat{\theta},\overline{\theta}),r\big)\,d\widehat{\theta}. 
$$ Then, if $\kappa$ is large enough, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$, there exists an analytic symplectic embedding $$ \Phi_{\varepsilon}: {\mathbb T}^n\times B(r^0,{\varepsilon}^d)\to {\mathbb T}^n\times B(r^0,2{\varepsilon}^d) $$ such that $$ H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)=h(r)+ g_{\varepsilon}(\overline {\theta},r)+R_{\varepsilon}({\theta},r), $$ where $g_{\varepsilon}$ and $R_{\varepsilon}$ are $C^p$ functions such that \begin{equation}\label{eq:estimnormform} \norm{g_{\varepsilon}-{\varepsilon}[f]}_{C^p\big( {\mathbb T}^{n-m}\times B(r^0,{\varepsilon}^d)\big)}\leq {\varepsilon}^{2-\delta},\qquad \norm{R_{\varepsilon}}_{C^p\big( {\mathbb T}^n\times B(r^0,{\varepsilon}^d)\big)}\leq {\varepsilon}^\ell. \end{equation} Moreover, $\Phi_{\varepsilon}$ is close to the identity, in the sense that \begin{equation}\label{eq:estimphi} \norm{\Phi_{\varepsilon}-{\rm Id}}_{C^p\big( {\mathbb T}^n\times B(r^0,{\varepsilon}^d)\big)}\leq {\varepsilon}^{1-\delta}. \end{equation} \end{prop} \subsection{Proof of Proposition \ref{prop:normal2}} Proposition \ref{prop:normal2} will be an easy consequence of the resonant normal forms for analytic systems derived in \cite{Po93}, together with classical analytic smoothing results for which we refer for instance to \cite{Ze76}. Another and more direct technique was introduced in \cite{Bou10} to prove Nekoroshev-type results in the finitely differentiable case. For the sake of simplicity we adopt here the convention of \cite{Po93} and set ${\mathbb T}^n={\mathbb R}^n/(2\pi{\mathbb Z}^n)$, one immediately recovers our usual setting by a linear change of variables which will not affect the estimates in Proposition \ref{prop:normal2}. \subsubsection{P\"oschel's normal form} \paraga Given a subset $D\subset {\mathbb R}^n$, for any function $u: {\mathbb T}^n\times D\to {\mathbb C}$ such that $u(\cdot,r)\in L^1({\mathbb T}^n)$ for $r\in D$, we write $$ [u]_k(r)=\int_{{\mathbb T}^n}u({\theta},r)e^{ik\cdot {\theta}}\,d{\theta} $$ for the Fourier coefficient of order $k\in{\mathbb Z}^n$. \paraga Given ${\sigma}>0$, $\rho>0$, we set $$ U_{\sigma}{\mathbb T}^n=\{{\theta}\in{\mathbb C}^n\mid \abs{{\rm Im\,} {\theta}} <{\sigma}\},\qquad V_\rho D=\{r\in{\mathbb C}^n\mid {\rm dist\,} (r,D) < \rho\}, $$ where ${\rm dist\,}$ is the metric associated with $\norm{\cdot}_2$. As in \cite{Po93}, for $u$ analytic in $U_{{\sigma}'}{\mathbb T}^n\times V_\rho D$ with ${\sigma}'>{\sigma}$, with Fourier expansion $$ u({\theta},r)=\sum_{k\in{\mathbb Z}^n} [u]_k(r)\,e^{ik\cdot{\theta}} $$ we set $$ \norm{u}_{D,{\sigma},\rho}=\mathop{\rm Sup\,}\limits_{r\in V_\rho D}\sum_{k\in{\mathbb Z}^n}\abs{u_k(r)}\,e^{\abs{k}{\sigma}}<+\infty. $$ One easily gets the following inequalities \begin{equation}\label{eq:inegnorm} \norm{u}_{C^0(U_{{\sigma}}{\mathbb T}^n\times V_\rho D)}\leq \norm{u}_{D,{\sigma},\rho}\leq ({\rm coth}^na) \norm{u}_{C^0(U_{{\sigma}+a}{\mathbb T}^n\times V_\rho D)}, \end{equation} for $0< a <{\sigma}'-{\sigma}$. \paraga In this section we consider a nearly integrable Hamiltonian of the form $$ {\mathsf H}_{\varepsilon}({\theta},r)={\mathsf h}(r)+{\mathsf f}_{\varepsilon}({\theta},r) $$ where ${\mathsf h}$ and ${\mathsf f}_{\varepsilon}$ are analytic on the complex domain $U_{{\sigma}_0}{\mathbb T}^n\times V_{\rho_0} P$, where ${\sigma}_0>0$, $\rho_0>0$ are fixed and where $P$ is some domain in ${\mathbb R}^n$. We denote by $\varpi$ the frequency map associated with ${\mathsf h}$. 
\paraga Let $\alpha$ be a (small) constant and $K$ be a (large) constant, which will eventually depend on the parameter ${\varepsilon}$. Fix a submodule ${\mathcal M}$ of rank $m$ of ${\mathbb Z}^n$. Following \cite{N77}, we say that a domain $D^*$ in the frequency space ${\mathbb R}^n$ is {\em $(\alpha,K)$--nonresonant modulo ${\mathcal M}$} when for all $\omega\in D^*$, $$ \abs{\omega\cdot k}\geq\alpha\ \textrm{for\ all}\ k\in{\mathbb Z}^n\setminus{\mathcal M}\ \textrm{such\ that}\ \abs{k}\leq K. $$ We then say that a domain $D$ in the action space is $(\alpha,K)$--nonresonant modulo ${\mathcal M}$ for the unperturbed Hamiltonian ${\mathsf h}$ when $\varpi(D)$ is. \paraga We assume now that ${\mathcal M}=\{0\}\times{\mathbb Z}^m$ and we use the corresponding decomposition $x=(\widehat x,\overline x)$ for the variables. The main ingredient of our proof is the following result by P\"oschel. \vskip2mm \noindent{\bf Theorem \cite{Po93}.} {\em Let $D\subset P$ be a domain which is $(\alpha,K)$--nonresonant modulo ${\mathcal M}$ for~${\mathsf h}$. Let $$ \mu({\varepsilon}):=\norm{f_{\varepsilon}}_{D,{\sigma}_0,\rho_0}. $$ Let $$ Z_0(\overline{\theta},r) =\!\! \sum_{\overline k\in{\mathbb Z}^m,\norm{\overline k}\leq K} [f_{\varepsilon}]_{(0,\overline k)}(r)\,e^{2i\pi \overline k\cdot\overline {\theta}}. $$ Then there are positive constants $c,c',c''$ depending only on the $C^2$ norm of ${\mathsf h}$ such that for any triple $(\mu,{\sigma},\rho)$ which satisfies \begin{equation}\label{const1} 0\leq \mu\leq c \,\frac{\alpha}{K}\, \rho,\qquad \rho\leq {\rm Min\,}\big(c'\,\frac{\alpha}{K},\rho_0\big),\qquad \frac{6}{K}\leq {\sigma}\leq{\sigma}_0, \end{equation} then, when $\mu({\varepsilon})\leq\mu$, there exists a symplectic embedding $$ \Phi_{\varepsilon} : U_{\rho_*}{\mathbb T}^n\times V_{{\sigma}_*} D\to U_{\rho}{\mathbb T}^n\times V_{{\sigma}}D, $$ where $\rho_*=\rho/2$ and ${\sigma}_*={\sigma}/6$ such that $$ {\mathsf H}_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)={\mathsf h}(r)+Z_{\varepsilon}(\overline{\theta},r)+M_{\varepsilon}({\theta},r) $$ where \begin{equation}\label{maj1} \norm{Z_{\varepsilon}-Z_0}_{D,{\sigma}_*,\rho_*}\leq c''\frac{K}{\alpha\rho}\mu^2,\qquad \norm{M_{\varepsilon}}_{D,{\sigma}_*,\rho_*}\leq e^{- K{\sigma}/6}\mu. \end{equation} Moreover, the ${\theta}$--component $\Phi_{\varepsilon}^{\theta}$ and the $r$--component $\Phi_{\varepsilon}^r$ of $\Phi_{\varepsilon}$ are close to the identity, in the sense that \begin{equation}\label{maj2} \begin{array}{lll} &\displaystyle\phantom{\int^{\int}} \norm{\Phi_{\varepsilon}^{\theta}({\theta},r)-{\theta}}\leq c''\frac{K}{\alpha}\frac{{\sigma}}{\rho}\mu,\qquad \forall ({\theta},r)\in U_{{\sigma}_*}{\mathbb T}^n\times V_{\rho_*} D,\\ &\displaystyle\phantom{\int^{\int^\int}} \norm{\Phi_{\varepsilon}^r({\theta},r)-r}\leq c''\frac{K}{\alpha}\mu,\qquad \forall ({\theta},r)\in U_{{\sigma}_*}{\mathbb T}^n\times V_{\rho_*} D,\\ \end{array} \end{equation} (here $\norm{\cdot}$ stands for an arbitrary norm over ${\mathbb C}^n$ and the first equality is to be understood on the lift of $U_{{\sigma}_*}{\mathbb T}^n$). } \subsubsection{Proof of Proposition \ref{prop:normal2}} We keep the notation of Proposition \ref{prop:normal2}, in particular $h$ is a $C^\kappa$ Hamiltonian on ${\mathbb R}^n$, and we assume $\kappa\geq 2$, so that the frequency map $\omega=\nabla h$ is at least $C^1$. \paraga The proof will rely on the following easy result. 
\begin{lemma}\label{lem:nonres} Consider an $m$--resonant and $\tau$--Diophantine action $r^0$ for $h$ and assume the coordinates $({\theta},r)$ to be adapted to $r^0$. Therefore $\omega(r^0)=(\widehat \omega,0)$ with $ \widehat\omega\in{\mathbb R}^{n-m}$ such that \begin{itemize} \item $\widehat\omega\neq 0$ in the case $m=n-1$, \item $\displaystyle\abs{\widehat k\cdot \widehat\omega}\geq \frac{\gamma}{\gabs{\widehat k}^\tau},\quad \forall \widehat k\in{\mathbb Z}^{n-m}\setminus\{0\}$ for some $\gamma>0$ when $1\leq m\leq n-2$. \end{itemize} Let ${\mathcal M}:=\{0\}\times{\mathbb Z}^m$. Then the following properties hold true. \begin{itemize} \item If $m=n-1$, let $\alpha=\abs{\widehat\omega}/2$. Then there is a constant $\lambda>0$ such that for $K>0$ large enough, the ball $B(r^0,\lambda/K)$ is $(\alpha,K)$ nonresonant modulo ${\mathcal M}$. \item If $1\leq m\leq n-2$, let $\nu>1+\tau$. Then, for $K$ large enough, the ball $B(r^0,K^{-\nu})$ is $(\alpha,K)$ nonresonant modulo ${\mathcal M}$, with $$ \alpha =\frac{\gamma}{2}\,K^{-\tau}. $$ \end{itemize} \end{lemma} \begin{proof} Assume first that $m=n-1$. Then there is a $\lambda>0$ such that for $K$ large enough, $\norm{\omega(r)-\omega(r^0)}_\infty\leq \alpha/K$ when $\norm{r-r^0}\leq \lambda/K$ and the result easily follows from the inequality $$ \abs{\omega(r)\cdot k}=\gabs{\omega(r^0)\cdot k+\big(\omega(r)-\omega(r^0)\big)\cdot k}\geq \gabs{\widehat\omega\,\widehat k}-K \frac{\alpha}{K}\geq 2\alpha-\alpha=\alpha, $$ if $\norm{r-r^0}\leq\lambda/K$, $k=(\widehat k,\overline k)\notin{\mathcal M}$ (and so $\gabs{\widehat k}\geq 1$) and $\abs{k}\leq K$. \vskip2mm Assume now that $1\leq m\leq n-2$ and observe that for $k=(\widehat k,\overline k)\notin {\mathcal M}$ with $\abs{k}\leq K$, then $\widehat k\neq 0$ and $\gabs{\widehat k}\leq K$, so that $$ \abs{\omega(r^0)\cdot k}=\gabs{\widehat\omega\cdot \widehat k}\geq \frac{\gamma}{K^\tau}. $$ Moreover, there exists $C>0$ such that for $K$ large enough, $\norm{\omega(r)-\omega(r^0)}_\infty\leq C\, K^{-\nu}$ when $\norm{r-r^0}\leq K^{-\nu}$. Therefore, if $k\notin {\mathcal M}$ and $\norm{r-r^0}\leq K^{-\nu}$ $$ \abs{\omega(r)\cdot k}=\gabs{\omega(r^0)\cdot k+\big(\omega(r)-\omega(r^0)\big)\cdot k}\geq \frac{\gamma}{K^\tau}-C\, K^{-\nu} K $$ and the result easily follows since $\nu-1> \tau$. \end{proof} \paraga Let us now recall the following analytic smoothing result. \vskip2mm \noindent{\bf Theorem \cite{Ze76}.} {\em Let $\kappa$ be a fixed nonnegative integer, let $r^0\in{\mathbb R}^n$ and for $R>0$ set $A_R:={\mathbb T}^n\times \overline B^n(r^0,R)$. Fix $R>0$. Then there are constants $s_0>0, c_0>0$ such that for $0< s < s_0$, for any function $f\in C^\kappa(A_{2R},{\mathbb R})$, there exists a function $\ell_s(f)$, analytic in ${\mathscr B}_s=U_s{\mathbb T}^n\times V_s B^n(r^0,R)$, such that $\big(\ell_s(f)\big)(A_R)\subset {\mathbb R}$ and \begin{equation}\label{eq:smooth1} \norm{\ell_s(f)-f}_{C^p(A_R)}\leq c_0\, s^{\kappa-p}\norm{f}_{C^\kappa(A_R)},\qquad 0\leq p\leq\kappa; \end{equation} \begin{equation}\label{eq:smooth2} \gabs{\ell_s(f)}_{C^0({\mathscr B}_s)}\leq c_0\,\norm{f}_{C^\kappa(A_R)}. \end{equation} Moreover, the map $f\mapsto \ell_s(f)$ is linear and $\ell_s(f)({\theta},r)$ is independent of ${\theta}$ when $f({\theta},r)$ is.} \paraga We are given an $m$--resonant and $\tau$--Diophantine action $r^0$ for $h$. 
We will apply P\"oschel's theorem to the analytic Hamiltonian $$ {\mathsf H}_{\varepsilon}=\ell_{2{\sigma}({\varepsilon})}(H_{\varepsilon})=\ell_{2{\sigma}({\varepsilon})}(h)+{\varepsilon}\ell_{2{\sigma}({\varepsilon})}(f)={\mathsf h}_{\varepsilon}+{\mathsf f}_{\varepsilon}, $$ where the smoothing operator $\ell$ is defined relatively to the domain $A_1={\mathbb T}^n\times\overline B^n(r^0,1)$, and where ${\sigma}({\varepsilon})\to0$ when ${\varepsilon}\to0$ (see below the explicit form of ${\sigma}$). Note first that the setting is slightly different from that of P\"oschel, since the unperturbed Hamiltonian ${\mathsf h}_{\varepsilon}$ depends on ${\varepsilon}$. To control this dependence, we will chose $\kappa$ so that by (\ref{eq:smooth1}) the frequency vector $\nabla {\mathsf h}_{\varepsilon}$ is close enough to $\nabla h$, which will allow us to use Lemma~\ref{lem:nonres} to obtain nonresonant domains for ${\mathsf h}_{\varepsilon}$. Moreover, which is crucial, the $C^2$ norm of ${\mathsf h}_{\varepsilon}$ is bounded independently of ${\varepsilon}$ thanks to (\ref{eq:smooth1}), so that P\"oschel's theorem can be applied to ${\mathsf H}_{\varepsilon}$ with {\em uniform} constants $c,c',c''$. \paraga Our domain will have the following form: $$ U_{2{\sigma}({\varepsilon})}{\mathbb T}^n\times V_{2{\sigma}({\varepsilon})}B(r^0,{\varepsilon}^d) $$ where the exponent $d$ will be chosen below in order for $B(r^0,{\varepsilon}^d)$ to be $(\alpha({\varepsilon}),K({\varepsilon}))$--nonresonant modulo ${\mathcal M}$ for ${\mathsf h}_{\varepsilon}$. The regularity $\kappa$ will be chosen according to (\ref{eq:smooth1}), in order to satisfy a number of constraints. \paraga The main point is that, by (\ref{eq:inegnorm}), $$ \norm{{\mathsf f}_{\varepsilon}}_{B(r^0,{\varepsilon}^d),{\sigma}({\varepsilon}),{\sigma}({\varepsilon})} \leq {\varepsilon}\, ({\rm coth}^n{\sigma}({\varepsilon}))\abs{\ell_{2{\sigma}({\varepsilon})}(f)}_{C^0\big(U_{2{\sigma}({\varepsilon})}{\mathbb T}^n\times V_{2{\sigma}({\varepsilon})}B(r^0,{\varepsilon}^d)\big)}, $$ so that, by (\ref{eq:smooth2}), when ${\varepsilon}$ is small enough \begin{equation}\label{eq:estimmu} \norm{{\mathsf f}_{\varepsilon}}_{B(r^0,{\varepsilon}^d),{\sigma}({\varepsilon}),{\sigma}({\varepsilon})}\leq 2c_0\,{\varepsilon}\, \big({\sigma}({\varepsilon})\big)^{-n} \end{equation} since $\norm{f}_{C^\kappa(A_1)}\leq 1$. \paraga Forgetting first about the ${\varepsilon}$--dependence of the constants, let us describe our construction. By P\"oschel's theorem, there exists a symplectic embedding $$ \Phi_{\varepsilon} : U_{\rho_*}{\mathbb T}^n\times V_{{\sigma}_*} B(r^0,{\varepsilon}^d)\to U_{\rho}{\mathbb T}^n\times V_{{\sigma}}B(r^0,{\varepsilon}^d), $$ such that $$ {\mathsf H}_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)={\mathsf h}_{\varepsilon}(r)+Z_{\varepsilon}(\overline{\theta},r)+M_{\varepsilon}({\theta},r). 
$$ As a consequence, for $({\theta},r)\in {\mathbb T}^n\times B(r^0,{\varepsilon}^d)$: $$ \begin{array}{lll} H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)&=&{\mathsf H}_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)+(H_{\varepsilon}-{\mathsf H}_{\varepsilon})\circ\Phi_{\varepsilon}({\theta},r)\\ &=&{\mathsf h}_{\varepsilon}(r)+Z_{\varepsilon}(\overline{\theta},r)+\Big[M_{\varepsilon}({\theta},r)+(H_{\varepsilon}-{\mathsf H}_{\varepsilon})\circ\Phi_{\varepsilon}({\theta},r)\Big]\\ &=& h(r)+Z_{\varepsilon}(\overline{\theta},r)+\Big[M_{\varepsilon}({\theta},r)+({\mathsf h}_{\varepsilon}-h)(r)+(H_{\varepsilon}-{\mathsf H}_{\varepsilon})\circ\Phi_{\varepsilon}({\theta},r)\Big].\\ \end{array} $$ To get our final result we will therefore set $$ g_{\varepsilon}=Z_{\varepsilon},\qquad R_{\varepsilon}({\theta},r)=M_{\varepsilon}({\theta},r)+({\mathsf h}_{\varepsilon}-h)(r)+(H_{\varepsilon}-{\mathsf H}_{\varepsilon})\circ\Phi_{\varepsilon}({\theta},r), $$ and estimate the $C^p$ norms of these functions. This will be an easy consequence of the Cauchy inequalities once the size of the domains are properly determined. \paraga {\bf The case $m=n-1$.} To make explicit the dependence of the domains and constants with respect to ${\varepsilon}$, let us fix three constants $a,b,c$ which satisfy the following inequalities \begin{equation}\label{eq:inegconst} 0<a<b<c,\qquad b<d \qquad 2na+b+(p+1)c<\delta, \end{equation} and choose the regularity $\kappa$ large enough so as to satisfy \begin{equation}\label{eq:inegka} \kappa> \mathop{\rm Max\,}\limits\Big(p+\frac{\ell}{a},p+\frac{(n+p)b+1}{a},2p+n+\frac{2}{b}\Big), \end{equation} where $\delta,d$ and $p,\ell$ were introduced in Proposition~\ref{prop:normal2}. \vskip1mm${\bullet}$ Let $\omega(r^0)=(\widehat\omega,0)$. We first fix the width $$ {\sigma}({\varepsilon})={\varepsilon}^a $$ of the smoothing process. We will apply P\"oschel's theorem to the Hamiltonian ${\mathsf H}_{\varepsilon}={\mathsf h}_{\varepsilon}+{\mathsf f}_{\varepsilon}$ on the domain $D:=B(r^0,{\varepsilon}^d)$, which is $(\alpha/2=\abs{\widehat\omega}/4,K({\varepsilon}))$ nonresonant modulo ${\mathcal M}$ for ${\mathsf h}_{\varepsilon}$ with $$ K({\varepsilon})={\varepsilon}^{-b}, $$ for ${\varepsilon}$ small enough. To see this, observe that by (\ref{eq:smooth1}) applied to each component of $\varpi_{\varepsilon}=\nabla h_{\varepsilon}$ and $\omega=\nabla h$: $$ \norm{\varpi_{\varepsilon}-\omega}_{C^0}\leq c_0\big({\sigma}({\varepsilon})\big)^{\kappa-1}\norm{h}_{C^\kappa(B(r^0,1))}=C_0{\varepsilon}^{(\kappa-1)a} $$ for ${\varepsilon}$ small enough. So, for $r\in B(r^0,{\varepsilon}^d)$ and $\abs{k}\leq K({\varepsilon})$, since $b<d$, by Lemma~\ref{lem:nonres} $$ \abs{\varpi_{\varepsilon}(r)\cdot k}\geq\abs{\omega(r)\cdot k}-\abs{\big(\varpi_{\varepsilon}(r)-\omega(r)\big)\cdot k}\geq \alpha-C_0{\varepsilon}^{(\kappa-1)a-b}, $$ and the claim immediately follows for ${\varepsilon}$ small enough, since $(\kappa-1)a-b>0$ by (\ref{eq:inegka}). 
\vskip1mm${\bullet}$ With our choice of ${\sigma}({\varepsilon})$, equation (\ref{eq:estimmu}) yields $$ \mu({\varepsilon}):=\norm{{\mathsf f}_{\varepsilon}}_{B(r^0,{\varepsilon}^d),{\sigma}({\varepsilon}),{\sigma}({\varepsilon})}\leq 2c_0\,{\varepsilon}^{1-na} $$ We finally set $$ \rho({\varepsilon})={\varepsilon}^c $$ so that we can apply P\"oschel's theorem with $\rho_0={\sigma}({\varepsilon})$, ${\sigma}_0={\sigma}({\varepsilon})$ and the triple $$ (\mu,{\sigma},\rho)=\big(\mu({\varepsilon}),{\sigma}({\varepsilon}),\rho({\varepsilon})\big), $$ since the three constraints of equation~(\ref{const1}) are satisfied for ${\varepsilon}$ small enough, by equation~(\ref{eq:inegconst}). \vskip1mm${\bullet}$ Then by (\ref{maj1}) and the Cauchy inequalities, taking the inequality $\rho<{\sigma}$ into account, one gets for a suitable $C>0$ and for ${\varepsilon}$ small enough \begin{equation}\label{ineg1} \norm{Z_{\varepsilon}-Z_0}_{C^p}\leq c'' \frac{K}{\alpha\rho}\mu^2\frac{1}{\rho^p}\leq C {\varepsilon}^{2-2na-b-(p+1)c}, \end{equation} \begin{equation}\label{ineg2} \phantom{\int^\int}\norm{M_{\varepsilon}}_{C^p}\leq e^{-K{\sigma}/6}\frac{1}{\rho^p}\mu\leq {\frac{1}{2}}{\varepsilon}^\ell,\phantom{\int^\int} \end{equation} and \begin{equation}\label{ineg3} \norm{\Phi_{\varepsilon}^{\theta}-{\rm Id}}_{C^p},\ \norm{\Phi_{\varepsilon}^{\theta}-{\rm Id}}_{C^p}\leq c''\frac{K}{\alpha}\frac{{\sigma}}{\rho}\frac{1}{\rho^p}\mu \leq C{\varepsilon}^{1-(n-1)a-b-(p+1)c}. \end{equation} \vskip1mm${\bullet}$ The proof of (\ref{eq:estimphi}) is now immediate from (\ref{ineg3}) and (\ref{eq:inegconst}). \vskip1mm${\bullet}$ To prove the second inequality of (\ref{eq:estimnormform}) note that on the one hand $$ \norm{h-{\mathsf h}_{\varepsilon}}_{C^p}\leq c_0\, (2{\sigma})^{\kappa-p}\norm{h}_{C^\kappa},\quad \norm{H_{\varepsilon}-{\mathsf H}_{\varepsilon}}_{C^\kappa}\leq c_0\, (2{\sigma})^{\kappa-p}\norm{H_{\varepsilon}}_{C^p}\leq c_0\, (2{\sigma})^{\kappa-p}\big(\norm{h}_{C^\kappa}+1\big) $$ for ${\varepsilon}$ small enough, which yields by the Faa-di-Bruno formula, for a suitable $C>0$ $$ \norm{(h-{\mathsf h}_{\varepsilon})+(H_{\varepsilon}-{\mathsf H}_{\varepsilon})\circ\Phi_{\varepsilon}}_{C^p}\leq C {\sigma}^{\kappa-p}\leq {\varepsilon}^{a(\kappa-p)}\leq {\frac{1}{2}}{\varepsilon}^\ell, $$ by the first inequality of (\ref{eq:inegka}). The conclusion then readily follows from (\ref{ineg2}). \vskip2mm${\bullet}$ Finally, to prove the first inequality of (\ref{eq:estimnormform}), note that $$ g_{\varepsilon}-{\varepsilon}[f]=(Z_{\varepsilon}-Z_0)+(Z_0-{\varepsilon}[f]). $$ The first term is conveniently controlled by (\ref{ineg1}): \begin{equation}\label{eq:intermed} \norm{Z_{\varepsilon}-Z_0}_{C^p}\leq {\frac{1}{2}} {\varepsilon}^{2-\delta}, \end{equation} for ${\varepsilon}$ small enough, thanks to (\ref{eq:inegconst}). 
Moreover $$ Z_0(\overline{\theta},r)-{\varepsilon}[f](\overline{\theta},r)=\Delta_1(\overline{\theta},r)-\Delta_2(\overline{\theta},r) $$ with $$ \Delta_1(\overline{\theta},r)=\!\!\!\!\sum_{\overline k\in{\mathbb Z}^m,\abs{\overline k}\leq K}\!\!\!\!\big([{\mathsf f}_{\varepsilon}]_{(0,\overline k)}(r)-{\varepsilon}[f]_{(0,\overline k)}(r)\big)e^{2i\pi\overline k\cdot\overline {\theta}} $$ $$ \Delta_2(\overline{\theta},r)\!\!\!\!\sum_{\overline k\in{\mathbb Z}^m,\abs{\overline k}>K}\!\!\!\!{\varepsilon}[f]_{(0,\overline k)}(r)e^{2i\pi\overline k\cdot\overline {\theta}}.\phantom{\int^\int}\\ $$ Now, $$ \norm{{\mathsf f}_{\varepsilon}-{\varepsilon} f}_{C^p}\leq c_0\,(2{\sigma})^{\kappa-p}\,{\varepsilon} $$ since $f$ has unit norm in $C^\kappa({\mathbb A}^n)$. Therefore $$ \norm{\Delta_1}_{C^p}\leq C\,K^{n+p} \norm{{\mathsf f}_{\varepsilon}-{\varepsilon} f}_{C^p}\leq C' {\varepsilon}^{1+(\kappa-p)a-(n+p)b}. $$ Then, by usual integration by parts for Fourier coefficients, one gets: $$ \norm{\Delta_2}_{C^p}\leq CK^p\sum_{k\in{\mathbb Z}^n,\abs{k}>K} \frac{1}{\abs{k}^{\kappa-p}}\leq \frac{C'}{K^{\kappa-2p-n}}=C'\,{\varepsilon}^{b(\kappa-2p-n)}. $$ From these two estimates one finally deduces the inequality $$ \norm{Z_0-{\varepsilon}[f]}_{C^p}\leq {\varepsilon}^{2-\delta} $$ from (\ref{eq:intermed}) and the last two inequalities of (\ref{eq:inegka}). Observe finally that $\Phi_{\varepsilon}\big({\mathbb T}^n\times B(r^0,{\varepsilon}^d)\big)\subset {\mathbb T}^n\times B(r^0,2{\varepsilon}^d)$ for ${\varepsilon}$ small enough, thanks to (\ref{eq:estimphi}) since $d<1-\delta$, which concludes the proof. \paraga {\bf The case $1\leq m\leq n-2$.} The proof is very similar to the previous one, up to minor changes for the definition of the nonresonant domain. With the notation of Lemma~\ref{lem:nonres}, we now require the following inequalities for our constants (chosing $\nu=1+2\tau$): $$ 0<a<b<c,\qquad (1+2\tau b)<d,\qquad 2na +(1+\tau) b+(p+1)c<\delta, $$ and we still assume that $\kappa$ satisfies (\ref{eq:inegka}). The proof then exactly follows the same lines as above. \section{The invariant curve theorem}\label{app:invcurve} \setcounter{paraga}{0} For the sake of completeness we reproduce here the statement and proofs from \cite{LM}. Let $J^*$ be an open interval of ${\mathbb R}$. We consider a map ${\mathscr P}_{\varepsilon}:{\mathbb T}\times J^*\to {\mathbb A}$ of class $C^5$, of the form \begin{equation} \label{eq:Poincare} {\mathscr P}_\varepsilon(\varphi,\rho)=\bigl(\varphi+\varepsilon \varpi(\rho)+\Delta^\varphi_\varepsilon(\varphi,\rho),\rho+\Delta_\varepsilon^\rho(\varphi,\rho)\bigr), \end{equation} with $\norm{\varpi}_{C^5}<+\infty$, and we moreover assume \begin{equation}\label{eq:constraints1} \varpi'(\rho)\geq {\sigma}>0,\quad \norm{\Delta_\varepsilon^\varphi}_{C^5}\leq\varepsilon^7,\quad \norm{\Delta_\varepsilon^\rho}_{C^5}\leq\varepsilon^7. \end{equation} \begin{prop}\label{prop:KAM} Let $J\subset J^*$ be a nonempty open interval. Then there exists ${\varepsilon}_0>0$, depending only on the length of $J$, ${\sigma}$ and $\norm{\varpi}_{C^5}$, such that for $0\leq{\varepsilon}\leq{\varepsilon}_0$, the map ${\mathscr P}_{\varepsilon}$ admits an essential invariant circle contained in ${\mathbb T}\times J$. \end{prop} The proof will be based on the translated curve theorem of Herman (see VII.11.3 and VII.11.11.A.1 in~\cite{Herman}), which we first recall in a form adapted to our setting. Given $\delta>0$, we set ${\mathbb A}_\delta={\mathbb T}\times [-\delta,\delta]$. 
A map $F:{\mathbb A}_\delta\to{\mathbb A}$ is said to satisfy the {\em intersection property} provided that for each essential curve ${\mathscr C}\subset {\mathbb A}_\delta$, $F({\mathscr C})\cap {\mathscr C}\neq \varnothing$. \begin{thm} [Herman] Fix $\delta>0$. Fix $\gamma\in{\mathbb R}$ such that there exists $\Gamma>0$ satisfying \begin{equation}\label{eq:Markoff} \abs{\gamma-\frac{m}{n}}>\frac{{\Gamma}}{n^2},\qquad \forall n\geq 1,\ \forall m\in{\mathbb Z}. \end{equation} Assume moreover ${\Gamma}\leq 10\,\delta$. Consider an embedding $F:{\mathbb A}_\delta\to{\mathbb T}\times{\mathbb R}$ of the form \begin{equation}\label{eq:formF} F(\varphi,r)=\big(\varphi+\gamma+r,\ r+{\zeta}(\varphi,r)\big), \end{equation} with ${\zeta}\in C^4({\mathbb A}_\delta)$, which satisfies the intersection property and \[ \mathop{\rm Max\,}\limits_{1\leq i+j\leq 4}\norm{\partial_r^i\partial_\varphi^j {\zeta} }_{C^0({\mathbb A}_\delta)}\leq \Gamma^2. \] Then there is a continuous map $\psi : {\mathbb T}\to [-\delta,\delta]$ and a diffeomorphism $f\in {\mathrm{Diff}}^1({\mathbb T})$ with rotation number $\gamma$ such that $F(\varphi,\psi(\varphi))=f\big(\varphi,\psi(f(\varphi))\big)$ and \[ \norm{\psi}_{C^0({\mathbb T})}\leq \Gamma^{-1}\mathop{\rm Max\,}\limits_{1\leq i+j\leq 4} \norm{\partial_r^i\partial_\varphi^j {\zeta} }_{C^0({\mathbb A}_\delta)}. \] \end{thm} A real number $\gamma$ satisfying Condition~\eqref{eq:Markoff} is said to be of constant type, with Markoff constant $\Gamma$. Note that we do {\em not} require that $\Gamma$ is the best possible constant. A more comprehensive exposition of the previous theorem (with better constants) is presented in~\cite{LMS}. We will also need the following result (see IV.3.5 in~\cite{Herman}). \begin{lemma}[Herman] \label{lem:Herman-constant-type}There exists a constant $\tau\in\,]0,1[$ such that for any $0<\eta<1/2$, any interval of ${\mathbb R}$ with length $\geq\eta$ contains infinitely many real numbers of constant type with Markoff constant at least $\tau\eta$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:KAM}] We will first conjugate ${\mathscr P}_\varepsilon$ to a map of the form~(\ref{eq:formF}). We set \[ \delta_\varepsilon(\varphi,\rho)=\frac{1}{\varepsilon}\Delta_\varepsilon^\rho(\varphi,\rho),\qquad \Phi_\varepsilon(\varphi,\rho)=\bigl(\varphi,\varpi(\rho)+\delta_\varepsilon(\varphi,\rho)\bigr),\qquad (\varphi,\rho)\in{\mathbb T}\times J. \] Let $\alpha,\varepsilon>0$ and $\rho_0\in J$ satisfy $[\rho_0-2\alpha,\rho_0+2\alpha]\subset J$ and \begin{equation}\label{eq:inv-eps} \varepsilon^{6}\leq{\sigma}/2. \end{equation} By (\ref{eq:inv-eps}), $\Phi_\varepsilon$ properly embeds ${\mathbb T}\times J$ into ${\mathbb A}$ and, setting $\varpi_0=\varpi(\rho_0)$: \begin{equation}\label{eq:incfond} \Phi^{-1}_\varepsilon\bigl({\mathbb T}\times [\varpi_0-\alpha{\sigma}/2,\varpi_0+\alpha{\sigma}/2]\bigr) \subset {\mathbb T}\times[\rho_0-\alpha,\rho_0+\alpha]. \end{equation} If moreover \begin{equation}\label{eq:p-eps} \varepsilon^7\leq\alpha \end{equation} then the estimates on ${\mathscr P}_\varepsilon$ and $\delta_\varepsilon$ show that \[ {\mathscr P}_\varepsilon\circ \Phi^{-1}_\varepsilon\bigl( {\mathbb T}\times [\varpi_0-\alpha{\sigma}/2,\varpi_0+\alpha{\sigma}/2]\bigr) \subset {\mathbb T}\times [\rho_0-2\alpha,\rho_0+2\alpha]\subset {\mathbb T}\times J. 
\] Therefore, assuming~\eqref{eq:inv-eps} and~\eqref{eq:p-eps}, the map $\widetilde{\mathscr P}_\varepsilon=\Phi_\varepsilon\circ{\mathscr P}_\varepsilon\circ \Phi^{-1}_\varepsilon$ is well defined over $ {\mathbb T}\times [\varpi_0-\alpha{\sigma}/2,\varpi_0+\alpha{\sigma}/2]$. We write $$ R(\varphi,\rho)=\varpi(\rho)+\delta_\varepsilon(\varphi,\rho),\qquad (\varphi,\rho)\in{\mathbb T}\times J, $$ for the second component of $\Phi_{\varepsilon}$. By straightforward computation: \begin{equation}\label{eq:mapjP} \widetilde{\mathscr P}_\varepsilon\big(\varphi,R(\varphi,\rho)\big)=\big(\varphi+\varepsilon R(\varphi,\rho), R'(\varphi,\rho)\big), \end{equation} where \begin{equation}\label{eq:remR'} R'(\varphi,\rho)=\varpi\big(\rho+\Delta_\varepsilon^\rho(\varphi,\rho)\big) +\delta_\varepsilon\bigl(\varphi+\varepsilon R(\varphi,\rho),\rho+\Delta_\varepsilon^\rho(\varphi,\rho)\bigr). \end{equation} Therefore, by (\ref{eq:mapjP}) and (\ref{eq:remR'}), after expanding $R'$, the map $\widetilde{\mathscr P}_\varepsilon$ takes the form: \begin{equation} \widetilde{\mathscr P}_\varepsilon(\varphi,R)=\bigl(\varphi+\varepsilon R,R+\Delta_\varepsilon^R(\varphi,R)\bigr), \qquad (\varphi,R)\in{\mathbb T}\times[\varpi_0-\alpha{\sigma}/2,\varpi_0+\alpha{\sigma}/2], \end{equation} with \begin{equation} \norm{\Delta_\varepsilon^R}_{C^4}\leq \nu \varepsilon^7 \end{equation} where the constant $\nu>0$ depends only on ${\sigma}$ and $\norm{\varpi}_{C^5}$. We finally fix \begin{equation}\label{eq:intI} R_0\in I:= [\varpi_0- \alpha{\sigma}/4, \varpi_0+ \alpha{\sigma}/4] \end{equation} and set \begin{equation} \gamma_{\varepsilon}=\varepsilon R_0, \qquad \phi_{R_0,{\varepsilon}}(\varphi,r)=(\varphi,R_0+\tfrac{1}{{\varepsilon}} r). \end{equation} The map $$ F_{R_0,\varepsilon}=\phi_{R_0,{\varepsilon}}^{-1}\circ \widetilde{\mathscr P}_\varepsilon\circ\phi_{R_0,{\varepsilon}} $$ is well defined over ${\mathbb A}_{{\alpha\varepsilon{\sigma}}/{4}}$ and takes the required form $$ F_{R_0,\varepsilon}(\varphi,r)=\big(\varphi+\gamma_{\varepsilon}+r,\ r+{\zeta}_{\varepsilon}(\varphi,r)\big), $$ with $ {\zeta}_{\varepsilon}(\varphi,r)=\varepsilon\Delta_\varepsilon^R\bigl(\varphi,R_0+\tfrac{1}{{\varepsilon}} r\bigr). $ In particular $$ \norm{{\zeta}_{\varepsilon}}_{C^4}\leq \nu \varepsilon^{4}. $$ We can now apply the translated curve theorem to $F_{R_0,\varepsilon}$ restricted to a suitable subdomain of ${\mathbb A}_{{\alpha\varepsilon{\sigma}}/{4}}$. We assume that $\varepsilon$ is small enough so that \begin{equation} \varepsilon\alpha{\sigma}<1. \end{equation} Thus Lemma~\ref{lem:Herman-constant-type} applied to the interval $I_{\varepsilon}={\varepsilon} I$ (where $I$ was introduced in (\ref{eq:intI})), with $\eta=\varepsilon\alpha{\sigma}/2$, shows that there exists $\gamma_{\varepsilon}=\varepsilon R_0\in I_{\varepsilon}$ of constant type, with Markoff constant $$ \Gamma_{\varepsilon}=\tau\varepsilon\alpha{\sigma}/2. $$ So $R_0\in I$ and the map $F_{\varepsilon,R_0}$ is well defined on ${\mathbb A}_{{\alpha\varepsilon{\sigma}}/{4}}$. We will apply the translated curve theorem to $F_{\varepsilon,R_0}$ on ${\mathbb A}_{\delta_{\varepsilon}}$, with $$ \delta_{\varepsilon}=\tau\varepsilon\alpha{\sigma}/20<{\alpha\varepsilon{\sigma}}/{4}, $$ so that $ \Gamma_{\varepsilon}=10\,\delta_{\varepsilon}. 
$ Thus, assuming \begin{equation} \label{eq:estimation-reste-herman} \nu{\varepsilon}^4\leq \frac{1}{4}\tau^2\alpha^2{\sigma}^2\varepsilon^2, \end{equation} there exists a continuous map $\psi : {\mathbb T}\to [-\delta_{\varepsilon},\delta_{\varepsilon}]$ whose graph $C$ is an invariant essential circle for $F_{\varepsilon,R_0}$. Since $C\subset {\mathbb A}_{{\alpha\varepsilon{\sigma}}/{4}}$ $$ \phi_{R_0,{\varepsilon}} (C)\subset {\mathbb T}\times [R_0-{\alpha{\sigma}}/{4},R_0+{\alpha{\sigma}}/{4}]\subset {\mathbb T}\times [\varpi_0-{\alpha{\sigma}}/{2},\varpi_0+{\alpha{\sigma}}/{2}]. $$ Therefore, by (\ref{eq:incfond}) $$ {\mathscr C}=\Phi^{-1}\circ\phi_{R_0,{\varepsilon}} (C)\subset {\mathbb T}\times[\rho_0-\alpha,\rho_0+\alpha] $$ is an essential invariant circle for ${\mathscr P}_{\varepsilon}$, contained in ${\mathbb T}\times J$, which exists as soon as $$ 0\leq{\varepsilon}\leq{\varepsilon}_0:=\mathop{\rm Min\,}\limits(\tfrac{1}{\alpha{\sigma}},\tfrac{\tau\alpha{\sigma}}{2\sqrt\nu}). $$ This concludes the proof. \end{proof} \section{Proof of the existence of homoclinic orbits}\label{sec:hom} For the sake of completeness, in this section we prove the following proposition. \begin{prop} Let $C$ be of the form (\ref{eq:classham}) and satisfy Conditions~$(D)$. Let $c\in{\bf H}_1({\mathbb T}^2,{\mathbb Z})$. Then there exists a sequence $(\gamma_n)_{n\in{{\mathbb N}^*}}$ of minimizing periodic solutions of $X^C$ {\em with positive energies}, whose projections on ${\mathbb T}^2$ belong to $c$ and whose orbits converge to a polyhomoclinic orbit for the hyperbolic fixed point. \end{prop} Since we need to make precise the convergence process for periodic orbits to the polyhomoclinic orbits, we will give an extensive, though not original, proof. Here we will closely follow the simple proofs in \cite{BK,Be00} and use the discrete setting, which immediately yields finite dimensional spaces and easy compactness results. Only at the very end we will have to adapt this approach to recover the convergence notion in the continuous setting. \subsection{The discrete setting} \setcounter{paraga}{0} Here we fix $C$ as in (\ref{eq:classham}) . We write $x$ for points in ${\mathbb R}^2$ and ${\theta}$ for points in ${\mathbb T}^2$. We denote by $\widehat C(r,x)={\tfrac{1}{2}} T(r)+\widehat U(x)$ the lift of $C$ to $T^*{\mathbb R}^2$ and by $\widehat L(x,v)={\tfrac{1}{2}} T_{\bullet}(v)-\widehat U(x)$ the associated Lagragian. We denote by ${\mathscr L}:T{\mathbb R}^2\to T^*{\mathbb R}^2$ the Legendre diffeomorphism associated with $\widehat L$ and $\widehat C$. \paraga Thanks to the particular form of $\widehat C$ (a rescaling in action yields a perturbation of the convex integrable Hamiltonian ${\tfrac{1}{2}} T(r)$) one easily proves that there exists a constant $\tau_0>0$ such that for $0<\tau\leq\tau_0$, $\Phi^{\tau \widehat C}$ admits a generating function $\widehat S_\tau$ on $({\mathbb R}^2)^2$. This function is characterized by the following equivalence \begin{equation}\label{eq:equiv} \Big[(x',r')=\Phi^{\tau \widehat C}(x,r)\Big]\Longleftrightarrow \Big[r=-\partial_x \widehat S_\tau(x,x')\ \textrm{and}\ r'=\partial_{x'} \widehat S_\tau(x,x')\Big]. \end{equation} for any pairs of elements $(x,r)$ and $(x',r')$ of $T^*{\mathbb R}^2$. The function $\widehat S_\tau$ is nothing but the action integral $ \widehat S_\tau(x,x')=\int_0^\tau \widehat L\big(\eta(t)\big)\,dt $, where $\eta$ is the pullback by ${\mathscr L}$ of the solution of $X^{\widehat C}$ with initial condition $(x,r)$. 
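For instance, in the model case where $\widehat U\equiv 0$ and $T(r)=\norm{r}_2^2$ (so that $T_{\bullet}(v)=\norm{v}_2^2$), the time-$\tau$ flow is simply $(x,r)\mapsto(x+\tau r,r)$ and the action integral gives $\widehat S_\tau(x,x')=\frac{1}{2\tau}\norm{x'-x}_2^2$; one then recovers the equivalence~(\ref{eq:equiv}) directly, since $-\partial_x \widehat S_\tau(x,x')=\partial_{x'}\widehat S_\tau(x,x')=\frac{1}{\tau}(x'-x)=r=r'$.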
In particular, $\widehat S_\tau$ satisfies the following periodicity property \begin{equation}\label{eq:periodS} \widehat S_\tau(x+m,x'+m)=\widehat S_\tau(x,x'),\qquad \forall m\in{\mathbb Z}^2,\quad \forall (x,x')\in ({\mathbb R}^2)^2. \end{equation} Moreover, one easily sees that \begin{equation}\label{eq:quadS} \widehat S_\tau (x,x')\sim\frac{1}{\tau}\,T_{\bullet}(x-x')\qquad\textrm{when}\qquad \norm{x-x'}_2\to+\infty \end{equation} uniformly with respect to $\norm{x-x'}_2$. Note finally that since the Hamiltonian flow is $C^1$, by transversality the action $\widehat S_\tau$ is $C^1$ in the variables $(x,x',\tau)$. \paraga There no longer exists a generating function in the usual sense on ${\mathbb T}^2$. However one can still introduce a generalized one, defined for $({\theta},{\theta}')\in({\mathbb T}^2)^2$ by $$ S_\tau({\theta},{\theta}')=\mathop{\rm Min\,}\limits\big\{\widehat S_\tau(x,x')\mid \pi(x)={\theta},\ \pi(x')={\theta}'\big\} $$ where $\pi$ stands for the projection ${\mathbb R}^2\to{\mathbb T}^2$. Note that, by (\ref{eq:periodS}) and (\ref{eq:quadS}), there exists $\rho>0$ such that for each pair $({\theta},{\theta}')\in({\mathbb T}^2)^2$, there exists a pair $(x,x')\in({\mathbb R}^2)^2$ with $\norm{x}\leq 1$, $\norm{x'}\leq \rho$ and $S_\tau({\theta},{\theta}')=\widehat S_\tau(x,x')$. We say that $(x,x')$ is a minimizing lift for $({\theta},{\theta}')$. The function $S_\tau$ is not differentiable in general, but it is easy to see that it is Lipschitzian for the natural product distance $d$ on ${\mathbb T}^2$. Indeed, consider two pairs $({\theta}_1,{\theta}'_1)$, $({\theta}_2,{\theta}'_2)$ on $({\mathbb T}^2)^2$, fix a minimizing lift $(x_1,x_1')$ for $({\theta}_1,{\theta}'_1)$ and choose the lifts $x_2,x'_2$ of ${\theta}_2,{\theta}'_2$ which are the closest ones to the points $x_1,x'_1$ (so $(x_2,x'_2)$ is not necessarily minimizing for $({\theta}_2,{\theta}'_2)$). Therefore, setting $K={\rm Lip\,}_{\overline B(0,2)\times \overline B(0,\rho+1)} \widehat S_\tau$, $$ \begin{array}{ll} S_\tau({\theta}_2,{\theta}'_2)-S_\tau({\theta}_1,{\theta}'_1)&\leq \widehat S_\tau(x_2,x'_2)-\widehat S_\tau(x_1,x'_1)\\ &\leq K\norm{(x_2,x'_2)-(x_1,x'_1)} = K d(({\theta}_2,{\theta}'_2),({\theta}_1,{\theta}'_1)). \end{array} $$ Interverting the pairs $({\theta}_1,{\theta}'_1)$ and $({\theta}_2,{\theta}'_2)$ then yields $$ \abs{S_\tau({\theta}_2,{\theta}'_2)-S_\tau({\theta}_1,{\theta}'_1)}\leq K d(({\theta}_2,{\theta}'_2),({\theta}_1,{\theta}'_1)), $$ which proves our claim. \paraga Given $\tau\in\,]0,\tau_0]$, a sequence of ${\mathbb R}^2$ is called a (discrete) {\em trajectory} of the system $\Phi^{\tau \widehat C}$ when its points are the projection on ${\mathbb R}^2$ of those of an orbit of $\Phi^{\tau \widehat C}$ in $T^*{\mathbb R}^2$. So a sequence $(x_k)_{k\in{\mathbb Z}}$ is a trajectory if and only if $$ \partial_{x'} \widehat S_\tau(x_{k-1},x_{k})+\partial_{x} \widehat S_\tau(x_k,x_{k+1})=0,\qquad \forall k\in{\mathbb Z}, $$ since it is then the projection of the orbit $(x_{k},r_{k})_{k\in{\mathbb Z}}$ with $r_k=-\partial_x \widehat S_\tau(x_k,x_{k+1})$. We finally define a (discrete) trajectory for $\Phi^{\tau C}$ on ${\mathbb T}^2$ as the projection on ${\mathbb T}^2$ of a discrete trajectory for $\widehat C$ on ${\mathbb R}^2$. \paraga A finite sequence will be called a segment. The following lemma is immediate. \begin{lemma} \label{lem:rel} Let $i<j$ be fixed integers and let $\tau\in\,]0,\tau_0]$. 
\begin{itemize}
\item Fix a segment $(x_i,\ldots,x_j)$ in $({\mathbb R}^2)^{j-i+1}$ and assume that there exists a sequence
$$
(x_i^n,\ldots,x_j^n)_{n\in{\mathbb N}}
$$
of segments of trajectories for $\Phi^{\tau\widehat C}$ such that $ \lim_{n\to\infty}x_k^n= x_k $ for $i\leq k\leq j$. Then $(x_i,\ldots,x_j)$ is a segment of trajectory for $\Phi^{\tau \widehat C}$.
\item Fix a segment $({\theta}_i,\ldots,{\theta}_j)$ in $({\mathbb T}^2)^{j-i+1}$ and assume that there exists a sequence
$$
({\theta}_i^n,\ldots,{\theta}_j^n)_{n\in{\mathbb N}}
$$
of segments of trajectories for $\Phi^{\tau C}$, with energies bounded above, such that $\lim_{n\to\infty}{\theta}_k^n= {\theta}_k$ for $i\leq k\leq j$. Then $({\theta}_i,\ldots,{\theta}_j)$ is a segment of trajectory for~$\Phi^{\tau C}$.
\end{itemize}
\end{lemma}
\paraga We will focus on periodic trajectories in ${\mathbb T}^2$ and their lifts to~${\mathbb R}^2$. Given a positive integer $q$ and an integer vector $m\in{\mathbb Z}^2$, we introduce the space
$$
\{\xi\in({\mathbb R}^2)^{\mathbb Z}\mid \xi(i+q)=\xi(i)+m,\ \forall i\in{\mathbb Z}\}
$$
of all sequences in ${\mathbb R}^2$ whose projections on ${\mathbb T}^2$ are ``$q$--periodic with rotation vector $m/q$'', together with the space
$$
{\mathscr X}_m^q=\big\{(x_0,\ldots,x_q)\in({\mathbb R}^2)^{q+1}\mid x_q=x_0+m\big\}.
$$
In the following we will identify an element of ${\mathscr X}_m^q$ with the corresponding ``$q$--periodic'' complete sequence. Given $\tau\in\,]0,\tau_0]$, one easily sees that $(x_0,\ldots,x_q) \in{\mathscr X}_m^q$ is a trajectory of $\Phi^{\tau \widehat C}$ if and only if it is a critical point of the generalized action $\widehat S_\tau: {\mathscr X}_m^q\to{\mathbb R}$ defined by
$$
\widehat S_\tau(x_0,\ldots,x_q)=\sum_{k=0}^{q-1}\widehat S_\tau(x_k,x_{k+1}).
$$
We also identify a $q$--periodic sequence on ${\mathbb T}^2$ with its restriction to $\{0,\ldots,q\}$ and we therefore introduce the space
$$
{\mathscr E}^q=\big\{({\theta}_0,\ldots,{\theta}_q)\in({\mathbb T}^2)^{q+1}\mid {\theta}_0={\theta}_q\big\}
$$
together with the generalized action $S_\tau:{\mathscr E}^q\to{\mathbb R}$ defined by
$$
S_\tau({\theta}_0,\ldots,{\theta}_q)=\sum_{i=0}^{q-1}S_\tau({\theta}_i,{\theta}_{i+1}).
$$
\paraga We say that a segment $(x_j,\ldots,x_k)$ in $({\mathbb R}^2)^{k-j+1}$ is minimizing for $\widehat S_\tau$ when its action $\widehat S_\tau(x_j,\ldots,x_k)$ is smaller than the action of any other segment with the same length and the same extremities, and we say that a sequence is minimizing when each of its subsegments is minimizing. Finally, we say that a sequence $\xi\in{\mathscr X}^q_m$ is $(q,m)$--minimizing for $\widehat S_\tau$ if
$$
\widehat S_\tau(\xi)=\mathop{\rm Min\,}\limits_{\xi'\in{\mathscr X}^q_m} \widehat S_\tau(\xi').
$$
One easily checks that minimizing sequences and $(q,m)$--minimizing sequences are discrete trajectories of the system $\Phi^{\tau \widehat C}$.
\subsection{The hyperbolic fixed point as an Aubry set}
\setcounter{paraga}{0}
Here we assume that $C$ satisfies Conditions $(D)$. Without loss of generality, one can assume that $U$ reaches its maximum at $0$ and that $U(0)=0$. We now examine the variational properties of the fixed point $O=(0,0)$ of $X^{\widehat C}$ in the discrete framework. Throughout this section we fix $\tau\in\,]0,\tau_0]$ and get rid of the corresponding index in the notation.
Note first that the functions $\widehat S:{\mathbb R}^2\times{\mathbb R}^2\to{\mathbb R}$ and $S:{\mathbb T}^2\times{\mathbb T}^2\to{\mathbb R}$ clearly satisfy $\widehat S(x,x')\geq0$, $S({\theta},{\theta}')\geq0$ and \begin{equation}\label{eq:minim} \mathop{\rm Min\,}\limits_{(x,x')\in ({\mathbb R}^2)^2}\widehat S(x,x')=\widehat S(0,0)=0,\qquad \mathop{\rm Min\,}\limits_{({\theta},{\theta}')\in ({\mathbb T}^2)^2}S({\theta},{\theta}')=S(0,0)=0. \end{equation} We define the (projected) Aubry set ${\mathscr A}$ as the subset of ${\mathbb T}^2$ formed by the points ${\theta}$ such that there exists a sequence $\big(P^i=({\theta}_0^i,\ldots,{\theta}_{q_i}^i)\big)_{i\in{{\mathbb N}}}$ of $q_i$--periodic sequences, with $q_i\to+\infty$ when $i\to+\infty$, such that \begin{equation}\label{eq:aubry} {\theta}_0^i={\theta}\qquad\textrm{and}\qquad \lim_{i\to+\infty}S(P^i)=0. \end{equation} One easily obtains the following well-known lemma (see \cite{So,Fa09}). \begin{lemma}\label{lem:Aubry} The projected Aubry set ${\mathscr A}$ reduces to the maximum $\{0\}$ of the potential function $U$. \end{lemma} \begin{proof} The proof is based on the following remark. Let ${\theta}\in{\mathbb T}^2\setminus\{0\}$. Then $$ \mathop{\rm Min\,}\limits_{{\theta}'\in{\mathbb T}^2}S({\theta},{\theta}')>0. $$ To see this, fix $e^*>0=\mathop{\rm Max\,}\limits U$. Consider ${\theta}'\in{\mathbb T}^2$ and fix two lifts $x,x'$ of ${\theta},{\theta}'$. Let $B\subset{\mathbb R}^2$ be a compact ball, centered at $x$, on which ${\rm Max}_B(\widehat U)<0$ (which is possible since $x\notin {\mathbb Z}^2$). Consider the unique solution $\gamma(t)=(x(t),r(t)):[0,\tau]\to T^*{\mathbb R}^2$ of the vector field $X^{\widehat C}$ such that $x(0)=x$ and $x(\tau)=x'$. Let $e$ be the value of $\widehat C$ on $\gamma$. We consider two cases. \vskip1mm -- If $e\geq e^*$, since ${\tfrac{1}{2}} T(r(t))=e-\widehat U(x(t))$ then $\widehat L\big(x(t),\dot x(t)\big)=e-2\widehat U(x(t))\geq e^*$, so \begin{equation} \int_0^\tau \widehat L\big(x(t),\dot x(t)\big)\,dt\geq \tau e^*. \end{equation} \vskip-1mm -- If $e\leq e^*$, since $T_\bullet(\dot x(t))$ is bounded above by $2(e^*-\mathop{\rm Min\,}\limits U)$, there exists $\tau'>0$ (independent of $e$) such that $x(t)\in B$ for $t\in[0,\tau']$. So in this case \begin{equation} \int_0^\tau\widehat L(x(t),\dot x(t))\,dt\geq \int_0^{\tau'}\widehat L(x(t),\dot x(t))\,dt\geq -\tau' {\rm Max}_B(\widehat U). \end{equation} Since the lifts $x,x'$ are arbitrary, one finally gets $$ \mathop{\rm Min\,}\limits_{{\theta}'\in{\mathbb T}^2} S({\theta},{\theta}')\geq\mathop{\rm Min\,}\limits\big[\tau e^*,-\tau' {\rm Max}_B(\widehat U)]> 0, $$ which proves our remark. \vskip2mm Let us now turn to the proof of our lemma. Clearly $0\in{\mathscr A}$ by (\ref{eq:minim}). Conversely, if ${\theta}\neq0$ and if $P=({\theta}_0,\ldots,{\theta}_q)$ is a $q$-periodic sequence with ${\theta}_0^i={\theta}$, then $$ S(P)\geq\mathop{\rm Min\,}\limits_{{\theta}'\in{\mathbb T}^2} S({\theta},{\theta}') $$ and the limit of a sequence $(P^i)$ of such sequences is positive by the previous remark, which proves that ${\theta}\notin{\mathscr A}$. \end{proof} \subsection{Proof of Proposition \ref{prop:polyhom}} Thoughout this section, we identify $H_1({\mathbb T}^2,{\mathbb Z})$ with ${\mathbb Z}^2$ and fix $c\in H_1({\mathbb T}^2,{\mathbb Z})\setminus\{0\}$, so that $c$ is just an integer vector $m\in{\mathbb Z}^2\setminus\{0\}$. We assume that $C$ satisfies Conditions $(D)$. 
\subsubsection{Minimizing sequences}
\begin{lemma}\label{lem:posen} The energy of any $(q,m)$-minimizing sequence for $\widehat S$ in ${\mathscr X}_m^q$ is nonnegative.
\end{lemma}
\begin{proof} Here we will work both with continuous and discrete trajectories. Let us first prove that a $(q,m)$-minimizing trajectory is fully minimizing, in the sense that it minimizes the action between any pair of its points. This is an easy consequence of the Morse Crossing Lemma (\cite{Mat91} Theorem 2 and \cite{So} Lemma 5.31), which we recall here in a weak form. There exists ${\varepsilon}>0$ such that when two (continuous) trajectories ${\zeta}_i:[-{\varepsilon},{\varepsilon}]\to{\mathbb R}^2$ of $\widehat L$ satisfy ${\zeta}_1(0)={\zeta}_2(0)$ and $\dot{\zeta}_1(0)\neq\dot{\zeta}_2(0)$, then there exist $C^1$ curves $\alpha_i:[-{\varepsilon},{\varepsilon}]\to{\mathbb R}^2$ with endpoints $\alpha_1(-{\varepsilon})={\zeta}_1(-{\varepsilon})$, $\alpha_1({\varepsilon})={\zeta}_2({\varepsilon})$, $\alpha_2(-{\varepsilon})={\zeta}_2(-{\varepsilon})$ and $\alpha_2({\varepsilon})={\zeta}_1({\varepsilon})$ such that
$$
A(\alpha_1)+A(\alpha_2)<A({\zeta}_1)+A({\zeta}_2)
$$
(this result still holds true for higher dimensional systems). From this, one deduces that two distinct $(q,m)$-minimizing trajectories do not intersect one another. This in turn easily yields the fact that a $(q,m)$-minimizing trajectory is also an $(nq,nm)$-minimizing trajectory, for any integer $n\geq 1$ (see \cite{Ba}, Theorem 3.3 for instance). Since an $(nq,nm)$-minimizing trajectory minimizes the action on each subinterval of $[0,nq]$, this easily proves our claim.
\vskip2mm
We now fix a $(q,m)$-minimizing trajectory and consider the probability measure $\mu$ evenly distributed on its orbit in $T{\mathbb T}^2$. By (\cite{Mat91}, Proposition 2), there exists $c\in H^1({\mathbb T}^2)$ such that $A_c(\mu)=-\alpha(c)$. We already noticed that the support of $\mu$ is located in $C^{-1}(\alpha(c))$, and clearly $A_c(\mu)\leq0$ since the action $A_c$ vanishes on the zero trajectory. So $\alpha(c)\geq 0$, which concludes the proof since the support of $\mu$ coincides with the orbit.
\end{proof}
We now note that there exists a trivial upper bound, {\em uniform with respect to $q$}, for the actions of minimizing sequences in ${\mathscr X}_m^q$.
\begin{lemma}\label{lem:unifbound} Let $m\in{\mathbb Z}^2\setminus\{0\}$ be fixed. Then, for any $q\geq1$ and any $(q,m)$--minimizing sequence $\xi\in{\mathscr X}^q_m$, the action of $\xi$ satisfies
$$
\widehat S(\xi)\leq \widehat S(0,m).
$$
\end{lemma}
\begin{proof} Since $\xi$ is minimizing in ${\mathscr X}^q_m$,
$$
\widehat S(\xi)\leq \widehat S(0,\ldots,0,m)=\sum_{k=0}^{q-2}\widehat S(0,0)+\widehat S(0,m)=\widehat S(0,m),
$$
where of course the segment $(0,\ldots,0,m)$ is viewed as an element of ${\mathscr X}^q_m\subset({\mathbb R}^2)^{q+1}$.
\end{proof}
One also deduces from the previous lemma the following useful result.
\begin{cor}\label{cor:unifbound1} Let $m\in{\mathbb Z}^2\setminus\{0\}$ be fixed. The energy of a $(q,m)$--minimizing sequence tends to $0$ when $q$ tends to $+\infty$.
\end{cor}
\begin{proof} Let $\xi$ be a $(q,m)$--minimizing sequence with energy $e_q$ and let ${\zeta}$ be the corresponding continuous trajectory. The Lagrangian satisfies
$$
\widehat L\big({\zeta}(t),\dot{\zeta}(t)\big)=e_q-2\widehat U({\zeta}(t))\geq e_q,
$$
so $\widehat L\big({\zeta}(t),\dot{\zeta}(t)\big)\geq e_q\geq 0$ by Lemma~\ref{lem:posen} and, as a consequence,
$$
q\tau e_q \leq \widehat S(\xi)\leq \widehat S(0,m),
$$
which proves that $\lim_{q\to\infty} e_q=0$.
\end{proof} \subsubsection{Homoclinic behavior} The next lemma proves the existence of homoclinic trajectories. \begin{lemma}\label{lem:homorb} Fix a sequence $(\xi^q)_{q\in{{\mathbb N}^*}}$ of elements of ${\mathscr X}_m^q$, such that $\xi^q$ is minimizing for $\widehat S$. Set $\Theta^q=\pi\circ\xi^q$. Consider a limit point $\Theta=({\theta}_k)_{k\in{\mathbb Z}}$ of the sequence $(\Theta^{q})_{q\in{{\mathbb N}^*}}$, relatively to the (compact) product toplogy of $({\mathbb T}^2)^{\mathbb Z}$. Then $\Theta$ is a {\em trajectory} for $\Phi^{\tau \widehat C}$ and is homoclinic to $0$, that is: \begin{equation}\label{eq:hom} \lim_{k\to -\infty}{\theta}_k=\lim_{k\to+\infty}{\theta}_k=0. \end{equation} \end{lemma} \begin{proof} The fact that $\Theta$ is a trajectory is an immediate consequence of Lemma~\ref{lem:rel} and Corollary \ref{cor:unifbound1}, since each subsegment of $\Theta$ is the limit of segments of trajectories with energy bounded above. We will prove (\ref{eq:hom}) for the $\omega$--limit of $\Theta$, the $\alpha$--limit case being similar. More precisely, we will show that~$0$ is the only limit point for the sequence $({\theta}_k)_{k\in{\mathbb Z}}$ when $k\to+\infty$, which proves our claim by compactness of ${\mathbb T}^2$. Consider such a limit point ${\theta}$. There exists an increasing sequence $(k_i)_{i\in{{\mathbb N}}}$ such that $({\theta}_{k_i})_{i\in{{\mathbb N}}}$ converges to~${\theta}$ for $i\to+\infty$, and one can moreover assume that $$ d({\theta}_{k_i},{\theta})\leq\frac{1}{2^i},\qquad \forall i\in{{\mathbb N}}. $$ Consider the $(k_{(i+1)}-k_i+1)$-periodic sequence $ P^i=({\theta},{\theta}_{k_i},\ldots,{\theta}_{k_{(i+1)}},{\theta}). $ Then obviously $$ 0\leq S(P^i)\leq S({\theta}_{k_i},{\theta}_{k_i+1},\ldots,{\theta}_{k_{(i+1)}-1},{\theta}_{k_{(i+1)}})+\frac{{\textstyle c}}{2^i}, $$ where $c=2\,{\rm Lip\,} S$. Therefore, given a positive integer $i_1$, adding these inequalities yields \begin{equation}\label{eq:series} 0\leq \sum_{i=0}^{i_1}S(P^i) \leq S({\theta}_{k_{0}},\ldots,{\theta}_{k_{(i_1+1)}}) + 2c. \end{equation} Consider now a subsequence $(\Theta^{q_\ell})_{\ell\in{{\mathbb N}}}$ which converges to $\Theta$, and set $\Theta^{q_\ell}=({\theta}^{q_\ell}_k)_{k\in{\mathbb Z}}$. For $\ell\geq\ell_0$ large enough, the period $q_\ell$ is larger than $(k_{(i_1+1)}-k_0+2)$ and therefore, by Lemma~\ref{lem:unifbound}, there is $M$ independent of $i_1$ such that $$ S({\theta}_{k_{0}}^{q_\ell},\ldots,{\theta}_{k_{(i_1+1)}}^{q_\ell})\leq M,\qquad \ell\geq \ell_0. $$ Taking the limit when $\ell\to\infty$ shows that $$ S({\theta}_{k_{0}},\ldots,{\theta}_{k_{(i_1+1)}}) \leq M $$ since $S$ is continuous. Therefore by (\ref{eq:series}) the series $\sum S(P^i)$ converge, hence $S(P^i)$ tends to~$0$. This proves that ${\theta}$ is in the projected Aubry set, so ${\theta}=0$ by Lemma \ref{lem:Aubry}. \end{proof} \subsubsection{Shifts and nontrivial limit points} Now, nothing prevents the previous limit point $\Theta$ to be the trivial zero sequence. To detect nontrivial limit points we will have to consider new sequences, deduced from $(\Theta^q)_{q\in{{\mathbb N}^*}}$ by translation of the indices in order to ``center the convergence process''. \vskip2mm We begin with two simple results for which we explicitely take advantage of the simple features of the Hamiltonian flow in the neighborhood of the fixed point $O$. 
\begin{lemma}\label{lem:limpoints} Fix a sequence $(\xi^q)_{q\in{{\mathbb N}^*}}$ of elements of ${\mathscr X}_m^q$, such that $\xi^q$ is minimizing for $\widehat S$. Set $\Theta^q=\pi\circ\xi^q$. There exists an infinite subsequence $(\Theta^q)_{q\in N}$ and a finite set $\{\Lambda_1,\dots,\Lambda_p\}$ of nonzero pairwise distinct trajectories $\Lambda_i=(\lambda^i_k)_{k\in {\mathbb Z}}$ homoclinic to $0$ in ${\mathbb T}^2$, such that for any shift-sequence $\kappa=(k_q)_{q\in N}$ and any limit point $\Theta=({\theta}_k)_{k\in {\mathbb Z}}$ of the sequence $(\Theta^{q,\kappa})_{q\in N}$, there exists $i\in \{1,\dots,p\}$ and $\ell\in {\mathbb Z}$ such that for any $k\in {\mathbb Z}$, ${\theta}_k=\lambda_{k+\ell}$. \end{lemma} \begin{proof} We write $\xi^q=(x_k^q)_{k\in{\mathbb Z}}$ and $\Theta^q=({\theta}_k^q)_{k\in{\mathbb Z}}$. By the previous remark and Corollary~\ref{cor:unifbound1}, there exists $q_0$ such that for $q\geq q_0$ $$ \norm{x_{i+1}^q-x_i^q}< \tfrac{1}{8},\qquad \forall i\in{\mathbb Z}. $$ We denote by $B_{{\mathbb T}^2}(0,\frac{1}{8})$ the ball of center $0$ and radius~$\frac{1}{8}$ in ${\mathbb T}^2$. Therefore, for $q\geq q_0$, if some points $x_i^q$ and $x_j^q$, $i<j$, belong to two different lifts of $B_{{\mathbb T}^2}(0,\frac{1}{8})$ in ${\mathbb R}^2$, then there is an index $k$ with $i<k<j$ such that $\pi(x_k^q)\notin B_{{\mathbb T}^2}(0,\frac{1}{8})$. \vskip2mm $\bullet$ \emph{Existence of one nontrivial trajectory homoclinic to $0$}. Given $q\geq q_0\geq 3$, by the previous remark, there exists $k\in\{0,\ldots,q-1\}$ such that ${\theta}^q_k\notin B_{{\mathbb T}^2}(0,\frac{1}{8})$. \noindent Let $k_q:={\rm Min\,}\{j\in\{0,\dots,q-1\}\mid{\theta}_j^q\notin B_{{\mathbb T}^2}(0,\frac{1}{8})\}$, so $\kappa_1:=(k_q)_{q\geq q_0}$ is a shift-sequence. By compactness of $({\mathbb T}^2)^{\mathbb Z}$ there exists an infinite subset $N_1$ of ${\mathbb N}$ such that the subsequence $(\Theta^{\kappa_1,q})_{q\in N_1}$ converges. Let $\Lambda_1=(\lambda_k^1)_{k\in{\mathbb Z}}$ be its limit. By Lemma \ref{lem:homorb}, $\Lambda_1$ is a trajectory homoclinic to $0$. Now, since ${\theta}^q_{k_q}={\theta}^{\kappa,q}_0$ belongs to the compact set ${\mathbb T}^2\setminus B_{{\mathbb T}^2}(0,\frac{1}{8})$, then $$ \lambda_0^1=\lim_{\substack{q\to+\infty\\ q\in N_1}}{\theta}^{\kappa_1,q}_0\in{\mathbb T}^2\setminus B_{{\mathbb T}^2}(0,\tfrac{1}{8}). $$ So the trajectory $\Lambda_1$ is nontrivial. \vskip2mm $\bullet$ \emph{Search for other distinct homoclinic trajectories.} For $\delta>0$, we denote by $V_\delta\subset{\mathbb T}^2$ the $\delta$--neighborhood of the set $\{\lambda_k\mid k\in{\mathbb Z}\}$. The following two cases only occur: \begin{enumerate} \item for all $\delta>0$ there exists $q_1$ such that for any $q\geq q_1$ the set $\{{\theta}^{\kappa,q}_0,\ldots,{\theta}^{\kappa,q}_{q-1}\}\subset V_\delta$ , \item there exists $\delta_1>0$ such that the set $\{q\in N_1\mid\{{\theta}^{\kappa,q}_0,\ldots,{\theta}^{\kappa,q}_{q-1}\}\not\subset V_{\delta_1}\}$ is infinite. \end{enumerate} In the first case, there is no other homoclinic trajectory, we set $N:=N_1$. In the second case, set $N_1':=\{q\in N_1\mid\{{\theta}^{\kappa_1,q}_0,\ldots,{\theta}^{\kappa_1,q}_{q-1}\}\not\subset V_{\delta_1}\}$. For $q\in N_1'$, let $k_q^2:={\rm Min\,}\{j\in \{0,\ldots,q-1\}\mid {\theta}_j^q\notin V_{\delta_1}\}$, so $\kappa_2:=(k^2_q)_{q\in N_2}$ is a shift-sequence. 
As before, there exists an infinite set $N_2\subset N_1'$ such that the subsequence $(\Theta^q)_{q\in N_2}$ converges to a limit $\Lambda_2:=(\lambda^2_k)_{k\in {\mathbb Z}}$ whose image is not contained in $V_{\delta_1}$. As before, one cheks that $\Lambda_2$ is a nontrivial trajectory homoclinic to $0$ and by construction, $\Lambda_2\neq \Lambda_1$. We then examine the same alternative considering neighborhoods of $\{\lambda_k^2\mid k\in{\mathbb Z}\}$ and the sequence $(\Theta^{q,\kappa_1+\kappa_2})_{q\in N_2}$ and continue the process until the first term of the alternative holds true. \vskip2mm $\bullet$ \emph{The process stops after a finite number of steps.} For $j\geq 1$, we denote by $N_j$ the index set we get after $j$ steps, by $\kappa_j:=(k^j_q)_{q\in N_j}$ the associated shift-sequence and by $\Lambda_j:=(\lambda^j_k)_{k\in {\mathbb Z}}$ the corresponding trajectory homoclinic to $0$. Let $\bar{\kappa}_j:= \sum_{i=1}^j \kappa_i$, that is, $\bar{\kappa}_j=(\bar{k}^j_q)_{q\in N_j}$ with $\bar{k}^j_q:=\sum_{i=1}^jk^i_q$. In particular, for $1\leq j\leq p$, the following convergence hold true: $$ \lim_{\substack{q\to+\infty\\ q\in N_j}}{\theta}^{\bar{\kappa}_j,q}_0=\lambda^j_0. $$ Assume there are $p$ steps with $p\geq 1$. By construction the points $\lambda_0^j$ with $1\leq j\leq p$ are pairwise distinct and contained in ${\mathbb T}^2\setminus B_{{\mathbb T}^2}(0,\tfrac{1}{8})$, so for $q\in N_p$ large enough, the set $\{{\theta}^q_0,\ldots, {\theta}_{q-1}^q\}$ contains at least $p$ points in the compact set ${\mathbb T}^2\setminus B_{{\mathbb T}^2}(0,\tfrac{1}{8})$. By the first remark in Lemma~\ref{lem:Aubry}, there exists $a>0$ such that for any $({\theta},{\theta}')\in ({\mathbb T}^2\setminus B_{{\mathbb T}^2}(0,\tfrac{1}{8}))\times {\mathbb T}^2$, $S({\theta},{\theta}')\geq a$. Therefore, by Lemma~\ref{lem:unifbound}, for $q\in N_p$ large enough, $$ pa \leq S(\Theta^q) \leq \widehat S(0,m) $$ and so $p$ is bounded above. \vskip2mm $\bullet$ \emph{Conclusion.} Let $p$ be the number of steps. With the previous notation, we set $N=N_p$ and $$ K:=\bigcup_{j=1}^p\{\lambda_k^j\,|\, k\in {\mathbb Z}\}, $$ so $K$ is the union of the images of the trajectories $\Lambda^j$. Consider a shift-sequence $\kappa$ and let $\Theta=({\theta}_k)_{k\in {\mathbb Z}}$ be a limit point of $(\Theta^{\kappa,q})_{q\in N}$. By construction, the image $\{{\theta}_k\mid k\in{\mathbb Z}\}$ of $\Theta$ is contained in the intersection $\cap _{\delta>0} V_\delta$, where $V_\delta$ is the $\delta$-neigborhood of $K$, hence $\{{\theta}_k\mid k\in{\mathbb Z}\}\subset K$. Moreover, by Lemma~\ref{lem:homorb}, $\Theta$ is a nontrivial trajectory homoclinic to $0$. This proves that the image of $\Theta$ coincides with the image of some $\Lambda_i$, which proves Lemma~\ref{lem:limpoints}, since two trajectories with the same image are deduced from one another by a shift of indices. \end{proof} \subsubsection{The continuous setting} We now prove the geometric convergence to the polyhomoclinic orbits in the sense of Section~\ref{ssec:convergence}. We keep the notation of the proof of Lemma~\ref{lem:limpoints}. \vskip2mm $\bullet$ Let $\omega^i$ be the continuous solution associated with the homoclinic trajectory $\Lambda^i\in{\mathscr H}$, with initial condition $\omega^i(0)=\lambda^i_0$. The limit polyhomoclinic orbit will be a concatenation of the orbits $\Omega_i$ of the solutions $\omega_i$, ordered in a suitable way that we will now make explicit. 
The notion of convergence being independent of the choice of sections, we choose as exit and entrance sections ${\Sigma}^u$ and ${\Sigma}^s$ for each homoclinic orbit suitable lifts to $T^*{\mathbb T}^2$ of small arcs of the boundary circle of the disc ${\mathscr O}$ (which can be assumed to be transverse to each homoclinic trajectory). \vskip2mm $\bullet$ Let $\gamma^q$ be the continuous solution associated with $\Theta^q$, such that $\gamma^q(0)={\theta}^q_0$. One easily deduces from Lemma \ref{lem:rel} that the sequence of functions $\big(\gamma^q(t+\kappa^1_q)\big)$ pointwise converges to the function $\omega^1(t)$. \vskip2mm $\bullet$ Let $t^1_u<t^1_s$ be the exit and entrance times for $\omega^1$, defined by $\omega^1(t^1_u)\in {\Sigma}^u$ and $\omega^1(t^1_s)\in{\Sigma}^s$, and $\omega^1(]-\infty,t^u])\in\pi^{-1}({\mathscr O})$, $\omega^1([t^s,+\infty[)\in\pi^{-1}({\mathscr O})$. By transversality and the previous property, there exist sequences $(t^1_u(q))_{q\in N}$ and $(t^1_s(q))_{q\in N}$, with limits $t^1_u$ and $t^1_s$ respectively, such that for $q$ large enough $$ \gamma_q(t^1_u(q))\in {\Sigma}^u,\qquad \gamma_q(t^1_s(q))\in {\Sigma}^s. $$ Obviously $[t^1_u(q),t^1_s(q)]\subset [0,q]$ for $q$ large enough. Moreover, $\big(\gamma^q(t+\kappa^1_q)\big)$ converges uniformly to $\omega^1(t)$ on $[t^1_u-\delta,t^1_s+\delta]$ for $\delta$ small enough. \vskip2mm $\bullet$ By definition of ${\mathscr O}$, there exists a minimal time $t^2_u(q)>t^1_s(q)$ such that $\gamma^q(t^2_u(q))\in {\Sigma}^u$. Up to extraction of a subsequence one can assume that $\gamma^q(t^2_u(q))$ converges, and the limit necessarily belongs to some homoclinic orbit $\Omega^j$, so that there exists $t^j_u$ such that $$ \lim_{q\to\infty} \gamma^q(t^2_u(q))=\omega^j(t^j_u). $$ \vskip2mm $\bullet$ The previous process may be continued and provides us, for $q$ large enough, with a finite sequence of consecutive intervals $$ 0\leq [t^1_u(q),t^1_s(q)]<[t^2_u(q),t^2_s(q)]<\cdots<[t^\ell_u(q),t^\ell_s(q)]\leq q $$ such that the intersection of the orbit of $\gamma^q$ with the lift of ${\mathscr O}$ to $T{\mathbb T}^2$ is the union $$ \bigcup_{1\leq j\leq\ell} \gamma^q\big([t^j_u(q),t^j_s(q)]\big), $$ and such that (up to extraction) each sequence $\big(\gamma^q(t^j_u(q))\big)$ and $\big(\gamma^q(t^j_s(q))\big)$ is convergent. Moreover, for $1\leq j\leq \ell$ the limits $$ \lim_{q\to+\infty}\gamma^q(t^{j-1}_s(q)) \qquad\textrm{and}\qquad \lim_{q\to+\infty}\gamma^q(t^j_u(q)) $$ belong to the same homoclinic orbit (with the cyclic convention $0=\ell$). Note that $\ell$ is larger or equal to the number of homoclinic orbits (some of them may be shadowed more than once). \subsubsection{The positive energies} To conclude the proof of Proposition \ref{prop:polyhom} it only remains to show the existence of a convergent sequence of minimizing periodic orbits with positive energy to the previous polyhomoclinic orbit. By Conditions $(D)$, starting with a sequence $({\Gamma}_n)$ of periodic orbits with energy $\geq 0$ converging to the polyhomoclinic orbit $\Omega$, one can slightly perturb each of them to get another sequence $({\Gamma}^+_n)$ of periodic orbits {\em with positive energy} $1/2^n$ close to the initial ones in the $C^1$ topology. This new sequence obviouly converges to the same polyhomoclinic orbit in the sense of Definition~\ref{def:conv}, which concludes the proof. \subsubsection{The simple homoclinic orbits} We are now in a position to prove Lemma~\ref{lem:simppos}. Assume that $C$ satisfies Conditions $(D)$. 
Following $(D_4)$, consider a homoclinic orbit $\Omega$ whose amended action is strictly minimal among the amended actions of the homoclinic orbits. Let $A$ be the lower bound of these latter actions. Let $c\sim m$ be the homology class. Then for $q$ large enough, by properly choosing an initial condition on $\Omega$ close enough to $O$, one can produce a $q$--periodic sequence whose rotation vector is $m$ and whose action is smaller than $A$. As a consequence, a minimizing sequence in ${\mathscr X}_m^q$ also has an action smaller than $A$. By semicontinuity and the previous section, this proves that the associated periodic orbits converge to $\Omega$, hence $\Omega$ is positive.
\section{The Hamiltonian Birkhoff-Smale theorem}\label{sec:proofBS}
We give here a complete proof of Theorems \ref{thm:hypdyn1} and \ref{thm:hypdyn2}, which follows the lines of \cite{Mar98} in greater detail. Since in the Hamiltonian setting the (Lagrangian) invariant manifolds cannot be transverse in the ambient space, we have to restrict the system to energy levels and consider Poincar\'e sections for the flow. This process is {\em not} general and depends on the spectrum of the equilibrium point as well as on the ``disposition'' of the homoclinic orbits. Our result is related to the study of Shilnikov and Turaev \cite{ST89}, but our assumptions are different. We moreover need to get a very precise localization of the horseshoes we will construct, which we were unable to extract from \cite{ST89}. To this aim, the existence of a proper coordinate system at the fixed point turns out to be very helpful. Another approach may be found in \cite{KZ}.
\subsection{The setting}
\paraga We consider a $C^\infty$ Hamiltonian system $H$ on ${\mathbb A}^2=T^*{\mathbb T}^2$ with a hyperbolic fixed point~$O$. We assume that there exists a neighborhood of $O$ endowed with a $C^\infty$ symplectic coordinate system $(u_1,u_2,s_1,s_2)$, with values in some ball $B$ centered at $0$ in ${\mathbb R}^4$, in which $H$ takes the normal form
\begin{equation}\label{eq:normform1}
H(u,s)=\lambda_1 u_1s_1+\lambda_2 u_2s_2+R(u_1s_1,u_2s_2),
\end{equation}
with $\lambda_1>\lambda_2>0$, $R(0,0)=0$ and $D_{(0,0)}R=0$ (we do not assume any equivariance condition at this point, see (\ref{eq:equivariance})). The products $u_is_i$ are local first integrals of the system and in $B$ the vector field $X^H$ reads
\begin{equation}\label{eq:normform2}
X^H\ \left\vert
\begin{array}{lll}
\dot u_i&=&\lambda_i(u_1s_1,u_2s_2)\,u_i\\
\dot s_i&=&-\lambda_i(u_1s_1,u_2s_2)\,s_i\\
\end{array}
\right.
\end{equation}
with
\begin{equation}\label{eq:normform3}
\lambda_i(u_1s_1,u_2s_2)=\lambda_i+\partial_{x_i}R(u_1s_1,u_2s_2), \qquad i=1,2.
\end{equation}
We can assume that for $(u,s)\in B$:
\begin{equation}\label{eq:major}
\lambda_i(u_1s_1,u_2s_2)\geq \overline\lambda_i>0,\qquad i\in\{1,2\},
\end{equation}
and
\begin{equation}\label{eq:estlamda1}
\widehat\Lambda\geq\frac{\lambda_1(u_1s_1,u_2s_2)}{\lambda_2(u_1s_1,u_2s_2)}\geq\overline\Lambda>1,
\end{equation}
for some suitable constants $\overline\Lambda$ and $\widehat\Lambda$, which moreover satisfy
\begin{equation}\label{eq:estlamb2}
\widehat\Lambda-\overline\Lambda<2(\overline\Lambda-1).
\end{equation}
The local stable and unstable manifolds $W^s_\ell$ and $W^u_\ell$ of $O$ in $B$ are straightened:
$$
W^u_\ell=\{s=0\}, \qquad W^s_\ell=\{u=0\}
$$
and the Hamiltonian vector field on these manifolds is purely linear. Some stable and unstable orbits are depicted in Figure 1.
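As a direct check of the fact that the products $u_is_i$ are first integrals in $B$, (\ref{eq:normform2}) yields, along any solution contained in $B$,
$$
\frac{d}{dt}\,(u_is_i)=\dot u_i\,s_i+u_i\,\dot s_i=\lambda_i(u_1s_1,u_2s_2)\,u_is_i-\lambda_i(u_1s_1,u_2s_2)\,u_is_i=0,\qquad i=1,2.
$$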
As in Section 2, we introduce the subsets
$$
W^{ss}_\ell=\{s_2=0\},\ \ W^{sc}_\ell=\{s_1=0\},\ \ W^{uu}_\ell=\{u_2=0\},\ \ W^{uc}_\ell=\{u_1=0\}.
$$
Observe that their germs at $O$ are independent of the normalization. Indeed, $W_\ell^{ss}$ and $W_\ell^{uu}$ are the strong (local) stable and unstable manifolds of $O$, while $W^{sc}_\ell=\{s_1=0\}$ and $W^{uc}_\ell=\{u_1=0\}$ are the only $C^{\infty}$ invariant lines through the fixed point which are contained in $W_\ell^{s}$ and $W_\ell^{u}$ respectively and transverse to $W_\ell^{ss}$ and $W_\ell^{uu}$. Note also that these objects depend continuously on the Hamiltonian $H$: this comes from the usual (parametrized) Grobman-Hartman theorem.
\begin{figure}
\caption{The flows on $W^s_\ell$ and $W^u_\ell$.}
\end{figure}
\paraga Given ${\varepsilon}>0$ small enough and ${\sigma}\in\{-1,+1\}$, as in Section 5 we denote by $B({\varepsilon})$ the ball of ${\mathbb R}^4$ centered at $0$ with radius ${\varepsilon}$ for the Max norm and we introduce the sections
\begin{equation}
{\Sigma}_{{\sigma}}^u[{\varepsilon}]=\{(u,s)\in \overline B({\varepsilon})\mid u_2={\sigma}{\varepsilon}\},\qquad {\Sigma}_{{\sigma}}^s[{\varepsilon}]=\{(u,s)\in \overline B({\varepsilon})\mid s_2={\sigma}{\varepsilon}\},
\end{equation}
and
\begin{equation}
{\Sigma}_{{\sigma}}^u[{\varepsilon},e]={\Sigma}_{{\sigma}}^u[{\varepsilon}]\cap (H_{\vert B})^{-1}(e),\qquad {\Sigma}_{{\sigma}}^s[{\varepsilon},e]={\Sigma}_{{\sigma}}^s[{\varepsilon}]\cap (H_{\vert B})^{-1}(e).
\end{equation}
To define suitable coordinates on these latter sets, we write $x_i=u_is_i$, $i=1,2$ and we fix two intervals $X_1=\,]-\widehat x_1,\widehat x_1[$, $E=\,]-\widehat e,\widehat e\,[$, together with a neighborhood $X$ of $0$ in ${\mathbb R}^2$ such that the equation
\begin{equation}\label{eq:energie}
H(x_1,x_2)=e, \qquad (x_1,x_2)\in X,\quad e\in E,
\end{equation}
is equivalent to
\begin{equation}\label{eq:u2s20}
x_2=\chi(x_1,e), \qquad (x_1,e)\in X_1\times E,
\end{equation}
where $\chi$ is a smooth function on $X_1\times E$. Therefore
\begin{equation}\label{eq:deriv}
\chi(x_1,e)=\frac{1}{\lambda_2}(e-\lambda_1 x_1)+\overline \chi(x_1,e),\qquad \overline \chi(0,0)=0, \quad D_{(0,0)}\overline \chi=0.
\end{equation}
As a consequence, there exists $\widehat {\varepsilon}>0$ such that for $0<{\varepsilon}<\widehat{\varepsilon}$, the equation of ${\Sigma}_{{\sigma}}^s[{\varepsilon},e]$ reads
\begin{equation}\label{eq:U2}
u_2=\frac{1}{{\sigma}{\varepsilon}}\chi(u_1s_1,e):=\phi_{{\varepsilon},{{\sigma}}}(u_1s_1,e),\qquad \norm{(u_1,s_1)}_\infty\leq{\varepsilon},
\end{equation}
while the equation of ${\Sigma}_{{\sigma}}^u[{\varepsilon},e]$ reads
\begin{equation}\label{eq:S2}
s_2=\phi_{{\varepsilon},{{\sigma}}}(u_1s_1,e),\qquad \norm{(u_1,s_1)}_\infty\leq{\varepsilon}.
\end{equation}
We can now introduce our coordinates. Since we implicitly use the conservation of energy through the choice of our sections, we can take advantage of only one of the first integrals $u_is_i$ and we will choose the product $u_1s_1$.
\begin{itemize}
\item On the subset ${\Sigma}_{{\sigma}}^{s*}[{\varepsilon},e]=\big\{(u,s)\in {\Sigma}_{{\sigma}}^s[{\varepsilon},e]\mid s_1\neq0\big\}$ we define $(x_s,y_s)$ by
\begin{equation}\label{eq:coords0}
x_s = s_1,\qquad y_s = u_1s_1.
\end{equation}
The full set of coordinates of the point $m=(x_s,y_s)\in{\Sigma}_{{\sigma}}^{s*}[{\varepsilon},e]$ reads
\begin{equation}\label{eq:coords}
m=\Big(u_1=\frac{y_s}{x_s},\ u_2=\phi_{{\varepsilon},{\sigma}}(y_s,e),\ s_1=x_s,\ s_2={\sigma}{\varepsilon}\Big).
\end{equation}
\item Similarly, on ${\Sigma}_{{\sigma}}^{u*}[{\varepsilon},e]=\big\{(u,s)\in {\Sigma}_{{\sigma}}^u[{\varepsilon},e]\mid u_1\neq0\big\}$, we set
\begin{equation}\label{eq:coordu0}
x_u = u_1,\qquad y_u = u_1s_1,
\end{equation}
so that the coordinates of $m=(x_u,y_u)\in{\Sigma}_{{\sigma}}^{u*}[{\varepsilon},e]$ read
\begin{equation}\label{eq:coordu}
m=\Big(u_1=x_u,\ u_2={\sigma}{\varepsilon},\ s_1=\frac{y_u}{x_u},\ s_2=\phi_{{\varepsilon},{\sigma}}(y_u,e)\Big).
\end{equation}
\end{itemize}
Note that the intersections of the local invariant manifolds $W^u_\ell$ and $W^s_\ell$ with the sections ${\Sigma}^u[{\varepsilon},0]$ and ${\Sigma}^s[{\varepsilon},0]$ admit the simple equations $y_u=0$ and $y_s=0$ respectively.
\paraga The following lemma is a simple remark which will enable us to properly localize our construction of the horseshoes. Note that, due to the form of the flow on $W^s_\ell$, for ${\varepsilon}>0$ small enough (in particular ${\varepsilon}<\widehat{\varepsilon}$ defined above) ${\Sigma}^s[{\varepsilon}]$ is an {\em entrance section} for any orbit ${\Gamma}$ in $W^s_\ell$, in the sense that there exists a first intersection point in ${\Gamma}\cap {\Sigma}^s[{\varepsilon}]$, according to the flow-induced orientation of ${\Gamma}$ (see the proof below). We call this point the entrance point of ${\Gamma}$ relatively to ${\Sigma}^s[{\varepsilon}]$. We define analogously the exit point of an orbit ${\Gamma}\subset W^u$.
\begin{lemma}\label{lem:majcoord} Let $\Omega$ be an orbit in $W^s\setminus(W^{ss}_\ell\cup W^{sc}_\ell)$. Then, if ${\varepsilon}_a>0$ is small enough, for $0<{\varepsilon}\leq{\varepsilon}_a$ the entrance point $b$ of $\Omega$ relatively to ${\Sigma}^s[{\varepsilon}]$ belongs to a well-defined subset ${\Sigma}^{s*}_{{\sigma}}[{\varepsilon}]$. Its coordinates (\ref{eq:coords0}) read $b=(\eta,0)$ with
\begin{equation}\label{eq:estiment}
\abs{\eta}\leq {\tfrac{1}{2}}{\varepsilon}^{\overline \Lambda}.
\end{equation}
One has a similar statement for orbits $\Omega\subset W^u\setminus(W^{uu}_\ell\cup W^{uc}_\ell)$: in this case the exit point $a=(\xi,0)\in {\Sigma}^{u*}_{{\sigma}}[{\varepsilon}]$ satisfies
\begin{equation}\label{eq:estimex}
\abs{\xi}\leq {\tfrac{1}{2}}{\varepsilon}^{\overline \Lambda}.
\end{equation}
\end{lemma}
\begin{proof} It is of course enough to prove the first claim. Since $\Omega\subset W^s\setminus(W^{ss}_\ell\cup W^{sc}_\ell)$, $\Omega\cap W^s_\ell$ admits an equation of the form
$$
s_1=c\,\abs{s_2}^{\lambda_1/\lambda_2},\qquad u=0,
$$
with $c\in{\mathbb R}^*$. One therefore sees that ${\Sigma}^s[{\varepsilon}]$ is an entrance section when ${\varepsilon}$ is small enough. Therefore the entrance point $b$ is well-defined and belongs to a subset ${\Sigma}^{s*}_{{\sigma}}[{\varepsilon}]$ since $c\neq0$. Moreover $\eta=c {\varepsilon}^{\lambda_1/\lambda_2}$, so our claims then easily follow from the condition $\overline\Lambda<\lambda_1/\lambda_2$.
\end{proof}
Given a finite set $(\Omega_i)_{1\leq i\leq\ell}$ of orbits homoclinic to $O$, which do not intersect the exceptional set $W^{ss}_\ell\cup W^{sc}_\ell\cup W^{uu}_\ell\cup W^{uc}_\ell$, we say that ${\varepsilon}_a>0$ is admissible for $(\Omega_i)$ when it satisfies both conditions of Lemma \ref{lem:majcoord}.
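Note that the proof above also quantifies this threshold: writing $s_1=c\,\abs{s_2}^{\lambda_1/\lambda_2}$ for the trace of an orbit on $W^s_\ell$ as in the proof of Lemma~\ref{lem:majcoord}, the estimate $\abs{\eta}=\abs{c}\,{\varepsilon}^{\lambda_1/\lambda_2}\leq{\tfrac{1}{2}}{\varepsilon}^{\overline\Lambda}$ holds as soon as ${\varepsilon}\leq(2\abs{c})^{-1/(\lambda_1/\lambda_2-\overline\Lambda)}$ (assuming, as in that proof, $\overline\Lambda<\lambda_1/\lambda_2$), so an admissible ${\varepsilon}_a$ may for instance be chosen in terms of the constants $c$ attached to the finitely many orbits under consideration.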
\subsection{The Poincar\'e return map} Throughout this section we fix two compatible polyhomoclinic orbits $\Omega^0=(\Omega^0_1,\ldots,\Omega^0_{\ell^0})$ and $\Omega^1=(\Omega^1_1,\ldots,\Omega^1_{\ell^1})$ and we fix an admissible ${\varepsilon}_a>0$. The Poincar\'e map $\Phi$ will be the composition of a flow-induced outer maps $\Phi_{out}$ along the homoclinic orbits with an inner map $\Phi_{in}$, for which we will use the normal form~(\ref{eq:normform1}). Given a positive ${\varepsilon}<{\varepsilon}_a$, we denote by $a_i^\nu$ and $b_i^\nu$ the exit and entrance point of $\Omega_i^\nu$ relatively to the sections ${\Sigma}^u[{\varepsilon}]$ and ${\Sigma}^s[{\varepsilon}]$. \subsubsection{The outer map.} The outer map $\Phi_{out}$ will be defined over the union of small 3-dimensional neighborhoods ${\mathcal R}_i^\nu$ of the points $a^\nu_i$ in ${\Sigma}^u[{\varepsilon}]$ and will take its values in the union of 3-dimensional neighborhoods of the points $b^\nu_i$ in ${\Sigma}^s[{\varepsilon}]$. \paraga In the $(x_u,y_u)$--coordinates, we set \begin{equation}\label{eq:coordainu} a_i^\nu=(\xi_i^\nu,0), \end{equation} so that in particular $\xi_i^\nu\neq0$. To define the neighborhoods ${\mathcal R}_i^\nu$, we first need to introduce the following notation for $2$-dimensional rectangles in ${\Sigma}^u[{\varepsilon},e]$. \begin{itemize} \item For $\xi\in\,]-{\varepsilon},{\varepsilon}[\setminus\{0\}$ and for $0<\delta<\abs{\xi}$ and $\delta'>0$, we set \begin{equation}\label{eq:rectangle0} R[\xi,\delta,\delta',e]=\{(x_u,y_u)\in{\Sigma}^u[{\varepsilon},e]\mid \abs{x_u-\xi}\leq \delta,\ \abs{y_u}\leq \delta'\}. \end{equation} \end{itemize} The neighborhood ${\mathcal R}_i^\nu\subset {\Sigma}^u[{\varepsilon}]$ will be the union of a one-parameter family of such rectangles: \begin{equation}\label{eq:gloreg} {\mathcal R}_i^\nu=\bigcup_{\abs{e}\leq e_0} R[\xi_i^\nu,\delta,\delta',e]. \end{equation} where the parameters $\delta,\delta'$ will be chosen independently of $e$ (and of $i$ and $\nu)$. The determination of $e_0,\delta,\delta'$ will necessitate several steps which will be made explicit in the following. \paraga Let us introduce the first constraint on $e_0$ and $\delta,\delta'$. By the transversality condition of the invariant manifolds along the homoclinic orbits, a small (one-dimensional) segment of ${\Sigma}^u[{\varepsilon},0]\cap W^u_\ell=\{y_u=0\}$ around $a_i^\nu$ is sent by $\Phi_{out}$ on a small curve in ${\Sigma}^s[{\varepsilon},0]$ which is transverse to ${\Sigma}^s[{\varepsilon},0]\cap W^s_\ell=\{y_s=0\}$ at $b_i^\nu$. Now the image of $R[\xi_i^\nu,\delta,\delta',e]$ by $\Phi_{out}$ is the union of the images of the horizontals $\{y_u={\rm cte}\}$ of the rectangle. We require the following condition (see Figure \ref{fig:rectout}). \begin{itemize} \item The energy $e_0>0$ and the constants $\delta,\delta'$ are small enough so that for $\abs{e}<e_0$, the images of the horizontals of $R[\xi_i^\nu,\delta,\delta',e]$ by $\Phi_{out}$ transversely intersect the line $\{y_s=0\}\subset{\Sigma}^s[{\varepsilon},0]$. 
\end{itemize}
\begin{figure}
\caption{The image of a rectangle in ${\Sigma}^u[{\varepsilon},0]$ under the map $\Phi_{out}$.}
\label{fig:rectout}
\end{figure}
\subsubsection{The inner map} \label{ssec:innermap}
The inner map $\Phi_{in}$ sends a point $m\in{\Sigma}^s[{\varepsilon}]\setminus W^s(O)$ to the first intersection point of its orbit with the section ${\Sigma}^u[{\varepsilon}]$, provided moreover that the segment of orbit defined by these two points stays inside the coordinate ball $B$.
\begin{lemma} \label{lem:phiin} For $0<{\varepsilon}<{\varepsilon}_a$ and $\abs{e}<\widehat e$ (see (\ref{eq:energie})), let us set
\begin{equation}\label{eq:coordphi1}
X_u(x_s,y_s)={\varepsilon}^{\Lambda(y_s,e)}\,\frac{y_s}{x_s}\abs{\,\phi_{{\varepsilon},{\sigma}}(y_s,e)}^{-\Lambda(y_s,e)},\qquad \Lambda(y_s,e)=\frac{\lambda_1(y_s,\chi(y_s,e))}{\lambda_2(y_s,\chi(y_s,e))},
\end{equation}
where $\chi$ was introduced in (\ref{eq:u2s20}) and $\phi_{{\varepsilon},{\sigma}}=\frac{1}{{\sigma}{\varepsilon}}\chi$ as in (\ref{eq:U2}). Then the domain of $\Phi_{in}$ in ${\Sigma}^s[{\varepsilon},e]$ reads
\begin{equation}\label{eq:defdom}
{\mathscr D}[{\varepsilon},e]=\{(x_s,y_s)\in {\Sigma}^s[{\varepsilon},e]\mid \abs{X_u(x_s,y_s)}<{\varepsilon}\},
\end{equation}
and for $m=(x_s,y_s)\in {\mathscr D}[{\varepsilon},e]$, $\Phi_{in}(m)\in{\Sigma}_{{\sigma}'}^u[{\varepsilon},e]$ with
\begin{equation}\label{eq:coordcomp}
{\sigma}'={\rm sgn\,}\big(\phi_{{\varepsilon},{\sigma}}(y_s,e)\big),\qquad \Phi_{in}(m)=\Big(X_u(x_s,y_s),Y_u(x_s,y_s)=y_s\Big).
\end{equation}
\end{lemma}
\vskip2mm
\begin{proof} Let $m=(u_1^0,u_2^0,s_1^0,s_2^0)\in B$. Then if the orbit of $m$ stays inside $B$, its associated solution reads
\begin{equation}\label{eq:linflow}
u_i(t)=u^0_i\,e^{\lambda_i(u_1^0s_1^0,u_2^0s_2^0)t},\qquad s_i(t)=s^0_i\,e^{-\lambda_i(u_1^0s_1^0,u_2^0s_2^0)t}, \qquad i=1,2.
\end{equation}
Assume moreover that $m\in{\Sigma}^s_{\sigma}[{\varepsilon},e]$, with coordinates $(x_s,y_s)$ in this section, so that $y_s=u_1^0s_1^0$. Assume that $u_2^0=\phi_{{\varepsilon},{\sigma}}(y_s,e)$ is nonzero. Then the transition time $\tau(m)$ to reach the section $\abs{u_2}={\varepsilon}$ is well defined and reads
$$
\tau(m)=\frac{1}{\lambda_2\big(y_s,\chi(y_s,e)\big)}{\rm Ln\,}\frac{{\varepsilon}}{\abs{\,\phi_{{\varepsilon},{\sigma}}(y_s,e)}},
$$
which immediately yields the equality
$$
u_1\big(\tau(m)\big)=X_u(x_s,y_s)
$$
by (\ref{eq:linflow}) and (\ref{eq:coordu0}), provided that the orbit stays inside $\norm{(u,s)}_\infty\leq {\varepsilon}$. Since the $u$-coordinates increase while the $s$-coordinates decrease along the orbits contained in this domain, one easily checks that the inequality $\abs{X_u(x_s,y_s)}\leq {\varepsilon}$ is a necessary and sufficient condition for $m$ to belong to ${\mathscr D}[{\varepsilon},e]$. This proves (\ref{eq:defdom}) and the expression of $\Phi_{in}$ directly follows from the previous remarks. Finally, since the sign of the coordinates is preserved by the flow, one immediately sees that ${\sigma}'={\rm sgn\,}\big(\phi_{{\varepsilon},{\sigma}}(y_s,e)\big)$.
\end{proof}
\vskip2mm
\begin{figure}
\caption{Case $e>0$.}
\label{fig:philin1}
\end{figure}
\begin{figure}
\caption{Case $e<0$.}
\label{fig:philin2}
\end{figure}
Figures \ref{fig:philin1} and \ref{fig:philin2} depict the image of a transverse segment under the map $\Phi_{in}$. We limit ourselves to the part of its image which will prove useful in the following constructions. The ``useful domain'' in the section ${\Sigma}^u$ is delimited by gray rectangles.
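To fix ideas on the behavior of $\Phi_{in}$, consider the linear model $R\equiv 0$, for which $\lambda_i(\cdot,\cdot)\equiv\lambda_i$ and $\Lambda\equiv\lambda_1/\lambda_2$: a point entering with $\abs{u_2^0}=\abs{\,\phi_{{\varepsilon},{\sigma}}(y_s,e)}$ reaches $\abs{u_2}={\varepsilon}$ after the time $\tau(m)=\frac{1}{\lambda_2}{\rm Ln\,}\frac{{\varepsilon}}{\abs{u_2^0}}$, during which the coordinate $u_1$ is multiplied by $({\varepsilon}/\abs{u_2^0})^{\lambda_1/\lambda_2}$ while $y_u=u_1s_1$ is left unchanged. For instance, with $\lambda_1=2\lambda_2$, ${\varepsilon}=10^{-1}$ and $\abs{u_2^0}=10^{-3}$, this factor is $10^{4}$: this strong expansion in the $u_1$ direction (and the corresponding contraction in $s_1$) is the mechanism behind the horseshoes constructed below.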
\vskip2mm
We will also need the following result for $\Phi_{in}^{-1}$, whose proof is analogous to that of Lemma~\ref{lem:phiin}.
\begin{lemma} \label{lem:phiininv} For $0<{\varepsilon}<{\varepsilon}_a$ and $\abs{e}<\widehat e$, we set
\begin{equation}\label{eq:coordphiinv1}
X_s(x_u,y_u)={\varepsilon}^{\Lambda(y_u,e)}\,\frac{y_u}{x_u}\abs{\,\phi_{{\varepsilon},{\sigma}}(y_u,e)}^{-\Lambda(y_u,e)},\qquad \Lambda(y_u,e)=\frac{\lambda_1(y_u,\chi(y_u,e))}{\lambda_2(y_u,\chi(y_u,e))}.
\end{equation}
Then the domain of $\Phi_{in}^{-1}$ reads
\begin{equation}\label{eq:defdominv}
{\mathscr D}^{-1}[{\varepsilon},e]=\{(x_u,y_u)\in {\Sigma}^u[{\varepsilon},e]\mid \abs{X_s(x_u,y_u)}<{\varepsilon}\},
\end{equation}
and for $m=(x_u,y_u)\in {\mathscr D}^{-1}[{\varepsilon},e]$, $\Phi_{in}^{-1}(m)\in{\Sigma}_{{\sigma}'}^s[{\varepsilon},e]$ with ${\sigma}'={\rm sgn\,}\big(\phi_{{\varepsilon},{\sigma}}(y_u,e)\big)$. Finally,
\begin{equation}\label{eq:coordcompinv}
\Phi_{in}^{-1}(m)=\Big(X_s(x_u,y_u),Y_s(x_u,y_u)=y_u\Big).
\end{equation}
\end{lemma}
The behavior of $\Phi_{in}^{-1}$ is therefore immediately deduced from that of $\Phi_{in}$, by simply interchanging the subscripts $s$ and $u$.
\subsubsection{A picture of the Poincar\'e map $\Phi=\Phi_{in}\circ\Phi_{out}$}
Here we limit ourselves to the case of two simple homoclinic orbits $\Omega^0$ and $\Omega^1$, with exit points in the same section ${\Sigma}^u_{\sigma}[{\varepsilon},0]$ and entrance points in the same section ${\Sigma}^s_{\sigma}$; we moreover assume $e>0$. With the notation of (\ref{eq:gloreg}), we set $R^\nu=R[\xi^\nu,\delta,\delta',e]$ for $\nu=0,1$, where as usual $(\xi^\nu,0)$ stands for the coordinates of the exit point of $\Omega^\nu$. Gathering the previous descriptions, one gets the following picture for the images of the rectangles $R^\nu$ (we have limited the images $\Phi(R^\nu)$ to their ``useful parts'').
\begin{figure}
\caption{The Poincar\'e return map.}
\label{fig:Poincare}
\end{figure}
\subsection{Technical estimates for the inner map}
\paraga We begin with an easy lemma on the behavior of the function $\phi_{{\varepsilon},{\sigma}}$ introduced in (\ref{eq:U2}).
\begin{lemma}\label{lem:estimates1} Fix ${\varepsilon}\in\, ]0,\widehat{\varepsilon}\,]$ and $\abs{e}<\widehat e$ (see (\ref{eq:U2}), (\ref{eq:energie})). Set $\phi:=\phi_{{\varepsilon},{\sigma}}(.,e):[-{\varepsilon}^2,{\varepsilon}^2]\to{\mathbb R}$. Then $\phi$ is monotone, with ${\rm sgn\,}(\phi')=-{\sigma}$, and vanishes at a single point $y_s^*$ such that
\begin{equation}\label{eq:zeroU}
y_s^*=\frac{e}{\lambda_1}+O_2(e).
\end{equation}
\end{lemma}
\begin{proof} The function $ \phi_{{\varepsilon},{\sigma}}(y_s,e) $ is well defined on $[-{\varepsilon}^2,{\varepsilon}^2]\times E$. The following derivative is immediately deduced from the implicit expression of $\chi$ in (\ref{eq:u2s20}):
$$
\partial_{y_s}\phi_{{\varepsilon},{\sigma}}(y_s,e)=\displaystyle-\frac{\lambda_1+\partial_1 R(y_s,\chi(y_s,e))} {{\sigma}{\varepsilon}\big(\lambda_2+\partial_2 R(y_s,\chi(y_s,e))\big)}.
$$
By (\ref{eq:major}) and (\ref{eq:estlamda1}), this shows that $\phi$ is monotone with ${\rm sgn\,}(\phi')=-{\sigma}$ on its domain. Finally, (\ref{eq:zeroU}) is immediate by (\ref{eq:deriv}).
\end{proof}
\paraga We now restrict ourselves to suitable horizontal strips inside the sections, in order to get asymptotic estimates on the various quantities involved in the construction of the horseshoes.
In the following we write $\phi$ instead of $\phi_{{\varepsilon},{\sigma}}$ when the context is clear. \begin{lemma}\label{lem:estimates} For $0<{\varepsilon}<\widehat{\varepsilon}$ fix $\xi({\varepsilon})\in\ ]0,{\varepsilon}[$. Then there exist positive constants $\kappa$, $C_{\varepsilon}, C'_{\varepsilon}, e_0$, {\em with $\kappa<1/{2\lambda_1}$ independent of ${\varepsilon}$}, such that for $\abs{e}<e({\varepsilon})$ and for $(x_s,y_s)$ in the domain ${\mathscr D}[{\varepsilon},e]$ such that $\abs{y_s}\leq \kappa \abs{e}$, the function $ X_u(x_s,y_s) $ introduced in (\ref{eq:coordphi1}) satisfies the following estimates: \vskip-1mm \begin{equation}\label{eq:inegX} \abs{X_u(x_s,y_s)}\geq C_{\varepsilon}\abs{y_s}\abs{e}^{-\overline\Lambda}, \end{equation} \begin{equation}\label{eq:inegdxX} C'_{\varepsilon}\frac{1}{\abs{x_s}^2}\abs{e}^{-\widehat \Lambda+1}\geq \abs{\partial_{x_s}X_u(x_s,y_s)}\geq C_{\varepsilon}\abs{y_s}\abs{e}^{-\overline\Lambda}, \end{equation} and if $\abs{x_s}\geq \xi({\varepsilon})$: \begin{equation}\label{eq:inegdyX} \abs{\partial_{y_s}X_u(x_s,y_s)}\geq C_{\varepsilon} \abs{e}^{-\overline\Lambda}. \end{equation} \end{lemma} \vskip2mm \begin{proof} We can obviously assume that ${\varepsilon}_0<1$ and $\abs{\phi}<1$ so that (\ref{eq:coordphi1}) yields $$ {\varepsilon}^{\widehat\Lambda}\,\frac{\abs{y_s}}{\abs{x_s}}\abs{\,\phi(y_s)}^{-\overline \Lambda}\leq \abs{X_u(x_s,y_s)} \leq {\varepsilon}^{\overline\Lambda}\,\frac{\abs{y_s}}{\abs{x_s}}\abs{\,\phi(y_s)}^{-\widehat \Lambda}. $$ We assume first that $\kappa<1/{2\lambda_1}$, so with the upper bound (\ref{eq:deriv}) we easily get the inequalities \begin{equation}\label{eq:provin} \frac{1}{4\lambda_2{\varepsilon}}\abs{e}\leq \abs{\phi(y_s)}\leq \frac{2}{\lambda_2{\varepsilon}} \abs{e} \end{equation} for $\abs{e}\leq \overline e_0$, and (\ref{eq:inegX}) follows easily with $e({\varepsilon})=\mathop{\rm Min\,}\limits(\overline e_0,\lambda_2{\varepsilon}/2)$. The derivatives of $X_u$ read \begin{equation} \partial_{x_s}X_u(x_s,y_s)=-{\varepsilon}^{\Lambda(y_s,e)}\frac{y_s}{x_s^2}\abs{\,\phi(y_s)}^{-\Lambda(y_s,e)}, \end{equation} \begin{equation}\label{eq:dyX} \begin{array}{lll} \partial_{y_s}X_u(x_s,y_s) &=& \frac{{\varepsilon}^{\Lambda(y_s,e)}}{x_s}\big[\abs{\,\phi(y_s)}-\Lambda(y_s,e)\,y_s\abs{\phi}'(y_s)\big]\abs{\,\phi(y_s)}^{-(\Lambda(y_s,e)+1)}\\ &+& \partial_{y_s} \Lambda(y_s,e)\big({\rm Ln\,}{\varepsilon}-{\rm Ln\,}\abs{\,\phi(y_s)}\big)X_u(x_s,y_s).\\ \end{array} \end{equation} The estimates (\ref{eq:inegdxX}) are also immediate from (\ref{eq:provin}). To prove (\ref{eq:inegdyX}) observe first that by Lemma \ref{lem:estimates1}: $$ \abs{\phi}'(y_s)\leq \frac{\widehat\Lambda}{{\varepsilon}}. $$ Therefore, by (\ref{eq:provin}), for $\kappa>0$ small enough, there exists a positive constant $c>0$ such that \begin{equation}\label{eq:ineg4} \abs{\abs{\phi(y_s)}-\Lambda(y_s)\,y_s\abs{\phi}'(y_s)}\geq c\abs{e}. \end{equation} The estimate (\ref{eq:inegdyX}) immediately follows from (\ref{eq:dyX}), (\ref{eq:ineg4}) and (\ref{eq:inegX}). \end{proof} \paraga The following lemma will enable us to make precise the localization of our horseshoes. We keep the notation of Lemma~\ref{lem:phiin}. \begin{lemma}\label{lem:confin} Fix a constant $\kappa_0>0$. 
Then there exists ${\varepsilon}_0>0$ and $e_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$ and $\abs{e}\leq e_0$ the subset ${\mathscr E}[{\varepsilon},e]$ of ${\Sigma}^s[{\varepsilon},e]$ defined by $$ {\mathscr E}[{\varepsilon},e]=\Big\{(x_s,y_s)\in {\Sigma}^s[{\varepsilon},e]\mid \abs{x_s}\leq {\varepsilon}^{\overline\Lambda},\ \abs{X_u(x_s,y_s)}\leq{\varepsilon}^{\overline \Lambda}\Big\} $$ is contained in the horizontal strip $\abs{y_s}\leq \kappa_0 e$. Moreover, given $0<c<1$, the set $$ {\mathscr E}[{\varepsilon},e,c]=\Big\{(x_s,y_s)\in {\Sigma}^s[{\varepsilon},e]\mid c{\varepsilon}^{\overline\Lambda}<\abs{x_s}\leq {\varepsilon}^{\overline\Lambda},\ \abs{X_u(x_s,y_s)}\leq{\varepsilon}^{\overline \Lambda}\Big\} $$ is bounded above and below by two $\mu(e)$ horizontal curves over $c{\varepsilon}^{\overline\Lambda}<\abs{x_s}\leq {\varepsilon}^{\overline\Lambda}$, with $\mu(e)\to 0$ when $e\to 0$. \end{lemma} \begin{proof} Assume that $\abs{y_s}>\kappa_0 \abs{e}$. Then $\abs{\lambda_1y_s-e}\leq (\lambda_1+\kappa^{-1})\abs{y_s}$ and, with the notation of (\ref{eq:deriv}), $\abs{\overline\chi(y_s,e)}\leq \abs{y_s}$ for $e$ small enough. As a consequence $$ \abs{\phi(y_s,e)}\leq \frac{c}{{\varepsilon}}\abs{y_s} $$ with $c=\frac{1}{\lambda_2}(\lambda_1+\kappa^{-1})+1$. Hence, if $\abs{x_s}\leq{\varepsilon}^{\overline\Lambda}$, $$ \abs{X_u(y_s,e)}\geq {\varepsilon}^{\widehat\Lambda} \frac{\abs{y_s}}{{\varepsilon}^{\overline\Lambda}}\Big(\frac{c}{{\varepsilon}}\abs{y_s}\Big)^{-\overline\Lambda} =c^{-\overline\Lambda}{\varepsilon}^{\widehat\Lambda}\abs{y_s}^{1-\overline\Lambda} \geq c^{-\widehat\Lambda}{\varepsilon}^{\widehat\Lambda+2(1-\overline\Lambda)} $$ since $\abs{y_s}\leq{\varepsilon}^2$. Now recall that we have assumed from the beginning that $\widehat\Lambda-\overline\Lambda< 2(\overline\Lambda-1)$ (see (\ref{eq:estlamb2})). Therefore $\widehat\Lambda+2(1-\overline\Lambda)<\overline\Lambda$ and $$ c^{-\widehat\Lambda}{\varepsilon}^{\widehat\Lambda+2(1-\overline\Lambda)}\geq {\varepsilon}^{\overline\Lambda} $$ for ${\varepsilon}$ small enough, which proves our first claim. The second one is an immediate consequence of the Implicit Function Theorem and estimates (\ref{eq:inegdxX}) and (\ref{eq:inegdyX}). \end{proof} We can finally prove Lemma~\ref{lem:poscomp}, which we recall here in an explicit form. \begin{lemma}\label{lem:compsign} Let $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ be a positive polyhomoclinic orbit. We fix an admissible ${\varepsilon}>0$ such that the estimates (\ref{eq:estiment}) and (\ref{eq:estimex}) are satified for the exit points $a_i=(\xi_i,0)$ and $b_i=(\eta_i,0)$ of $\Omega_i$ relatively to ${\Sigma}^u[{\varepsilon}]$ and ${\Sigma}^s[{\varepsilon}]$. Then, with the usual cyclic order $$ {\sigma}(b_i)={\sigma}(a_{i+1}), $$ so that $\Omega$ is compatible. \end{lemma} \begin{proof} There exists a sequence $({\Gamma}_n)_{n\geq0}$ of orbits with energies $e_n>0$, which converges to $\Omega$. So, for $n$ large enough, ${\Gamma}_n\cap{\Sigma}^u=\{a_1^{n},\ldots,a_p^{n}\}$ and ${\Gamma}_n\cap{\Sigma}^s=\{b_1^{n},\ldots,b_p^{n}\}$ with the following cyclic order $$ a_1^{n}<b_1^{n}<a_2^{n}<b_2^{n}<\cdots<a_p^{n}<b_p^{n}, $$ according to the orientation on ${\Gamma}_n$ induced by the flow, and moreover $$ \lim_{n\to\infty} a_i^{n}=a_i,\qquad \lim_{n\to\infty} b_i^{n}=b_i. $$ In particular, for $n$ large enough, the signs ${\sigma}(a_i^{n})$ and ${\sigma}(b_i^{n})$ are well-defined and equal to ${\sigma}(a_i)$ and ${\sigma}(b_i)$ respectively. 
Let us set $b_i^n=(\eta_i^n,y_i^n)$ in the coordinate system of (the appropriate part of) ${\Sigma}^s[{\varepsilon},e_n]$. For $n$ large enough, $b_i^{n}$ belongs to the domain of $\Phi_{in}$, $a_{i+1}^{n}=\Phi_{in}(b_i^{n})$ and their coordinates satisfy $$ \abs{\eta_i^n}\leq {\varepsilon}^{\overline\Lambda},\quad \abs{\xi_{i+1}^n}\leq {\varepsilon}^{\overline\Lambda}. $$ By Lemma \ref{lem:confin}, these inequalities prove that $\abs{y_i^n}<\kappa e_n$ and since $\kappa<1/2\lambda_1$, this shows by (\ref{eq:zeroU}) and Lemma \ref{lem:estimates1} that $$ {\rm sgn\,}\phi_{{\varepsilon},{\sigma}(b_i^n)}(y_i^n,e_n)={\sigma}(b_i^n). $$ Finally, by (\ref{eq:coordcomp}) and Lemma \ref{lem:phiin}, this proves that $$ {\sigma}(a_{i+1}^{n})={\sigma}\big(\Phi_{in}(b_i^{n})\big)={\sigma}(b_i^n) $$ which finally yields our result by taking the limit $n\to\infty$. \end{proof} \subsection{Proof of Theorems \ref{thm:hypdyn1} and \ref{thm:hypdyn2}} Here we first examine the combinatorics of horseshoes associated with two homoclinic orbits, according to their exit data, from which the proof of Theorems \ref{thm:hypdyn1} and \ref{thm:hypdyn2} easily follows. \subsubsection{Construction of the parametrized horseshoes} Here we fix two homoclinic orbits $\Omega_0$ and $\Omega_1$ and we fix an admissible ${\varepsilon}_a>0$. For $0<{\varepsilon}<{\varepsilon}_a$, ${\Sigma}^u[{\varepsilon}]$ and ${\Sigma}^s[{\varepsilon}]$ are exit and entrance sections for $\Omega_i$, relatively to which the exit and entrance points are well-defined. We denote them by $a_i=(\xi_i,0)$ and $b_i=(\eta_i,0)$ respectively (recall that both depend on ${\varepsilon}$). In the following we {\em fix} ${\varepsilon}>0$ small enough so that there exists $e_0>0$ for which Lemma~\ref{lem:estimates} and Lemma~\ref{lem:confin} apply to each $e\in\,]-e_0,e_0[$, with the choice \begin{equation}\label{eq:choicexi} \xi:=\xi({\varepsilon})={\frac{1}{2}}\mathop{\rm Min\,}\limits_{i}\abs{\xi_i}. \end{equation} in Lemma~\ref{lem:estimates}, and the constant $\kappa_0$ of Lemma~\ref{lem:confin} chosen equal to the constant $\kappa$ of Lemma~\ref{lem:estimates}. \paraga {\bf The images of verticals curves by the inner map.} We begin with the behavior of the inner map with respect to vertical curves in the entrance section. We refer to the appendix for the definition of horizontal and vertical curves. \begin{lemma} \label{lem:intcond} Fix $\mu_*>0$ and consider a family $(v_e)_{\abs{e}<e_0}$ of $\mu_*$--vertical curves over some fixed interval $I$ containing $0$ in ${\Sigma}^{s*}_{{\sigma}}[{\varepsilon},e]$. Then there exists $0<e_1<e_0$ such that for $\abs{e}<e_1$, the image $ \Phi_{in}\big(v_e\cap{\mathscr E}[{\varepsilon},e]\big) $ is contained in the section ${\Sigma}_{{\sigma}'}^u[{\varepsilon},e]$, with ${\sigma}'={\rm sgn\,}(e){\sigma}$. Moreover, in this section, the intersection $$ \Phi_{in}\big(v_e\cap{\mathscr E}[{\varepsilon},e]\big)\cap \Big\{(x_u,y_u)\mid\abs{x_u}\leq{\varepsilon}^{\overline\Lambda}\Big\} $$ is a $\mu(e)$--horizontal curve over the interval $\abs{x_u}\leq{\varepsilon}^{\overline\Lambda}$, with $\mu(e)\to0$ when $e\to0$. \end{lemma} \begin{proof} As usual we use the same notation for a curve and its underlying function, so that $$ L_e:=\Phi_{in}\big(\widetilde v_e\cap{\mathscr E}[{\varepsilon},e]\big)=\Big\{\big(X_u\big(v_e(t),t\big)\mid t\in I,\ \big(v_e(t),t\big)\in {\mathscr E}[{\varepsilon},e]\Big\}. $$ This is a curve contained in ${\Sigma}_{{\sigma}'}^u[{\varepsilon},e]$ by Lemma \ref{lem:confin}. 
Moreover, the slope $s_e(t)$ of $L_e$ at $\ell_e(t)$ satisfies $$ \abs{s_e(t)}=\frac{1}{\abs{\partial_x X_u({\zeta}(t),t) {\zeta}_h'(t)+\partial_y X_u({\zeta}(t),t)}}\leq \frac{1}{\abs{\partial_y X_u({\zeta}(t),t)}-\mu^*\abs{\partial_x X_u({\zeta}(t),t)}} $$ and since ${\mathscr E}[{\varepsilon},e]$ is contained in the strip $\abs{y_s}\leq\kappa \abs{e}$, Lemma \ref{lem:estimates1} proves that $\abs{s_e(t)}$ converges uniformly to $0$ when $e\to 0$. Observe finally that by the second claim of Lemma \ref{lem:confin}, since $v_e$ is a $\mu_*$ vertical curve, then for $\abs{e}$ small enough it transversely intersects the horizontal curves of the boundary of ${\mathscr E}[{\varepsilon},e]$ and the set of $t\in I$ such that $\big(v_e(t),t\big)\in {\mathscr E}[{\varepsilon},e]$ is an interval containing $0$. This proves that $L_e$ is connected and intersects the vertical segments $\abs{x_u}={\varepsilon}^{\overline\Lambda}$. As a consequence, by the previous estimate of the slope, $L_e$ is a $\mu(e)$ horizontal curve in the rectangle $\{\abs{x_u}\leq{\varepsilon}^{\overline\Lambda}\}$, with $\mu(e)\to0$ when $e\to 0$. \end{proof} \paraga {\bf Rectangles and intersection conditions.} We will now apply the previous lemma to get our horseshoe. To simplify the notation, given $D\subset {\Sigma}^u$, we write $\Phi(D)$ for the image by $\Phi$ of the intersection of $D$ with the domain of definition of $\Phi$, with a similar convention for all the maps involved in the construction. \begin{lemma}\label{lem:intcases} Fix $0<\delta<\abs{\xi}$ (where $\xi$ was defined in (\ref{eq:choicexi})) and fix $0<\delta' <{\varepsilon}^2$. Consider the rectangles $$ R_{i}(e)=R[\xi_i,\delta,\delta',e]\subset {\Sigma}^{u*}_{{\sigma}(a_i)}[{\varepsilon},e],\qquad i=0,1. $$ Then the pair $(R_{0},R_1)$ satisfies the intersection condition (see Definition~\ref{def:horse}) for the Poincar\'e map $\Phi=\Phi_{in}\circ\Phi_{out}$. More precisely: \begin{itemize}\label{item:cases} \item if ${\sigma}(a_{i'})={\rm sgn\,}(e){\sigma}(b_{i})$, $\Phi(R_{i}(e))\cap R_{i'}(e)$ is a $\mu(e)$ horizontal strip in $R_{i'}(e)$; \item if ${\sigma}(a_{i'})=-{\rm sgn\,}(e){\sigma}(b_{i})$, $\Phi(R_{i}(e))\cap R_{i'}(e)=\emptyset$. \end{itemize} \end{lemma} \begin{proof} Observe first that for $\abs{e}$ small enough the image $\Phi_{out}(R_{i}(e))$ is a ``rectangle'' contained in ${\Sigma}^s_{{\sigma}(b_i}[{\varepsilon},e]$. Moreover, the images of the horizontals of $R_{i}(e)$ are curves which transversely intersect the axis $\{y_s=0\}$, whose intersection with the domain ${\mathscr E}[{\varepsilon},e]$ are $\mu_*$ vertical curves for a suitable $\mu_*$. Therefore, by Lemma~\ref{lem:phiin}, $$ \Phi(R_{i}(e))\subset {\Sigma}^u_{{\sigma}'}[{\varepsilon},e], \qquad {\sigma}'=({\rm sgn\,}(e){\sigma}(b_i)), $$ and, by Lemma~\ref{lem:intcond}, $\Phi(R_{i}(e))$ is a $\mu(e)$-horizontal strip in $\{(x_u,y_u)\mid\abs{x_u}\leq{\varepsilon}^{\overline\Lambda}\}$, defined over the whole interval $\abs{x_u}\leq{\varepsilon}^{\overline\Lambda}$, with $\mu(e)\to0$ when $e\to 0$. This proves our claim for $\abs{e}$ small enough. \end{proof} \paraga {\bf Sector and hyperbolicity.} We will now prove that the sector conditions and hyperbolicity constraints of Definition \ref{def:horse} hold true in the rectangles we considered above. We begin with a lemma on the derivative of the Poincar\'e map $\Phi$. 
Let us write the matrix of the derivative of $\Phi_{out}$ at the point $a_i$ relative to the coordinates $(x_u,y_u)$ and $(x_s,y_s)$ in the form \begin{equation} D_{a_i}\Phi_{out}=\left[ \begin{array}{lll} p_0&q_0\\ r_0&s_0\\ \end{array} \right] \end{equation} with $p_0s_0-q_0r_0\neq0$ and $r_0\neq0$ (since $\Phi_{out}(W^u_\ell(O))$ is transverse to $W_\ell^s(O)$). \begin{lemma}\label{lem:hypcond} For $\delta$ and $\delta'$ small enough, for $m\in R[\xi,\delta,\delta';e]$, the map $D_m\Phi$ admits two real eigenvalues $\lambda^-(m),\lambda^+(m)$ with $$ \lambda^-(m,e)\sim_{e\to 0}r(m)\,\partial_{y_s} X_u(\Phi_{in}(m)) $$ $$ \lambda^+(m,e)\sim_{e\to 0}\frac{(p(m)r(m)-q(m)s(m))\partial_{x_s} X_u(\Phi_{in}(m))}{r(m)\,\partial_{y_s} X_u(\Phi_{in}(m))}, $$ uniformly with respect to $m$. In particular $$ \lambda^-(m,e)\to+\infty\quad\textit{and}\quad \lambda^+(m,e)\to 0 \quad\textit{when}\quad e\to0, $$ uniformly with respect to $m$. The associated eigenlines are spanned by the vectors $$ w^-(m,e)=\Big(\frac{1}{r(m)}\big(\lambda_e^h-s(m)\big), 1\Big),\qquad w^+(m,e)=\Big(\frac{1}{r(m)}\big(\lambda_e^v-s(m)\big), 1\Big). $$ The line ${\mathbb R}\, w^-(m,e)$ converges to the line $\{y_u=0\}\subset {\Sigma}^u[{\varepsilon},e]$, while the line ${\mathbb R}\, w^+(m,e)$ converges to $\Phi_{out}(\{y_s=0\})\subset{\Sigma}^u[{\varepsilon},e]$ when $e\to 0$, uniformly with respect to $m$. \end{lemma} \begin{proof} For $m=(x_u,y_u)\in R_i$, \begin{equation} D_{m}\Phi_{out} =\left[ \begin{array}{lll} p(m)&q(m)\\ r(m)&s(m)\\ \end{array} \right]=\left[ \begin{array}{lll} p_0&q_0\\ r_0&s_0\\ \end{array} \right] +o(\delta,\delta'). \end{equation} We can therefore assume that $ps-qr$ and $r$ are bounded below by positive constants $\Delta$ and $\rho$ over the rectangle $R_i$. Now the derivative of the Poincar\'e map $\Phi$ reads \begin{equation} D_m\Phi= \left[ \begin{array}{lll} p(m)\,\partial_{x_s} X_u+r(m)\,\partial_{y_s} X_u&q(m)\,\partial_{x_s} X_u+s(m)\,\partial_{y_s} X_u\\ r(m)&s(m)\\ \end{array} \right] \end{equation} where the derivatives of $X_u$ are computed at $\Phi_{in}(m)$. The trace of $D_m\Phi$ satisfies \begin{equation} \abs{{{\rm Tr}_e}(m) }= \abs{p(m)\,\partial_{x_s} X_u+r(m)\,\partial_{y_s} X_u+s(m)}\geq C \abs{e}^{-\overline\Lambda} \end{equation} for a suitable constant $C>0$, since $r(m)\geq \rho>0$, by Lemma \ref{lem:estimates}. The determinant of $D_m\Phi$ satisfies \begin{equation} \Delta_e(m)=(pr-qs)\partial_{x_s} X_u=o({{\rm Tr}_e}(m)), \end{equation} By standard computation, one immediately gets the estimates \begin{equation} \lambda_e^h(m)\sim_{e\to 0} {{\rm Tr}_e}(m),\qquad \lambda_e^v(m)\sim_{e\to 0} \frac{\Delta_e(m)}{{{\rm Tr}_e}(m)}, \end{equation} which proves our first claim. The second one on the eigenvectors is immediate. Finally, the convergence of the line ${\mathbb R}\, w^-(m,e)$ to the line $\{y_u=0\}$ is immediate, while the convergence of the line ${\mathbb R}\, w^+(m,e)$ to $\Phi_{out}(\{y_s=0\})$ is proved by a completely analogous reasoning on $\Phi^{-1}$, using now Lemma~\ref{lem:phiininv} (the uniformity with respect to $m$ comes from the compactness of the domain and range of the various maps). \end{proof} \vskip2mm Observe that the line $\{y_u=0\}\subset {\Sigma}[{\varepsilon},e]$ converges to $W^u_\ell\cap{\Sigma}^u[{\varepsilon},0]$, while the curve $\Phi_{out}^{-1}(\{y_s=0\})$ converges to $\Phi_{out}^{-1}(W^s_\ell\cap{\Sigma}^s[{\varepsilon},0])$ (where the sections are endowed with the appropriate sign). 
Note also that $\Phi_{out}^{-1}(W^s_\ell\cap{\Sigma}^s[{\varepsilon},0])$ is nothing but (some connected component of) the intersection $W^s\cap{\Sigma}^s[{\varepsilon},0]$. \vskip2mm \begin{lemma}\label{lem:hypcond2} For $m\in R_i$, the sectors $S^h_m$ and $S^v_m$ in $T_m{\Sigma}^u(e)$ defined by $$ S^h_z=\{(\xi,\eta)\in{\mathbb R}^2\mid \abs{\eta}\leq \mu_e\abs{\xi}\},\qquad S^v_z=\{(\xi,\eta)\in{\mathbb R}^2\mid \abs{\xi}\leq \mu_e\abs{\eta}\} $$ satisfy the stability condition and the dilatation conditions of Definition \ref{def:horse} with $$ \mu_e=\frac{1}{2\mathop{\rm Max\,}\limits_{m\in R_i}{{\rm Tr}_e}(m)}. $$ \end{lemma} \begin{proof} This is an immediate consequence of the form of the eigenvectors. \end{proof} \subsubsection{Proof of Theorem \ref{thm:hypdyn1}} This will be an immediate consequence of Lemmas~\ref{lem:intcases}, \ref{lem:hypcond}, \ref{lem:hypcond2}. We fix a compatible polyhomoclinic orbit $\Omega=(\Omega_1,\ldots,\Omega_\ell)$ and keep the previous assumptions and notation for the sections and the entrance and exit points. In particular $$ {\sigma}(b_i)={\sigma}(a_{i+1}) $$ for $1\leq i\leq \ell$, with the cyclic order. By Lemma~\ref{lem:intcases}, the transition matrix of the horseshoe satisfies $$ \alpha(i,i+1)=1\quad\textrm{for}\quad1\leq i\leq {\ell}. $$ The existence and hyperbolicity of the horseshoe come from Lemma~\ref{lem:hypcond} and Lemma~\ref{lem:hypcond2}. The statement on $m(e)$ is a direct consequence of Lemma~\ref{lem:hypcond} applied to the iterate $\Phi^{\ell}$. Theorem~\ref{thm:hypdyn1} is proved. \subsubsection{Proof of Theorem \ref{thm:hypdyn2}} Now $\Omega_0$ and $\Omega_1$ are compatible and satisfy the sign condition (\ref{eq:condsign}). We will work at negative energies, therefore, by Lemma~\ref{lem:intcases} $$ {\sigma}(a^0)={\sigma}(b^0),\qquad {\sigma}(a^1)={\sigma}(b^1),\qquad {\sigma}(a^0)=-{\sigma}(a^1). $$ One immediately checks that the transition matrix of the horseshoe reads $$ A=\left[ \begin{array}{lll} 0&1\\ 1&0\\ \end{array} \right]. $$ The statement on the existence and hyperbolicity of the horseshoe immediately comes from Lemma~\ref{lem:hypcond} and Lemma~\ref{lem:hypcond2}, while the statement on the periodic point $m(e)$ is a direct consequence of Lemma~\ref{lem:hypcond} applied to $\Phi^2$. Theorem~\ref{thm:hypdyn2} is proved. \section{A reminder on horseshoes}\label{sec:horses} \setcounter{paraga}{0} We will need some additional definitions concerning horseshoes and hyperbolic dynamics. We will follow the approach of Moser in \cite{Mos}, which is perfectly adapted to our two--dimensional situation. See also \cite{HK} for a more general point of view. \paraga Consider a rectangular subset $R$ of ${\mathbb R}^2$ of the form $R=I^h\times I^v$, where $I^h$ and $I^v$ are two compact nontrivial intervals of ${\mathbb R}$. Given $\mu>0$, a {\em $\mu$--horizontal curve} is the graph of a $\mu$--Lipschitzian map $c^h:I^h\to I^v$, while a {\em $\mu$--vertical curve} is the graph of a $\mu$--Lipschitzian map $c^v:I^v\to I^h$. A $\mu$--horizontal strip is a subset of $R$ limited by two nonintersecting horizontal curves, that is a set of the form $$ \{(x,y)\in R\mid c^h(x)\leq y\leq d^h(x)\} $$ where $c^h$ and $d^h$ are two $\mu$--Lipschitzian maps satisfying $c^h(x)<d^h(x)$ for $x\in I^h$. One defines similarly the $\mu$--vertical strips. 
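For instance, take $R=[0,1]\times[0,1]$ and $0<\mu\leq 1$: the graphs of $$ c^h(x)=\tfrac14+\tfrac{\mu}{2}\,x \qquad\textrm{and}\qquad d^h(x)=c^h(x)+\tfrac14 $$ are $\mu$--horizontal curves, and the region $\{(x,y)\in R\mid c^h(x)\leq y\leq d^h(x)\}$ is a $\mu$--horizontal strip; exchanging the roles of $x$ and $y$ yields $\mu$--vertical curves and strips.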
\begin{figure} \caption{Horizontal and vertical strips} \label{Fig:strips} \end{figure} \paraga The definition of a horseshoe we use here is from \cite{Mos}. \begin{Def}\label{def:horse} Consider a finite family of rectangles $R_i=I^h_i\times I^v_i$, $i\in\{0,\ldots,m\}$, in ${\mathbb R}^2$ and let $\Phi$ be a $C^1$--diffeomorphism defined on a neighborhood of $R=R_0\cup \cdots\cup R_m$. We say that $R$ is a horseshoe for $\Phi$ when there exists $\mu>0$ such that \begin{enumerate} \item for $(i,j)\in\{0,\ldots,m\}^2$, $\Phi(R_i)\cap R_j$ is either $\emptyset$ or a $\mu$--horizontal strip $H_{ij}$ and $\Phi^{-1}(R_j)\cap R_i$ is either $\emptyset$ or a $\mu$--vertical strip $V_{ij}$, so that $\Phi(V_{ij})=H_{ij}$; \item for each $z\in I=\big(\cup_{ij}H_{ij}\big)\cap \big(\cup_{ij}V_{ij}\big)$, there exists a sector $S^h_z\subset T_z{\mathbb R}^2\sim{\mathbb R}^2$, of the form $$ S^h_z=\{(\xi,\eta)\in{\mathbb R}^2\mid \abs{\eta}\leq \mu\abs{\xi}\}, $$ which satisfies the stability condition $D_z\Phi(S^h_z)\subset S^h_{\Phi(z)}$ for all $z\in H$ together with the dilatation condition $$ \forall (\xi,\eta)\in S^h_z, \quad\textit{setting}\quad D_z\Phi(\xi,\eta)=(\xi',\eta'), \quad\textit{then}\quad \abs{\xi'}\geq \mu^{-1}\abs{\xi}; $$ \item for each $z\in I$, there exists a sector $S^v_z\subset T_z{\mathbb R}^2$, of the form $$ S^v_z=\{(\xi,\eta)\in{\mathbb R}^2\mid \abs{\xi}\leq \mu\abs{\eta}\}, $$ which satisfies the stability condition $D_z\Phi^{-1}(S^v_z)\subset S^v_{\Phi^{-1}(z)}$ for all $z\in V$ together with the dilatation condition $$ \forall (\xi,\eta)\in S^v_z, \quad\textit{setting}\quad D_z\Phi^{-1}(\xi,\eta)=(\xi',\eta'), \quad\textit{then}\quad \abs{\eta'}\geq \mu^{-1}\abs{\eta}; $$ \item for all $z\in V$, $\abs{\det D_z\Phi}\leq {\tfrac{1}{2}}\mu^{-2}$ and for all $z\in H$, $\abs{\det D_z\Phi^{-1}}\leq {\tfrac{1}{2}}\mu^{-2}$. \end{enumerate} \end{Def} \vskip2mm Given a horseshoe $R=(R_k)_{1\leq k\leq m}$ for $\Phi$, one defines its {\em transition matrix} as the matrix $A=\big(\alpha(i,j)\big)\in M_m(\{0,1\})$ whose coefficient $\alpha(i,j)$ is $0$ when $H_{ij}=\emptyset$ and is $1$ when $H_{ij}\neq\emptyset$. Given such a transition matrix $A=\big(\alpha(i,j)\big)$, one defines as usual the $A$--admissible subset ${\mathscr S}_A$ of $\{0,\ldots,m\}^{\mathbb Z}$ by $$ (s_k)_{k\in{\mathbb Z}}\in {\mathscr S}_A\Longleftrightarrow \alpha(s_k,s_{k+1})=1,\quad \forall k\in{\mathbb Z}. $$ \paraga We can now set out the main result on horseshoes. \vskip3mm \noindent{\bf Theorem \cite{Mos}.} {\em Let $R=(R_k)_{1\leq k\leq m}$ be a horseshoe for the $C^1$ diffeomorphism $\Phi$, with transition matrix $A$. Equip ${\mathscr S}_A$ with the induced product topology and let $$ {\mathscr I}=\bigcap_{k\in{\mathbb Z}}\,\Phi^{-k} (R) $$ be the maximal invariant set for $\Phi$ contained in $R$. Then there exists a homeomorphism ${\mathscr C}:{\mathscr S}_A\to {\mathscr I}$ such that ${\mathscr C}\circ {\sigma}=\Phi\circ {\mathscr C}$, where ${\sigma}$ stands for the right Bernoulli subshift on ${\mathscr S}_A$, defined by $$ {\sigma}\big((s_k)\big)_{k\in{\mathbb Z}}=(\bar s_k)_{k\in{\mathbb Z}},\qquad \bar s_k=s_{k+1}.
$$ Moreover, the invariant set ${\mathscr I}$ is hyperbolic in the sense that there exist two continuous line bundles $L^h$ and $L^v$ defined over ${\mathscr I}$ and invariant under $D\Phi$, such that for all $z\in {\mathscr I}$: $$ \norm{D_z\Phi({\zeta})}\geq\mu^{-1}\norm{{\zeta}},\ \forall {\zeta}\in L_z^h,\qquad \norm{D_z\Phi^{-1}({\zeta})}\geq\mu^{-1}\norm{{\zeta}}, \ \forall {\zeta}\in L_z^v. $$ } Finally, let us notice that one can be more explicit about the coding of points of ${\mathscr I}$ by sequences induced by~${\mathscr C}$. Namely, for $m\in{\mathscr I}$: $$ {\mathscr C}(m)=(s_k)_{k\in{\mathbb Z}} \Longleftrightarrow \Phi^k(m)\in R_{s_k},\quad \forall k\in{\mathbb Z}. $$ \end{document}
arXiv
Can we quantitatively understand quark and gluon confinement in quantum chromodynamics and the existence of a mass gap? How do we know $\theta_\mathrm{QCD} \ne \pi$? Is the confinement mechanism understood in 1+1 QCD ('t Hooft's model)? Why is a superposition of vacuum states possible in QCD, but not in electroweak theory? Is even the perturbative expansion of the QCD beta function expected to be divergent? What's the deepest reason why QCD bound states have integer charge? Commutation and Anticommutation relations in lattice QCD Why group elements associated with gauge transformations of finite action field configurations in QCD don't depend in $r$? Large $N_{c}$ QCD: motivation QCD and random matrix theory When can we say we fully understand QCD? + 11 like - 0 dislike What constitutes a sufficient "understanding" of QCD? Take an analogy, if an undergrad understands Newton's three laws well, the formula for Newtonian gravity and know how to solve for conic sectional orbits, then I won't cringe if this undergrad claims to have understood Newtonian gravity. Now, I'd like to ask what makes people say we haven't understood low energy QCD yet: if it means we must have a systematic nonperturbative computational scheme to calculate everything of interest, isn't lattice QCD (in principle) enough? If it means we must have a rigorous mathematical foundation for it, then in the Newtonian analogy, wouldn't that imply no physicist understood Newtonian gravity before mathematical analysis was discovered, which is an absurd statement? Still, my impression is that people talk about QCD as if some fundamentally new enlightenment is needed, what's the reason for that? What do we really hope to accomplish so that a complete "understanding" can be claimed? How do we know we are not just pushing the technical boundary further and further? quantum-chromodynamics soft-question asked Mar 11, 2015 in Theoretical Physics by Jia Yiyang (2,640 points) [ revision history ] edited Mar 12, 2015 by Jia Yiyang It is true that we know the formula. Like we know the formular for newton's law of mechanics, But does that mean you can compute the precise orbits of 9 planets accurate to 1000 years? You would simply put it in a computer, if you had to do that kind of a calculation. I don't know of anyone who seriously thinks he can solve a 9 body problem analytically. There is probably very little hope, unless ofcourse someone comes along and actually does solve it. It seems to me that there is some hope with QCD to say atleast something more analytically. Yang mills mass gap conjucture for instance is one way to motivate research in that direction. I think the problem is strictly of a mathematical nature. However there are other related reseach ideas, especially in understanding the supersymmetric versions of QCD to provide deeper insight into the structure of physics itself. commented Mar 11, 2015 by Prathyush (695 points) [ revision history ] @Prathyush, Like we know the formular for newton's law of mechanics, But does that mean you can compute the precise orbits of 9 plants accurate to 1000 years? You would simply put it in a computer, if you had to do that kind of a calculation. What do you want to convey by this paragraph? Not knowing how to do a calculation that precise doesn't mean we don't understand Newtonian gravity. But again in the Newtonian analogy, could mass gap problem be like the problem of stability of the solar system? 
It's also strictly of mathematical nature, and heavy machinery of analytic nature is probably crucial, but not understanding it seems to pose no big threat to the claim that we understand Newtonian gravity. I admit I'm partly struggling with the philosophical underpinning of "understanding". I need to go to bed after writing this comment, don't hold your breath for my next reply:-) commented Mar 11, 2015 by Jia Yiyang (2,640 points) [ no revision ] Yes, there are 2 senses we use the word understanding. One is to obtain the correct action principle. We which we do have for QCD. The second is use the action to make predictions and confirm Experiments. We have some predictive power, when it comes to observed phenomenon.(Gell-mann quark models and so on) However in most cases there are no exact answers, like mass of a proton and so on. The second problem is much like stability of the solar system. You can do it on a lattice, but you are confined to accuracy of the simulations, and can't say anything beyond that scope. When I said there is some hope in QCD, I mean in simplifying the expressions analytically. When in the case of the 9 body problem it is seemingly impossible. As I said the problem is strictly mathematical. @JiaYiyang Take a look at this for instance, Even though we know something about QCD a lot of stuff need explanation. http://guava.physics.uiuc.edu/~nigel/courses/569/Essays_Fall2011/Files/Garcia.pdf edited Apr 11, 2015 by Prathyush The mass gap is a statement about the smallest mass bound state of the theory, corresponding in the solar analogy to the Kepler problem, which can be solved exactly and hence is easy to understand. Whereas the stability of the solar system is the analogue of the question of whether a particular set of particles with given properties (sun and planets) form a bound state. Clearly understanding the latter is far more demanding than understanding the former. commented Apr 12, 2015 by Arnold Neumaier (13,989 points) [ no revision ] The understanding of an ordinary differential equation has nothing to do with being able to successfully execute a Runge-Kutta method. The latter only gives numerical values for an individual trajectory (or if called multiple times, for many). But understanding means to know where its fixed points are, how it behaves for large times, how sensitive the solutions are to a change of initial conditions, etc.. Not single numbers or curves but the general pattern of arbitrary solutions. We are far for such an understanding of QCD except in the very high energy region. Thus people say that QCD is not understood because in the infrared domain (confinement, mass gap, bound states) we have very little grasp on how to obtain properties of arbitrary solutions at a level that conveys more than individual numerical numbers. Lattice QCD is just a black box that spits out a (fairly inaccurate) number for every numerical question we ask, it give not understanding in any sense. One can say we understand QCD if we can derive from its action the low energy Hamiltonian and the bound state content (mesons, baryons, and perhaps glueballs) to an extent that we can match the meson and baryon data from the Particle Data Group to its particle spectrum. We wouldn't understand Newtonian gravity if we coudn't do a qualitative analysis of the 3-body problem. There is no analytic solution but still we understand (and can approximate) essentially everything about its behavior, independent of the detailed parameters of the problem. 
Lattice calculations in the 3-body problem would correspond to discretizing the dynamics using second-order divided differences, which very poorly resolves the dynamics of a 3-body problem, so a very fine lattice would be required to give good results over a significant time span, and then it would just be for a single system - nothing general. Perturbation theory around a 2-body problem gives very useful analytic approximations not just for a single system but for all problems in the class, and one can deduce a lot fronm its sutdy, whereas even a better discretization method (like modern symplectic integrators) give just a single trajectory, or if repeated a bunch of trajectories, from which one cannot deduce much about the qualitative behavior. Understanding always means to be able to derive qualitative understanding, not just numbers. (And even the numbers obtained from lattice QCD are not impressive. I haven't seen even a single attempt to compute the full baryon and meson spectrum from QCD. Given that QCD needs no numerical input to define it, the accuracy for basic predictions such as the proton mass (by lattice QCD or by Schwinger-Dyson equations) is perhaps 5 percent, and it will not grow much even if the speed of computers and algorithms increase by a factor of $10^6$ (which is not realistic). No, we need a much better understanding! We understand QED much better than QCD, because there are many good approximation schemes which give qualitative information about almost everything of interest. But even QED is not completely understood as we don't have a logically satisfying setting for the theory, and things like the nonperturbative existence of the Landau pole are unsettled. answered Mar 11, 2015 by Arnold Neumaier (13,989 points) [ revision history ] edited Mar 15, 2015 by Arnold Neumaier Most voted comments show all comments Hi Arnold, thanks for all the valuable input. I'm aware it's usually too exacting to envision what's the next big thing(or if there is any) before a big thing actually emerges, but I'm currently facing a choice problem between going into more formal QFT/string or going into more down-to-earth QCD study for a PhD, so I can't help thinking about these "big-picture" issues, which sometimes can simply be ill-posed(but I certainly hope it hasn't been the case for this question). Let me chew on the issue for some more time. commented Mar 12, 2015 by Jia Yiyang (2,640 points) [ revision history ] @JiaYiyang: I think there are many unsolved issues in QCD (and some even in QED). It is a pity that many of the best minds go to a more speculative side of theoretical physics such as string theory rather than work on conventional QFT. I believe that standard QFT remains valid even at the Planck scale and below, and that the real progress in fundamental physics will come from getting a stronger nonperturbative analytic grasp on QFT - e.g., through finding a valid Hilbert space setting - rather than from changing the foundations. (Of course, this doesn't necessarily affect the choice of a Ph.D. topic, as this must more be something tractable rather than something aimed at the physics of the future.) commented Mar 13, 2015 by Arnold Neumaier (13,989 points) [ no revision ] A lot of intuition to understanding standard non perturbative QFT arises from string theory. Most of the string theorist are primarily experts in QFT. 
Although a lot of work is being done in understanding string theory on its own merit, a lot of work is being done in understanding QFT in general using tools from string theory. I think that at this time only by using string tools can we get a better grasp of QFT. Tools like non-perturbative dualities, AdS/CFT and so on. commented Mar 13, 2015 by conformal_gk (3,625 points) [ no revision ] @ArnoldNeumaier: D-branes are one of the most important objects in string theory. The whole AdS/CFT and its extensions are based on completely geometrical constructions using D-branes. Indeed, QCD is still far from being understood via a holographic dual description, but this, nonetheless, is a very promising area of research and one needs to understand to some extent string theory in order to understand the various constructions. In any case most of the current string theory research that is not focused on phenomenology is focused on understanding some sort of dual field theories. @ArnoldNeumaier I will not pretend I know a lot about the models of holographic QCD but in principle you do calculations on the bulk where SUGRA is present. My comment above was meant in that sense. I agree, you do not need to understand a great deal of string theory to apply holography in some models and this is why a lot of CMT people have shifted to holography. To create new ones that would be better and better approximations to various kinds of dual field theories though, you need to understand very well string theory and the various geometric constructions. @ArnoldNeumaier, @conformal_gk, @VladimirKalitvianski, @Prathyush, thank you all and you've been helpful, I'd like to reply but I've unfortunately run out of thought...... @conformal_gk: Nothing in AdS/CFT as applied to QCD uses the notion of a string. Of course string theorists develop a lot of QFT along the side, but the tools they develop for the latter are QFT tools, not string theory tools. Only the motivation for having developed them comes from string theory. We will completely understand QCD when we are able to compute any quantity we want at any energy scale we want. The main objects of interest are of course $n$-point correlation functions in the IR below $\Lambda_{\text{QCD}}$. We also need to understand QCD instantons, QCD phase transitions like the transition to the quark-gluon plasma. Also, an open problem is the mathematical proof of the existence of the Yang-Mills (e.g. QCD) mass gap. The problem with the lattice is that it is Euclidean and that you solve numerically. Understanding the theory and being able to predict requires in some sense analytical and exact results. Lattice as well as other approaches give you intuition but this does not mean you have solved the theory. answered Mar 11, 2015 by conformal_gk (3,625 points) [ no revision ] QCD instantons? I haven't heard about them. Can you comment on this topic, even references would be appreciated. commented Mar 11, 2015 by Prathyush (695 points) [ no revision ] I don't know, just google search it. It is very standard, for example http://arxiv.org/abs/hep-ph/9610451
CommonCrawl
Association of kidney function-related dietary pattern, weight status, and cardiovascular risk factors with severity of impaired kidney function in middle-aged and older adults with chronic kidney disease: a cross-sectional population study Adi Lukas Kurniawan1, Chien-Yeh Hsu2, 3, Hsiao-Hsien Rau4, Li-Yin Lin1 and Jane C.-J. Chao1, 3, 5Email authorView ORCID ID profile Nutrition Journal201918:27 Chronic Kidney Disease (CKD), characterized by impaired kidney function, affects over 1.5 million individuals in Taiwan. Cardiovascular disease (CVD) is commonly found in patients with CKD, and the increased prevalence of obesity can have some implications for the risk of both CKD and CVD. Since diet plays an important role in the development of obesity, CVD and CKD, our study was designed to investigate the association of kidney function-related dietary pattern with weight status, cardiovascular risk factors, and the severity of impaired kidney function in middle-aged and older adults in Taiwan. A total of 41,128 participants aged 40 to 95 years old with an estimated glomerular filtration rate (eGFR) less than 90 mL/min/1.73 m2 and proteinuria were recruited from Mei Jau Health Institute between 2008 and 2010. The kidney function-related dietary pattern was identified using reduced rank regression (RRR) and was known as high consumption of preserved or processed food, meat, organ meats, rice/flour products, and, low consumption of fruit, dark-colored vegetables, bread, and beans. A multivariable logistic regression analysis was used to identify the association of weight status and cardiovascular risk factors with moderately/severely impaired kidney function (eGFR < 60 mL/min/1.73 m2) and the association of dietary pattern with the outcomes aforementioned. Moderately/severely impaired kidney function participants were heavier and had higher abnormality of cardiovascular risk factors compared with those with mildly impaired kidney function. Weight status (OR = 1.28, 95% CI 1.12–1.45, P < 0.001 for obesity) and cardiovascular risk factors (OR = 1.52, 95% CI 1.31–1.77, P < 0.001 for high total cholesterol/HDL-C ratio and OR = 1.56, 95% CI 1.41–1.72, P < 0.001 for hypercalcemia) were positively associated with increased risk of moderately/severely impaired kidney function. The kidney function-related dietary pattern was correlated with overweight or obese (OR = 2.07, 95% CI 1.89–2.27, P < 0.01) weight status, increased cardiovascular risk by 10–31%, and the risk of moderately/severely impaired kidney function (OR = 1.15, 95% CI 1.02–1.29, P < 0.05). The RRR-derived kidney function-related dietary pattern, characterized by high intake of processed and animal foods and low intake of plant foods, predicts the risks for developing cardiovascular disease and moderately/severely impaired kidney function among middle-aged and older adults. Dietary pattern Reduced rank regression Weight status Kidney function Chronic kidney disease (CKD), characterized by impaired kidney function, has surfaced as a global health problem. In Taiwan, the prevalence of CKD stage 1–5 in 2007 was 9.8–11.9%, meaning more than 1.5 million individuals suffered from CKD [1]. Cardiovascular disease (CVD) is an adverse outcome of kidney disease and is associated with increased major causes of mortality and morbidity [2]. 
A population-based prospective cohort study in Iceland reported that adjusted hazard ratio (HR) for CVD was 1.55 to 4.29 in CKD stage 1 to 4 [3], and thus CKD was associated with increased risk for CVD mortality by 100% (HR = 2.00, 95% CI 1.78–2.25) [4]. The traditional risk factors for CVD including hypertension, diabetes, lipid abnormalities, and obesity have been known as important determinants for the risk of developing CVD in patients with CKD [5, 6]. Moreover, abnormal calcium and phosphorus metabolism represented as non-traditional CVD risk factors. Both high calcium and phosphorus levels can directly increase vascular calcification [7]. This alteration in mineral metabolism characterized by hypercalcemia and elevated serum phosphate levels are common in patients with CKD and may lead to calcification and other cardiovascular events [8]. However, few studies in Taiwan have investigated whether abnormal weight status and both traditional as well as non-traditional CVD risk factors are associated with the severity of impaired kidney function. In addition, diet also have been associated with cardiovascular risk factors and other health-related outcomes. A healthy dietary pattern, characterized by high consumption of whole grains, fruit, vegetables, and unsaturated oil, was correlated with reduced cardiovascular risk factors and increased kidney function [9, 10]. In contrast, a western dietary pattern with high consumption of deep-fried foods, processed foods, meat, and organ meats was positively associated with increased cardiovascular risk factors and progression of CKD [11, 12]. In this study, we used reduced rank regression (RRR) [13] to derive kidney function-related dietary pattern. The RRR method is a multivariable linear functions where it combines a priori and a posteriori approaches to derive dietary patterns [14]. Recently, RRR has been widely used to assess dietary patterns in several studies [15–17]. By using this method, researchers are able to identify a linear combination of predictor variables, select response variables based on prior knowledge, and find dietary patterns related to the disease of interest [13, 14]. Food items or food groups derived from food frequency questionnaire (FFQ) have been used as predictor variables, while response variables refer to nutrients or blood biomarkers as early predictors of a disease [13, 14]. Additionally, compared with other method for deriving dietary patterns such as principal component analysis (PCA), the RRR method is more likely to be associated with health-related outcomes [13, 18, 19]. To our knowledge, there is no study using RRR to derive dietary patterns that are associated with kidney function. Therefore, the aims of this study were to (1) investigate the association of abnormal weight status and cardiovascular risk factors with the severity of impaired kidney function and (2) identify whether RRR-derived kidney function-related dietary pattern is associated with abnormal weight status, cardiovascular risk factors, and the severity of impaired kidney function among middle-aged and older participants with CKD. Study participants This study was conducted using health-screening data from Mei Jau (MJ) Health Institute, Taiwan. The MJ Health Institute is a private institute with four health-screening centers (Taipei, Taoyuan, Taichung, and Kaohsiung) in Taiwan, and it provides periodic health check-up (on average one examination per year per person) to its members. 
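To make the RRR construction described above concrete, the following minimal sketch (synthetic data only; the variable names and dimensions are placeholders, not the MJ dataset) derives a first dietary-pattern factor by regressing a biomarker block Y on a food-group block X and then taking the leading principal component of the fitted responses, which is one standard way of computing RRR factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: 500 subjects, 22 food groups, 8 response biomarkers
n, p, q = 500, 22, 8
X = rng.normal(size=(n, p))                                        # food-group intakes
Y = X[:, :3] @ rng.normal(size=(3, q)) + rng.normal(size=(n, q))   # biomarkers

Xc = X - X.mean(axis=0)        # center both blocks
Yc = Y - Y.mean(axis=0)

# Multivariate least-squares fit of the responses on the predictors
B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_hat = Xc @ B

# The first principal component of the fitted responses drives the first RRR factor
eigvals, eigvecs = np.linalg.eigh(np.cov(Y_hat, rowvar=False))
w = eigvecs[:, -1]

# Dietary-pattern score: a linear combination of the (centered) food groups
pattern_score = Y_hat @ w      # identical to Xc @ (B @ w)

# Factor "loadings" as correlations between the score and each food group
loadings = [np.corrcoef(pattern_score, Xc[:, j])[0, 1] for j in range(p)]
print(np.round(loadings, 2))
```

In the study, food groups with an absolute loading of at least 0.20 on this first factor are the ones taken to define the kidney function-related dietary pattern.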
All participants had a series of health check-up including anthropometric assessment, blood tests, stool and urine tests, physical examination, and completed a self-reported questionnaire to collect information about sociodemographic, lifestyle, medical history as well as dietary habits. In addition, every participant had signed the consent form authorized by the MJ Health Institute for research purpose only and no personal identification information would be released. The Joint Institutional Review Board of Taipei Medical University (TMU-JIRB N201802006) approved this study. We included 151,206 participants with an estimated glomerular filtration rate (eGFR) less than 90 mL/min/1.73 m2 and proteinuria from the MJ Health Institute database between years of 2008 and 2010. After excluding 110,078 participants who were (1) aged less than 40 y (n = 39,066), (2) with any disease condition such as cancer, cirrhosis, autoimmune disease, or virus infection (n = 48,169), (3) with history of kidney surgery (n = 1765), (4) with error results in blood analysis (n = 1128), (5) failed to complete the questionnaire (n = 212), (6) with missing data in dietary habit (n = 11,184), or (7) with multiple entries in the database (n = 8554), a total of 41,128 participants were included in this study. Clinical and biochemical data and definition of the disease Body weight, height, waist or hip circumference, body fat mass, and blood pressure were measured by an auto-anthropometers during health check-up. Fasting blood glucose (FBG), triglycerides (TG), total cholesterol (TC), high density lipoprotein-cholesterol (HDL-C), low density lipoprotein-cholesterol (LDL-C), C-reactive protein (CRP), blood urea nitrogen (BUN), creatinine, albumin, calcium, and phosphorus were analyzed at the MJ Health Institute's central laboratory. Body Mass Index (BMI) status was defined as follows: normal (18.5 kg/m2 ≤ BMI < 24 kg/m2), overweight (24 kg/m2 ≤ BMI < 27 kg/m2), and obese (BMI ≥27 kg/m2) [20]. High waist circumference was defined as waist circumference ≥ 80 cm for female and ≥ 90 cm for male [21]. High waist-to-hip ratio (WHR) was defined as ≥0.85 for female and ≥ 0.90 for male [21]. High body fat mass was defined as body fat mass ≥ 35% for female and ≥ 24% for male [22]. Hypertension was defined as having at least one of the followings: (1) systolic blood pressure (SBP) ≥ 140 mmHg, (2) diastolic blood pressure (DBP) ≥ 90 mmHg, (3) use of antihypertensive medication, or (4) self-reported hypertension [23]. Diabetes was defined as at least one of the followings: (1) FBG ≥ 7.0 mmol/L (≥ 126 mg/dL), (2) use of hypoglycemic medication, or (3) self-reported diabetes [24]. The definition of abnormal blood lipids were TG ≥ 2.3 mmol/L (≥ 200 mg/dL) for high TG, TC ≥ 6.2 mmol/L (≥ 240 mg/dL) and/or use of lipid-lowering drugs for high TC, HDL-C < 1.0 mmol/L (< 40 mg/dL) for low HDL-C, LDL-C ≥ 4.1 mmol/L (≥ 160 mg/dL) and/or use of lipid-lowering drugs for high LDL-C [25], and TC-to-HDL-C ratio (TC/HDL-C ratio) ≥ 5.0 for high TC/HDL-C ratio [26]. Proteinuria was reported as one or more pluses (+). 
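The anthropometric and lipid cut-offs listed in this subsection can be collected into a small helper. This is only an illustrative sketch: the function and argument names are ours, and the medication/self-report parts of the hypertension, diabetes, and lipid definitions are left out for brevity.

```python
def weight_status(bmi):
    """Taiwanese BMI categories used in the study (kg/m^2)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 24:
        return "normal"
    if bmi < 27:
        return "overweight"
    return "obese"

def risk_flags(sex, waist_cm, whr, body_fat_pct, tg, tc, hdl, ldl):
    """Binary risk flags following the cut-offs above; lipids in mmol/L, sex 'M'/'F'."""
    female = (sex == "F")
    return {
        "high_waist": waist_cm >= (80 if female else 90),
        "high_whr": whr >= (0.85 if female else 0.90),
        "high_body_fat": body_fat_pct >= (35 if female else 24),
        "high_tg": tg >= 2.3,
        "high_tc": tc >= 6.2,
        "low_hdl": hdl < 1.0,
        "high_ldl": ldl >= 4.1,
        "high_tc_hdl_ratio": (tc / hdl) >= 5.0,
    }
```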
We used the Modification of Diet in Renal Disease Study (MDRD) equation to calculate eGFR as an indicator of kidney function [27]: $$ \mathrm{eGFR}=186.3\times {\left(\mathrm{serum}\ \mathrm{creatinine}\ \mathrm{in}\ \mathrm{mg}/\mathrm{dL}\right)}^{\hbox{-} 1.154}\times {\left(\mathrm{age}\right)}^{\hbox{-} 0.203}\times \left(0.742\ \mathrm{if}\ \mathrm{female}\right) $$ Moreover, based on eGFR levels, we further classified impaired kidney function into two categories: (1) mildly impaired (stage 2) defined as eGFR at 60–89 mL/min/1.73 m2 and (2) moderately/severely impaired (stage 3–5) defined as eGFR < 60 mL/min/1.73 m2 [28]. Hypercalcemia was defined as serum calcium levels ≥2.37 mmol/L based on National Kidney Foundation guidelines [29], while serum phosphorus levels were categorized into high (≥ median) or low (< median) level. Dietary assessment and other covariates Dietary assessment was evaluated using a standardized and validated self-administered semi-quantitative food frequency questionnaire (SQ-FFQ) [11, 30]. The frequency and servings of dietary intake were investigated according to the consumption of twenty-two food groups at per day or per week in the past month and categorized into five response options as previously described [11]. The other covariates collected using a self-reported questionnaire were age, smoking status (none, former, and current), drinking status (no: < 1 time/week or yes: ≥ 1–2 times/week), physical activity status (no: < 1 h/week or yes: ≥ 1–2 h/week), medical history of CVD, hypertension, or diabetes, and use of cardiovascular, hypertension, or diabetes medication. High cardiovascular risk profile was defined as having a history of CVD and/or use of cardiovascular medication. However, participants who had been previously diagnosed with CVD might have changed their lifestyle, and thus we decided to adjust for CVD risk profile in the analysis. Statistical analysis was performed by using SAS 9.4 (SAS Institute Inc., Cary, NC, USA) and IBM SPSS 20 (IBM Corp., Armonk, NY, USA). Continuous and categorical variables are presented as a mean ± standard deviation (SD) and a number (percentage), respectively. A Mann-Whitney U test and a chi-square test were used for comparing the baseline characteristics between two continuous and categorical groups, respectively. A multivariable logistic regression analysis, expressed as odds ratios (OR) and 95% confidence intervals (CIs), was performed to identify: (1) the association between weight status and cardiovascular risk factors with moderately/severely impaired kidney function and (2) the association between dietary pattern scores across tertiles with weight status, cardiovascular risk factors, and the severity of impaired kidney function. A P-value < 0.05 was considered statistically significant. Dietary pattern associated with kidney function was identified by RRR using PROC PLS function in SAS 9.4. In the RRR, food groups and biomarkers were used as predictor and response variables, respectively (Fig. 1). The RRR method focuses on identifying linear functions of food groups, which explained as much variation as possible in a set of intermediate response variables [14]. We identified the response variables based on the significant correlation between eGFR and other variables by using Spearman's correlation after adjustment with age, gender, BMI, smoking status, drinking status, physical activity, high cardiovascular risk profile, hypertension status, diabetes status, albumin, and CRP (Additional file 1: Table S1). 
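The MDRD formula and the two impairment categories given above translate directly into code; the sketch below is illustrative only (function names are ours, and the inputs in the final line are arbitrary example values):

```python
def egfr_mdrd(creatinine_mg_dl, age_years, female):
    """Four-variable MDRD estimate (mL/min/1.73 m^2); no race term, as in the equation above."""
    egfr = 186.3 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * 0.742 if female else egfr

def impairment_category(egfr):
    """Categories used in this study (all participants had eGFR < 90)."""
    return "mild (stage 2)" if egfr >= 60 else "moderate/severe (stage 3-5)"

print(impairment_category(egfr_mdrd(creatinine_mg_dl=1.4, age_years=65, female=True)))
```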
The absolute value of factor loading ≥0.20 were selected to derive dietary pattern associated with kidney function. When eight response variables identified by Spearman's correlation were included in the RRR, eight dietary factors were derived. Finally, we retained only the first dietary factor for the analysis as it explained the largest amount of variation in response variables. Moreover, RRR allows researchers to identify the percentage of explained variation in each food group corresponding to the response variables. This explained variation would contribute to the factor loading in each food group, meaning that food groups with the greater explained variation will produce greater factor loading. The dietary pattern derived from the reduced rank regression model. WHR waist-to-hip ratio, TG triglycerides, LDL-C low density lipoprotein-cholesterol, TC/HDL-C total cholesterol-to-high density lipoprotein-cholesterol ratio, BUN blood urea nitrogen Characteristics of the participants In our study, 37,882 (92.1%) middle-aged and older participants had mildly impaired kidney function and 3246 (7.9%) participants had moderately to severely impaired kidney function. The mean age and eGFR levels were 52.6 ± 9.9 y and 73.7 ± 9.9 mL/min/1.73 m2, respectively. The prevalence rate of overweight, obesity, high waist circumference, high WHR, and high body fat mass were 30.2, 16.7, 24.8, 38.1, and 37.0% respectively (data not shown). The prevalence rate of hypertension, diabetes, high TG, high TC, low HDL-C, high LDL-C, high TC/HDL-C ratio, and hypercalcemia was 28.6, 9.7, 13.7, 18.3, 5.8, 14.2, 8.8, and 36.8%, respectively (data not shown). Moreover, participants with moderately/severely impaired kidney function were older, heavier, had higher blood pressure, blood glucose, blood lipids, and CRP levels, but lower albumin levels compared with those who had mildly impaired kidney function (Table 1). Characteristics of the participants aged ≥40 years old by kidney function status obtained from MJ Health Institute between 2008 and 2010 (n = 41,128) Mildly impaired kidney functiona Moderately/Severely impaired kidney functionb (n = 3246) P c Age (y) 52.6 ± 9.9 62.9 ± 11.1 < 0.001 Sex, males 1694 (52.2) Smoking status, current Drinking status, yes Physical activity, yes High cardiovascular risk profile 2489 (6.1) Hypertension status Diabetes status BMI (kg/m2) 0.9 ± 1.4 Body fat mass (%) SBP (mmHg) 122.5 ± 18.2 DBP (mmHg) FBG (mmol/L) TG (mmol/L) TC (mmol/L) HDL-C (mmol/L) LDL-C (mmol/L) TC/HDL-C Calcium (mmol/L) Phosphorus (mmol/L) Albumin, inflammatory biomarker, and kidney function Albumin (g/dL) CRP (nmol/L) BUN (mmol/L) Creatinine (μmol/L) eGFR (mL/min/1.73 m2) + 1 ≥ + 3 BMI body mass index, WHR waist-to-hip ratio, SBP systolic blood pressure, DBP diastolic blood pressure, FBG fasting blood glucose, TG triglycerides, TC total cholesterol, HDL-C high density lipoprotein-cholesterol, LDL-C low density lipoprotein-cholesterol, TC/HDL-C total cholesterol-to-HDL-C ratio, CRP C-reactive protein, BUN blood urea nitrogen, eGFR estimated glomerular filtration rate. 
Continuous data are presented as mean ± SD, and categorical data are presented as numbers (percentage) aMildly impaired kidney function was defined as eGFR 60–89 mL/min/1.73 m2 bModerately/severely impaired kidney function was defined as eGFR < 60 mL/min/1.73 m2 cThe P-value was analyzed using Mann-Whitney U test for continuous variables and chi-square test for categorical variables Weight status and cardiovascular risk factors in relation to the severity of impaired kidney function We next investigated the association of weight status and cardiovascular risk factors with the severity of impaired kidney function (Table 2). A fully-adjusted multivariable logistic regression analysis (model 2) showed that participants who were overweight, obese, or had high WHR were significantly associated with a higher risk of moderately/severely impaired kidney function (overweight: OR = 1.25, 95% CI 1.12–1.39, P < 0.001, obesity: OR = 1.28, 95% CI 1.12–1.45, P < 0.001, and high WHR: OR = 1.11, 95% CI 1.00–1.23, P = 0.039, respectively) compared with normal weight participants. Meanwhile, high waist circumference only showed the tendency to be associated with the severity of impaired kidney function (P = 0.052). Cardiovascular risk factors were positively associated with moderately/severely impaired kidney function (P < 0.001), and high TC/HDL-C ratio and hypercalcemia had the highest odds ratio among all the risk factors (high TC/HDL-C: OR = 1.52, 95% CI 1.31–1.77, P < 0.001 and hypercalcemia: OR = 1.56, 95% CI 1.41–1.72, P < 0.001). Multivariable logistic regression of weight status and cardiovascular risk factors for moderately/severely impaired kidney function Model 1a Model 2b OR (95% CI) BMI, ref.: normal Overweight (n = 12,407) 1.32 (1.21–1.44) Obese (n = 6852) High waist circumference (n = 10,187) High WHR (n = 15,673) High body fat mass (n = 15,215) Hypertension (n = 11,779) Diabetes (n = 3974) High TG (n = 5627) High TC (n = 7510) Low HDL-C (n = 2335) High LDL-C (n = 5709) High TC/HDL-C (n = 3527) Hypercalcemia (n = 12,589) High phosphorus (n = 420) BMI body mass index, WHR waist-to-hip ratio, TG triglycerides, TC total cholesterol, HDL-C high density lipoprotein-cholesterol, LDL-C low density lipoprotein-cholesterol, TC/HDL-C total cholesterol-to-HDL-C ratio aModel 1 was adjusted for age and gender (for weight status category). For cardiovascular risk factors category, model 1 was adjusted for age, gender, and BMI bModel 2 was adjusted for age, gender, smoking status, drinking status, physical activity, high cardiovascular risk profile, hypertension status, diabetes status, albumin, and CRP (for weight status category). For cardiovascular risk factor category, model 2 was adjusted for age, gender, BMI, smoking status, drinking status, physical activity, high cardiovascular risk profile, hypertension status (except hypertension variable), diabetes status (except diabetes variable), albumin, and CRP Dietary pattern in relation to weight status, cardiovascular risk factors, and the severity of impaired kidney function The RRR-derived kidney function-related dietary pattern showed that food groups such as preserved vegetables, processed meat or fish, rice or flour products, meat, soy sauce, organ meats, fried rice or flour products, and instant noodles were positively correlated with dietary pattern scores (factor loading ≥0.20). 
In contrast, food groups like fruits, dark-colored vegetables, bread, and beans or bean products were negatively correlated with dietary pattern scores (factor loading ≥ − 0.20) (Table 3). The cumulative percentage of variation explained by RRR-derived kidney function-related dietary pattern was 6.67%. The eight response variables were explained 2.7% for the total variation and largely driven by the explained variation in WHR (6.7%), TC/HDL-C ratio (2.6%), and TG (2.2%). The baseline characteristics of the participants across tertiles of dietary pattern scores are shown in Additional file 2: Table S2. Participants with higher adherence to the dietary pattern were likely to be males, younger, current smokers and drinkers, inactive, heavier, hypertensive or diabetic, and with abnormal blood lipid levels. Factor loadings and variance of dietary pattern scores identified by the reduced rank regression model Food group Explained variance (%) Factor loadinga Preserved vegetables, processed meat or fish Rice/flour products Soy sauce or other dipping sauce Organ meats Fried rice/flour products Dark-colored vegetables Beans/bean products Deep fried foods Sugary drinks Light-colored vegetables Root crops Fried vegetables/salad dressing Jam/honey a factor loadings are the correlations between food groups and dietary pattern scores. A positive factor loading value of food groups indicates a positive correlation with dietary pattern score, and vice versa The kidney function-related dietary pattern scores across tertiles in relation to weight status, cardiovascular risk factors, and the severity of impaired kidney function are demonstrated in Table 4. A multivariable logistic regression analysis demonstrated that participants who showed higher adherence to the dietary pattern (tertile 2 and tertile 3 of dietary pattern scores) were strongly associated with increased overweight (tertile 2: OR = 1.13, 95% CI 1.06–1.21, tertile 3: OR = 1.36, 95% CI 1.27–1.46, P all < 0.001) and obesity (tertile 2: OR = 1.43, 95% CI 1.31–1.57, tertile 3: OR = 2.07, 95% CI 1.89–2.27, P all < 0.001) risk by 13–36% and 43–107%, respectively. Participants in tertile 3 of dietary pattern scores were also positively associated with all the cardiovascular risk factors except for hypertension (OR = 1.04, 95% CI 0.96–1.12, P = 0.31), low HDL-C (OR = 1.00, 95% CI 0.88–1.15, P = 0.96) and high serum phosphorus levels (≥ 1.2 mmol/L) (OR = 1.00, 95% CI 0.94–1.07, P = 0.98). Furthermore, the dietary pattern scores of tertile 2 and tertile 3 were significantly associated with a higher risks of moderately/severely impaired kidney function (tertile 2: OR = 1.12, 95% CI 1.00–1.25, P < 0.05, tertile 3: OR = 1.15, 95% CI 1.02–1.29, P < 0.05) and having BUN ≥7.14 mmol/L (tertile 2: OR = 1.31, 95% CI 1.16–1.47, P < 0.001, tertile 3: OR = 1.32, 95% CI 1.18–1.49, P < 0.001) (data not shown). The association of dietary pattern scores with weight status, cardiovascular risk factors, and the severity of impaired kidney function (n = 41,128)a Dietary pattern scores Tertile 1 (Ref) Tertile 2b Tertile 3c High waist circumference High WHR High body fat mass High TG High TC Low HDL-C High LDL-C High TC/HDL-C High phosphorus (≥ 1.2 mmol/L) Severity of kidney function Moderately/severely impaired aAdjusted for model 2. Weight status category was adjusted for age, gender, smoking status, drinking status, physical activity, high cardiovascular risk profile, hypertension status, diabetes status, albumin, and CRP. 
Cardiovascular risk factors and the severity of kidney function categories were adjusted for age, gender, BMI, smoking status, drinking status, physical activity, high cardiovascular risk profile, hypertension status (except hypertension variable), diabetes status (except diabetes variable), albumin, and CRP. Tertile 1 (dietary pattern score: −1.34-1.22, n = 13,769) was used for the reference bTertile 2 (dietary pattern scores: 1.23–1.89, n = 13,656) cTertile 3 (dietary pattern scores: 1.90–6.55, n = 13,703) To our knowledge, the present study is the first study that identify kidney function-related dietary pattern by using RRR. Overall, we found that kidney function-related dietary pattern was correlated with increased obesity risk and exacerbation of cardiovascular risk factors. In this dietary pattern, the food groups containing preserved or processed foods, meat, organ meats, and sauces contributed to 64% explained variation. Adding soy sauce or other sauces to preserve or process foods and meat is common in Taiwanese's culture. Consistent with our study, a diet rich in meat and processed foods was associated with increased body weight in Asian and US adults [31, 32]. Meat, processed foods, and organ meats are commonly high in calories, saturated fat, and cholesterol, which may contribute to a surplus of energy intake. We also found that participants who ate preserved or processed foods, meat, organ meats, and sauces were also likely to consume rice or flour products and noodles. Similar studies conducted in Korean and Japanese population reported that diet rich in white rice was correlated with high risk of obesity [33, 34]. However, Xu and colleagues found an inverse association between the traditional Chinese dietary pattern, characterized by high intake of rice and pork, with the risk of obesity [35]. The conflicting findings might be due to the different food groups in the dietary pattern, lifestyle, and eating behavior in China. In our study, more than 70% of participants were physically inactive, which may also contribute to these different results. In addition, a recent population study in adults has reported that 1 SD increment of fruits and vegetables intake was inversely associated with BMI by 0.12 kg/m2, waist circumference by 0.40 cm, and percentage fat mass by 0.30% [36]. Fruits and vegetables are commonly known to have a higher amount of dietary fiber, phytochemicals, vitamins, and minerals, which may enhance satiety and lead to a lower energy absorption. In addition, fruits and vegetables also have anti-oxidative effects against obesity-induced oxidative stress [36]. However, our RRR derived kidney function-related dietary pattern was characterized by a low fruits and vegetables intake and this may increase the risk of obesity. Our study reported that kidney function-related dietary pattern was correlated with an increased abnormality of most cardiovascular risk factors, except for hypertension, low HDL-C, and high serum phosphorus levels. The relationship between dietary pattern and hypertension or high serum phosphorus levels was weakened by potential confounders after adjusting for covariates in model 2; however, it remains positively correlated after adjusting for age, sex, and BMI (tertile 3: OR = 1.07, 95% CI 1.01–1.14, P < 0.05 for hypertension and OR = 1.06, 95% CI 1.01–1.12, P < 0.05 for high phosphorus levels). 
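As a rough sketch of how tertile-based odds ratios of this kind are typically obtained (synthetic data, illustrative column names, and only a subset of the covariates actually used in the paper's model 2), one could fit a logistic model and exponentiate its coefficients:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "pattern_score": rng.normal(size=n),        # RRR dietary-pattern score
    "age": rng.integers(40, 90, n),
    "sex": rng.choice(["M", "F"], n),
    "bmi": rng.normal(24, 3, n),
    "albumin": rng.normal(4.5, 0.3, n),
    "crp": rng.exponential(2.0, n),
    "severe_ckd": rng.binomial(1, 0.08, n),     # indicator for eGFR < 60
})

# Tertiles of the dietary-pattern score, lowest tertile as the reference
df["tertile"] = pd.qcut(df["pattern_score"], 3, labels=["T1", "T2", "T3"])

model = smf.logit(
    "severe_ckd ~ C(tertile, Treatment('T1')) + age + C(sex) + bmi + albumin + crp",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```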
Other studies have found that dietary patterns consisting of high consumption of animal fat, processed meat, organ meats, or refined carbohydrates were positively associated with cardiovascular risk factors [11, 37, 38]. An imbalance between saturated and unsaturated fats and low fiber content in animal foods, together with the high salt content in processed food, could influence blood lipids, blood pressure, and blood glucose levels [38]. In comparison, a healthy dietary pattern with high intakes of whole grains, fruits, and vegetables was found to reduce TC by 0.07 mmol/L, LDL-C by 0.05 mmol/L, and TG by 0.22 mmol/L, and to increase HDL-C by 0.01 mmol/L in a study of a multi-ethnic Asian population [9]. Another study conducted in a Korean adult population reported that an increased dietary fat intake (% of energy) was associated with increased TC and LDL-C levels but inversely associated with the risk of having low HDL-C and high TG levels [39]. In contrast, dietary carbohydrate intake (% of energy) was positively correlated with increased TG and low HDL-C levels [37, 39], but negatively correlated with increased TC and LDL-C levels [39]. Our RRR-derived kidney function-related dietary pattern was characterized by high intake of both fat and carbohydrate, and this might be the reason that there was no association between this dietary pattern and low HDL-C levels. Taken altogether, our findings suggest that the kidney function-related dietary pattern may alter the blood lipid profile and glucose metabolism, and thus may lead to increased cardiovascular disease risk. In the present study, the RRR-derived kidney function-related dietary pattern increased the risk of moderately/severely impaired kidney function by 12–15%. Similarly, recent studies also reported that red meat, processed meat, saturated fat, and refined carbohydrate were associated with rapid eGFR decline and kidney failure [12, 40, 41]. Meat, processed foods, and refined carbohydrate are foods high in dietary acid load [42]. A high dietary acid load is known to be related to the progression of CKD and end-stage renal disease [43]. On the other hand, a diet high in plant protein, fruits, and vegetables is commonly known as an alkaline diet, and this type of diet was associated with a reduction in renal acid load and kidney injury [44]. Our RRR-derived kidney function-related dietary pattern was low in fruit, vegetable, and bean consumption, which may explain the positive association between the dietary pattern and the severity of impaired kidney function. Furthermore, overweight or obese weight status and all cardiovascular risk factors were positively correlated with moderately/severely impaired kidney function. Abdominal adiposity has been suggested to play a crucial role in exacerbating kidney disease by stimulating chronic inflammation or endocrine dysfunction [45]. In contrast, a recent study in Taiwan reported that overweight or obese status in CKD patients was not significantly associated with a decline in eGFR [46]. Another study in South Korea found that either obesity or central obesity was associated with an increased risk of stage 3a CKD, but was not significantly correlated with an increased risk of advanced stage 4/5 CKD [45]. Both studies suggested that waist circumference or central obesity might be a better predictor for the risk of obesity-related disease [45, 46]. BMI alone is insufficient to indicate central obesity due to variations in individuals' body composition.
However, our study found that BMI (P < 0.001), waist circumference (P = 0.052), and WHR (P = 0.039) were associated with advanced impaired kidney function. Our results also suggest that cardiovascular risk factors were correlated with an 11–56% increased risk of moderately/severely impaired kidney function. Consistent with our results, hypertension, diabetes, and an abnormal lipid profile were independent risk factors for developing severe CKD stages [47–49]. Moreover, hypercalcemia is considered a cardiovascular risk factor because elevated serum calcium levels accelerate vascular calcification and are associated with increased mortality in CKD patients [8, 50]. This may explain why hypercalcemia showed the highest odds ratio (OR = 1.56) in relation to the risk of moderately/severely impaired kidney function in our study. A strength of this study is that we used RRR to derive the kidney function-related dietary pattern. The RRR method, an advanced method for identifying a diet-and-disease relationship, can incorporate potential mediators between the dietary pattern and the disease of interest. Compared with factor analysis, patterns derived from RRR are more likely to be associated with the disease of interest because the patterns are driven by disease-specific responses [19]. The RRR model extracts dietary pattern scores by maximizing the explained variation in the biomarkers for a specific diet-related disease [51]. In comparison, PCA focuses only on explaining the total variation in the intake of food groups and does not provide an explanation of the variation in important biomarkers [13]. Additionally, the RRR method allows researchers to identify the percentage of variation explained in both the predictor variables and the response variables, which both contribute to the dietary factor. Extracted factor scores can be evaluated by their corresponding response scores and by the explained variation in predictor variables. Thus, the association between food groups and response variables can be used to interpret the beneficial effects of individual food groups as components of the predictor variables in the dietary pattern [13]. However, this method requires prior knowledge to select disease-related intermediate biomarkers. Selecting response variables can be subjective and may not completely reflect the current state of knowledge. Thus, it may result in different patterns in different studies. Meanwhile, our study has several limitations. First, the cross-sectional study design cannot establish a causal relationship and only captures a single point in time. Hence, the possibility of reverse causation also exists. Future studies using prospective cohort or randomized trial designs are needed to explain and confirm a causal relationship between kidney function-related dietary patterns and cardiovascular risk factors. Second, the number of participants with moderately or severely impaired kidney function was relatively low compared with those with mildly impaired kidney function. Third, a self-reported FFQ is subject to reporting bias and errors, and only provides information on habitual food consumption; it cannot provide accurate information on actual nutrient intake for an individual. Fourth, the clinical definition of CKD is either decreased kidney function (eGFR < 60 mL/min/1.73 m2) in the absence of persistent albuminuria, or kidney damage (albumin-creatinine ratio > 30 mg/g), present for 3 months or more.
Although the MJ Health Institute provided periodic health check-ups (on average one examination per year per person) for its members, not all participants had an annual examination, and a clinical diagnosis of CKD cannot be made based on a single measurement. Therefore, the results found in this study may not truly represent clinically diagnosed CKD participants. Finally, we adjusted our results for several potential confounders. However, there are still confounders that should be considered in future studies, such as energy and protein intake and the use of renal medications, which could influence the findings of the present study.
In conclusion, our findings suggest that a kidney function-related dietary pattern with high intake of preserved or processed foods, meat, organ meats, rice/flour products, and instant noodles but low intake of fruit, vegetables, bread, and beans was positively correlated with abnormal weight status and cardiovascular risk factors. This type of dietary pattern may further increase the risk of cardiovascular disease and the severity of impaired kidney function.
CKD: chronic kidney disease
CRP: C-reactive protein
CVD: cardiovascular disease
eGFR: estimated glomerular filtration rate
FBG: fasting blood glucose
HDL-C: high density lipoprotein-cholesterol
LDL-C: low density lipoprotein-cholesterol
TC: total cholesterol
TC/HDL-C: total cholesterol-to-HDL-C ratio
The authors thank the Mei Jau Health Institute for collecting the data and making their database available for this study. This research received no external funding. The data that support the findings of this study are available from the Mei Jau (MJ) Health Institute, but are restricted to research use only. The data are not publicly available. Data are available from the authors upon reasonable request and with permission of the MJ Health Institute. ALK and JCJC conceived and designed the study; CYH and HHR managed the dataset and retrieved the data; ALK analyzed and interpreted the data; ALK and JCJC wrote the manuscript. All authors have read and approved the final manuscript. The study was approved by the Taipei Medical University-Joint Institutional Review Board (TMU-JIRB) no. 201802006. All participants signed a written informed consent form authorized by the Mei Jau (MJ) Health Institute. The data provided by the Mei Jau (MJ) Health Institute to the researchers did not include any personal information, and all participants were adults. Not applicable.
Additional file 1: Table S1. Adjusted Spearman's correlation coefficient (r) between the variables and estimated glomerular filtration rate a. (DOCX 24 kb)
Additional file 2: Table S2. Baseline characteristics of participants across tertiles of dietary pattern scores a. (DOCX 19 kb)
School of Nutrition and Health Sciences, College of Nutrition, Taipei Medical University, 250 Wu-Hsing Street, Taipei, 110, Taiwan
Department of Information Management, National Taipei University of Nursing and Health Sciences, 365 Ming-Te Road, Peitou District, Taipei, 112, Taiwan
Master Program in Global Health and Development, College of Public Health, Taipei Medical University, 250 Wu-Hsing Street, Taipei, 110, Taiwan
Joint Commission of Taiwan, 31 Sec. 2 Sanmin Road, Banqiao District, New Taipei City, 220, Taiwan
Nutrition Research Center, Taipei Medical University Hospital, 252 Wu-Hsing Street, Taipei, 110, Taiwan
Hwang SJ, Tsai JC, Chen HC. Epidemiology, impact and preventive care of chronic kidney disease in Taiwan. Nephrology. 2010;15:3–9.
Menon V, Gul A, Sarnak MJ. Cardiovascular risk factors in chronic kidney disease. Kidney Int.
2005;68:1413–8.
Di Angelantonio E, Chowdhury R, Sarwar N, Aspelund T, Danesh J, Gudnason V. Chronic kidney disease and risk of major cardiovascular disease and non-vascular mortality: prospective population based cohort study. Br Med J. 2010;341:c4986.
Wen CP, Cheng TY, Tsai MK, Chang YC, Chan HT, Tsai SP, et al. All-cause mortality attributable to chronic kidney disease: a prospective cohort study based on 462 293 adults in Taiwan. Lancet. 2008;371:2173–82.
Parikh NI, Hwang SJ, Larson MG, Meigs JB, Levy D, Fox CS. Cardiovascular disease risk factors in chronic kidney disease: overall burden and rates of treatment and control. Arch Intern Med. 2006;166:1884–91.
Fan HM, Li XL, Zheng L, Chen XL, Lan Q, Wu H, et al. Abdominal obesity is strongly associated with cardiovascular disease and its risk factors in elderly and very elderly community-dwelling Chinese. Sci Rep. 2016;6:21521.
Dhingra R, Sullivan LM, Fox CS, Wang TJ, D'Agostino RB, Gaziano JM, et al. Relations of serum phosphorus and calcium levels to the incidence of cardiovascular disease in the community. Arch Intern Med. 2007;167:879–85.
Shanahan CM, Crouthamel MH, Kapustin A, Giachelli CM. Arterial calcification in chronic kidney disease: key roles for calcium and phosphate. Circ Res. 2011;109:697–711.
Whitton C, Rebello SA, Lee J, Tai ES, van Dam RM. A healthy Asian a posteriori dietary pattern correlates with a priori dietary patterns and is associated with cardiovascular disease risk factors in a multiethnic Asian population. J Nutr. 2018;148:616–23.
Hsu CC, Jhang HR, Chang WT, Lin CH, Shin SJ, Hwang SJ, et al. Associations between dietary patterns and kidney function indicators in type 2 diabetes. Clin Nutr. 2014;33:98–105.
Muga MA, Owili PO, Hsu CY, Rau HH, Chao JCJ. Association between dietary patterns and cardiovascular risk factors among middle-aged and elderly adults in Taiwan: a population-based study from 2003 to 2012. PLoS One. 2016;11:e0157745.
Paterson EN, Neville CE, Silvestri G, Montgomery S, Moore E, Silvestri V, et al. Dietary patterns and chronic kidney disease: a cross-sectional association in the Irish Nun eye study. Sci Rep. 2018;8:6654.
Hoffmann K, Schulze MB, Schienkiewitz A, Nothlings U, Boeing H. Application of a new statistical method to derive dietary patterns in nutritional epidemiology. Am J Epidemiol. 2004;159:935–44.
Weikert C, Schulze MB. Evaluating dietary patterns: the role of reduced rank regression. Curr Opin Clin Nutr Metab Care. 2016;19:341–6.
Barbaresko J, Siegert S, Koch M, Aits I, Lieb W, Nikolaus S, et al. Comparison of two exploratory dietary patterns in association with the metabolic syndrome in a northern German population. Br J Nutr. 2014;112:1364–72.
Miki T, Kochi T, Kuwahara K, Eguchi M, Kurotani K, Tsuruoka H, et al. Dietary patterns derived by reduced rank regression (RRR) and depressive symptoms in Japanese employees: the Furukawa nutrition and health study. Psychiatry Res. 2015;229:214–9.
Frank LK, Jannasch F, Kröger J, Bedu-Addo G, Mockenhaupt FP, Schulze MB, et al. A dietary pattern derived by reduced rank regression is associated with type 2 diabetes in an urban Ghanaian population. Nutrients.
2015;7:5497–514.
Nettleton JA, Steffen LM, Schulze MB, Jenny NS, Barr RG, Bertoni AG, et al. Associations between markers of subclinical atherosclerosis and dietary patterns derived by principal components analysis and reduced rank regression in the multi-ethnic study of atherosclerosis (MESA). Am J Clin Nutr. 2007;85:1615–25.
Manios Y, Kourlaba G, Grammatikaki E, Androutsos O, Ioannou E, Roma-Giannikou E. Comparison of two methods for identifying dietary patterns associated with obesity in preschool children: the GENESIS study. Eur J Clin Nutr. 2010;64:1407–14.
Department of Health, Executive Yuan, R.O.C. (Taiwan). Identification, evaluation, and treatment of overweight and obesity in adults in Taiwan. Taipei: Department of Health, Executive Yuan, R.O.C. (Taiwan); 2003.
World Health Organization. Waist circumference and waist-hip ratio: report of a WHO expert consultation, Geneva, 8–11 December 2008. Geneva: World Health Organization; 2011. p. 27–8.
Gallagher D, Heymsfield SB, Heo M, Jebb SA, Murgatroyd PR, Sakamoto Y. Healthy percentage body fat ranges: an approach for developing guidelines based on body mass index. Am J Clin Nutr. 2000;72:694–701.
Giles TD, Materson BJ, Cohn JN, Kostis JB. Definition and classification of hypertension: an update. J Clin Hypertens (Greenwich). 2009;11:611–4.
American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care. 2014;37(Suppl 1):S81–90.
Grundy SM, Becker D, Clark LT, Cooper RS, Denke MA, Howard J, et al. Detection, evaluation, and treatment of high blood cholesterol in adults (adult treatment panel iii). Circulation. 2002;106:3143–421.
Chang HY, Yeh WT, Chang YH, Tsai KS, Pan WH. Prevalence of dyslipidemia and mean blood lipid values in Taiwan: results from the nutrition and health survey in Taiwan (NAHSIT, 1993-1996). Chin J Physiol. 2002;45:187–97.
Levey AS, Coresh J. Chronic kidney disease. Lancet. 2012;379:165–80.
Kidney Disease Improving Global Outcome. Chapter 1: definition and classification of CKD. Kidney Int Suppl. 2013;3:19–62.
Bailie GR, Massry SG. Clinical practice guidelines for bone metabolism and disease in chronic kidney disease: an overview. Pharmacotherapy. 2005;25:1687–707.
Lyu LC, Lin CF, Chang FH, Chen HF, Lo CC, Ho HF. Meal distribution, relative validity and reproducibility of a meal-based food frequency questionnaire in Taiwan. Asia Pac J Clin Nutr. 2007;16:766–76.
Wang Y, Beydoun MA. Meat consumption is associated with obesity and central obesity among US adults. Int J Obesity. 2009;33:621–8.
Cho YA, Shin A, Kim J. Dietary patterns are associated with body mass index in a Korean population. J Am Diet Assoc. 2011;111:1182–6.
Kim J, Jo I, Joung H. A rice-based traditional dietary pattern is associated with obesity in Korean adults. J Acad Nutr Diet. 2012;112:246–53.
Okubo H, Sasaki S, Murakami K, Kim MK, Takahashi Y, Hosoi Y, et al. Three major dietary patterns are all independently related to the risk of obesity among 3760 Japanese women aged 18-20 years. Int J Obesity. 2008;32:541–9.
Xu XY, Hall J, Byles J, Shi ZM.
Dietary pattern is associated with obesity in older people in China: data from China health and nutrition survey (CHNS). Nutrients. 2015;7:8170–88.
Yu ZM, DeClercq V, Cui Y, Forbes C, Grandy S, Keats M, et al. Fruit and vegetable intake and body adiposity among populations in eastern Canada: the Atlantic partnership for tomorrow's health study. BMJ Open. 2018;8:e018060.
Park SH, Lee KS, Park HY. Dietary carbohydrate intake is associated with cardiovascular disease risk in Korean: analysis of the third Korea National Health and nutrition examination survey (KNHANES III). Int J Cardiol. 2010;139:234–40.
Shridhar K, Satija A, Dhillon PK, Agrawal S, Gupta R, Bowen L, et al. Association between empirically derived dietary patterns with blood lipids, fasting blood glucose and blood pressure in adults-the India migration study. Nutr J. 2018;17:15.
Song S, Song WO, Song Y. Dietary carbohydrate and fat intakes are differentially associated with lipid abnormalities in Korean adults. J Clin Lipidol. 2017;11:338–47 e333.
Lin JL, Fung TT, Hu FB, Curhan GC. Association of dietary patterns with albuminuria and kidney function decline in older white women: a subgroup analysis from the nurses' health study. Am J Kidney Dis. 2011;57:245–54.
Lew QLJ, Jafar TH, Koh HWL, Jin AZ, Chow KY, Yuan JM, et al. Red meat intake and risk of ESRD. J Am Soc Nephrol. 2017;28:304–12.
Scialla JJ, Anderson CA. Dietary acid load: a novel nutritional target in chronic kidney disease? Adv Chronic Kidney Dis. 2013;20:141–9.
Banerjee T, Crews DC, Wesson DE, Tilea AM, Saran R, Rios-Burrows N, et al. High dietary acid load predicts ESRD among adults with CKD. J Am Soc Nephrol. 2015;26:1693–700.
Goraya N, Simoni J, Jo CH, Wesson DE. A comparison of treating metabolic acidosis in CKD stage 4 hypertensive kidney disease with fruits and vegetables or sodium bicarbonate. Clin J Am Soc Nephrol. 2013;8:371–81.
Evangelista LS, Cho WK, Kim Y. Obesity and chronic kidney disease: a population-based study among south Koreans. PLoS One. 2018;13:e0193559.
Chang TJ, Zheng CM, Wu MY, Chen TT, Wu YC, Wu YL, et al. Relationship between body mass index and renal function deterioration among the Taiwanese chronic kidney disease population. Sci Rep. 2018;8:6908.
Chang PY, Chien LN, Lin YF, Wu MS, Chiu WT, Chiou HY. Risk factors of gender for renal progression in patients with early chronic kidney disease. Medicine (Baltimore). 2016;95:e4203.
Chen SC, Hung CC, Kuo MC, Lee JJ, Chiu YW, Chang JM, et al. Association of dyslipidemia with renal outcomes in chronic kidney disease. PLoS One. 2013;8:e55643.
Shurraw S, Hemmelgarn B, Lin M, Majumdar SR, Klarenbach S, Manns B, et al. Association between glycemic control and adverse outcomes in people with diabetes mellitus and chronic kidney disease a population-based cohort study. Arch Intern Med. 2011;171:1920–7.
Kovesdy CP, Kuchmak O, Lu JL, Kalantar-Zadeh K. Outcomes associated with serum calcium level in men with non-dialysis-dependent chronic kidney disease. Clin J Am Soc Nephrol. 2010;5:468–76.
Heidemann C, Hoffmann K, Spranger J, Klipstein-Grobusch K, Möhlig M, Pfeiffer A, et al.
A dietary pattern protective against type 2 diabetes in the European prospective investigation into Cancer and nutrition (EPIC)—Potsdam study cohort. Diabetologia. 2005;48:1126–34.
\begin{document} \sloppy \title{Non-Cooperative Rational Interactive Proofs\footnote{A preliminary version of this paper appeared at the 27th European Symposium on Algorithms (ESA 2019). This work has been partially supported by NSF CAREER Award CCF 1553385, CNS 1408695, CCF 1439084, IIS 1247726, IIS 1251137, CCF 1217708, by Sandia National Laboratories, and by the European Research Council under the European Union's 7th Framework Programme (FP7/2007-2013)~/~ERC grant agreement no. 614331. BARC, Basic Algorithms Research Copenhagen, is supported by the VILLUM Foundation grant 16582.}} \begin{abstract} Interactive-proof games model the scenario where an honest party interacts with powerful but strategic provers, to elicit from them the correct answer to a computational question. Interactive proofs are increasingly used as a framework to design protocols for computation outsourcing. Existing interactive-proof games largely fall into two categories: either as games of cooperation such as multi-prover interactive proofs and cooperative rational proofs, where the provers work together as a team; or as games of conflict such as refereed games, where the provers directly compete with each other in a zero-sum game. Neither of these extremes truly captures the strategic nature of service providers in outsourcing applications. How to design and analyze non-cooperative interactive proofs is an important open problem. In this paper, we introduce a mechanism-design approach to define a multi-prover interactive-proof model in which the provers are {\em rational} and {\em non-cooperative}---they act to maximize their expected utility given others' strategies. We define a strong notion of backwards induction as our solution concept to analyze the resulting extensive-form game with imperfect information. We fully characterize the complexity of our proof system under different \emph{utility gap} guarantees. (At a high level, a utility gap of $u$ means that the protocol is robust against provers that may not care about a utility loss of $1/u$.) We show, for example, that the power of non-cooperative rational interactive proofs with a polynomial utility gap is exactly equal to the complexity class $\sf{P^{NEXP}}$. \end{abstract} \section{Introduction} \seclabel{intro} Game theory has played a central role in analyzing the conflict and cooperation in interactive proof games. These games model the scenario where an honest party interacts with powerful but strategic agents, to elicit from them the correct answer to a computational question. The extensive study of these games over decades has fueled our understanding of important complexity classes~(e.g.,~\cite{babai1991non,fortnow1994power,feige1992two,lund1992algebraic, chandra1976alternation, feige1997making, feige1992multi, feigenbaum1995game}). From a modern perspective, these games capture the essence of computation outsourcing---the honest party is a client outsourcing his computation to powerful rational service providers in exchange for money. In this paper, we consider a natural type of interactive-proof game. For the moment, let us call our client Arthur. Arthur hires a service provider Merlin to solve a computational problem for him, and hires a second service provider Megan to cross-check Merlin's answer.
Arthur wants the game (and associated payments) to be designed such that if Merlin gives the correct answer, Megan agrees with him; however, if Merlin cheats and gives a wrong answer, Megan is incentivized to contradict him, informing Arthur of Merlin's dishonesty. This means that Merlin and Megan are not purely cooperative nor purely competitive. Each is simply a rational agent who wants to maximize their own utility. This is a mechanism design problem---how can Arthur incentivize non-cooperative rational agents (Merlin and Megan) to give truthful answers to his questions, helping him solve a computational problem? This problem is the focus of our paper. \paragraph*{Structure of the game.} We borrow the structure and terminology of interactive proofs~\cite{babai1985trading,IP,ben1988multi}, as was done in previous work on rational proofs~\cite{azar2012rational, azar2013super, guo2014rational, guo2016rational, ChenMcSi16, ChenMcSi18, CMS17arxiv, campanelli2015sequentially, campanelli2017efficient} and refereed games~\cite{ chandra1976alternation, feige1990noisy, feige1997making, feige1992multi, reif1984complexity, feigenbaum1995game, koller1992complexity}. We call Arthur the \defn{verifier} and assume that he is computationally bounded (he may be probabilistic, but must run in polynomial time). Arthur's coin flips are treated as Nature moves in the game. We call Merlin and Megan the \defn{provers}; they have unbounded computational power. The verifier exchanges messages with the provers in order to determine the answer to a decision problem. The exchange proceeds in rounds: in a round, either a verifier sends a message to all provers or receives a response from each. The provers cannot observe the messages exchanged between the verifier and other provers. At the end, the verifier gives a payment to {\em each} prover. Our goal is to design protocols and payments such that, under an appropriate solution concept of the resulting game, the provers' best strategies lead the verifier to the correct answer. The interactive protocols described above form an extensive-form game of imperfect information. To analyze them, we essentially use a strong notion of backward induction as our solution concept. We refine it further by eliminating strategies that are weakly dominated on ``subgames'' within the entire game. We define the solution concept formally in~\secref{sse}. \paragraph*{Comparison to previous work.} The model of our games is based on interactive proof systems~\cite{babai1985trading,IP}, in which a verifier exchanges messages with untrustworty provers and at the end either accepts or rejects their claim. Interactive proofs guarantee that, roughly speaking, the verifier accepts a truthful claim with probability at least 2/3 (\defn{completeness}) and no strategy of the provers can make the verifier accept a false claim with probability more than 1/3 (\defn{soundness}). The study of interactive proofs has found extensive applications in both theory and practice. Classical results on IPs have led us to better understand complexity classes through characterizations such as $\sf{IP}=\sf{PSPACE}$~\cite{Shamir92,lund1992algebraic} and $\sf{MIP = NEXP}$~\cite{babai1991non,fortnow1994power,feige1992two}, and later led to the important area of probabilistically checkable proofs~\cite{sudan2009probabilistically}. 
More recently, the study of IPs has resulted in extremely efficient (e.g., near linear or even logarithmic time) protocols for delegation of computation~\cite{bitansky2012succinct, goldwasser2008delegating, canetti2013refereed, braun2013verifying, rothblum2013interactive }. Such super-efficient IPs have brought theory closer to practice, resulting in ``nearly practical'' systems~(e.g., see~\cite{ blumberg2014verifiable, walfish2015verifying, thaler2012verifiable, canetti2011practical }). Indeed, interactive proofs are not only a fundamental theoretical concept but an indispensable framework to design efficient computation-outsourcing protocols. \paragraph*{Existing interactive-proof games} Interactive-proof systems with multiple provers have largely been studied as games that fall into two categories: either as games of cooperation such as MIP~\cite{ben1988multi}, cooperative multi-prover rational proofs (MRIP)~\cite{ChenMcSi16}, and variants~\cite{fortnow1994power,babai1991non, cai1992games, ito2012multi, goldwasser2008delegating}, where the provers work together to convince the verifier of their joint claim; or as games of conflict such as refereed games~\cite{chandra1976alternation, feige1997making, feige1992multi, feigenbaum1995game,canetti2013refereed, kol2013competing, canetti2012two}, where the provers directly compete with each other to convince the verifier of their conflicting claims. Both of these categories have limitations. In a game of cooperation, provers cannot be leveraged directly against each other. That is, the verifier cannot directly ask one prover if another prover is lying. On the other hand, in a game of conflict, such as refereed games, one prover must ``win'' the zero-sum game. Thus, such games need to assume that at least one prover---who must be the winning prover in a correct protocol---can be trusted to always tell the truth. Despite their limitations, both models have proved to be fundamental constructs to understand and characterize important complexity classes~\cite{ babai1991non, feigenbaum1995game, feige1997making, chandra1976alternation, ChenMcSi16}, and to design efficient computation outsourcing protocols~\cite{bitansky2012succinct, blumberg2014verifiable, goldwasser2008delegating, canetti2013refereed, canetti2012two}. \subsection{Contributions and Results} In this paper, we introduce a new interactive-proof game, {\em non-cooperative rational interactive proofs (ncRIP)}. This model generalizes multi-prover rational proofs~\cite{ChenMcSi16,CMS17arxiv,ChenMcSi18}. \paragraph*{Solution concept for ncRIP} We define a refinement of sequential equilibrium~\cite{kreps1982sequential}, \defn{strong sequential equilibrium} (SSE), that essentially says that players' beliefs about the histories that led them to an unreachable information set should be irrelevant to their best response. From a mechanism-design perspective, we want to design the protocols and payments that allow this strong guarantee to hold---letting the players' best responses be unaffected by their beliefs.\footnote{We believe that SSE is of independent interest as a solution concept for designing extensive-form mechanisms (e.g.~\cite{ glazer1996virtual,vartiainen2007subgame,duggan1998extensive}). In~\secref{strongse}, we prove important properties of SSE that may prove useful in future studies.} Finally, we eliminate SSE strategies that are suboptimal within ``subgames'' by defining and enforcing a backward-induction-compatible notion of dominance. 
Roughly speaking, we say a protocol is a ncRIP if there exists a strategy profile of the provers that is a dominant SSE among the \emph{subforms} of the extensive form game, and under this strategy the provers' lead the verifier to the correct answer. We define the model formally in~\secref{ncmrip}. \paragraph*{Utility gap for non-cooperative provers} Utility gap is a fundamental concept for rational proofs~\cite{azar2013super, guo2014rational, ChenMcSi16, ChenMcSi18} which is analogous to {\em soundness gap} in interactive proofs. It measures how robust a protocol is against the provers' possible deviations from the desired strategy. This notion is straightforward to define for cooperative rational protocols---they have a utility gap of $u$ if the {\em total} expected payment decreases by $1/u$ whenever the provers report the wrong answer. In non-cooperative protocols, however, it is not a priori clear how to define such a payment loss or to choose which prover should incur the loss. A payment loss solely imposed on the total payment may not prevent some provers from deviating, and a loss solely imposed on the provers' final payments may not prevent them from deviating within subgames. We define a meaningful notion of utility gap for ncRIP that is naturally incorporated in a backward-induction-compatible way to the dominant SSE concept. \paragraph*{Tight characterizations of ncRIP classes} In this paper, we completely characterize the power of non-cooperative rational proofs under different utility-gap guarantees. We construct ncRIP protocols with constant, polynomial, and exponential utility gaps for powerful complexity classes, demonstrating the strength of our solution concept. Our protocols are simple and intuitive (requiring only a few careful tweaks from their cooperative counterparts), and are thus easy to explain and implement. However, proving their correctness involves analyzing the extensive-game (including subtleties in the incentives and beliefs of each player at each round) to show that the protocol meets the strong solution-concept and utility-gap requirements. We then prove {tight} upper bounds for all three ncRIP classes. Proving tight upper bounds is the most technically challenging part of the paper. We prove the upper bounds by simulating the decisions of the verifier and provers with a Turing Machine. However, there are several obstacles to attain the correct bounds. For example, the polynomial randomness of the verifier can induce an exponential-sized game tree, which is too large to be verified by the polynomial-time machine in Theorems~\ref{thm:constchar} and~\ref{thm:polychar}. Furthermore, an NEXP oracle cannot itself verify whether a strategy profile is a dominant SSE. The key lemma that helps us overcome these challenges is the pruning lemma~(\lemref{pruning}). At a high level, it shows that we can prune the nature moves of the verifier in the resulting game tree, while preserving the dominant-SSE and utility-gap guarantees. Our results are summarized in~\figref{mainresults}, where we use $\sf{O}(1)\mbox{-}\sf{ncRIP}$, $\sf{poly}(n)\mbox{-}\sf{ncRIP}$ and $\sf{exp}(n)\mbox{-}\sf{ncRIP}$ to denote ncRIP classes with constant, polynomial and exponential utility gaps respectively. The notations are analogous for MRIP~\cite{CMS17arxiv} (the cooperative variant). We characterize ncRIP classes via oracle Turing machines. 
In particular, $\sf{P^{NEXP[O(1)]}}$ is the class of languages decided by a polynomial-time Turing machine that makes $O(1)$ queries to an $\sf{NEXP}$ oracle, and $\sf{EXP^{poly\mbox{-}NEXP}}$ is the class decided by an exponential-time Turing machine with polynomial-length queries to an $\sf{NEXP}$ oracle. \newlength{\thmfiguretab} \setlength{\thmfiguretab}{-4pt} \noindent \begin{figure} \caption{Summary of our results.} \end{figure} \paragraph*{Power of non-cooperative vs. cooperative and competitive provers} Interestingly, in the case of constant and exponential utility gap, the power of ncRIP and MRIP coincide. This can be explained by the power of adaptive versus non-adaptive queries in oracle Turing machines. Indeed, our results reveal the main difference between non-cooperative and cooperative provers: the former can be used to handle adaptive oracle queries, the latter cannot (see~\cite{ChenMcSi16, CMS17arxiv}). Intuitively, this makes sense---cooperative provers may collude across adaptive queries, answering some of them incorrectly to gain on future queries. On the other hand, non-cooperativeness allows us to treat the subgame involving the oracle queries as a separate game from the rest. Our results also show that non-cooperative provers are more powerful than competing provers. Feige and Kilian~\cite{feige1997making} proved that the power of refereed games with imperfect information and perfect recall is equal to $\sf{EXP}$. \section{Non-Cooperative Rational Interactive Proofs}\seclabel{ncmrip} In this section we introduce the model for ncRIP. \paragraph*{Notation.} First, we review the structure of ncRIP protocols and related notation; this is largely the same as~\cite{ChenMcSi16}. The decision problem being solved by an interactive proof is modeled as whether a given string $x$ is in language $L$. An interactive protocol is a pair $(V, \vec{P})$, where $V$ is the \defn{verifier}, $\vec{P} = (P_1,\ldots,P_{p(n)})$ is the vector of $p(n)$ \defn{provers}, where $p(n)$ is polynomial in $n = |x|$. The verifier runs in polynomial time and flips private coins. Each $P_i$ is computationally unbounded. The verifier and provers are given the input $x$. Similar to classical multi-prover interactive proofs, the verifier can communicate with each prover privately, but no two provers can communicate with each other once the protocol begins. In a \defn{round}, either each prover sends a message to $V$, or $V$ sends a message to each prover, and these two cases alternate. The length of each message $\ell(n)$, and the number of rounds $k(n)$ are both polynomial in $n$. The final transcript $\vec{m}$ of the protocol is a random variable depending on $r$, the random string used by $V$. At the end of the communication, the verifier computes an \defn{answer bit} $c\in\{0, 1\}$ for the membership of $x$ in $L$ based on $x$, $r$, and $\vec{m}$. $V$ also computes a payment vector $\vec{R} = (R_1, R_2, \ldots, R_{p(n)})$, where $R_i$ is the payment given to $P_i$, $R_i \in [-1,1]$, and the total $\sum_{i=1}^{p(n)} R_i \in [-1,1]$ as well.\footnote{Negative payments are used to reflect punishment. The individual payments and the total payment can be shifted and scaled to lie in $[0,1]$.} The protocol and the payment function $\vec{R}$ are public knowledge. Each prover $P_i$'s \defn{strategy} at round $j$ maps the transcript seen at the beginning of round $j$ to the message he sends in that round. 
Let $s_i =(s_{i1},\ldots, s_{ik(n)})$ be the strategy of prover $P_i$, and $s = (s_1,\dots, s_{p(n)})$ be the \defn{strategy profile} of the provers. Given input $x$, and strategy profile $s$, let $u_k(x, s, (V, \vec{P}))$ denote the expected payment of prover $P_k$ in the protocol $(V, \vec{P})$ based on randomness $r$, input $x$ and $s$; if $(V,\vec{P})$ is clear from context, we shorten this to $u_k(x,s)$ or $u_k(s)$. The protocol forms an \defn{extensive-form game with imperfect information} which we describe in the next section. The protocol and payments should be designed such that the provers are incentivized to reach an equilibrium that leads $V$ to the correct answer bit $c$. We formalize the solution concept in~\secref{sse}. \subsection{ Extensive-form Games and ncRIP}\seclabel{extensive} We describe the underlying extensive-form game resulting from ncRIP protocols in this section. For details on extensive-form games, we refer to the textbook by Osborne and Rubinstein~\cite{osborne1994course}. In a protocol $(V, \vec{P})$ with input $x$, the set of provers $\vec{P}=(P_1,\ldots,P_{p(n)})$ are the \defn{players}. $V$ is not a player of the game---the deterministic moves of $V$ form the structure of the game tree and the randomized moves of $V$ are treated as \defn{Nature} moves. A \defn{history} $h$ of the game is a sequence of actions taken by the players, written $h = (a^1, a^2, \ldots, a^K)$ for some actions $a^1, \ldots, a^K$. The set of histories (including $\phi$, the empty history corresponding to the root) is denoted by ${H}$. Note that every prefix of $h = (a^1, a^2, \ldots, a^K) \in H$ must also be a valid history, that is, $(a^1, a^2, \ldots, a^L) \in {H}$ for any $L < K$. A history $h = (a^1, \ldots, a^K)$ is \defn{terminal} if it corresponds to a leaf in the game tree---there is no $K+1$ such that $(a^1, \ldots, a^K, a^{K+1}) \in H$---and \defn{non-terminal} otherwise. Let $Z(h)$ denote the player whose turn it is to act following a non-terminal history $h$---note that even though in an ncRIP protocol more than one prover may send a message to the verifier in a round, without loss of generality we can increase the number of rounds such that only a single prover acts in each round. Let $A(h)$ denote the set of actions available to the acting player at a non-terminal history $h$: that is, $A(h) = \{a \mbox{ : } (h, a) \in H\}$. If $Z(h)$ is Nature, then $A(h)$ is the set of possible coin flips and messages of the verifier following $h$; otherwise $A(h)$ is the set of possible messages that $Z(h)$ may send to the verifier. For each terminal history $h$, the {\em utility} of a player $i$ following $h$, $u_i(h)$, is the payment $R_i$ computed by the verifier given $x$ and $h$. As the verifier's coins are private and the verifier exchanges private messages with each of the provers, an ncRIP protocol forms an extensive-form game of imperfect information. An \defn{information set} $I_i$ of a player $P_i$ is a subset of all possible histories $h$ with $Z(h) = P_i$, and represents all the information that the player knows when acting in one of the decision nodes in $I_i$. That is, when a decision node in $I_i$ is reached, $P_i$ knows that $I_i$ has been reached but does not know exactly which node he is at. The set of actions available to player $i$ at every decision node in a particular information set is the same, i.e., $A(h) = A(h')$ for all $h, h' \in I_i$. Let $A(I_i)$ denote the set of available actions at an information set $I_i$. 
The set of all information sets of $P_i$ forms a partition of the set $\{h \in H \mbox{ : } Z(h) = P_i\}$, and let $\mathcal{I}_i$ to denote this partition, referred to as the {information partition} of $P_i$. In terms of the protocol, $\mathcal{I}_i$ is in a one-to-one correspondence with the set of possible message sequences $(m_{i1}, \dots, m_{ij})$ seen by $P_i$, where $j\in \{1, \dots, p(n)\}$ and $P_i$ is acting in round $j$. A \defn{pure strategy} $s_i$ of a player $P_i$ in an extensive-form game is a function that assigns an action in $A(I_i)$ to each information set $I_i \in \mathcal{I}_i$. A \defn{behavioral strategy} $\beta_i$ of $P_i$ is a collection $(\beta_i(I_i))_{I_i \in \mathcal{I}_i}$ of independent probability measures, where $\beta_i(I_i)$ is a probability measure over the action set $A(I_i)$. A behavioral strategy $\beta_i$ is \defn{completely mixed} if each $\beta_i(I_i)$ assigns a positive probability to every action in $A(I_i)$. In this paper, the provers are deterministic and thus we only consider pure strategies. However, the solution concept introduced in this paper applies to behavioral strategies as well. A player $i$'s \defn{utility under a strategy profile $s$}, $u_i(s)$, is his expected utility over the distribution of histories induced by $s$ and the verifier's randomness. The provers are computationally unbounded and never ``forget'' anything and thus the corresponding extensive-form game has \defn{perfect recall}. That is, for any two histories $h$ and $h'$ in the same information set $I_i$ of a player $P_i$, $h$ and $h'$ pass the same sequence of information sets to player $P_i$. Furthermore, for any information set in this sequence, player $P_i$ took the same action in $h$ and $h'$. This holds in any ncRIP protocol since all histories of prover $P_i$ in the same information set $I_i$ at round $j$ correspond to the sequence of messages $(m_{i1}, \dots, m_{ij})$ seen by $P_i$ up to round~$j$. \subsection{Solution concept for ncRIP}\seclabel{sse} We want the solution concept for ncRIP to satisfy a strong notion of backward induction~\cite{osborne1994course}, a standard criterion applied to extensive-form games based on the common knowledge of rationality. Backwards induction refers to the condition of being ``sequentially rational'' in an extensive-form game, that is, each player must play his best response at each node where he has to move, even if his rationality implies that such a node will not be reached. If an interactive protocol forms an extensive-form game of perfect information, it is easy to formalize this condition. A strategy $s$ is \defn{sequentially rational} or satisfies \defn{backward induction}, if for every player $i$ and every decision node of $i$, conditioned on reaching the decision node, $s_i$ is a best response to $s_{-i}$, that is, $u_i(s_i, s_{-i}) \geq u_i(s_i', s_{-i})$ for any strategy $s_i'$ of prover $i$. In other words, $s$ induces a best response at every subgame.\footnote{A subgame is a subtree that can be treated as a separate well-defined game. In a perfect-information game, every node starts a new subgame. ``Backward induction'' and ``{subgame-perfect equilibrium}'' are used interchangeably in the literature~\cite{glazer1996virtual}.} In a game of imperfect information, the decision nodes corresponding to a player's turn are partitioned into \defn{information sets}, where the player is unable to distinguish between the possible histories within an information set. 
To reason about sequential rationality we need a probability distribution $\mu_I$ on each information set $I$, so as to determine the players' expected utility conditioned on reaching $I$ and thus their best response at $I$. The probability distribution $\mu_I$ is referred to as the player's \defn{beliefs} about the potential histories leading to $I$. Given a strategy profile $s$, beliefs~$\mu_I$ at \defn{reachable information sets} (reached with non-zero probability under~$s$) are derived from $s$ using Bayes' rule; this is a standard derivation used in most solution concepts for extensive-form games~\cite{osborne1994course}. We sometimes write~$\mu_I^s$ to emphasize that the beliefs depend on~$s$. Past work has introduced a variety of methods for defining the beliefs $\mu_I^s$ at \defn{unreachable information sets} $I$ (i.e. information sets reached with probability zero under $s$); see e.g.~\cite{kreps1982sequential, selten1975reexamination, cho1987signaling, mclennan1985justifiable}. The most well-known is sequential equilibrium~\cite{kreps1982sequential}, which demands an explicit system of beliefs that satisfies a (somewhat artificial) consistency condition. Other equilibria, like trembling hand~\cite{selten1975reexamination}, reason implicitly about beliefs at unreachable information sets by assigning a negligible probability with which the player's hand ``trembles,'' and reaches an otherwise-unreachable information set. Further refinements of these take the structure and payoffs of the game into account~\cite{cho1987signaling, mclennan1985justifiable, banks1987equilibrium}. The treatment of beliefs at unreachable information sets in these solution concepts is often focused on ensuring that they can be used to analyze {\em every} extensive-form game. From a mechanism-design perspective, our focus is different---we want to design mechanisms in such a way that they admit much stronger equilibrium requirements, even if such an equilibrium cannot be used to analyze every game. At a high level, we want the players' beliefs to be irrelevant in determining their best response at unreachable information sets. We call this notion \defn{strong sequential rationality}. A strategy profile $s$ is \defn{strongly sequentially rational} if for every information set $I$, conditioned on reaching $I$, $s_i$ is a best response to $s_{-i}$ with respect to $\mu_I^s$, where \begin{itemize} \item $\mu_I^s$ is derived using Bayes' rule if $I$ is reachable under $s$, and \item $\mu_I^s$ is \emph{any} arbitrary probability distribution if $I$ is unreachable under $s$. \end{itemize} In~\secref{strongse}, we show that this requirement is equivalent to saying that, at an unreachable information set $I$, $s_i$ must be a best response to $s_{-i}$ conditioned on reaching each history $h \in I$. In other words, at an unreachable information set $I$, each player must have a \emph{single} action that is the best response to every possible history in $I$. We say a strategy profile is a \defn{strong sequential equilibrium} (SSE) if it satisfies strong sequential rationality. We refine our solution concept further to eliminate strategies that are weakly dominated within ``subgames'' of the entire game. This is crucial to deal with equilibrium selection, in particular, because the players cannot unilaterally deviate out of a suboptimal equilibrium. We say an SSE $s$ \defn{weakly dominates} another SSE $s'$ if, for any player $i$, $u_i(s) \geq u_i(s')$. A strategy $s$ is \defn{weakly dominant} if it dominates all SSEs.
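As a small, informal illustration of the strong-sequential-rationality condition above, the following Python sketch checks whether a single action at a toy unreachable information set $I = \{h_1, h_2\}$ is a best response conditioned on every history in $I$; the payoffs are hypothetical and chosen only to illustrate the definition, not taken from any protocol in this paper.
\begin{verbatim}
# Hypothetical payoffs for the acting prover at an unreachable information
# set I = {h1, h2}; payoffs[h][a] is the payoff if history h is reached and
# the prover plays action a there.
payoffs = {
    "h1": {"agree": 1.0, "object": -1.0},
    "h2": {"agree": 1.0, "object":  0.5},
}

def best_at_every_history(action, payoffs):
    # Strong sequential rationality at an unreachable I: the same action must
    # be a best response conditioned on each history in I, so the best
    # response is the same under any belief mu_I over I.
    return all(payoffs[h][action] == max(payoffs[h].values()) for h in payoffs)

print(best_at_every_history("agree", payoffs))   # True
print(best_at_every_history("object", payoffs))  # False
\end{verbatim}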
Next we eliminate SSEs that are weakly dominated in subgames of the entire game. We use the generalized notion of subgames, called \defn{subforms}, defined by Kreps and Wilson~\cite{kreps1982sequential} for extensive-form games with imperfect information. To review the definition of subforms, we need further notation. Let $H$ be the set of histories of the game. Recall that a history is a sequence $(a^1, \ldots, a^K)$ of actions taken by the players. For histories $h, h' \in H$, we say $h$ has $h'$ as a \defn{prefix} if there exists some sequence of actions $b^1, \ldots, b^L$ (possibly empty) such that $h = (h', b^1, \ldots, b^L)$. For a history $h \in H$, let $I(h)$ be the unique information set containing $h$. For an information set $I$, let $H_I$ be the set of all histories following $I$, that is, $H_I$ is the set of all histories $h \in H$ such that $h$ has a prefix in $I$. We say that $H_I$ is a \defn{subform \em rooted at $I$} if for every information set $I'$ such that $I' \cap H_I \neq \emptyset$, it holds that $I' \subseteq H_I$. Roughly speaking, a subform $H_I$ ``completely contains'' all histories of the information sets following $I$, so there is no information asymmetry between the players acting within $H_I$. Thus, given a strategy profile, the subform $H_I$ together with the probability distribution $\mu_I^s$ on $I$, can be treated as a well-defined game. We say an SSE $s$ \defn{weakly dominates} SSE $s'$ \defn{on a subform} $H_I$ if, for any player $j$ acting in $H_I$, the expected utility of $j$ under $s_I$ in the game $(H_I, \mu_I^s)$ is greater than or equal to their utility under $s_I'$ in the game $(H_I, \mu_I^{s'})$. We eliminate weakly dominated strategies by imposing this dominance condition in a backward-induction-compatible way on the subforms as follows. \begin{definition}[Dominant Strong Sequential Equilibrium] \deflabel{max-sse} A strategy profile $s$ is a dominant strong sequential equilibrium if $s$ is an SSE and \begin{itemize} \item for every subform $H_I$ of height $1$: $s$ weakly dominates $s'$ on $H_I$ for any SSE $s'$ \item for every subform $H_I$ of height $h > 1$: $s$ weakly dominates $s'$ on $H_I$ for any SSE $s'$ that is a dominant SSE in all subforms of height at most $h-1$. \end{itemize} \end{definition} We are ready to define non-cooperative rational interactive proofs. \begin{definition}[Non-Cooperative Rational Interactive Proof] \deflabel{mripnc} Fix an arbitrary string $x$ and language $L$. An interactive protocol $(V, \vec{P})$ is a {\em non-cooperative rational interactive proof} (ncRIP) protocol for $L$ if there exists a strategy profile $s$ of the provers that is a dominant SSE in the resulting extensive-form game, and under any dominant SSE, the answer bit $c$ output by the verifier is correct (i.e., $c=1$ iff $x\in L$) with probability 1, where the probability is taken over the verifier's randomness. \end{definition} \subsection{Utility Gap in ncRIP Protocols}\label{sec:gap-model} In game theory, players are assumed to be perfectly rational and ``sensitive'' to arbitrarily small utility losses. In reality, some provers may not care about small losses. Such provers may not have sufficient incentive to reach a dominant SSE, and could end up leading the verifier to the wrong answer. To design ncRIP protocols that are robust against such ``insensitive'' provers, we define the notion of \defn{utility gap}.
Informally, a utility gap of $u$ means that if a strategy profile $s$ leads the verifier to the wrong answer, there must exist a subform, such that some provers must lose at least a $1/u$ amount in their final individual payments (compared to their optimal strategy in that subform). As a consequence, these provers will not deviate to $s$, {as long as} they care about $1/u$ payment losses. We formalize this notion below. (We say a subform $H_I$ is reachable under $s$ if the information set $I$ is reached under $s$ with non-zero probability.) \begin{definition}[Utility Gap]\deflabel{rewardgap} Let~$(V, \vec{P})$ be an ncRIP protocol for a language~$L$ and~$s^*$ be a dominant SSE of the resulting game. The protocol~$(V, \vec{P})$ has an {\em $\alpha(n)$-utility gap} or $\alpha(n)$-gap, if for any strategy profile $s'$ under which the answer bit~$c'$ is wrong, there exists a subform~$H_{I}$ reachable under $s'$, and a prover~$P_j$ acting in~$H_I$ who has deviated from~$s^*$ such that \[u_j (x, (s_{-I}', s_{I}^*), (V, \vec{P})) - u_j (x, (s_{-I}', s_{I}'), (V,\vec{P})) > 1/\alpha(n),\] where $s_{-I}'$ denotes the strategy profile $s'$ outside subform $H_I$, that is, $s_{-I}'=s' \setminus s_{I}'$. \end{definition} The class of languages that have an ncRIP protocol with \defn{constant}, \defn{polynomial} and \defn{exponential} utility gap, are denoted by $\sf{O}(1)\mbox{-}\sf{ncRIP}$, $\sf{poly}(n)\mbox{-}\sf{ncRIP}$, and $\sf{exp}(n)\mbox{-}\sf{ncRIP}$ respectively.\footnote{These classes are formally defined by taking the union over languages with $\alpha(n)$ utility gap, for every $\alpha(n)$ that is constant, polynomial and exponential in $n$ respectively.} Note that $\alpha(n)$ gap corresponds to a payment loss of $1/\alpha(n)$, so an exponential utility gap is the weakest guarantee. \section{Lower Bounds: ncRIP Protocols with Utility Gap}\seclabel{lower} In this section, we give an $O(1)$-utility gap ncRIP protocol for the class $\sf{NEXP}$ and use it to give an $O(\alpha(n))$-utility gap ncRIP protocol for the class $\sf{P^{NEXP[\alpha(n)]}}$. Setting $\alpha(n)$ to be a constant or polynomial in $n$ gives us $\sf{P^{NEXP[O(1)]}} \subseteq \sf{O}(1)\mbox{-}\sf{ncRIP}$ and $\sf{P^{NEXP}} \subseteq \sf{poly}(n)\mbox{-}\sf{ncRIP}$ respectively. \paragraph*{A constant-gap ncRIP protocol for {\boldmath $\sf{NEXP}$}} The ncRIP protocol for any language in~$\sf{NEXP}$ is in~\figref{nexp-ncrip-protocol}. The protocol uses the 2-prover 1-round MIP for $\sf{NEXP}$~\cite{feige1992two} as a blackbox.\footnote{It is also possible to give a scoring-rule based ncRIP protocol for $\sf{NEXP}$, similar to MRIP~\cite{ChenMcSi16}. However, such a protocol has an exponential utility gap.} The protocol in~\figref{nexp-ncrip-protocol} essentially forces the non-cooperative provers to coordinate by giving them identical payments. As a result, it is almost identical to the MRIP protocol for $\sf{NEXP}$~\cite{ChenMcSi16}. While the payment scheme is simple, in the analysis we have to open up the black-box MIP. In particular, if $P_1$ sends $c=0$ in round~\ref{step:answer}, all the information sets of $P_1$ and $P_2$ in round~\ref{simple-3} become unreachable. To show that an SSE exists, we show that the provers have a best response at these unreachable sets, which is argued based on the messages exchanged in the MIP protocol. \begin{lemma}\lemlabel{nexp-const} \lemlabel{nexp-mip} Any language $L \in \sf{NEXP}$ has a 2-prover 3-round $6/5$-gap ncRIP protocol. 
\end{lemma} \begin{proof} The ncRIP protocol for any language $L \in \sf{NEXP}$ is given in~\figref{nexp-ncrip-protocol}. We show that there exists a strategy profile $s = (s_1,s_2)$ of provers $P_1$ and $P_2$ respectively that is a dominant SSE of the game tree corresponding to the protocol $(V, P_1, P_2)$ and under any dominant SSE, the answer bit $c=1$ if and only if $x \in L$. In the protocol, if $c=0$, no player acts. If $c=1$, the verifier executes the 1-round blackbox MIP protocol with $P_1$ and $P_2$. To exhibit a strategy that is a best response for $P_1$ and $P_2$ on their information sets at step~\ref{simple-3}, we look at the messages the verifier sends to each prover in the classic $\sf{MIP}$ protocol. In the $\sf{MIP}$ protocol, the verifier sends $P_1$ a tuple of message pairs $\vec{m}_1 = ((q_1, x_1),\ldots, (q_m,x_m))$ where $m$ is a polynomial in $n$ and $V$ sends $P_2$ a tuple of random messages $\vec{m}_2 = (y_1, \ldots, y_m)$. $P_1$ sends back a polynomial $P(t)$ and $P_2$ sends back the value of the polynomial $P(t)$ for $t$ satisfying $q_j + t x_j = y_j$. The verifier rejects if their answers are inconsistent. To analyze the SSE strategy, without loss of generality, suppose $P_1$ moves last in the MIP protocol. Any information set $I_1$ of $P_1$ at step~\ref{simple-3} is characterized by the message $\vec{m}_1$ he receives. The decision nodes in $I_1$ correspond to each possible message $\vec{m}_2$ that $P_2$ could have received. Because $V$ gives the largest payment when the MIP protocol accepts, given $P_2$'s strategy, if any information set $I_1$ of $P_1$ is reached under $s$ then $P_1$'s best response at $I_1$ is to maximize the acceptance probability of the MIP protocol given his beliefs on $I_1$. Similarly, given $P_2$'s strategy, if any information set $I_1$ of $P_1$ is unreachable under $s$, then $P_1$'s best response at $I_1$ for every decision node in $I_1$ is the following: given $\vec{m}_1 = ((q_1, x_1),\ldots, (q_m,x_m))$, respond with a polynomial $P(t)$ such that $P(t)$'s value at all $t$ coincides with $P_2$'s reply on all $y_j$ where $q_j + t x_j = y_j$. Given $P_1$'s strategy of committing to a polynomial $P(t)$ that matches $P_2$ on all values of $t$, $P_2$'s best response at any information set $I_2$ (reachable or unreachable under $s$) at step~\ref{simple-3} at every decision node in $I_2$ is to answer the tuple of queries $(y_1, \ldots, y_m)$ so as to maximize the acceptance probability of the MIP protocol. The verifier's move at step~\ref{simple-3} is the root of a non-trivial subform. Conditioned on step~\ref{simple-3} being reached, any dominant SSE at this subform corresponds to a strategy profile $s$ that is an SSE, which when restricted to this subform, maximizes the acceptance probability of the MIP protocol. Under any such dominant SSE, we show that $P_1$'s best response at step~\ref{step:answer} is to send the correct answer bit. Suppose $x \in L$. If $P_1$ sends $c=0$, then $R_1=1/2$ with probability $1$. On the other hand, if $P_1$ sends $c=1$, by the completeness condition of the MIP protocol, the acceptance probability is $1$, leading to $R_1 =1$. Thus for $x \in L$, $s$ is a dominant SSE iff $P_1$ sends $c=1$. Suppose $x \notin L$. If $P_1$ reports $c=0$, then $R_1=1/2$ with probability $1$. On the other hand if $P_1$ reports $c=1$, then by the soundness condition of the MIP protocol, the maximum acceptance probability is $1/3$ leading to $R_1=1$. The protocol rejects with probability at least $2/3$ leading to $R_1=-1$.
Thus, $P_1$'s expected payment for misreporting the answer bit is at most $(1/3)(1) + (2/3)(-1) = -1/3$. Thus for $x \notin L$, $s$ is a dominant SSE iff $P_1$ sends $c=0$. Thus, under $s$ which is a dominant SSE, $c=1$ if and only if $x \in L$. Furthermore, the payment loss incurred by the provers when the answer bit sent in the first round is incorrect is at least $5/6$ for both provers, and thus the protocol has constant utility gap. \end{proof} \begin{figure} \caption{A simple $O(1)$-utility gap ncRIP protocol for $\sf{NEXP}$.} \label{step:answer} \label{simple-2} \label{simple-3} \end{figure} \paragraph*{An {\boldmath $O(\alpha(n))$}-gap ncRIP protocol for {\boldmath$\sf{P^{NEXP[\alpha(n)]}}$}} Using the above $\sf{NEXP}$ protocol as a subroutine, we give an ncRIP protocol with~$O(\alpha(n))$-utility gap for the class $\sf{P^{NEXP[\alpha(n)]}}$. This protocol works for any function $\alpha(n)$ which~(1) is a positive integer for all $n$,~(2) is upper-bounded by a polynomial in~$n$,~and~(3) is polynomial-time computable.\footnote{For~\thmref{constchar} and~\thmref{polychar},~$\alpha(n)$ need only be a constant or polynomial in $n$. However,~\lemref{notc-lower} holds for all $\alpha(n)$'s that are polynomial-time computable (given $1^n$) and polynomially bounded, such as $\log n$, $\sqrt{n}$, etc.} The ncRIP protocol for any $L\in \sf{P^{NEXP[\alpha(n)]}}$ is in \figref{constpolygap}. It is fairly intuitive---$V$ simulates the polynomial-time machine directly, and uses the ncRIP protocol for~$\sf{NEXP}$ for the oracle queries. \begin{figure}\caption{An ncRIP protocol with $O(\alpha(n))$-utility gap for any language in $\sf{P^{NEXP[\alpha(n)]}}$.}\label{step:answer-bits} \label{step:output-test} \label{step: q-index} \label{step: oracle} \end{figure} We first argue the correctness of this protocol at a high level and then present the formal proof. Under any strategy of $P_1$, the resulting $\sf{NEXP}$ queries in the protocol in~\figref{constpolygap} are the roots of non-trivial subforms. Which of these subforms are reachable under a strategy profile $s$ is determined solely by the strategy of $P_1$. However, because weak dominance is imposed on all subforms in a bottom-up fashion, $P_2$ and $P_3$ must play their optimal strategy in these subforms regardless of their reachability---and therefore, they must play optimally for any strategy of $P_1$. (This is one example of why ruling out weakly-dominated strategies in subforms in the definition of dominant SSEs is crucial to arguing correctness.) From the correctness of the $\sf{NEXP}$ protocol in \figref{nexp-ncrip-protocol}, we know that the optimal strategy of $P_2$ and $P_3$ is to compute the $\sf{NEXP}$ queries correctly. Given that the best response of $P_2$ and $P_3$ is to solve the $\sf{NEXP}$ queries correctly, and given that $V$ randomly verifies $1$ out of $\alpha(n)$ queries, $P_1$ must commit to correct answer bits in the first round, or risk losing at least a $1/\alpha(n)$ amount from his expected payment. If $P_1$ gives the correct answer bits in step 1, but $P_2$ or $P_3$ deviate within a subform corresponding to an $\sf{NEXP}$ query $\phi_q$, then with probability $1/\alpha(n)$, $V$ simulates the protocol in \figref{nexp-ncrip-protocol} on $\phi_q$, in which case they lose a constant amount of their expected payment. \begin{lemma}\label{lem:notc-lower} Any language~$L \in \sf{P^{NEXP[\alpha(n)]}}$ has a 3-prover 5-round ncRIP protocol that has a utility gap of $6\alpha(n)/5$. \end{lemma} \begin{proof} Consider any language $L \in \sf{P^{NEXP[\alpha(n)]}}$.
Let $M$ be a polynomial-time Turing machine deciding $L$, with access to an oracle $O$ for an $\sf{NEXP}$ language. The ncRIP protocol for $L$ is given in \figref{constpolygap}. Let $s_1, s_2, s_3$ denote the strategy used by $P_1$, $P_2$ and $P_3$ for the protocol in \figref{constpolygap}, and $s=(s_1, s_2, s_3)$. First, note that regardless of $s_2$ and $s_3$, $P_1$'s best response at step~\ref{step:answer-bits} is to send the bits $c, c_1, \ldots, c_{\alpha(n)}$ such that the verification in step~\ref{step:output-test} goes through. In particular, if $s_1$ is such that the output of $M$ on input $x$, using $c_1,\ldots, c_{\alpha(n)}$ as answers to $\sf{NEXP}$ queries $\phi_1, \ldots, \phi_{\alpha(n)}$, is consistent with $c$, then $P_1$ gets $R_1\geq 0$. Meanwhile, if the verification in step~\ref{step:output-test} fails then $R_1=-1$. Thus, under any SSE $s$, the answer bits $c_1$, \ldots, $c_{\alpha(n)}$ sent by $P_1$ must be consistent with the computation of $M$ on $x$ and the final answer bit $c$, regardless of $s_2$ and $s_3$. We now argue using backward induction. Each random index $i'$ chosen by $V$ in step~\ref{step: q-index} together with $\phi_{i'}$ starts a subform. In particular, since $P_2$ and $P_3$ both know $(i', \phi_{i'})$, all their information sets starting from step~\ref{step: oracle} are completely disjoint from information sets reached under a different index and $\sf{NEXP}$ query. By \lemref{nexp-mip}, there exists a dominant SSE $s$ on each such subform simulating an $\sf{NEXP}$ query, and under any dominant SSE, $s_2$ and $s_3$ are such that $c_{i'}^*$ is the correct answer to the $\sf{NEXP}$ query. Moving up the tree, the next subform is induced by $V$'s Nature move at step~\ref{step: q-index} assigning a probability to each subsequent subform. Since under any dominant SSE, the expected payments of $P_2$ and $P_3$ (conditioned on reaching these subforms) are maximized, the overall expected payments under $V$'s Nature move at step~\ref{step: q-index} are also maximized. We move up a further level in the tree to the root. We show that $P_1$'s best response at step~\ref{step:answer-bits} is to send the correct answer bits, given that under any dominant SSE $s$: \begin{itemize}[topsep=0pt,noitemsep] \item $P_2$ and $P_3$ answer each $\sf{NEXP}$ query $\phi_{i'}$ determined by $s_1$ and index $i'$ correctly, and \item the verification in step~\ref{step:output-test} goes through (i.e., $V$ does not set $R_1 = -1$) under $s_1$. \end{itemize} Suppose $s_1$ is such that there exists an $\sf{NEXP}$ query where $P_1$ lies. Let $k$ be the first $\sf{NEXP}$ query index such that $c_k$ is not the correct answer to query $\phi_k$, where $1 \le k \le \alpha(n)$. In particular, the instance $\phi_k$ is generated correctly (by running $M$ on $x$ using the correct answers $c_1, \ldots, c_{k-1}$ to the previous queries), but $c_k$ is not the correct answer to $\phi_k$. Then with probability $1/\alpha(n)$, $V$ picks $i'=k$ in step~\ref{step: q-index} and cross-checks $c_k$ with $c_{k}^*$, in which case the verification fails and $R_1 =0$. Thus, $P_1$'s expected payment is at most $1-1/\alpha(n)$. If $P_1$ answers all $\sf{NEXP}$ queries correctly, since the verification in step~\ref{step:output-test} goes through, $P_1$ gets $R_1=1$ with probability $1$. Thus, $c, c_1, \ldots, c_{\alpha(n)}$ are correct under any dominant SSE $s$, and $c=1$ if and only if $x \in L$. Now, we show that protocol $(V, \vec{P})$ has $O(\alpha(n))$ utility gap.
Let $s^*$ be a dominant SSE of the game resulting from $(V, \vec{P})$. Suppose $s'$ is such that the answer bit $c'$ under $s'$ is incorrect. We go ``bottom-up'' in the game tree and exhibit a subform $H_I$ (reachable under $s'$) such that some prover acting in that subform loses $\Omega(1/\alpha(n))$ compared to the strategy where $s^*_{I}$ is played on $H_I$, keeping the rest of the strategy fixed. First, consider all the $\sf{NEXP}$ queries at step~\ref{step: oracle} that start subforms. Suppose there exists a query $\phi_k$ committed under $s_1'$, for $1 \le k \le \alpha(n)$, such that $c_k^*$ is the wrong answer to $\phi_k$. By~\lemref{nexp-const}, both $P_2$ and $P_3$ lose a constant amount ($5/6$ in particular) from their expected payment (conditioned on reaching this subform) compared to the dominant SSE strategy profile $s_{\phi_k}^*$ which reports the correct answer to $\phi_k$. Since $V$ chooses $\phi_k$ with probability $1/\alpha(n)$, $P_2$ and $P_3$ can gain at least $\frac{1}{\alpha(n)} \cdot \frac{5}{6}$ in their overall expected payment by deviating to the strategy profile $s_{\phi_k}^*$ at the subform corresponding to $(k,\phi_k)$, keeping $s_{-\phi_k}'$ fixed. Specifically, \[u_i \left(x, (s_{-\phi_k}', s_{\phi_k}^*), (V, \vec{P})\right) - u_i \left(x, (s_{-\phi_k}', s_{\phi_k}'), (V, \vec{P})\right) > \frac{1}{\alpha(n)} \left(\frac{5}{6}\right),~~\mbox{for}~~i \in \{2,3\}.\] Finally, suppose $P_2$ and $P_3$ answer all $\sf{NEXP}$ queries (reachable under $s'$) correctly. Then, $P_1$ loses at least $1/\alpha(n)$ at the subform at the root---the entire game. Since the answer bit $c'$ under $s'$ is incorrect, either step~\ref{step:output-test} fails or $P_1$ lies on some $\sf{NEXP}$ query. In the first case, $P_1$ gets $-1$ with probability $1$ compared to an expected payment of $1$ under $s^*$. In the second case, $P_1$ gets caught in step~\ref{step: oracle} with probability $1/\alpha(n)$, and gets an expected payment of at most $1-1/\alpha(n)$, losing at least $1/\alpha(n)$ compared to $s^*$. Thus, the protocol $(V, \vec{P})$ is an ncRIP protocol for $\sf{P^{NEXP[\alpha(n)]}}$ and has $O(\alpha(n))$ utility gap. \end{proof} \paragraph*{Exponential utility gap} We show how to simulate a general MRIP protocol $(V, \vec{P})$ with $p(n)$ provers and $k(n)$ rounds for a language $L$ using a 2-prover 3-round ncRIP protocol $(V', {P_1'},P_2')$ with exponential utility gap. (The protocol $(V', {P_1'},P_2')$ is in~\figref{mrip-to-ncrip}.) Essentially, $V'$ gives all the randomness of $V$ to $P_1'$ and asks for the entire transcript, uses $P_2'$ to commit to a single prover's message, and cross-checks their answers. However, we do not want $P_1'$, who has access to all the randomness, to dictate which information sets of $P_2'$ are reachable. Because the ncRIP protocol only needs an exponential utility gap, $V'$ asks $P_2'$ a totally random question (independent of $P_1'$'s messages), and with exponentially small probability this random message is exactly the message $V'$ intended to check. This protocol shows why exponential gap guarantees do not lead to meaningful protocols---a verifier that asks random questions can still extract honest behavior from rational provers through the exponentially small changes in expected payments. \begin{lemma}\lemlabel{mrip-ncrip} Any MRIP protocol can be simulated using a $2$-prover $3$-round ncRIP protocol with a $2^{n^k}$-utility gap, for some constant $k$, where $n$ is the length of the input.
\end{lemma} \begin{proof} Without loss of generality, let each message in the protocol be of length $\ell(n)$ for any input of length $n$, where $\ell(n)$ is a polynomial in $n$. We shift and rescale the payment function of~$V$, so that the payment is always in $[0, 1]$, and the expected payment is strictly greater than $0$ under the provers' best strategy profile. We simulate $(V,\vec{P})$ using an ncRIP protocol $(V', (P_1', P_2'))$, given in~\figref{mrip-to-ncrip}. \begin{figure} \caption{Simulating any MRIP using an ncRIP protocol with exponential utility gap.} \label{firstround} \label{newrandomness} \label{p2-response} \label{originalrandomness} \label{p1-response} \label{consistency} \label{correctness} \end{figure} Let $s_1'$ and $s_2'$ denote the strategies of the provers $P_1'$ and $P_2'$ respectively and $s'=(s_1',s_2')$. Since $P_2'$ is queried only once and about a single message in Step~\ref{p2-response}, any strategy $s_2'$ of $P_2'$ de facto commits to a strategy profile for the provers in $(V, \vec{P})$. We analyze the game tree of the protocol $(V', \vec{P}')$ bottom-up. The last move is by $P_1'$ sending the entire transcript $\vec{m}$ at step~\ref{p1-response}. Any information set $I_1'$ of $P_1'$ is characterized by the randomness $r$ received by $P_1'$~in~step~\ref{originalrandomness} and all information sets are reachable under any $s'$. The decision nodes in $I_1'$ correspond to different strings $\tilde{m}_{ij}$ that $P_2'$ could have been asked in~step~\ref{newrandomness}. Given $s_2'$, the best response of $P_1'$ at any information set $I_1'$, for any beliefs at $I_1'$, is to match the transcript committed by $P_2'$ and make the verification in~step~\ref{consistency} go through. Suppose there exists a prover index $i$ and round $j$ such that the message $m_{ij}$ in $\vec{m}$ is inconsistent with the corresponding message $m_{ij}'$ committed under $s_2'$. With probability $\frac{1}{2^{(j-1)\ell(n)}}$, the random string $\tilde{m}_{ij}$ generated by $V'$ in Step~\ref{newrandomness} is equal to $(m_{i1},\dots, m_{i(j-1)})$, otherwise the protocol ends with $R_1'=0$. With probability at least $\frac{1}{p(n)k(n)}$, $V'$ chooses $(i,j)$ in~step~\ref{newrandomness}, and queries~$P_2'$ for $m_{ij}'$, in which case $R_1'=-1$. If $(i,j)$ is not chosen, then $R_1'=0$. Thus, $P_1'$'s expected payment at $I_1'$ is at most \[\sum_{i\leq p(n), 1\leq j\leq k(n)} \frac{1}{2^{(j-1)\ell(n)}} \cdot \frac{1}{p(n)k(n)} \cdot \left( \mathbb{I}_{m_{ij}\neq m_{ij}'}\cdot (-1) + \mathbb{I}_{m_{ij}= m_{ij}'} \cdot 0\right) < 0.\] On the other hand, matching $s_2'$ on all messages leads to an expected payment of $0$ at $I_1'$ for $P_1'$. Given that $P_1'$'s best response is to make the verification in~step~\ref{consistency} go through for every randomness $r$, we analyze $P_2'$'s move at step~\ref{p2-response}. Any information set $I_2'$ of $P_2'$ is characterized by the random string $\tilde{m}_{ij}$ received by $P_2'$~in~step~\ref{newrandomness} and all information sets are reachable under any $s'$. The decision nodes in $I_2'$ correspond to different random strings $r$ that $P_1'$ could have received in~step~\ref{originalrandomness}. The best response of $P_2'$ at any information set $I_2'$, for any beliefs at $I_2'$, is to commit to the correct strategy profile $s$ of the provers $\vec{P}$. Suppose $P_2'$ commits to a strategy profile $\tilde{s}$ such that the answer bit under $\tilde{s}$ is wrong.
With probability $\frac{1}{2^{(j-1)\ell(n)}}$, the random string $\tilde{m}_{ij}$ generated by $V'$ in Step~\ref{newrandomness} matches $(m_{i1},\dots, m_{i(j-1)})$, otherwise the protocol ends with $R_2'=0$. If it matches, then $P_2'$'s expected payment is determined by the expected payment that $\tilde s$ gets in $(V, \vec{P})$ given $x$ and randomness $r$, which is strictly less than the expected payment under the strategy profile $s$ which commits to the correct answer bit (by correctness of the original MRIP protocol). That is, \[\sum_{1\leq j\leq k(n)} \frac{1}{k(n)} \cdot \frac{1}{2^{(j-1)\ell(n)}} \cdot u_{(V, \vec{P})}(x,\tilde{s}) <\sum_{1\leq j\leq k(n)} \frac{1}{k(n)} \cdot \frac{1}{2^{(j-1)\ell(n)}} \cdot u_{(V, \vec{P})}(x,{s}). \] Thus, given that $s_1'$ matches $s_2'$ for every randomness $r$, the best response by $P_2'$ is to commit to a strategy profile $s_2'=s$ that maximizes the total expected payment of the original protocol $(V,\vec{P})$ and thus has the correct answer bit. There are no non-trivial subforms in the game. Any weakly-dominant SSE is a dominant SSE, under which both $P_1'$ and $P_2'$ maximize their expected payments---$P_1'$ matches $P_2'$ on all messages and $P_2'$ commits to the correct strategy profile $s$. Thus, the protocol $(V', \vec{P}')$ is correct. \end{proof} \section{Upper Bounds: ncRIP Protocols with Utility Gap}\seclabel{upper} In this section, we prove matching upper bounds on the classes of ncRIP protocols with constant and polynomial utility gaps. In particular, we show that any language in $\sf{O}(1)\mbox{-}\sf{ncRIP}$ (or $\sf{poly}(n)\mbox{-}\sf{ncRIP}$) can be decided by a polynomial-time Turing machine with a constant (resp. polynomial) number of queries to an $\sf{NEXP}$ oracle. To simulate an ncRIP protocol, we need to find a strategy profile ``close enough'' to the dominant SSE so that the answer bit is still correct, i.e., a strategy profile that satisfies the utility-gap guarantee. We formalize this restatement of~\defref{rewardgap} below. \begin{observation}\label{gaprestate} Given input $x$ and an ncRIP protocol $(V, \vec{P})$ with $\alpha(n)$-utility gap, let $s$ be a strategy profile such that for all subforms $H_I$ reachable under $s$ and all provers $P_j$ acting in $H_I$, \[u_j (x, (s_{-I}, s_{I}^*), (V, \vec{P})) - u_j (x, (s_{-I}, s_{I}), (V,\vec{P})) < \frac{1}{\alpha(n)},\] where $s^*$ is a dominant SSE. Then, the answer bit $c$ under $s$ must be correct. \end{observation} There are several challenges involved in finding a strategy profile satisfying Observation~\ref{gaprestate}. First, the size of the game tree of any ncRIP protocol---small gap notwithstanding---can be exponential in $n$. Even if the polynomial-time machine considers a single strategy profile $s$ at a time, since $V$ can flip polynomially many coins, the part of the tree ``in play''---the number of decision nodes reached with positive probability under $s$---can be exponential in $n$. The second (and related) challenge is that of verifying whether a strategy profile is a dominant SSE. While the $\sf{NEXP}$ oracle can guess and verify an SSE, it cannot directly help with dominant SSEs. The polynomial-time machine must check using backward induction if an SSE is dominant on all its reachable subforms, and the number of such subforms can again be exponential in $n$. Finally, the polynomial-time machine needs to search through the exponentially large strategy-profile space in an efficient way to find one which leads to the correct answer.
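To make the quantifier structure of Observation~\ref{gaprestate} concrete before tackling these challenges, here is a minimal Python sketch; it is purely illustrative and not part of any protocol. The table of payment differences is assumed to be given, since computing those differences is exactly what the $\sf{NEXP}$ oracle is used for later in this section, and all names below are ours.
\begin{verbatim}
# Illustrative check of the condition in Observation "gaprestate".
# gain[(I, j)] stands for u_j(x, (s_{-I}, s_I^*)) - u_j(x, (s_{-I}, s_I)),
# for every subform H_I reachable under s and every prover P_j acting in H_I.
def satisfies_gap_condition(gain, alpha_n):
    # If no prover can gain 1/alpha(n) or more by switching to the dominant
    # play on some reachable subform, the answer bit under s is correct.
    return all(g < 1.0 / alpha_n for g in gain.values())

# Toy example with two reachable subforms and two provers (made-up numbers):
gain = {("root", 1): 0.001, ("root", 2): 0.0, ("H_phi", 2): 0.002}
print(satisfies_gap_condition(gain, alpha_n=100))    # True: all gains < 1/100
print(satisfies_gap_condition(gain, alpha_n=1000))   # False: 0.002 >= 1/1000
\end{verbatim}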
In the remainder of the section we address these challenges. In \lemref{pruning} we show that we can prune the game tree, resolving the first two challenges. Then in Lemmas \ref{intervalsse} and \ref{lem:maxsse}, we show how to efficiently search through the strategy-profile space. \paragraph*{Pruning Nature moves in ncRIP protocols} We now give our main technical lemma for the upper bound, which shows that we can limit ourselves to examining protocols with bounded game trees without loss of generality. Recall that a verifier's coin flips in an ncRIP protocol represent Nature moves in the resulting game. The problem is that a polynomial-time verifier can result in Nature moves that impose nonzero probabilities over exponentially many outcomes. We prune the Nature moves of a verifier so that a polynomial-time Turing machine simulating an $\alpha(n)$-utility-gap protocol can traverse the game tree reachable under a given $s$. This pruning operation takes exponential time (linear in the size of the game tree), and can be performed by the $\sf{NEXP}$ oracle. \begin{lemma}[{\bf Pruning Lemma}]\lemlabel{pruning} Let $L \in \sf{\alpha}(n)\mbox{-}\sf{ncRIP}$ and let $(V, \vec{P})$ be an ncRIP protocol for $L$ with $\alpha(n)$ utility gap and $p(n)$ provers. Given an input $x$ and a strategy $s$, the protocol $(V, \vec{P})$ can be transformed in exponential time to a new protocol $(V', \vec{P})$, where \begin{itemize} \item the probability distribution on the outcomes imposed by the Nature moves of $V'$ for input $x$ has $O(\alpha(n))$ support, \item if $s$ is a dominant SSE of $(V, \vec{P})$, then $s$ induces a dominant SSE in $(V', \vec{P})$, \item $\lvert {u}_j(x, s, (V, \vec P)) - {u}_j(x, s, (V', \vec P)) \rvert < 1/(4 \alpha(n))$ for all $j \in \{1, \ldots, p(n)\}$, and \item the utility gap guarantee is preserved, that is, if the answer bit under $s$ is wrong, then there exists a subform $H_{I}$ in the game $(V', \vec{P})$ (reachable under $s$) and a prover $P_j$ acting at $H_I$, such that $P_j$ loses at least a $1/(2\alpha(n))$ amount in his expected payment compared to a strategy profile where $s_{I}$ (induced by $s$ on $H_I$) is replaced by $s_{I}^*$ (the dominant SSE on $H_I$), keeping the strategy profile outside $H_I$, $s_{-I}$, fixed. \end{itemize} \end{lemma} We prove~\lemref{pruning} in several parts. First, given an input $x$ and a strategy $s$ of the provers, we show how to transform any verifier $V$ that imposes a probability distribution over outcomes with exponential support into a verifier $V'$ that imposes a probability distribution with $O(\alpha(n))$ support. Let $(V, \vec{P})$ use $p(n)$ provers and let the running time of $V$ be $n^{k}$ for some constant $k$. There can be at most $2^{n^{k}}$ different payments that $V$ can generate for a particular prover given input $x$. Given $x$ and $s$, fix a prover index $j \in \{1,\ldots, p(n)\}$. Let ${R}_1, {R}_2, \ldots, {R}_m$ be the payments generated by $V$ on $s$ for $P_j$. Let $V$'s randomness assign probability distribution $\mu = (p_1, p_2, \ldots, p_m)$ to ${R}_1, {R}_2, \ldots, {R}_m$ respectively. Then, the expected payment of $P_j$ under $s$ is ${u}_j(x,s, (V, \vec P)) = \sum_{i=1}^m p_i {R}_i$. Recall that ${u}_j(x,s, (V, \vec P)) \in [-1,1]$ for all $1 \le j \le p(n)$. For each prover $P_{j}$, divide the interval $[-1,1]$ into $8 \alpha(n)$ intervals, each of length $1/(4\alpha(n))$; a small sketch of the resulting rounding is given below.
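Since the exact rounding and re-weighting maps are given as Equations~(\ref{paymentmap}) and~(\ref{probmap}) in the figure, the following is only a small Python sketch of the construction as described in the text; it assumes, as we do below, that the representative of each group of payments is the lowest-indexed payment in it.
\begin{verbatim}
import math
from collections import defaultdict

# Sketch of pruning one Nature move: payments R with probabilities p are
# grouped by the length-1/(4*alpha(n)) interval of [-1,1] they fall in, and
# each group's probability mass is moved onto a single representative payment.
def prune(R, p, alpha_n):
    interval = lambda x: math.floor(x * 4 * alpha_n)   # index of x's interval
    groups = defaultdict(list)                          # interval index -> T_j
    for i, payment in enumerate(R):
        groups[interval(payment)].append(i)
    p_new = [0.0] * len(R)
    for T in groups.values():
        rep = min(T)                          # f(T_j): lowest index in T_j
        p_new[rep] = sum(p[i] for i in T)     # mass of the whole group
    return p_new                              # support size <= 8 * alpha(n)

# Example with alpha(n) = 2, i.e., 16 intervals of length 1/8:
print(prune([0.10, 0.11, 0.50, -0.30], [0.25, 0.25, 0.25, 0.25], 2))
# -> [0.5, 0.0, 0.25, 0.25]: 0.10 and 0.11 share an interval, so their
#    probability mass merges onto the first of the two payments.
\end{verbatim}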
Concretely, prover $P_{j}$'s $i$th interval is $[i/(4\alpha(n)),(i+1)/(4\alpha(n)))$,\footnote{To include $1$ as a possible payment, interval $4\alpha(n)-1$ should be closed on both sides; we ignore this for simplicity.} for each $i \in \{-4\alpha(n), \ldots, 4\alpha(n)-1\}$. We round the possible payments for~$P_j$ to a representative of their corresponding interval. Specifically, we map each payment~$R_i$ to~$r_j$ as described in Equation~\ref{paymentmap}. \begin{figure}\label{paymentmap} \label{probmap} \end{figure} There are potentially exponentially many different payments ${R}_i$, and only polynomially many different payments ${r}_j$, so several ${R}_i$ must map to the same ${r}_j$. Let $T_j = \{i : R_i \mbox{~maps to~} r_j\}$. Let $\mathcal{T} = \cup_j \{T_j\}$. Thus the total number of distinct $r_j$ is at most $8 \alpha(n)$, so $|\mathcal{T}| = O(\alpha(n))$. Let $S: \{1,\ldots,m\}\rightarrow {\mathcal T}$ such that $S(i) = T_j$ if and only if $i \in T_j$. For each $T_j \in \mathcal T$, let $f(T_j)$ denote a unique index in the set $T_j$. Without loss of generality, let $f(T_j)$ be the lowest index in $T_j$. We define a new probability distribution $\mu'= (p_1', \ldots, p_m')$ over the payments $R_1, \ldots, R_m$ respectively, given by~Equation~\ref{probmap}. In particular, for every $T_j \in \mathcal{T}$, assign $R_{f(T_j)}$ probability $\sum_{k \in T_j}p_k$ and for every other index $\ell \in T_j$, $\ell \neq f(T_j)$, assign $R_\ell$ probability $0$. Define $V'$ as a polynomial-time verifier that simulates all deterministic computation of~$V$. For a fixed input~$x$, $V'$ imposes a probability distribution $\mu'$ with $O(\alpha(n))$ support for any probability distribution $\mu$ imposed by $V$. For other inputs, $V'$ simulates $V$ without any modification. Note that given input $x$, a strategy profile $s$ and the protocol $(V, \vec{P})$, transforming the distribution $\mu$ to $\mu'$ takes time linear in the size of the game tree, and thus exponential in $n$. (This means that an $\sf{NEXP}$ oracle, given $x$, can guess a particular $s$ and perform the transformation.) The remainder of the proof of \lemref{pruning} consists of the following three claims. First, we show that if the strategy profile $s$ is a dominant SSE of $(V, \vec{P})$, then $s$ restricted to the pruned game tree of $(V', \vec{P})$ imposes a dominant SSE on $(V', \vec{P})$ as well. \begin{cclaim}\label{validprotocol} Any dominant SSE $s$ of the game formed by $(V, \vec{P})$ induces a dominant SSE in the game formed by $(V', \vec{P})$. \end{cclaim} \begin{proof} By contradiction, suppose $s$ is not an SSE of $(V', \vec{P})$. Then there exists an information set $I= \{h_1, \ldots, h_m\}$, such that, conditioned on reaching $I$, the prover acting at $I$ can improve his expected payment by deviating (given his belief $\mu_I'$ at $I$ if $I$ is reachable under $s$ and for any belief he may hold at $I$ if $I$ is unreachable under $s$). We split into two cases: $I$ is either reachable or unreachable under $s$. By construction, if $I$ is reachable under $s$ in $(V',\vec{P})$, then $I$ must also be reachable under $s$ in $(V, \vec{P})$. Let $\mu_I' = (p_1', \ldots, p_m')$, where $p_i'$ is the probability assigned to $h_i$ and the support of $\mu_I'$ is $O(\alpha(n))$. Let $R_1, \ldots, R_m$ be the payments that the player acting on $I$ gets under $s$ conditioned on reaching $h_1, \ldots, h_m$ respectively.
Similarly, let $R_1', \ldots, R_m'$ be the payments conditioned on reaching $h_1, \ldots, h_m$ respectively under the strategy to which the player at $I$ deviates from $s$. Then, $ \sum_{i = 1}^m p_i' R_i' > \sum_{i =1}^m p_i' R_i$. Let $\mu_I = (p_1, \ldots, p_{m})$ be the beliefs on $I$ under $s$ in $(V, \vec{P})$. We use the relationship between the distributions $\mu_I'$ and $\mu_I$, to show that such a deviation in $(V', \vec{P})$ would imply a deviation in $(V, \vec{P})$. In particular, mapping $\mu_I'$ back to $\mu_I$, using~Equation~\ref{probmap} we get: \begin{align} \sum_{i = 1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k \bigg)R_i' &> \sum_{i =1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k \bigg) R_i \notag \\%\label{ineq:mapping}\\ \sum_{i = 1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k \bigg) \cdot \min_{k \in S(i)} R_k' &> \sum_{i = 1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k \bigg) \cdot \max_{k \in S(i)} R_k\label{ineq:trans} \\ \sum_{i = 1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k R_k'\bigg) &> \sum_{i = 1}^m \bigg( \mathbb{I}_{i =f(S(i))} \cdot \sum_{k \in S(i)} p_k R_k\bigg)\notag\\ \sum_{i=1}^{m} p_i R_i' &>\sum_{i=1}^{m} p_i R_i\label{ineq:final} \end{align} Inequality~\ref{ineq:trans} holds because $R_{f(S(i))}' > R_{f(S(i))}$, and so the two payments lie in different intervals in the mapping (Equation~\ref{paymentmap}). Thus the minimum payment in the interval of $R_{f(S(i))}'$ will be greater than the maximum payment in the interval of $R_{f(S(i))}$. Finally, Inequality~\ref{ineq:final} contradicts the fact that $s$ was an SSE in $(V, \vec{P})$, achieving a contradiction for the case of reachable information sets. For unreachable information sets the argument is easy. If $I$ is unreachable under $s$ in $(V', \vec{P})$, then $I$ must be unreachable under $s$ in $(V, \vec{P})$. If the action of prover acting at $I$ is not his best response in $(V', \vec{P})$ for some history $h \in I$ then, it contradicts the fact that $s$ is an SSE of $(V, \vec{P})$. Now, suppose $s$ is not a dominant SSE of $(V', \vec{P})$. Then there exists a subgame $H_I$ of height $k$ such that $s$ is dominant on all subgames following $H_I$ of height $<k$ but not weakly-dominant at $H_I$ (among SSE's that are dominant at all subforms following $H_I$). Let $s^*$ be dominant on $H_I$, then the expected payment of at least one prover $P_j$ is better under $s^*$, while everyone else does just as well (given the beliefs at $I$ derived using Bayes' rule if $I$ is reachable under $s$ or given any beliefs if $I$ is unreachable under $s$). Writing out the expression of expected payment of $P_j$ conditioned on reaching $H_I$ and ``unfolding'' the probability distribution back to the original game, we get a contradiction that $s$ could not have been a dominant SSE of the original game, as the same strategy $s^*$ would give $P_j$ a better expected payment at $H_I$ while doing as well for other provers. The proof is similar to the above and we omit the details. \end{proof} \iffalse First, we prove that $s$ is an SSE of $(V', \vec{P})$. Suppose by contradiction that $s$ is not a SSE of $(V', \vec{P})$. Then there exists an information set $I$, such that, conditioned on reaching $I$, the prover acting at $I$ can improve his expected payment by deviating (given his belief $u_I'$ at $I$ if $I$ is reachable under $s$ and for any belief he may hold at $I$ if $I$ is unreachable under $s$). 
Writing out their expected payments, accounting for the probabilistic transformation between $V$ and $V'$, in both cases leads to a contradiction to the assumption that $s$ was an SSE in $(V, \vec{P})$. We then argue that a similar contradiction holds for proving that $s$ is a dominant SSE of $(V', \vec{P})$. \fi The following claim states that for a given $s$, the expected payments of the provers under $(V, \vec{P})$ and under $(V', \vec{P})$ are not too far off. This claim is one of the bullet points in \lemref{pruning}, and will be used to prove Claim~\ref{gapsame}. \begin{cclaim} \label{paydiff} For all $j \in \{1, \ldots, p(n)\}$, $\lvert {u}_j(x, s, (V, \vec P)) - {u}_j(x, s, (V', \vec P)) \rvert < 1/(4 \alpha(n))$. \end{cclaim} \begin{proof} Given input $x$ and strategy profile $s$, fix a~prover~$P_j$. Let $V$ generate payments ${R}_1, {R}_2, \ldots, {R}_m$ under $s$ for $P_j$, and assign the probability distribution $\mu = (p_1, p_2, \ldots, p_m)$ on ${R}_1, {R}_2, \ldots, {R}_m$ respectively. Using~Equations~(\ref{paymentmap}) and~(\ref{probmap}) we compare $P_j$'s expected payment: \begin{align*} &\lvert {u}_j(x,s, (V, \vec P)) - {u}_j(x,s, (V', \vec P)) \rvert = \bigg| \sum_{i=1}^m p_i {R}_i - \sum_{T_j \in \mathcal T} \bigg( \sum_{k \in T_j} p_k\bigg) {R}_{f(T_j)} \bigg| \\ \leq&\sum_{T_j \in \mathcal T} \sum_{k \in T_j} p_k \bigg( | {R}_{f(T_j)} - {R}_k |\bigg) < \sum_{T_j \in \mathcal T} \sum_{k \in T_j} p_k \bigg( \frac{1}{4 \alpha(n)}\bigg) = \bigg(\sum_{i=1}^m p_i \bigg) \frac{1}{4 \alpha(n)} =\frac{1}{4 \alpha(n)}\qedhere \end{align*} \end{proof} To complete the proof of~\lemref{pruning}, we show that $(V', \vec{P})$ preserves utility gap guarantees. \begin{cclaim}\label{gapsame} Given input $x$, if the answer bit under $s$ is wrong, then there exists a subform $H_I$ reachable under $s$ in $(V', \vec{P})$ and $P_j$ acting at $H_I$, such that $P_j$'s expected payment under~$s$ is at least $\frac{1}{2\alpha(n)}$ less than his expected payment under~$(s_{-I},s_I^*)$, where $s_{I}^*$ is a dominant SSE on $H_I$. \end{cclaim} \begin{proof} Consider a strategy profile $s^*$ that is a dominant SSE in the game tree of $(V, \vec{P})$. Since $s$ gives the wrong answer bit, from the $\alpha(n)$-utility gap guarantee of $(V, \vec{P})$ and~\defref{rewardgap}, there exists a subform $H_I$ reachable under $s$, such that a prover $P_j$ acting in $H_I$ loses more than $1/\alpha(n)$ in his expected payment under $s$ compared to the strategy profile~$(s_{-I},s_I^*)$. That is, \begin{equation}\label{claimineq:1} {u}_j(x, (s_{-I}, s_{I}^*), (V, \vec P)) - {u}_j(x, (s_{-I}, s_{I}), (V, \vec P)) > \frac{1}{\alpha(n)}. \end{equation} Using Claim~\ref{validprotocol}, $s^*$ also induces a dominant SSE in the game tree of $(V', \vec{P})$. And since $H_I$ is reachable under $s$ in $(V, \vec{P})$, it is reachable under $s$ in $(V', \vec{P})$ as well. We show that: \begin{equation}\label{claimineq:4} {u}_j(x, (s_{-I}, s_{I}^*), (V', \vec P)) - {u}_j(x, (s_{-I}, s_{I}), (V', \vec P)) > \frac{1}{2\alpha(n)}.
\end{equation} Using~Claim~\ref{paydiff}, prover $P_j$'s expected payments in the two protocols under~$s$ and $s^*$ follow: \begin{align} \lvert {u}_j(x, (s_{-I}, s_{I}^*), (V, \vec P)) - {u}_j(x, (s_{-I}, s_{I}^*), (V', \vec P)) \rvert &< \frac{1}{4 \alpha(n)}\label{claimineq:2}\\ \lvert {u}_j(x, (s_{-I}, s_{I}), (V, \vec P)) - {u}_j(x, (s_{-I}, s_{I}), (V', \vec P)) \rvert &< \frac{1}{4 \alpha(n)}\label{claimineq:3} \end{align} There are four cases depending on the sign of the left-hand side of Inequalities~(\ref{claimineq:2}) and~(\ref{claimineq:3}). We show that~Claim~\ref{gapsame} holds for one of the cases and omit the details of the others, which are similar. Suppose the left-hand side of both inequalities is positive, that is, $u_j(x, (s_{-I}, s_{I}^*), (V, \vec P)) > u_j( x, (s_{-I}, s_{I}^*), (V', \vec P))$, and $u_j(x, (s_{-I}, s_{I}), (V, \vec P)) >u_j(x, (s_{-I}, s_{I}), (V', \vec P))$. Then \begin{align*} &u_j(x, (s_{-I}, s_{I}^*), (V', \vec P)) - u_j(x, (s_{-I}, s_{I}), (V', \vec P))\\ &\qquad\qquad> \bigg(u_j(x, (s_{-I}, s_{I}^*), (V, \vec P)) - \frac{1}{4\alpha(n)}\bigg) - u_j(x, (s_{-I}, s_{I}), (V', \vec P))\\ &\qquad\qquad> \bigg(u_j(x, (s_{-I}, s_{I}), (V, \vec P)) + \frac{1}{\alpha(n)} \bigg) -\frac{1}{4\alpha(n)}- u_j(x, (s_{-I}, s_{I}), (V', \vec P)) > \frac{3}{4\alpha(n)}. \qedhere \end{align*} \end{proof} \iffalse Consider a strategy profile $s^*$ that is a dominant SSE in the game tree of $(V, \vec{P})$. Since $s$ gives the wrong answer bit, from the $\alpha(n)$-utility gap guarantee of $(V, \vec{P})$ and~\defref{rewardgap}, there exists a subform $H_I$ reachable under $s$, such that a prover $P_j$ acting in $H_I$ loses $1/\alpha(n)$ in his expected payment under $s$ compared to the strategy profile~$(s_{-I},s_I^*)$. Using Claim~\ref{validprotocol}, $s^*$ also induces a dominant SSE in the game tree of $(V', \vec{P})$. And since $H_I$ is reachable under $s$ in $(V, \vec{P})$, it is reachable under $s$ in $(V', \vec{P})$ as well. Finally, Claim~\ref{gapsame} follows by applying~Claim~\ref{paydiff} twice: once to show that payments under $V$ and $V'$ are similar under $s$, and once to show that the payments are similar under $(s_{-I},s_I^*)$. In the worst case this leads to a payment difference of $1/(4\alpha(n)) + 1/(4\alpha(n)) = 1/(2\alpha(n))$. \fi Using~\lemref{pruning}, given an $O(\alpha(n))$-gap ncRIP protocol (where $\alpha(n)$ is constant or polynomial), a polynomial-time oracle Turing machine can use its $\sf{NEXP}$ oracle to guess a strategy profile $s$, prune the verifier's Nature moves, and report the resulting $O(\alpha(n))$-support distribution bit-by-bit. Thus, it can simulate the new distribution and find the decision nodes that are reachable under $s$. \paragraph*{Searching through the strategy-profile space efficiently} The next question is: how should the polynomial-time Turing machine navigate the potential strategy-profile space (in polynomial time) to find the strategy profile that satisfies~Observation~\ref{gaprestate}? To cut down on the search space, we invoke a recurring idea: divide each prover's expected payment interval $[-1,1]$ evenly into $8 \alpha(n)$ \defn{subintervals} of length $1/(4\alpha(n))$, and consider \defn{subinterval profiles} (a tuple of subintervals, one for each prover).
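To fix ideas, here is a small Python sketch (ours, purely illustrative) of how an expected-payment profile determines its subinterval profile under this division.
\begin{verbatim}
import math

# A subinterval profile is the tuple of subinterval indices of the provers'
# expected payments, with 8*alpha(n) subintervals of [-1,1], each of length
# 1/(4*alpha(n)).
def subinterval_profile(payments, alpha_n):
    return tuple(math.floor(u * 4 * alpha_n) for u in payments)

# Two payment profiles lie in the same subinterval profile iff every
# coordinate rounds to the same index (alpha(n) = 1 gives 8 subintervals):
print(subinterval_profile([0.26, -0.90], alpha_n=1))   # (1, -4)
print(subinterval_profile([0.49, -0.76], alpha_n=1))   # (1, -4): same profile
\end{verbatim}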
\begin{lemma}\label{intervalsse} Given an input $x$ and an ncRIP protocol $(V, \vec{P})$ with $\alpha(n)$-utility gap, consider a subinterval profile $(L_1, \ldots, L_{p(n)})$, where each $L_i = [k/(4\alpha(n)), (k+1)/(4\alpha(n)))$ denotes a subinterval of prover $P_i$ in $[-1,1]$, for some $k \in \{-4\alpha(n), \ldots, 4\alpha(n)-1\}$. Let $s$ be an SSE that has an expected payment profile $\tilde{u}(x,s)$ such that $u_i(x,s) \in L_i$ for all $1 \le i \le p(n)$, and $s$ does not satisfy~Observation~\ref{gaprestate}. Then the expected payment profile $\tilde{u}(x,s^*)$ under a dominant SSE $s^*$ cannot lie in the same subinterval profile, that is, there exists a prover index $j$ such that $u_j(x, s^*) \notin L_j$. \end{lemma} \begin{proof} Since $s$ does not satisfy~Observation~\ref{gaprestate}, there exists a subform~$H_I$ reachable under $s$ and a prover~$P_j$ acting on $H_I$ such that the following holds. Without loss of generality, let $L_j = [k/(4\alpha(n)), (k+1)/(4\alpha(n)))$, so that $u_j(x,s) \geq k/(4\alpha(n))$. \begin{align*} &u_j (x, (s_{-I}, s_{I}^*), (V, \vec{P})) - u_j (x, (s_{-I}, s_{I}), (V,\vec{P})) > \frac{1}{\alpha(n)}\\ &u_j (x, s^* , (V, \vec{P})) > \frac{1}{\alpha(n)} +\frac{k}{4 \alpha(n)} \implies u_j (x, s^* , (V, \vec{P})) \notin L_j\qedhere \end{align*} \end{proof} Using Lemma~\ref{intervalsse}, if the polynomial-time Turing machine is able to test {\em any} SSE $s$ with $\tilde{u}(x,s)$ in a subinterval profile, for all subinterval profiles, then it is guaranteed to find one that satisfies~Observation~\ref{gaprestate}. This is because a dominant SSE of an ncRIP protocol is guaranteed to exist and its expected payment profile must belong to some subinterval profile. However, there are still $O(\alpha(n))$ subintervals for each prover, and thus $O(\alpha(n)^{p(n)})$ total subinterval profiles. A polynomial-time machine cannot test SSEs for each of them. To reduce the search space further, we show that it is sufficient to consider subintervals of the {total expected payment} rather than individual payments, and test an SSE $s$ for each of them. Recall that an SSE $s$ is weakly dominant if for any player~$i$ and SSE~$s'$, $u_i(s)\geq u_i(s')$. \begin{lemma}\lemlabel{maxsse} If a weakly-dominant SSE exists, then a strategy profile $s$ is a weakly-dominant SSE if and only if $s$ is an SSE and $s$ maximizes the sum of utilities of all players among all SSEs. \end{lemma} We are now ready to prove the upper bound for ncRIP classes with constant, polynomial, and exponential utility gap. \paragraph*{Constant utility gap} Using~\lemref{pruning} and~\lemref{maxsse}, simulating a constant-gap protocol using a $\sf{P^{\sf{NEXP}[O(1)]}}$ machine $M$ is straightforward. We give a high-level overview below. There are at most $O(1)$ subforms that are reachable under any strategy profile $s$, and the total expected payment of the provers conditioned on reaching these subforms will be in one of the $O(1)$ subintervals. Thus, there are $O(1)$~combinations of total expected payments on all subforms (including the whole game). $M$ queries its $\sf{NEXP}$ oracle whether there exists an SSE that achieves that combination of total expected payments on those subforms, for all combinations. \begin{lemma}\lemlabel{const-upper} $\sf{O}(1)\mbox{-}\sf{ncRIP} \subseteq \sf{P^{NEXP[O(1)]}}$. \end{lemma} \begin{proof} Given any $L \in \sf{O}(1)\mbox{-}\sf{ncRIP}$, let $(V, \vec{P})$ be an ncRIP protocol with $\alpha(n)$ utility gap for~$L$, where $\alpha(n)$ is a constant.
Given an input $x$ of length $n$, consider the following deterministic polynomial-time oracle Turing machine $M$ with access to an oracle $O$ for an $\sf{NEXP}$ language. Similar to the proof of~\lemref{poly-upper}, $M$ divides $[-1,1]$ into $8 \alpha(n)$ intervals, each of length $1/(4\alpha(n))$. In other words, the $i$th interval is $[i/(4\alpha(n)),(i+1)/(4\alpha(n)))$ for each $i \in \{-4\alpha(n), \ldots, 4\alpha(n)-1\}$.\footnote{To include $1$ as a possible reward, interval $4\alpha(n)-1$ should be closed on both sides; we ignore this for simplicity.} Using~\lemref{pruning}, given an input $x$, at most $8 \alpha(n)$ subforms are reached under any strategy profile $s$ in the modified game. The total expected payment of the provers acting within any subform (conditioned on reaching the subform) must lie in one of the $8\alpha(n)$ intervals in $[-1,1]$. Thus overall, there are $O(\alpha(n)^{\alpha(n)})$ combinations of total expected payments over subforms, which is still $O(1)$. Let $(u, u_{I_1}, \ldots, u_{I_k})$ be a tuple of total expected payments, where $k = 8 \alpha(n)$, the maximum number of subforms reachable under any $s$, and $u$ represents the total expected payment of the whole game, whereas $u_{I_j}$ represents the total expected payment of the provers acting in subform $I_j$ (conditioned on reaching $I_j)$. For each combination $(u, u_{I_1}, \ldots, u_{I_k})$, $M$ queries $O$: {\em does there exist a strategy profile $s$ that is an SSE and whose total expected payments over the reachable subforms, under the $O(\alpha(n))$-support Nature moves imposed by~\lemref{pruning}, are $(u, u_{I_1}, \ldots, u_{I_k})$ (conditioned on reaching the subforms)?} Among the queries to which the oracle's answer is ``yes'', $M$ finds the combination that achieves maximum total expected payment for all subforms. Such a combination is guaranteed to exist because $(V, \vec{P})$ is an ncRIP protocol, and a dominant SSE of the game exists. \end{proof} \begin{remark} \label{rem:nonadaptiveconst} The polynomial-time oracle Turing machine in~\lemref{const-upper} can issue all its queries {\em non-adaptively}. That is, $\sf{O}(1)\mbox{-}\sf{ncRIP} \subseteq \sf{P_{||}^{NEXP[O(1)]}}$. Furthermore, in~\secref{lower} we show that $\sf{P^{NEXP[O(1)]}} \subseteq \sf{O}(1)\mbox{-}\sf{ncRIP}$. Indeed, the two classes are equal: $\sf{P^{NEXP[O(1)]}_{||}}= \sf{P^{NEXP[O(1)]}}$. Since $\sf{O}(1)\mbox{-}\sf{MRIP} = \sf{P_{||}^{NEXP[O(1)]}}$~\cite{ChenMcSi16,CMS17arxiv}, this shows that cooperative provers are as powerful as non-cooperative provers under constant utility-gap guarantees, and we obtain \corref{c-mrip-ncrip}. \end{remark} \paragraph*{Polynomial utility gap} Next, we prove the upper bound for the case of polynomial utility gap. We note that the simple strategy of querying all possible payment combinations as in~\lemref{const-upper} does not work (there are $O(\alpha(n)^{\alpha(n)})$ total combinations). To simulate a polynomial-utility gap ncRIP protocol $(V, \vec{P})$, using a $\sf{P^{NEXP}}$~machine $M$, we put to use all the structure we have established in this section. For each of the $O(\alpha(n))$ total payment subintervals of the interval $[-1,1]$ that correspond to an SSE, $M$ does a recursive search to find an exact total expected payment $u(x,s)$ that is generated by an SSE; a small sketch of this search is given below. (We can restrict ourselves to $O(\alpha(n))$ oracle queries due to \lemref{maxsse}.)
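Before the formal proof, here is a minimal Python sketch of that recursive search. The oracle predicate is an assumption of the sketch, standing in for the $\sf{NEXP}$ queries spelled out below; in the actual proof the search terminates once the payment, which has polynomially many bits, is determined exactly.
\begin{verbatim}
# Interval-halving search for the total expected payment of some SSE.
# sse_exists_in(lo, hi) models the NEXP query "is there an SSE whose total
# expected payment lies in [lo, hi)?" -- a hypothetical stand-in for oracle O.
def find_sse_total_payment(lo, hi, bits, sse_exists_in):
    # Precondition: some SSE has total expected payment in [lo, hi).
    for _ in range(bits):                  # polynomially many adaptive queries
        mid = (lo + hi) / 2.0
        if sse_exists_in(lo, mid):
            hi = mid                       # recurse on the first half ...
        else:
            lo = mid                       # ... otherwise on the second half
    return lo

# Toy usage: pretend the only SSE totals are 0.125 and 0.8; search [0, 0.25).
totals = [0.125, 0.8]
oracle = lambda lo, hi: any(lo <= t < hi for t in totals)
print(find_sse_total_payment(0.0, 0.25, bits=20, sse_exists_in=oracle))  # 0.125
\end{verbatim}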
In particular, $M$ queries the $\sf{NEXP}$ oracle: {\em Does there exist an SSE with total expected payment in the first half of the $i$th interval?} If the answer is {\em yes}, then $M$ recurses on the first half of the $i$th interval; $M$ does not need to search the second half by Lemma~\ref{intervalsse}. Otherwise (if the answer is {\em no}), $M$ recurses on the second half. Thus, in polynomial time and with polynomially many queries, $M$ can find an exact $u(x,s)$ for an SSE $s$ in the subinterval using the power of its {\em adaptive} queries. Next, $M$ simulates the protocol $(V, \vec{P})$ with the help of the oracle, under the SSE $s$ for a given subinterval. \lemref{pruning} is crucial for $M$ to simulate the verifier's moves, because $V$ in general can induce exponential-size distributions. $M$ traverses the tree reachable under $s$ ``top-down'' using the oracle to learn the pruned distributions and provers' moves. Finally, $M$ goes ``bottom-up'' to test whether $s$ satisfies~Observation~\ref{gaprestate} on all its reachable subforms. \begin{lemma}\lemlabel{poly-upper} $\sf{poly}(n)\mbox{-}\sf{ncRIP} \subseteq \sf{P^{NEXP}}$. \end{lemma} \begin{proof} Given any $L \in \sf{poly}(n)\mbox{-}\sf{ncRIP}$, let $(V, \vec{P})$ be an ncRIP protocol with $\alpha(n)$ utility gap for~$L$, where $\alpha(n)= n^k$ for some constant $k$. Given an input $x$ of length $n$, consider the following deterministic polynomial-time oracle Turing machine $M$ with access to an oracle $O$ for an $\sf{NEXP}$ language. $M$ divides $[-1,1]$ into $8 \alpha(n)$ intervals, each of length $1/(4\alpha(n))$. In other words, the $i$th interval is $[i/(4\alpha(n)),(i+1)/(4\alpha(n)))$ for each $i \in \{-4\alpha(n), \ldots, 4\alpha(n)-1\}$.\footnote{To include $1$ as a possible reward, interval $4\alpha(n)-1$ should be closed on both sides; we ignore this for simplicity.} For each interval $[i/(4\alpha(n)),(i+1)/(4\alpha(n)))$, $M$ makes the following queries to $O$: {\em does there exist a strategy profile ${s}$ that is an SSE and the sum of expected payments of all provers $u(x, s)$ is in the $i$th interval?} Let $\mathcal{L}$ denote the set of intervals for which the answer to the query is ``yes''. For each interval $[\ell/(4\alpha(n)), (\ell+1)/(4\alpha(n))) \in \mathcal{L}$, $M$ queries $O$: {\em does there exist a strategy profile $s$ that is an SSE and the sum of expected payments of all provers $u(x, s)$ is in the first half of the $\ell$th interval?} If the answer is ``yes'', then $M$ recurses on the first half, else $M$ recurses on the second half of the interval. In polynomial time and with polynomially many queries, $M$ can find the exact total expected payment $u(x, s, (V, \vec{P}))$ in the interval that is generated by an SSE. $M$ asks further queries to figure out the exact payment profile under such an SSE. For each $q \in \{1,\ldots, p(n)\}$, where $p(n)$ is the total number of provers in $(V, \vec{P})$, and for each $j \in \{1, \ldots, n^{k'}\}$, where $n^{k'}$ is the running time of $V$ ($k'$ is a constant), $M$ asks the following queries adaptively: {\em under an SSE where $\sum_{i=1}^{p(n)} \mu_i (x,s) = u(x, s)$, what is the $j$th bit in the expected payment $\mu_q(x, s)$ of prover $P_q$, given the first $j-1$ bits of $\mu_q(x,s)$ and $\mu_1(x,s), \ldots, \mu_{q-1}(x,s)$}. In $O(n^{k'} p(n))$ queries, $M$ can figure out the exact payment profile $\tilde{u}(x,s)=(\mu_1(x, s), \ldots, \mu_{p(n)} (x,s))$ under an SSE $s$, such that the total expected payment is in the $\ell$th interval.
$M$ now verifies whether the SSE corresponding to the payment profile $\tilde{u}(x,s)$ satisfies the condition of Observation~\ref{gaprestate}. $M$ proceeds in two phases: first, $M$ wants to go ``top-down'' figuring out what part of the game tree is being played under $s$ on input $x$, using the oracle to simulate the provers and the verifier. Then, it goes ``bottom-up'' in the tree being played under $s$, to check whether all subforms are ``$(1/\alpha(n))$-close'' to the dominant strategy at that subform. \paragraph*{Top-down phase.} Let $k(n)$ be the total number of rounds in $(V, \vec{P})$. Note that $k(n)$ is polynomial in $n$. Let $m_{ij}$ denote the message sent by prover $P_i$ at round $j$. Then, for each round $j$ and each prover $i$ where $1\le j \le k(n)$ and $1 \le i \le p(n)$, $M$ first asks the oracle to give the ``pruned'' $O(\alpha(n))$-support distribution imposed by the Nature move of $V$ at round $j$ bit by bit as follows: {\em ``under an SSE where the expected payment profile is $\tilde u (x,s)$, what is the $r$th bit of the distribution imposed by $V'$~using $V$ and~\lemref{pruning}?''} This requires a polynomial number of bits (and therefore queries) because the distribution is polynomial sized. The pruned distribution preserves the dominant SSE and changes the utility gap by only a factor $2$ (this factor does not affect the proof as our intervals are scaled down to handle it). Given this distribution, $M$ simulates $V$ on the support of the distribution to figure out the messages that $V$ sends to the provers in round $j$. In particular, $M$ does not have access to random bits, so instead it simulates \emph{every} action of $V$ in the support. To simulate the provers at round $j$, $M$ similarly queries $O$ bit by bit: {\em ``under an SSE where the expected payment profile is $\tilde u (x,s)$, what is the $r$th bit of the message sent by $P_i$''}. Thus, after simulating the moves of $V$ and $\vec{P}$ under $s$, $M$ has sketched out the $O(\alpha(n))$-size part of the game tree being played under $s$ corresponding to $\tilde u (x,s)$. \paragraph*{Bottom-up phase.} Given the $O(\alpha(n))$ nodes of the game tree under play, $M$ can mark out the subforms reachable under $s$ corresponding to $\tilde{u}(x,s)$. Going from the last level up, for each subform $H_I$ reachable under $s$, $M$ uses the oracle to figure out which payment interval the expected payments of the weakly-dominant SSE on $H_I$ lie in (given the expected weakly-dominant SSE payments on the reachable subforms verified so far), until it finds a subform that violates the condition of Observation~\ref{gaprestate}. In particular, for each subform $H_I$ of height $k$, let $\tilde{u}(x, s, I')$ denote the tuple of total expected payments under $s$ on all subforms $H_{I'}$ of height $<k$ following $I$ (conditioned on reaching $I$) verified so far. $M$ divides the interval $[-1,1]$ into $8 \alpha(n)$ intervals of length $1/(4\alpha(n))$ as before and for each interval queries the oracle $O$: {\em does there exist a strategy profile ${s_I}$ on subform $H_I$ that is an SSE and the sum of expected payments of all provers $u(x, s, I)$ is in that interval, and achieves total expected payments on the subforms $H_{I'}$ of height $<k$ following $I$ equal to $\tilde{u}(x, s, I')$}.\footnote{$M$ does not {\em need} to send the total expected payments of the subforms at lower levels. Instead, $M$ can just send the total expected payment $u(x, s)$ at the root and ask $O$ to guess $s$ as well.
An $\sf{NEXP}$ machine can verify if one SSE weakly dominates another. This observation is crucial in extending this proof to exponential utility gap.} Then, $M$ finds the maximum interval $[i/(4\alpha(n)), (i+1)/(4\alpha(n)))$ among the intervals for which the oracle says yes. By \lemref{maxsse}, the weakly-dominant SSE $s_I^{\mbox{max}}$ at $H_I$ also lies in the $i$th interval. Using the probability $p_I$ assigned by $H_I$ ($M$ knows the distribution imposed by all ``pruned'' Nature moves), $M$ checks whether the total expected payment of the weakly-dominant SSE $s_I^{\mbox{max}}$ is in the same interval as the sum of expected payments of provers in $Z_I$ under $s$. If it is not, then $s$ fails the test and $M$ continues to the next interval in $\mathcal{L}$. Otherwise, $M$ continues to the next reachable subform. If $s$ passes the test for all subforms (including at the root), then by Observation~\ref{gaprestate}, the answer bit under $s$ is correct. $M$'s final query to $O$ is: {\em ``under an SSE where the expected payment profile is $\tilde u (x,s)$, what is the answer bit $c$?''} If $c=1$, then $M$ accepts $x$, otherwise $M$ rejects $x$. $M$ is guaranteed to find a payment profile $\tilde u(x,s)$ (and thus a strategy profile $s$) that passes the test. Since $(V, \vec{P})$ is an ncRIP protocol for $L$, there exists a dominant SSE $s^*$ in some interval in $\mathcal{L}$. By Observation~\ref{gaprestate}, if a strategy profile $s'$ fails the test, the dominant SSE cannot get a total expected payment in the same interval as $s'$. Thus, we can rule out intervals by checking any SSE with total expected payment in that interval. Since a dominant SSE $s^*$ exists, $M$ must eventually find an interval where the corresponding SSE passes the test. To complete the proof, we note that (a) $M$ runs in polynomial time, (b) each query to the oracle has polynomial length, and (c) the oracle queries can be answered in non-deterministic exponential time. First, (a) holds because each top-down and bottom-up phase is executed $O(\alpha(n))$ times and each of the phases takes polynomial time. In the top-down phase, $M$ simulates the protocol on strategy $s$ using the oracle while restricting the verifier's Nature moves to be of $O(\alpha(n))$ support. Thus this phase takes polynomial time. For the bottom-up phase, $M$ finds weakly-dominant SSEs at each reachable subform under $s$. Since there are at most $O(\alpha(n))$ subforms and at most $O(\alpha(n))$ interval queries for each subform, the bottom-up phase takes time polynomial in $n$. Second, (b) holds because each oracle query involves a total expected payment $\tilde{u} (x,s)$ or an interval of length $1/(4\alpha(n))$, both of which can be generated by $V$ and hence have length polynomial in $n$. To prove (c), it is sufficient to show that an $\sf{NEXP}$ machine can guess a strategy profile and verify if it is an SSE and if it gets expected payments in a certain interval. Since the transcript of any ncRIP protocol is polynomial in $n$, a strategy profile $s$ of the provers can be represented in exponentially many bits, and thus $O$ can guess such an $s$. Now given $s$ and the protocol $(V, \vec{P})$, by~\lemref{verify-sse}, it is possible to verify whether ${s}$ is an SSE of the game in time linear in the size of the game tree, and thus exponential in $n$. Furthermore, the oracle can compute the expected payments of the provers under ${s}$ in exponential time as well, which is sufficient to answer all the queries made by $M$.
\end{proof} \paragraph*{Exponential utility gap} We conclude by giving a tight upper bound on the class of ncRIP protocols with exponential utility gaps. The proof follows immediately from that of~\lemref{poly-upper}. In fact, it is simpler as the exponential-time Turing machine is powerful enough to (a) simulate $V$'s Nature moves directly, and (b) test all possible payment profiles. Thus, in the case of exponential utility gap, we do not need~\lemref{pruning} or the notion of subintervals. \begin{lemma}\lemlabel{exp-upper} $\sf{ncRIP} \subseteq \sf{EXP^{poly-NEXP}}$. \end{lemma} \begin{remark}\label{rem:nonadaptiveexp} Since $\sf{EXP^{poly-NEXP}} \subseteq \sf{EXP_{||}^{poly-NEXP}} = \sf{EXP_{||}^{NP}}$, and $\sf{EXP_{||}^{NP}} \subseteq \sf{MRIP}$~\cite{ChenMcSi16}, \lemref{exp-upper} shows that $\sf{exp}(n)\mbox{-}\sf{ncRIP} \subseteq \sf{exp}(n)\mbox{-}\sf{MRIP}$ and using~\lemref{mrip-ncrip}, we get that in general the two classes coincide. In other words, non-cooperative rational proofs are as powerful as cooperative multi-prover rational proofs under exponential utility gap, and we obtain \corref{exp-mrip-ncrip}. \end{remark} \section{Additional Related Work}\seclabel{additionalrelated} \paragraph*{Rational Proofs} The model of single-prover \defn{rational interactive proofs} (RIP) was introduced by Azar and Micali~\cite{azar2012rational}, who used scoring rules as the main tool to construct simple and efficient RIP protocols. In a follow-up work~\cite{azar2013super}, they extended this work to design super-efficient rational proofs that have sublinear verification and computation complexity. Guo et al. present rational \defn{arguments} for a computationally bounded prover and a sublinear verifier in~\cite{guo2014rational}, and construct rational arguments for all languages in $\sf{P}$~\cite{guo2016rational}. Campanelli and Gennaro~\cite{campanelli2015sequentially} study sequentially composable rational proofs and rational proofs for space-bounded computations~\cite{campanelli2017efficient}, while Zhang and Blanton~\cite{zhang2014efficient} design protocols to outsource matrix multiplications to a rational cloud. The model of \defn{multi-prover (cooperative) rational interactive proofs} (MRIP) was introduced by Chen et al.~\cite{ChenMcSi16}. In this model, the provers work together to maximize their \emph{total payment}. They show that the class equals $\sf{EXP_{||}^{NP}}$ under exponential utility gap and $\sf{P_{||}^{NEXP}}$ under polynomial utility gap. In the full version~\cite{CMS17arxiv}, they show that MRIP under constant utility gap is equal to $\sf{P_{||}^{NEXP[O(1)]}}$. In follow-up work~\cite{ChenMcSi18}, the authors scale down the power of the verifier and design super-efficient MRIP protocols with strong utility-gap guarantees. \paragraph*{Game-Theoretic Characterization of Complexity Classes} Game-theoretic characterization of complexity classes has largely been studied in the form of \defn{refereed games}~\cite{chandra1976alternation, feige1990noisy, feige1997making, feige1992multi, reif1984complexity, feigenbaum1995game,koller1992complexity}. Chandra and Stockmeyer~\cite{chandra1976alternation} show that any language in $\sf{PSPACE}$ is refereeable by a game of perfect information. Feige and Kilian~\cite{feige1997making} show that the class of languages decidable by refereed games with imperfect information and perfect recall is exactly $\sf{EXP}$.
Feigenbaum, Koller and Shor~\cite{feigenbaum1995game} show that if provers are allowed to have imperfect recall (essentially acting as oracles), refereed games can simulate $\sf{EXP^{NP}}$. \paragraph*{Query Complexity and Related Complexity Classes} The query complexity of oracle Turing machines has been widely studied in the literature~\cite{beigel1991bounded,wagner1990bounded,buhrman1999quantum}. In this paper, we give game-theoretic characterizations of the classes $\sf{P^{NEXP[O(1)]}}$, $\sf{P^{NEXP}}$, and $\sf{EXP^{poly-NEXP}}$. \section{Properties of Strong Sequential Equilibrium} \seclabel{strongse} In this section, we prove several important properties of strong sequential equilibrium, which make it a good candidate solution concept in designing extensive-form mechanisms. \paragraph*{Strong sequential equilibrium admits a sequential equilibrium} We first show that, given a strategy profile ${s}$ that is a strong sequential equilibrium (thus does not rely on a belief system), we can construct a belief system ${\mu}$ such that the pair $({s}, {\mu})$ forms a sequential equilibrium. \begin{lemma}\lemlabel{lem:SE} For any strategy profile ${s}$ that is a strong sequential equilibrium, there exists a belief system ${\mu}$ such that $({s}, {\mu})$ is a sequential equilibrium. \end{lemma} \begin{proof} The sequential-rationality requirement will follow easily from the definition of SSE. To prove that $s$ admits a sequential equilibrium, the key is to pair it with a consistent belief system; see~\secref{ncmrip} for the definition. Indeed, we construct a belief system $\mu$ and show that there exists a sequence of pairs $(s^\varepsilon, \mu^\varepsilon)_{\varepsilon\rightarrow 0}$ which converges to $(s,\mu)$ as $\varepsilon$ goes to $0$, where each $s^\varepsilon$ is a profile of completely mixed behavioral strategies and each $\mu^\varepsilon$ is the belief system derived from $s^\varepsilon$ using Bayes' rule. Recall that a strategy profile ${s}$ defines a probability distribution over the actions available to a player at an information set where he acts. That is, for each information set $I_i$ of a player $i$, $s_i(I_i)$ is a probability distribution over $A(I_i)$, the set of actions available to player $i$ at $I_i$. In particular, if $A(I_i) = (a_1, \ldots, a_k)$, then $s_i(I_i) = (p_i(a_1), \ldots, p_i(a_k))$ where $p_i(a_\ell)$ is the probability that player~$i$ chooses action $a_\ell$ at $I_i$. Let $A^+(I_i)$ and $A^0(I_i)$ be the sets of actions at information set $I_i$ that player $i$ chooses with positive probability and zero probability respectively; that is, $A^+(I_i) = \{a_\ell \in A(I_i) \mbox{ $|$ } p_i(a_\ell) >0\}$ and $A^0(I_i) = A(I_i) \setminus A^+(I_i)$. For any $\varepsilon\in (0, 1)$, we define $s^\varepsilon_i$ for player $i$ at information set $I_i$ as follows: if $A^0(I_i) = \emptyset$ then $s^\varepsilon_i(I_i) = s_i(I_i)$; otherwise, \[ s^\varepsilon_i(I_i)(a_\ell) = \left\{ \begin{array}{ll} (1-\varepsilon) \cdot p_i(a_\ell) & \mbox{for each $a_\ell \in A^+(I_i)$}; \\ \frac{\varepsilon}{|A^0(I_i)|} & \mbox{for each $a_\ell\in A^0(I_i)$}. \end{array} \right. \] By construction, $s^\varepsilon_i(I_i)$ is a valid probability distribution over $A(I_i)$ and is completely mixed, that is, it assigns a positive probability to every action in $A(I_i)$.
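For concreteness only (this adds nothing to the argument), a minimal Python sketch of the perturbation just defined; the dictionary maps each action in $A(I_i)$ to its probability under $s_i(I_i)$, and the helper name is ours.
\begin{verbatim}
# Completely mixed eps-perturbation of a behavioral strategy at one
# information set: positive-probability actions are scaled by (1 - eps) and
# the remaining eps is split evenly among the zero-probability actions.
def perturb(dist, eps):
    zero = [a for a, p in dist.items() if p == 0.0]
    if not zero:
        return dict(dist)                  # already completely mixed
    return {a: (1 - eps) * p if p > 0 else eps / len(zero)
            for a, p in dist.items()}

# Example: a pure strategy on three actions, perturbed with eps = 0.01.
print(perturb({"a1": 1.0, "a2": 0.0, "a3": 0.0}, 0.01))
# -> {'a1': 0.99, 'a2': 0.005, 'a3': 0.005}  (sums to 1 and is fully mixed)
\end{verbatim}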
Indeed, because $\sum_{\ell = 1}^k p_i(a_\ell) = \sum_{a_\ell \in A^+(I_i)} p_i(a_\ell) = 1$, when $A^0(I_i)\neq \emptyset$ we have $\sum_{a_\ell \in A(I_i)} s^\varepsilon_i(I_i)(a_\ell) = \sum_{a_\ell \in A^+(I_i)} (1-\varepsilon) p_i(a_\ell) + \varepsilon = 1$. It is easy to see that $s^\varepsilon_i$ converges to $s_i$ when $\varepsilon \rightarrow 0$. Given the strategy profile $s^{\varepsilon}$, to define $\mu_i^\varepsilon$, the belief system of a player $i$, consider an arbitrary information set $I_i$ where player $i$ acts. The probability that a particular history $h= (a^1, \ldots, a^K)\in I_i$ occurs can be derived from $s^\varepsilon$ as follows. For any history $h'=(a^1, \ldots, a^w)$ with $0\leq w\leq K-1$, recall that $Z(h')$ is the player acting following history $h'$. For any action $a \in A(h')$, let $s_{Z(h')}^\varepsilon (h')(a)$ denote the probability assigned by $s_{Z(h')}^\varepsilon$ to action $a$ at history $h'$ (i.e., at the information set containing $h'$). We have \[ \prob{h \mbox{ occurs under } s^\varepsilon} = \prod_{w=0}^{K-1} s_{Z(a^1, \ldots, a^w)}^\varepsilon(a^1, \ldots, a^w)(a^{w+1}) = c_h \varepsilon^{e_h} (1-\varepsilon)^{f_h}, \] where $c_h, e_h$ and $f_h$ are positive constants depending on $s$ and $h$, but not on $\varepsilon$. In particular, letting $S^0$ be the set of actions $a^{w+1}$ in $h$ that are assigned zero probability by $s_{Z(h')}$ at history $h' = (a^1,\dots, a^{w})$, we have $e_h = |S^0|$. $f_h$ is the number of actions $a^{w+1}$ in $h$ such that $a^{w+1}$ is not in $S^0$ but $s_{Z(h')}$ is not completely mixed at $h'$ either. Finally, \[ c_h = \prod_{\substack{0\leq w\leq K-1\\ a^{w+1}\notin S^0}} s_{Z(a^1,\dots, a^w)}(a^1,\dots, a^w)(a^{w+1}) ~\cdot~ \prod_{\substack{0\leq w\leq K-1\\ a^{w+1}\in S^0}} \frac{1}{|A^0(a^1,\dots, a^w)|}, \] where the second term is defined to be 1 if $S^0 = \emptyset$. Note that $\prob{h \mbox{ occurs under } s^\varepsilon}>0$ for every $h\in I_i$. The probability that the information set $I_i$ is reached under $s^\varepsilon$ is $\mathcal{P}(I_i) \triangleq \sum_{h \in I_i} \prob{h \mbox{ occurs under } s^\varepsilon} = \sum_{h\in I_i} c_h \varepsilon^{e_h} (1-\varepsilon)^{f_h}>0$. Then $\mathcal{P}(I_i)$ can be written as a polynomial in $\varepsilon$, that is, $\mathcal{P}(I_i) = b_0 + b_1 \varepsilon + b_2 \varepsilon^2 + \ldots + b_r \varepsilon^r$, where the coefficients $b_0, \ldots, b_r$ may be zero, positive or negative. Following Bayes' rule, for any history $h\in I_i$, $$\mu^\varepsilon_i(I_i)(h) = \frac{c_h \varepsilon^{e_h} (1-\varepsilon)^{f_h}}{\mathcal{P}(I_i)} = \frac{c_h \varepsilon^{e_h} (1-\varepsilon)^{f_h}}{ b_0 + b_1 \varepsilon + b_2 \varepsilon^2 + \ldots + b_r \varepsilon^r}>0.$$ To define the belief system $\mu$, let $d$ be the minimum degree of $\varepsilon$ in $\mathcal{P}(I_i)$ such that $b_d\neq 0$. As the minimum degree of $\varepsilon$ in each term $c_h \varepsilon^{e_h} (1-\varepsilon)^{f_h}$ is $e_h$ with coefficient $c_h>0$, we have $d = \min_{h\in I_i} e_h$ and $b_d = \sum_{h\in I_i, e_h=d} c_h >0$. For any $h\in I_i$, we define $\mu_i(I_i)(h) = {c_h}/{b_d}(>0)$ if $e_h =d$, and $\mu_i(I_i)(h) = 0$ if $e_h > d$. It is easy to see that $\mu_i(I_i)$ is a probability distribution over $I_i$. Moreover, $\lim_{\varepsilon \rightarrow 0} \mu^\varepsilon_i(I_i)(h) = c_h/b_d$ when $e_h = d$, and $\lim_{\varepsilon \rightarrow 0} \mu^\varepsilon_i(I_i)(h) = 0$ when $e_h>d$. 
Thus, $\lim_{\varepsilon \rightarrow 0} \mu^\varepsilon_i(I_i)(h) = \mu_i(I_i)(h)$ for any player $i$, information set $I_i$ of $i$ and history $h\in I_i$, and $\mu^\varepsilon$ converges to $\mu$ as $\varepsilon\rightarrow 0$. Since $s^{\varepsilon}$ converges to $s$ as we have seen, $s$ and $\mu$ are consistent. For sequential rationality, the only thing we need to show is that, at a reachable information set, the belief specified by $\mu$ is derived from $s$ using Bayes' rule. To do so, consider an arbitrary player $i$ and an information set $I_i$ of $i$ that is reachable by $s$. By definition, there exists $h\in I_i$ such that $e_h = 0$, thus $d = 0$ for $\mathcal{P}(I_i)$ and $b_0 = \sum_{h\in I_i, e_h=0} c_h$. Therefore $\mu_i(I_i)$ is indeed the probability distribution derived from $s$ using Bayes' rule. Sequential rationality of $s$ (with respect to $\mu$) then follows from the definition of SSE. Thus $({s}, \mu)$ is a sequential equilibrium. \end{proof} \paragraph*{Alternate definition of strong sequential equilibrium} The notion of strong sequential equilibrium requires that at any unreachable information set, regardless of the belief the acting player holds at that set, his action should be a best response to that belief and the other players' strategies. We now give an equivalent definition of SSE, which says that a player's strategy at an unreachable information set should be optimal following {\em every history} in that information set. This definition is more convenient when proving that a strategy profile is an SSE. \begin{definition} \deflabel{strong-se-2} A strategy profile $s$ is a {\em strong sequential equilibrium} if for every player $i$ and information set $I_i$ of $i$, we have: \begin{itemize}[noitemsep,nolistsep,leftmargin=*] \item {\bf At reachable information sets $I_i$:} conditional on $I_i$ being reached, player $i$'s strategy $s_i$ is a best response to $s_{-i}$, given that $i$'s beliefs at $I_i$ are derived from $s$ using Bayes' rule. \item {\bf At unreachable information sets $I_i$:} for every history $h\in I_i$, conditional on $I_i$ being reached, player $i$'s strategy $s_i$ is a best response to $s_{-i}$, given $i$'s belief that he is at $h$ with probability 1. \end{itemize} \end{definition} We now prove the equivalence of the two definitions of SSE in the following lemma. Without loss of generality, we assume that $s$ is a profile of pure strategies. \begin{lemma} \lemlabel{unreachable} For any strategy profile ${s}$, any player $i$ and information set $I_i$ of $i$ that is not reached with positive probability under ${s}$, conditional on $I_i$ being reached, $s_i$ is a best response to $s_{-i}$ with respect to all possible beliefs that player $i$ may hold at $I_i$ if and only if for every history $h\in I_i$, $s_i$ is a best response to $s_{-i}$ given $i$'s belief that he is at $h$ with probability 1. \end{lemma} \begin{proof} The ``only if'' part is immediate, because for any history $h\in I_i$, ``at $h$ with probability~1 (and any other history with probability 0)'' is a specific belief that $i$ may hold at $I_i$. The ``if'' part is also easy to show. Suppose that $s_i$ is a best response to $s_{-i}$ conditional on every history $h \in I_i$ (i.e., at $h$ with probability 1). To show that $s_i$ is a best response to $s_{-i}$ conditional on all possible beliefs player $i$ may hold at information set $I_i$, arbitrarily fix a belief $\mu_i(I_i)$ over $I_i$ and a strategy $s_i'$. 
Let $I_i = \{h_1, h_2, \ldots, h_m\}$ and $\mu_i(I_i) = (\mu_i(I_i)(h_1), \mu_i(I_i)(h_2), \ldots, \mu_i(I_i)(h_m))$, where $\mu_i(I_i)(h_k)$ is the probability with which player $i$ believes that history $h_k$ occurs conditional on $I_i$ being reached. Then player $i$'s expected utilities under $s_i$ and $s_i'$, respectively, conditioned on $I_i$, $\mu_i(I_i)$ and $s_{-i}$, are \[ u_i(s_i, s_{-i}|\mu_i(I_i)) = \sum_{k =1}^m \mu_i(I_i)(h_k) \cdot u_i( s_i, s_{-i} | h_k) \mbox{ and } u_i(s'_i, s_{-i}|\mu_i(I_i)) = \sum_{k =1}^m \mu_i(I_i)(h_k) \cdot u_i( s_i', s_{-i} | h_k),\] where $u_i( s_i, s_{-i} | h_k)$ is player $i$'s utility under $(s_i, s_{-i})$, conditioned on history $h_k$ being reached at $I_i$. Since $s_i$ is a best response to $s_{-i}$ at every $h_k \in I_i$, we have $u_i(s_i, s_{-i} | h_k) \geq u_i(s'_i, s_{-i} |h_k) \ \forall k \in \{1, \ldots, m\}.$ Thus $u_i(s_i, s_{-i}|\mu_i(I_i)) \geq u_i(s'_i, s_{-i}|\mu_i(I_i))$ and the ``if'' part holds. \end{proof} \paragraph*{One-shot deviation for strong sequential equilibrium} Informally, the one-shot deviation principle says that a player cannot change his action at a single information set (without changing the rest of his strategy) and improve his expected reward. In the context of sequential equilibrium, it is well known that given a consistent belief system $\mu$, $(s, \mu)$ is a sequential equilibrium if and only if the {\em one-shot deviation principle holds}, that is, no player $i$ has an information set $I_i$ at which a change in $s_i(I_i)$---holding the remainder of $s_i$ fixed---increases his expected utility conditional on reaching $I_i$~\cite{osborne1994course, hendon1996one}. Since strong sequential equilibrium does not require an artificial notion of beliefs for unreachable information sets, we define a stronger notion of one-shot deviation at those information sets---for every decision node (i.e., history) in an unreachable information set of player $i$, there does not exist a one-shot deviation at that node which improves player $i$'s utility conditional on that node being reached. Note that at reachable information sets, both the definition and proof of the one-shot deviation condition for SSE are exactly the same as in SE~\cite{hendon1996one}. \begin{lemma}[One-shot deviation for strong sequential equilibrium] \lemlabel{one-shot} For any strategy profile~${s}$, ${s}$ is a strong sequential equilibrium if and only if it satisfies the following {\em one-shot deviation principle}: For every player $i$ and every information set $I_i$ of $i$, \begin{itemize}[noitemsep,nolistsep,leftmargin=*] \item {\bf If $I_i$ is reachable under ${s}$:} there does not exist a change in $s_i(I_i)$ (holding the rest of $s_i$ fixed) that increases player $i$'s expected utility conditional on reaching $I_i$, given his belief at $I_i$ derived using Bayes' rule. \item {\bf If $I_i$ is unreachable under ${s}$:} for every history $h \in I_i$, there does not exist a change in $s_i(I_i)$ (holding the rest of $s_i$ fixed) that increases player $i$'s expected utility conditional on reaching $h$. \end{itemize} \end{lemma} \begin{proof} The ``only if'' part follows immediately from \defref{strong-se-2} and the fact that a one-shot deviation results in a different strategy for the deviating player. We now prove the ``if'' part, that is, if $s$ satisfies the one-shot deviation principle then it is a strong sequential equilibrium. 
{\bf Reachable information sets.} First of all, similar to the proof of \lemref{lem:SE}, we can construct a belief system $\mu$ such that $s$ and $\mu$ are consistent. Indeed, the construction of $\mu$ only depends on the actions taken by $s$ and does not depend on the utilities induced by $s$ at all. Since $s$ satisfies the one-shot deviation principle at every reachable information set and at every history in each unreachable information set, it is not hard to see that $s$ satisfies the one-shot deviation principle with respect to $\mu$. Thus $(s, \mu)$ is a sequential equilibrium. Accordingly, for any player $i$ and information set $I_i$ of $i$ that is reachable by $s$, $s_i$ is a best response to $s_{-i}$ conditional on $\mu_i(I_i)$ (which is derived from $s$ using Bayes' rule at $I_i$), as desired by the definition of SSE. {\bf Unreachable information sets.} Next, we use backward induction to show that, for any player $i$, information set $I_i$ of $i$ that is unreachable by $s$, and history $h\in I_i$, $s_i$ is a best response to $s_{-i}$ conditional on reaching $h$. To begin with, if $h$ is of height 1 then this immediately holds: indeed, the strategy induced by $s_i$ following $h$ is exactly the action $s_i(I_i)$, thus the one-shot deviation principle implies that $s_i$ is a best response to $s_{-i}$ at $h$. Now, arbitrarily fix a player $i$, information set $I_i$ of $i$ unreachable by $s$, and a history $h\in I_i$ of height larger than 1. By induction, assume that for any information set $I'_i$ of $i$ unreachable by $s$, and history $h'\in I'_i$ of height smaller than that of $h$, $s_i$ is a best response to $s_{-i}$ at $h'$. For the sake of contradiction, suppose player $i$ can deviate to strategy $s_i'$ and increase his utility conditional on reaching $h$, that is, $$u_i(s_i', s_{-i}|h)> u_i(s_i, s_{-i}|h).$$ If $s'_i(I_i)=s_i(I_i)$, consider the first history $h'$ following $h$ where player $i$ acts and $s_i'$ differs from $s_i$. As $h$ is unreachable by $s$, $h'$ is unreachable by $s$ as well. However, the height of $h'$ is smaller than that of $h$ and $u_i(s_i', s_{-i}|h') = u_i(s_i', s_{-i}|h)> u_i(s_i, s_{-i}|h) = u_i(s_i, s_{-i}|h')$, contradicting the inductive hypothesis. Thus we have $$s'_i(I_i)\neq s_i(I_i).$$ If $s'_i$ is the same as $s_i$ at all the histories following $(h, s'_i(I_i))$ where player $i$ acts, then the one-shot deviation principle is violated. Accordingly, there must exist a history following $(h, s'_i(I_i))$, where player $i$ acts and $s'_i$ differ from $s_i$. Letting $h'$ be the first such history, we have that the height of $h'$ is smaller than that of $h$. Since $h'$ is unreachable by $s$, by the inductive hypothesis we have that $s_i$ is a best response to $s_{-i}$ at $h'$. Thus $u_i(s_i, s_{-i}|h')\geq u_i(s'_i, s_{-i}|h')$. As $u_i(s'_i, s_{-i}|h') = u_i(s'_i, s_{-i}|h)> u_i(s_i, s_{-i}|h)$, we have $$u_i(s_i, s_{-i}|h')> u_i(s_i, s_{-i}|h).$$ Let strategy $s''_i$ be such that, it follows $s_i$ till history $h$, then follows action $s'_i(I_i)$, then follows $s'_i$ (and $s_i$ as well, because they are the same after $(h, s'_i(I_i))$ and before $h'$) till history $h'$, and then follows $s_i$ for the rest. Note that $s''_i$ can be obtained from $s_i$ by a one-shot deviation from $s_i(I_i)$ to $s'_i(I_i)$. However, $$u_i(s''_i, s_{-i}|h) = u_i(s''_i, s_{-i}|h') = u_i(s_i, s_{-i}|h')> u_i(s_i, s_{-i}|h),$$ contradicting the one-shot deviation principle. Therefore $s_i$ is a best response to $s_{-i}$ conditional on reaching $h$, as desired. 
Combining everything together, by~\defref{strong-se-2}, ${s}$ is an SSE and \lemref{one-shot} holds. \end{proof} \paragraph*{Verifying strong sequential equilibrium} Given an extensive-form game with an arbitrary number of players, it is possible to decide whether a pair $(s, \mu)$ is a sequential equilibrium in time polynomial in the size of the game tree~\cite{gatti2012new}. However, if only a strategy profile $s$ is given, then it is NP-hard to decide whether $s$ is part of an SE (that is, whether there exists a belief system $\mu$ such that $(s, \mu)$ is an SE) \cite{hansen2010computational}. As strong sequential equilibrium does not rely on belief systems, we prove the following. \begin{lemma} \lemlabel{verify-sse} Given an extensive-form game and a strategy profile ${s}$ of the players, deciding whether ${s}$ is an SSE of the game can be done in time polynomial in the size of the game tree. \end{lemma} \begin{proof} First of all, we can traverse the game tree in polynomial time, mark, for each information set, whether it is reachable by $s$ or not, and compute, for each player $i$ and each reachable information set $I_i$ of $i$, the belief $\mu_i(I_i)$ derived from $s$ using Bayes' rule. Next, we apply the one-shot deviation principle following \lemref{one-shot}. To do so, we start from the bottom level of the tree and proceed up. For every player $i$ and every information set $I_i$ of $i$, if $I_i$ is unreachable under $s$, then we go through each $h \in I_i$ and each $a \in A(I_i)$, and check if changing $s_i(I_i)$ to $a$ improves $i$'s utility conditional on reaching $h$. If so then $s$ is not an SSE. If $I_i$ is reachable under $s$, then we go through every $a \in A(I_i)$, and check if changing $s_i(I_i)$ to $a$ improves $i$'s expected utility conditional on $I_i$ and $\mu_i(I_i)$. If so then again $s$ is not an SSE. If all the checks above pass, then $s$ is an SSE. Since this procedure goes through each decision node of the game tree at most once, and since it takes polynomial time to compute player $i$'s (expected) utility under $s$ following a decision node (or an information set), deciding whether $s$ is an SSE takes polynomial time in the size of the tree. \end{proof} \input{paper.bbl} \end{document}
arXiv
I have a bag with blue marbles and yellow marbles in it. At the moment, the ratio of blue marbles to yellow marbles is 8:5. If I remove 12 blue marbles and add 21 yellow marbles, the ratio will be 1:3. How many blue marbles were in the bag before I removed some? Let $x$ be the number of blue marbles and $y$ the number of yellow marbles before I made any changes. We are given that the ratio of blue to yellow is 8:5, so $\dfrac{x}{y}=\dfrac{8}{5}$. Additionally, after we remove blue marbles and add yellow marbles, the numbers of blue marbles and yellow marbles will be $x-12$ and $y+21$ respectively. We're given that at this point the ratio will be $1:3$, so $\dfrac{x-12}{y+21}=\dfrac{1}{3}$. Cross multiplying the first equation gives $5x=8y$ and cross multiplying the second gives $3(x-12)=1(y+21)$. Solving two linear equations in two variables is routine; we get the solution $y=15$, $x=24$. Since $x$ represents the number of blue marbles before some were removed, the answer to the problem is just $\boxed{24}$.
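To spell out the routine elimination step referenced above: from $5x=8y$ we get $y=\dfrac{5x}{8}$, and substituting this into $3(x-12)=y+21$ gives $3x-36=\dfrac{5x}{8}+21$, that is $\dfrac{19x}{8}=57$, so $x=24$ and $y=\dfrac{5\cdot 24}{8}=15$, in agreement with the solution above.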
Math Dataset
\begin{definition}[Definition:Matroid/Dependent Set] Let $M = \struct{S, \mathscr I}$ be a matroid. A subset of $S$ that is not an element of $\mathscr I$ is called a '''dependent set''' of $M$. \end{definition}
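For illustration (an example added here for clarity, not part of the definition itself): take the vector matroid $M = \struct {S, \mathscr I}$ where $S = \{(1,0), (0,1), (1,1)\} \subseteq \R^2$ and $\mathscr I$ is the set of linearly independent subsets of $S$. Then $S$ itself is a dependent set of $M$, because $(1,1) = (1,0) + (0,1)$ shows that $S$ is linearly dependent, and hence $S \notin \mathscr I$.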
ProofWiki
Nonlinear estimation of BOLD signals with the aid of cerebral blood volume imaging Yan Zhang1, Zuli Wang1, Zhongzhou Cai2, Qiang Lin3 & Zhenghui Hu3 The hemodynamic balloon model describes the change in coupling from underlying neural activity to observed blood oxygen level dependent (BOLD) response. It plays an increasingly important role in brain research using magnetic resonance imaging (MRI) techniques. However, changes in the BOLD signal are sensitive to the resting blood volume fraction (i.e., \(V_0\)) associated with the regional vasculature. In previous studies the value was arbitrarily set to a physiologically plausible value to circumvent the ill-posedness of the inverse problem. These approaches fail to explore the actual \(V_0\) value and could yield inaccurate model estimation. The present study represents the first empirical attempt to derive the actual \(V_0\) from data obtained using cerebral blood volume imaging, with the aim of augmenting the existing estimation schemes. Bimanual finger tapping experiments were performed to determine how \(V_0\) influences the model estimation of BOLD signals within a single region and across multiple regions (i.e., dynamic causal modeling). In order to show the significance of applying the true \(V_0\), we have presented the different results obtained when using the real \(V_0\) and the assumed \(V_0\) in terms of single-region model estimation and dynamic causal modeling. The results show that \(V_0\) significantly influences the estimation results within a single region and across multiple regions. Using the actual \(V_0\) might yield more realistic and physiologically meaningful model estimation results. Incorporating regional venous information in the analysis of the hemodynamic model can provide more reliable and accurate parameter estimations and model predictions, and improve the inference about brain connectivity based on fMRI data. Functional magnetic resonance imaging (fMRI) offers a noninvasive technology to examine hemodynamic signals in the cerebrovascular system. The hemodynamic balloon model was introduced by Buxton et al. in 1998 to reveal the coupling dynamics between neural activity and blood oxygen level dependent (BOLD) responses [1]. The balloon model describes the causal mechanisms within a hemodynamic process in a certain region of interest (ROI) during brain activation. BOLD responses can be observed via the dynamic changes in cerebral blood volume (CBV) v, cerebral blood flow f, and vein deoxyhemoglobin (dHb) content q. This model is especially helpful in understanding the potential consequences of interactions between physiological mechanisms. Since the inception of this model, there has been growing interest in using it to interpret observed fMRI data. The model can be used to infer biologically meaningful parameters that could be employed to investigate the changes in underlying physiological variables during brain activation [2–5], restrict the activation detection process with classic statistical techniques [6, 7], and deduce similar systems or different driving conditions [8–11]. The primary cause of unreliability in model estimation is that the BOLD fMRI technique is sensitive to changes in the signal from venous blood. The change in the signal intensity of a particular voxel is strongly dependent on what fraction of the voxel the vessel occupies. 
Moreover, changes in BOLD signal intensities during task activation are related not only to multiple physiological states but also to regional vessel occupancy, including capillaries and large veins. Indeed, the evaluation of model structure also indicates that the blood volume fraction (BVF) greatly influences the uncertainty of model output [12]. However, this problem has been ignored in all previous studies. Most studies performed to date have avoided the ill-conditioning problem simply by employing a physiologically plausible value of \(V_0=0.02\) instead of investigating the actual value in a particular ROI [2–5, 7, 13, 14] or throughout the brain [6, 15]. Given the importance of the true BVF, efforts are needed to incorporate actual vascular information of the voxel in the hemodynamic model estimation. Firstly, when a voxel includes only brain tissue, the assumption of \(V_0=0.02\) is reasonable [2, 16]. However, when a voxel is mostly or totally occupied by a vessel or vessels, the value might typically be above 0.6 [17]. Secondly, voxels associated with a larger amount of blood are always more likely to show significant BOLD activation due to the inherent nature of the fMRI technique. In this situation, employing an unrealistic \(V_0\) value might yield an unreliable model that does not reflect the physiological reality. This illustrates the importance of taking into account the actual BVF during the estimation procedure. Several methods have been applied in attempts to obtain the true BVF. We recently showed that magnetic resonance angiography (MRA) might provide a method for roughly estimating the BVF value [18]. The results inferred that the \(V_0\) value in a voxel consists of two derivative components: (1) a constant tissue blood volume component \(V_s=0.02\), which is the small-vessel blood volume that includes capillaries and small postcapillary vessels, and (2) a variable large-vessel component \(V_l\), which is the blood volume of large blood vessels. However, this method has not been used to obtain the actual \(V_0\) directly. Indeed, the regional CBV can be measured by another imaging modality, called the dynamic susceptibility contrast (DSC) material-enhanced gradient-echo (GE) MR technique [19]. The present study therefore incorporated the true BVF acquired from CBV imaging in order to focus on the influence of \(V_0\) on hemodynamic model estimation and the importance of using the true BVF in the analysis. This paper is organized as follows. Firstly, we briefly review the hemodynamic balloon model, which constitutes the fundamental component of hemodynamic model estimation. Secondly, we explain the important influence of \(V_0\) with the adoption of a realistic value obtained from the CBV imaging technique. Lastly, the influence of \(V_0\) on model estimation within a single region and across multiple regions, according to the results of a classic bimanual finger tapping experiment, is discussed in terms of the impacts of the actual \(V_0\) on parameter estimates and state-space reconstruction. Hemodynamic balloon model The hemodynamic balloon model describes the dynamic interrelationship between the blood flow f (neural activity to changes in flow), the regional blood volume information v (changes in flow to changes in blood volume and venous outflow), and the vein dHb content q (changes in flow, volume and oxygen extraction fraction to changes in dHb). 
The hemodynamic process can be described as follows: $$\begin{aligned} {\left\{ \begin{array}{ll} \ddot{f}=\epsilon u(t)-\frac{\dot{f}}{\tau _s}-\frac{f-1}{\tau _f} \\ \dot{v}=\frac{1}{\tau _0}(f-v^{1/\alpha }) \\ \dot{q}=\frac{1}{\tau _0}\left( f\frac{1-{(1-E_0)^{1/f}}}{E_0} -v^{1/{\alpha }}\frac{q}{v}\right) , \end{array}\right. } \end{aligned}$$ where \(\tau _s\) reflects signal decay, \(\tau _f\) is the feedback autoregulation time constant, \(\tau _0\) is the transit time, \(\alpha\) is a stiffness parameter, \(\epsilon\) is the neuronal efficacy, u(t) is the neuronal input, and \(E_0\) represents the resting oxygen extraction fraction. The variables f, v, and q are expressed in normalized form, relative to resting values. The balloon model can account for the hemodynamic responses in sparse, noisy fMRI measurements [12, 15]. However, since the above equations contain a second-order time derivative, we can introduce a new variable \(s=\dot{f}\) to express this hemodynamic system as a set of four first-order ordinary differential equations. The observed BOLD response can then be expressed as follows: $$\begin{aligned} {\left\{ \begin{array}{ll} y(t)=V_0(k_1(1-q)+k_2(1-\frac{q}{v})+k_3(1-v)), \\ k_1=7E_0, \quad k_2=2, \quad k_3=2E_0-0.2. \end{array}\right. } \end{aligned}$$ This equation is appropriate for an fMRI machine with a 1.5-T magnet. The observed y is normalized relative to the value at rest, and \(V_0\) is the resting BVF [2]. Equations 1 and 2 constitute the architecture of the hemodynamic input-output system. The model architecture is depicted in Fig. 1. Fig. 1 Schematic illustration of the hemodynamic balloon model. This model consists of three linked subsystems: (1) neural activity u(t) to changes in the cerebral blood flow f, where the second-order time derivative equation is written as a set of two first-order time derivative equations by introducing a new state variable \(s=\dot{f}\); (2) changes in flow f to changes in the cerebral blood volume v; (3) changes in flow f, volume v and oxygen extraction fraction to changes in the vein dHb content q. Fig. 2 Example of an axial CBV image (left) and the observed signal-intensity-versus-time curves (S(t), blue circles in right graphic) and fitted concentration–time curves (red line in right graphic). Red area denotes the estimated \(V_0\). The BOLD response is associated with all of these parameters; however, \(V_0\) cannot be identified simultaneously with the other parameters, and only their product can be estimated. Most previous efforts have imposed a physiologically plausible value of \(V_0= 0.02\) to handle the ill-conditioned nature of the problem [2–10]. Changes in the BOLD signal are strongly affected by \(V_0\), and so an unrealistic \(V_0\) may lead to unreliable model parameter estimation. Two human subjects participated in this study. The experiment was approved by the Health Sciences Research Ethics Committee of Zhejiang University, and written informed consent was obtained from both subjects. Functional images were acquired on a 1.5-T scanner using a standard fMRI echo planar imaging protocol (resolution: \(64\times 64\) matrix; repetition time \(\mathrm{TR}=2\) s). In total, 110 acquisitions were made in a block-designed finger tapping experiment, giving 11 20-s blocks. The conditions for successive blocks alternated between rest and task performance, starting with rest. 
Furthermore, the CBV imaging sequence consisted of 30 T\(2^*\)-weighted images that were collected with a GE sequence (resolution: \(128\times 128\) matrix; 0.1 mmol/kg Gd-DTPA administered using a powered injector). In order to achieve a sufficient signal-to-noise ratio and complete coverage of the brain, TR was increased to 3.1 s, since a typical value is 1 s. The other sequence parameters remained unchanged. All CBV images were down-sampled to make their spatial resolution identical to that of the fMRI image, and thereby allow voxel-by-voxel curve analysis. Concentration–time curves were created for each voxel [20–23]. The calculated \(V_0\) was then used in an existing data estimation procedure [24]. Figure 2 shows an example of an axial CBV image and the observed S and fitted concentration–time curves from one voxel. Data preprocessing and statistical analysis were performed using the SPM5 program (Wellcome Department of Cognitive Neurology, http://www.fil.ion.ucl.ac.uk/spm). The activation map was obtained by applying t-tests between all bimanual motor conditions and resting baselines with a cutoff for statistical significance of \(P < 0.001\). Selected ROIs based on typical activated areas detected in the bimanual tapping task. The activation map was obtained by applying t-tests between all bimanual motor conditions and resting baselines, with a cutoff for statistical significance of \(P<0.001\) Estimated BOLD signal (a) and reconstructed physiological states (b) for the maximally activated voxel of subject 1. For comparison, model estimation was also performed with the typically assumed value of \(V_0=0.02\). The real \(V_0\) value of this voxel was 0.0172. It is evident that differences in the estimated physiological states are relevant to deviations from the actual BVF value Impact of BVF on single-region model estimation We now compare and evaluate the respective impact of the realistic and assumed BVFs on hemodynamic model estimation within a single-region. Firstly, we chose the maximally activated voxel in the left primary motor cortex (LPM) on the basis of the analyzed fMRI data from SPM5 as the ROI (Fig. 2) and then defined the cluster based on faces and edges excluding corners in order for this voxel to have six neighbors. We extracted the ultimate time series to be analyzed by averaging over the time series of seven voxels. This procedure allowed the model parameters and state-space functions for each of the two subjects to be estimated. Furthermore, for the sake of simplicity, we assumed that the neural parameter had the same value throughout all trails: \(\epsilon _1=\epsilon _2=\cdots =\epsilon _n\), where n denotes the number of trials (i.e., \(n=5\) here). A control random search algorithm was applied in the parameter estimation procedure [25]. Estimated BOLD signal (a) and reconstructed physiological states (b) for the maximally activated voxel of subject 2. For comparison, model estimation was also performed with the typically assumed value of \(V_0=0.02\). The real \(V_0\) value of this voxel was 0.0308. It is also evident from this subject that differences in the estimated physiological states are relevant to deviations from the actual BVF value Results of a DCM analysis applied to the finger tapping experiment. The value indicates the connection strength (\(a_{ij}\) in Eq. 3) in DCM. The coupling parameters calculated with the real \(V_0\) are shown alongside the corresponding connections. 
The values in brackets indicate the deviations from parameters estimated when using the assumed \(V_0\). \(V_0=0.0185\) in the LPM, \(V_0=0.0308\) in the RPM, and the assumed \(V_0=0.02\) in both areas. \(u_1\) and \(u_2\) represent external inputs to the system, \(y_1\) and \(y_2\) are the hemodynamic observations, and arrows indicate connections. Figures 4 and 5 show the BOLD signal and underlying physiological variables of the two subjects for the real \(V_0\) derived from CBV imaging in the maximally activated voxel. The estimated BOLD signal and state variables for an assumed value of \(V_0=0.02\) are also drawn in Figs. 4 and 5 (as dashed lines). The comparison indicates that the assumed and true \(V_0\) could produce similar BOLD estimates in terms of magnitude and shape, with only a slight distinction in the plateau period. This result is consistent with those of previous studies involving the balloon model. However, we also found a large difference between the assumed and actual \(V_0\) values in terms of the reconstructed physiological states. We can conclude that the intensity of changes in the underlying state variables with the assumed \(V_0\) was double that with the true \(V_0\); that is, underestimating \(V_0\) produced an overestimation of the physiological state variables. Moreover, Figs. 4 and 5 indicate that a larger difference between the actual \(V_0\) and the hypothetical \(V_0\) resulted in a greater difference between the estimated physiological states. This means that attention should be paid to ensuring that a realistic \(V_0\) is used in model estimation. The presence of a larger amount of blood in an activated voxel magnifies the effects induced by neuronal activity, leading to an excessive signal for that voxel and unrealistic activity predictions. Similar BOLD changes in a voxel associated with larger veins will change f, v, and q less than for a voxel with a smaller blood fraction. Most activation detection techniques are only capable of indicating the neural activity from changes in BOLD signal or activity maps, and they do not directly infer whether the underlying physiological variation is closely related to \(V_0\) and actually reflects neural activity. Under these circumstances, the use of an arbitrary value of \(V_0\) will influence the spatial specificity of fMRI signals in statistical testing. However, if we can assume that activated regions are induced by the experimental event rather than by large regional amounts of blood, then the employment of an unrealistic \(V_0\) remains suitable when only fMRI signal estimation and activation detection are needed. Table 1 indicates that the uncertainty of \(V_0\) induces changes in other parameters, with \(V_0\) exerting a complicated, nonlinear, and inconsistent influence on the entire hemodynamic process. Table 1 also indicates that \(V_0\) has a greater influence on the estimated neuronal efficacy parameter \(\epsilon\) than on the other parameters (\(\epsilon\) is 0.3910 with the true \(V_0\), and 0.9089 with the hypothetical \(V_0\)). A previous study found that the uncertainty of model output was more sensitive to variation of \(\epsilon\) than to those of the other parameters, except \(V_0\) [12]. The defined \(\epsilon\) represents the efficacy with which neural activity causes an increased BOLD signal. As a consequence, if we could use the true \(V_0\), the estimated \(\epsilon\) could offer a better and more intuitive reflection of the activation level, enhancing the functional specificity of fMRI. 
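To make the role of \(V_0\) in Eqs. 1 and 2 concrete, the following minimal Python sketch (not the authors' estimation code) integrates the balloon model for a block stimulus and evaluates the BOLD readout for two different resting blood volume fractions. The parameter values are typical literature values chosen only for illustration, and the stimulus mimics the 11-block, 20-s on/off design described above; the point is that \(V_0\) enters only the observation equation, so a misspecified \(V_0\) must be compensated by rescaled state excursions (or neuronal efficacy) when fitting measured data.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (not the estimates reported in this study)
tau_s, tau_f, tau_0, alpha, E0, eps = 1.54, 2.44, 1.0, 0.32, 0.4, 0.5

def u(t):                                   # 20-s rest/task blocks, starting with rest
    return 1.0 if (t % 40.0) >= 20.0 else 0.0

def balloon(t, x):                          # Eq. 1 written with s = df/dt
    s, f, v, q = x
    ds = eps * u(t) - s / tau_s - (f - 1.0) / tau_f
    df = s
    dv = (f - v**(1.0 / alpha)) / tau_0
    dq = (f * (1.0 - (1.0 - E0)**(1.0 / f)) / E0 - v**(1.0 / alpha) * q / v) / tau_0
    return [ds, df, dv, dq]

t = np.linspace(0.0, 220.0, 111)            # TR = 2 s, 110 acquisitions
sol = solve_ivp(balloon, (0.0, 220.0), [0.0, 1.0, 1.0, 1.0], t_eval=t, max_step=0.1)
f, v, q = sol.y[1], sol.y[2], sol.y[3]

def bold(V0):                               # Eq. 2, 1.5-T coefficients
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    return V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))

# Same latent states f, v, q, two readouts: V0 only rescales the observed signal,
# so an assumed V0 that is too small must be compensated by larger estimated state
# excursions (or a larger neuronal efficacy) when fitting the measured BOLD data.
y_assumed = bold(0.02)
y_measured = bold(0.0308)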
Table 1 Model parameters estimated using the true value (\(V_t\)) and a typical assumed value (\(V_a\)) for the maximally activated voxels of two subjects Impact of BVF on dynamic causal models As for balloon model research, dynamic causal modeling (DCM) has been introduced to explore effective connectivity based on hemodynamic observations [8, 9]. DCM extends the balloon model from a single region to multiple regions by utilizing a multiple-input, multiple-output system. Single-region model estimation supposes that the extrinsic experimental input consistently accesses all brain regions and that a certain brain area only receives input in this way (\(\epsilon u\) in Eq. 1), whereas DCM assumes that responses (\(x_i\) in Eq. 3) are elicited by two distinct inputs sources: the extrinsic influence of the sensory input (\(\epsilon u\) in Eq. 3) and the intrinsic influence of the interaction regions (\(a_{ij} x_k\) in Eq. 3). In other words, DCM uses estimated neural activities (internal and external) to evaluate the causal correlation among brain areas. While the uncertain \(V_0\) has an important influence on parameter \(\epsilon\) in the hemodynamic model, it is interesting to know how the \(V_0\) influences DCM. In this study we therefore also investigated the effect of \(V_0\) on DCM. We constructed the simplest two-region hierarchical system in order to demonstrate the significant effect of BVF on the DCM system. From the two brain areas that interact with and influence each other, we could measure the observed BOLD signals that each of the two regions produced, the relationship can be expressed as follows: $$\begin{aligned} {\left\{ \begin{array}{ll} \dot{x}_1=a_{11}x_1+a_{12}x_2+c_{11}u_1 \\ \dot{x}_2=a_{22}x_2+a_{21}x_1+c_{22}u_2\\ \end{array}\right. } \end{aligned}$$ where \(x_1\) and \(x_2\) are the neuronal dynamics in two regions, \(u_1\) and \(u_2\) represent external inputs to the system, \(a_{11}\) and \(a_{22}\) represent the internal connectivity within a region without input, \(a_{12}\) and \(a_{21}\) encode the fixed inter-region connectivity without input, and \(c_{11}\) and \(c_{22}\) embody the extrinsic influences of input on neuronal activity. One can augmented the state vector consisting of the model parameters at two regions by concatenating them into a single higher dimensional state space and the measurement vector was also expanded to include two observations in two areas [8]. In the experiment, we adopted a 0–1 square-wave function as two inputs, and the system output was two time series from two regions, \(x_1\) and \(x_2\). While attempting to determine the dimension of the parameters, a more efficient filtering strategy was used to deal with the model estimation problem [26, 27]. The estimation scheme employed for DCM is formally identical to that reported previously [5, 15]. The results of this analysis are presented in Fig. 5, in which the effective connections are presented as directed black arrows along with coupling parameters calculated with the real \(V_0\) and assumed \(V_0\). In order to construct the model system, we chose two regions in the left primary (LPM) and the right primary motor cortex (RPM) containing the two maxima of the activation map. The output region-specific time series comprised all adjacent (based on faces and edges but not corners) voxels of each maximum (a total of seven voxels), the location is shown in Fig. 2. 
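To make the meaning of the coupling parameters in Eq. 3 concrete before interpreting the estimates, here is a minimal Python forward-simulation sketch of the two-region neuronal model; the values of \(a_{ij}\) and \(c_{ii}\) are placeholders rather than the estimates reported in the figure, and in the full DCM each \(x_i\) is further passed through the hemodynamic model of Eqs. 1 and 2, with a region-specific \(V_0\), to produce the observation \(y_i\).

import numpy as np

dt, T = 0.01, 220.0
steps = int(T / dt)
A = np.array([[-1.0, 0.3],                  # a11, a12 (placeholder coupling values)
              [0.4, -1.0]])                 # a21, a22
C = np.diag([0.8, 0.8])                     # c11, c22: driving-input strengths

def u(t):                                   # same block stimulus driving both regions
    return 1.0 if (t % 40.0) >= 20.0 else 0.0

x = np.zeros(2)                             # neuronal states x1, x2
traj = np.empty((steps, 2))
for k in range(steps):                      # simple Euler integration of Eq. 3
    t = k * dt
    x = x + dt * (A @ x + C @ np.array([u(t), u(t)]))
    traj[k] = x
# traj[:, 0] and traj[:, 1] would then be fed through the hemodynamic equations
# (with region-specific V0) to obtain the simulated observations y1 and y2.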
The conflicts between the motor preparation were interpreted as inhibitory connections between the LPM and RPM [28, 29]. The fixed connectivity from the RPM to the LPM is actually slightly weaker than that from the LPM to the RPM. This indicates that backward influences (RPM to LPM) are stronger than forward connections (LPM to RPM). Furthermore, the fixed connectivity in the RPM is stronger than that in the LPM, indicating that the right path-way is used more frequently than the pathway on the left side. From Fig.6 we conclude that the two different \(V_0\) have different impacts, with the largest deviation being about \(40\,\%\) for the strength of the visual input to the LPM or RPM. This study focused on the important but long ignored issue of how the resting cerebral BVF (i.e., \(V_0\)) impacts hemodynamic models. Previous studies have used a physiologically plausible value of \(V_0= 0.02\) instead of exploring the actual \(V_0\) in the model estimation procedure. However, the intensity of any hemodynamic signal change is greatly affected by the regional BVF, since the active domains subject to model estimation often overlap with those areas characterized by a large BVF [30]. Under such circumstances, an inaccurate \(V_0\) may give rise to inaccurate estimates of the parameters and the reconstructed physiological state. This study used CBV imaging to augment the true \(V_0\) calculated in the hemodynamic model. In order to show the significance of applying the true \(V_0\), we have presented the different results obtained when using the real \(V_0\) and assumed \(V_0\) in terms of single-region model estimation and DCM. It was found that using the actual \(V_0\), yielded more realistic and physiologically meaningful model estimation results. The results obtained in this study indicate that \(V_0\) has a rather complicated impact on estimated model parameters. Despite the BOLD responses being similar when using the assumed and real \(V_0\), there was a huge difference in the estimated parameters and the derived physiological state in the ROI. Because the balloon model describes the causal mechanism of a hemodynamic system, its order is higher than the externally observable system, which results in poorly identifiable model parameters due to the nature of nonlinear optimization and temporally sparse sampling. These model parameters have clear physiological meanings, and they should be justified and interpreted with caution [13, 31]. If the actual \(V_0\) is adopted, \(\epsilon\) can be more reliably observe via fMRI measurements. Therefore, \(V_0\) significantly influences the evaluations of brain connectivity. There have recently been extensive discussions on DCM and Granger causal modeling (GCM), with an emphasis on the connectivity among distributed brain systems [32–34]. In order to obtain a more robust understanding of brain causality, we used a biophysical model to eliminate signal bias in imaging procedure and variations of the hemodynamic response in diverse brain domains. However, an unrealistic \(V_0\) might degraded such efforts. A potential limitation of the present study is to the extent that \(V_0\) as measured by CBV imaging is affected by the amount of blood associated with BOLD signals. We consider that both CBV imaging and the BOLD contrast have tiny difference in terms of the \(V_0\). The former contains the volume of blood across arteries, capillaries, and veins, whereas the latter is relevant to capillaries and veins [35]. 
Although the arterial fraction of CBV is much less than the venous BVF [36, 37], CBV imaging also partly removes the effect of overestimates about BVF. This is therefore a suitable method for approximating the value of \(V_0\). In addition, this study concentrated on explaining the influence of BVF on hemodynamic model estimation, and the results demonstrated the importance of taking advantage of actual BVF information in the estimation procedure. The argument about the origin of the two modalities were beyond the scope of this paper. The present study presented the first empiric attempt to derive the actual \(V_0\) from data obtained using CBV imaging, with the aim of augmenting the existing estimation schemes. The results show that \(V_0\) significantly influences the estimation results within a single-region model estimation and DCM. Using the actual \(V_0\) can provide more reliable and accurate parameterizations and model predictions, and improve brain connectivity estimation based on fMRI data. BOLD: blood oxygen level dependent BVF: blood volume fraction CBV: cerebral blood volume DCM: dynamic causal modeling region of interest dHb: deoxyhemoglobin DSC: dynamic susceptibility contrast LPM: left primary motor RPM: right primary motor Buxton RB, Frank LR. A model for the coupling between cerebral blood flow and oxygen metabolism during neural stimulation. J Cerebral Blood Flow Metab. 1997;17:64–72. Friston KJ, Mechelli A, Turner R, Price CJ. Nonlinear responses in fMRI: the balloon model, Volterra kernels, and other hemodynamics. NeuroImage. 2000;12:466–77. Riera JJ, Watanabe J, Kazuki I, Naoki M, Aubert E, Ozaki T, Kawashima R. A state-space model of the hemodynamic approach: nonlinear filtering of BOLD signals. NeuroImage. 2004;21:547–67. Johnston LA, Duff E, Egan GF. Particle filtering for nonlinear BOLD signal analysis. In: 9th international conference on medical image computing and computer assisted intervention (MICCAI), Copenhagen, Denmark. 2006. p. 292–9. Hu ZH, Zhao XH, Liu HF, Shi PC. Nonlinear analysis of the BOLD signal. EURASIP J Adv Signal Process. 2009;2009:1–13. Deneux T, Faugeras O. Using nonlinear models in fMRI data analysis: model selection and activation detection. NeuroImage. 2006;32:1669–89. Hu ZH, Zhang HY, Wang LW, Song XL, Shi PC. Joint estimation for nonlinear dynamic system from fMRI time series. In: 10th international conference on medical image computing and computer assisted intervention (MICCAI), Brisbane, Australia. 2007. p. 734–41. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;19:1273–302. Stephan KE, Kasper L, Harrison LM, Daunizeau J, Ouden HEM, Breakspear M, Friston KJ. Nonlinear dynamic causal models for fMRI. NeuroImage. 2008;42:649–62. Li XF, Marrelec G, Hess RF, Benali H. A nonlinear identification method to study effective connectivity in functional MRI. Med Image Anal. 2010;14:30–8. Li XF, Coyle D, Maguire L, McGinnity TM, Benali H. A model selection method for nonlinear system identification based fMRI effective connectivity analysis. IEEE Trans Med Imaging. 2011;30(7):1365–80. Hu ZH, Shi PC. Sensitivity analysis for biomedical models. IEEE Trans Med Imaging. 2010;29(11):1870–81. Johnston LA, Duff E, Mareels I, Egan GF. Nonlinear estimation of the BOLD signal. NeuroImage. 2008;40:504–14. Hettiarachchi IT, Pathirana PN, Brotchie P. A state space based approach in non-linear hemodynamic response modeling with fMRI data. In: 32nd annual international conference of the IEEE EMBS, Buenos Aires, Argentina. 2010. p. 
2391–4. Hu ZH, Shi PC. Nonlinear analysis of BOLD signal: biophysical modeling, physiological states, and functional activation. In: 2007 IEEE international conference on image processing (ICIP), San Antonio, Texas, USA. 2007. p. 145–8. Jezzard P, Matt PM, Smith SM. Functional MRI: an introduction to methods. New York: Oxford University Press; 2001. Lu HZ, Law M, Johnson G, Ge Y, van Zijl PCM, Helpern JA. Novel approach to the measurement of absolute cerebral blood volume using vascular-space-occupancy magnetic resonance imaging. Magn Reson Med. 2005;54:1403–11. Hu ZH, Liu C, Liu PS, Liu HF. Exploiting magnetic resonance angiography imaging improves model estimation of BOLD signal. PLoS One. 2012;7(2):31612. Rempp KA, Brix G, Wenz F, Becker CR, Lorenz FGWJ. Quantification of regional cerebral blood flow and volume with dynamic susceptibility contrast-enhanced MR imaging. Radiology. 1994;193:637–41. Rosen BR, Belliveau JW, Buchbinder BR, McKinstry RC, Porkka LM, Kennedy DN, Neuder MS, Fisel CR, Aronen HJ, Kwong KK, Weisskoff RM, Cohen MS, Brady TJ. Contrast agents and cerebral hemodynamics. Magn Reson Med. 1991;19:285–92. Norman D, Axel L, Berninger WH, Edwards MS, Cann CE, Redington RW, Cox L. Dynamic computed tomography of the brain: techniques, data analysis, and applications. Am J Roentgenol. 1981;136(4):1–12. Madsen MT. A simplified formulation of the gamma variate function. Phys Med Biol. 1992;37(7):1597–600. Chan AA, Nelson SJ. Simplified gamma-variate fitting of perfusion curves. In: 2th IEEE international symposium on biomedical imaging (ISBI), Arlington, VA, USA. 2004. p. 1067–70. Hu ZH, Peng JL, Kong DX, Chen YM, Zhang HY, Lu MH, Liu HF. A novel statistical optimization strategy for estimating intravascular indicator dynamics using susceptibility contrast-enhanced MRI. IEEE Trans Med Imaging (submitted) Hu ZH, Ni PY, Liu C, Zhao XH, Liu HF, Shi PC. Quantitative evaluation of activation state in functional brain imaging. Brain Topogr. 2012;25:362–73. Julier SJ, Uhlmann JK. Unscented filtering and nonlinear estimation. Proc IEEE. 2004;92(3):401–22. Merwe R, Wan EA. The square-root unscented Kalman filter for state and parameter-estimation. In: 2001 IEEE international conference on acoustics, speech and signal processing, Salt Lake City, Utah, USA. 2001. p. 3461–4. Immisch I, Waldvogel D, VanGelderen P, Hallett M. The role of the medial wall and its anatomical variations for bimanual antiphase and in-phase movements. NeuroImage. 2001;14:674–84. Weerd PD, Reinke K, Ryan L, McIsaac T, Perschler P, Schnyer D, Trouard T, Gmitrof A. Cortical mechanisms for acquisition and performance of bimanual motor sequences. NeuroImage. 2003;19:1405–16. Kim DS, Duong TQ, Kim SG. High-resolution mapping of isoorientation columns by fMRI. Nat Neurosci. 2000;3:164–9. David O, Guillemain I, Saillet S, Reyt S, Deransart C, Segebarth C, Depaulis A. Identifying neural drivers with functional MRI: an electrophysiological validation. PLoS Biol. 2008;6(12):e315. Roebroeck A, Formisano E, Goebel R. The identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. NeuroImage. 2011;58:296–302. Lohmann G, Erfurth K, Muller K, Turner R. Critical comments on dynamic causal modelling. NeuroImage. 2011;59(3):2322–9. Friston KJ, Li BJ, Daunizeau J, Stephan KE. Network discovery with DCM. NeuroImage. 2011;56(2):1202–21. Uǧurbil K, Adriany G, Andersen P, Chen W, Gruetter R, Hu XP, Merkle H, Kim DS, Kim SG, Strupp J, Zhu XH, Ogawa S. 
Magnetic resonance studies of brain function and neurochemistry. Ann Rev Biomed Eng. 2000;2:633–60. Ito H, Kanno I, Lida H, Hatazawa J, Shimosegawa E, Tamura H, Okudera T. Arterial fraction of cerebral blood volume in humans measured by positron emission tomography. Ann Nucl Med. 2001;15(2):111–6. An HY, Lin WL. Cerebral oxygen extraction fraction and cerebral venous blood volume measurements using MRI: effects of magnetic field variation. Magn Reson Med. 2002;47:958–66. YZ lead data collection, performed the data analysis and drafted the manuscript. ZLW assisted with data collection, data analysis and the drafting of the manuscript. ZZC performed data collection. QL supported partly this study and drafted the manuscript. ZHH conceived of the study, guided its design and coordination, participated in data collection, performed the statistical analysis and drafted the manuscript. All authors read and approved the final manuscript. The authors would like to thank the editor and two anonymous referees for their insightful suggestions and valuable comments, which helped to improve the quality of our presented work. This work is supported in part by the National Basic Research Program of China under Grant 2013CB329501, in part by the National High Technology Research and Development Program of China under Grant 2012AA011600, in part by the National Natural Science Foundation of China under Grant 81271645, in part by the Public Projects of Science Technology Department of Zhejiang Province under Grant 2013C33162, and in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LY12H18004. College of Optical and Electronic Technology, China Jiliang University, Xueyuan Street 258, Hangzhou, 310018, China Yan Zhang & Zuli Wang College of Optical Science and Engineering, Zhejiang University, Zheda Road 38, Hangzhou, 310027, China Zhongzhou Cai Center for Optics and Optoelectronics Research, College of Science, Zhejiang University of Technology, Liuhe Road 288, Hangzhou, 310023, China Qiang Lin & Zhenghui Hu Yan Zhang Zuli Wang Qiang Lin Zhenghui Hu Correspondence to Zhenghui Hu. Yan Zhang and Zhenghui Hu contributed equally to this work Zhang, Y., Wang, Z., Cai, Z. et al. Nonlinear estimation of BOLD signals with the aid of cerebral blood volume imaging. BioMed Eng OnLine 15, 22 (2016). https://doi.org/10.1186/s12938-016-0137-6 Cerebral blood volume imaging
CommonCrawl
Advanced Modeling and Simulation in Engineering Sciences Fully convolutional networks for structural health monitoring through multivariate time series classification Luca Rosafalco1, Andrea Manzoni2, Stefano Mariani1 & Alberto Corigliano ORCID: orcid.org/0000-0002-1285-27241 Advanced Modeling and Simulation in Engineering Sciences volume 7, Article number: 38 (2020) Cite this article We propose a novel approach to structural health monitoring (SHM), aiming at the automatic identification of damage-sensitive features from data acquired through pervasive sensor systems. Damage detection and localization are formulated as classification problems, and tackled through fully convolutional networks (FCNs). A supervised training of the proposed network architecture is performed on data extracted from numerical simulations of a physics-based model (playing the role of digital twin of the structure to be monitored) accounting for different damage scenarios. By relying on this simplified model of the structure, several load conditions are considered during the training phase of the FCN, whose architecture has been designed to deal with time series of different length. The training of the neural network is done before the monitoring system starts operating, thus enabling a real time damage classification. The numerical performances of the proposed strategy are assessed on a numerical benchmark case consisting of an eight-story shear building subjected to two load types, one of which modeling random vibrations due to low-energy seismicity. Measurement noise has been added to the responses of the structure to mimic the outputs of a real monitoring system. Extremely good classification capacities are shown: among the nine possible alternatives (represented by the healthy state and by a damage at any floor), damage is correctly classified in up to \(95 \%\) of cases, thus showing the strong potential of the proposed approach in view of the application to real-life cases. Collapses of civil infrastructures strike public opinion more and more often. They are generally due to either structural deterioration or modified working conditions with respect to the design ones. The main challenge of structural health monitoring (SHM) is to increase the safety level of ageing structures by detecting, locating and quantifying the presence and the development of damages, possibly in real-time [1]. However, visual inspections—whose frequencies are usually determined by the importance and the age of the structure—are still the workhorse in this field, even if they are rarely able to provide a quantitative estimate of structural damages. Therefore, it is evident why recent advances in sensing technologies and signal processing, coupled to the increased availability of computing power, are creating huge expectations in the development of robust and continuous SHM systems [2]. SHM applications are often treated as classification problems [3] aiming (i) to distinguish the damage state of a structure from the undamaged state, starting from a set of available recordings of a monitoring sensor system, and (ii) to locate and quantify the current damage. In this framework, we have adopted the so-called simulation-based classification (SBC) approach [4], and we have exploited deep learning (DL) techniques for the sake of automatic classification. 
In our procedure, data are displacement and/or acceleration recordings of the structural response, and the classification task consists of recognizing which structural state, among a discrete set, could have most probably produced them. These structural states, characterized by the presence of damage in different positions and of different magnitudes, suitably represent different damage scenarios. To highlight the distinctive components of the SBC approach, we recall the general paradigm for a SHM system, according to [3]. A SHM system consists of four sequential procedures: (i) operational evaluation, (ii) data acquisition, (iii) features extraction and (iv) statistical inference. Operational evaluation defines what the object of the monitoring is and what the most probable damage scenarios are; data acquisition deals instead with the implementation of the sensing system; features extraction specifies how to exploit the acquired signals to derive features, that is, a reduced representation of the initial data, yet containing all their relevant information—for the case at hand, the onset and propagation of damage in the structure; statistical inference finally sets the criteria under which the classification task is performed. Focusing on stages (ii) and (iii), the vibration-based approach is nowadays the most common procedure in civil SHM. Its popularity is mainly due to the effective idea that the ongoing damage alters the structure vibration response [5] and, consequently, the associated modal information. By looking at the displacement and/or the acceleration time recordings acquired at a certain set of points of a building, the vibration-based approach enables the analysis of both global and local structural behaviors. The technology required to build this type of sensor system is mature and can be exploited on massive scale [6]. In most of the cases, features extraction relies on determining the system eigenfrequencies and the modal shapes. On the other hand, it might be necessary to employ more involved outcomes to distinguish between the effect of modified loading conditions and the true effect of damage [7], for instance by constructing parametric time series models [8]. By employing DL, we aim at dealing with these aspects automatically. Two competing approaches are employed in literature to deal with stage (iv), the a) model-based and the b) data-based approach, both introducing a sort of offline–online decomposition. By this expression, we mean the possibility to split the procedure into two phases: first, the offline phase is performed before the structure starts operating; then, the online phase is carried out during its normal operations. The model-based approach builds a physics-based model, initially calibrated to simulate the structural response. The model is updated whenever new observations become available and, accordingly, damage is detected and located. Data assimilation techniques such as Kalman filters have been employed to efficiently deal with model updating [9]. Model-based approaches are typically ill-conditioned, and many uncertainties related to the proper tuning of model parameters may prevent a correct damage estimation. Hence, data-based approaches are becoming more and more popular; they exploit a collection of structural responses and, either assess any deviation between real and simulated data, or assign to the measured data the relevant class label. 
The dataset construction can be done either experimentally [10] or numerically; however, the latter option is usually preferred, due to the frequent difficulties in reproducing the effects of damage in real-scale civil structures properly. To reduce the computational burden associated with the dataset construction, simplified models (e.g. mass-spring models for the dynamics of tall and slender buildings)—still able to catch the correct structural response—are preferred with respect to more expensive high-fidelity simulations, involving, e.g., the discretization of both structural and non-structural elements. By adopting the SBC method, we rely on a data-driven approach based on synthetic experiments. Once a dataset of possible damage scenarios has been constructed, machine learning (ML) proved to be suitable to perform the classification task [6]. The training of the ML classifier could be: supervised, when a label corresponding to one of the possible outputs of the classification task is associated to each structural response; unsupervised [11], when no labelling is available; semi-supervised [12], when the training data only refer to a reference condition. In the SBC framework, a semi-supervised approach was recently explored, e.g., in [13], leading to great computational savings and robust results when treating the anomaly detection task. In spite of their good performances, standard ML techniques based, e.g., on statistical distributions of the damage classes (as in the so-called decision boundary methods), as well as kernel-based methods (e.g. support vector machines), still rely on heavy data preprocessing, required to compute problem-specific sets of engineered features [14]. These features can be statistics of the signal, modal properties of the structure, or even more involved measures exploiting different types of signal transformation (e.g. Power Spectral Density and autocorrelation functions, to mention a few) [6]. Some relevant drawbacks arise, since: pre-computed engineered features are not well suited for non-standard problems, for which setting damage classification criteria can be anything but trivial; there is no way to assess the optimality of the employed features; a computationally expensive pre-processing of a huge amount of data is usually required. For these reasons, we rely on deep learning techniques, which allow both data dimensionality reduction and hierarchical pattern recognition at the same time [15, 16]. DL techniques allow us to: deal with non-standard problems, especially when different information sources have to be managed (as long as they are in the form of time series); detect a set of features, optimized with respect to the classification task, through the training of an artificial neural network. Despite these advantages, the use of DL for the sake of SHM has been quite limited so far [17, 18]. We have therefore decided to employ Fully Convolutional Networks (FCNs) [19], a particular Neural Network (NN) architecture, to deal with the Multivariate Time Series (MTS) produced by monitoring sensor systems. To face different information sources, we have applied separate convolutional branches and, at a second stage, performed the data fusion of the extracted information. SHM methodology We introduce in this section a detailed explanation of the proposed strategy to deal with the SHM problem exploiting a SBC approach. 
We provide a simplified physics-based model of the structure employing M degrees of freedom (dofs), assuming to record time-dependent signals through a monitoring system employing \(N_0 \le M\) sensors. Our aim is first to train, and then to use, two classifiers \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) for the sake of damage detection and localization, respectively, where $$\begin{aligned} {\mathcal {G}}_{d} : {\mathbb {R}}^{N_0 \times L_0} \rightarrow \lbrace 0 , 1 \rbrace ~, \qquad \qquad {\mathcal {G}}_{l} : {\mathbb {R}}^{N_0 \times L_0} \rightarrow \lbrace 0 , 1, \ldots , G \rbrace ~. \end{aligned}$$ In the former case, labels 0 and 1 denote absence or presence of damage, respectively; in the latter, \(G>1\) is a priori fixed and denotes the range of possible damage locations—also in this case, the undamaged state is denoted by 0. We have decided to include the undamaged state among the possible outputs of \({\mathcal {G}}_{l}\) not just to confirm the outcome of \({\mathcal {G}}_{d}\), but also to observe which damage scenarios, identified by their locations, are more often misclassified with the undamaged state. The training of \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) is performed using the two datasets \({\mathbb {D}}^{d}_{train}\) and \({\mathbb {D}}^{l}_{train}\), respectively. Each of these two datasets (for simplicity we only consider the formation of \({\mathbb {D}}^l_{train}\), being the process substantially equivalent for \({\mathbb {D}}^{d}_{train}\)) collects \(V^{train}\) structural responses, $$\begin{aligned} {\mathbb {D}}^l_{train} = \lbrace {\mathbb {U}}_1 , \ldots , {\mathbb {U}}_{V^{train}} \rbrace ~, \end{aligned}$$ under prescribed damage scenarios and loading conditions. We denote by \({\mathbb {U}}_i \in {\mathbb {R}}^{N_0 \times L_0}\), \(i=1,\ldots ,V^{train}\), a collection of \(N_0\) sensor recordings of displacement and/or acceleration time series of length \(L_0\), such that $$\begin{aligned} {\mathbb {U}}_i = \left[ {\varvec{u}}_1 \left( {\varvec{d}}_i,{\varvec{l}}_i \right) \ | \ \ldots \ | \ {\varvec{u}}_{N_0} \left( {\varvec{d}}_i,{\varvec{l}}_i \right) \right] , \qquad i=1,\ldots ,V^{train} ~; \end{aligned}$$ the time series \({\varvec{u}}_n \left( {\varvec{d}}_i,{\varvec{l}}_i \right) \) recorded by the n-th sensor depends on the damage scenario \({\varvec{d}}_i\) and the loading condition \({\varvec{l}}_i\), and can be seen as the sampling of a time-dependent signal \({\varvec{u}}_n \left( {\varvec{d}}_i,{\varvec{l}}_i \right) \). We assume to deal with recordings acquired at a set of \(L_0\) time instants uniformly distributed over the time interval of interest I. The damage scenario \({\varvec{d}}_i : {\mathcal {P}}_d \rightarrow {\mathbb {R}}^M\) is prescribed at each structural elementFootnote 1 and depends on a set of parameters \(\varvec{\eta }_d \in {\mathcal {P}}_d \subset {\mathbb {R}}^{D}\); the loading condition \({\varvec{l}}_i : I \times {\mathcal {P}}_l \rightarrow {\mathbb {R}}^M\), defined over the time interval I, is prescribed at each element, too, and depends on a set of parameters \(\varvec{\eta }_l \in {\mathcal {P}}_l \subset {\mathbb {R}}^{L}\). Here, we denote by \({\mathcal {P}}_d\) and \({\mathcal {P}}_l\) two sets of parameters, yielding the two sets \({\mathcal {C}}_d\) and \({\mathcal {C}}_l\) of admissible damage and loading scenarios, respectively, obtained when sampling \(\varvec{\eta }_d \in {\mathcal {P}}_d\) and \(\varvec{\eta }_l \in {\mathcal {P}}_l\). 
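As a minimal illustration of how such a labelled training set could be assembled in practice, the following Python sketch loops over the admissible damage scenarios and sampled load parameters and stacks the corresponding simulated responses; the solver `simulate_response` and the sampling rule `sample_load_params` are hypothetical placeholders for the physics-based model and for the parameter sampling specified later in the paper.

```python
# Minimal sketch (not the authors' code) of how the labelled dataset D^l_train could
# be assembled from the parametric model. `simulate_response` and `sample_load_params`
# are hypothetical placeholders for the physics-based solver and the sampling rules.
import numpy as np

N0, L0 = 16, 667            # number of sensors and time-series length used in this work
G = 8                       # number of damage locations; label 0 denotes the undamaged state

def simulate_response(damage_label, load_params):
    """Placeholder for the physics-based model returning r_n(d, l) in R^{N0 x L0}."""
    return np.zeros((N0, L0))          # stand-in for the simulated displacement histories

def sample_load_params(rng):
    """Placeholder sampling of the load parameters eta_l in P_l."""
    return rng.uniform(0.0, 1.0, size=4)

rng = np.random.default_rng(0)
V_g = 512                              # instances per class (value adopted later in the paper)
U, y = [], []
for g in range(G + 1):                 # g = 0 (undamaged), 1..8 (damaged inter-story)
    for _ in range(V_g):
        U.append(simulate_response(g, sample_load_params(rng)))
        y.append(g)
U = np.stack(U)                        # shape (V, N0, L0)
y = np.asarray(y)                      # class labels for the localization task
```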
During the training procedure, the performances of \({\mathcal {G}}_d\) and \({\mathcal {G}}_l\) are tracked by looking at their classification capabilities on two datasets \({\mathbb {D}}^d_{val}\) and \({\mathbb {D}}^l_{val}\), each one collecting \(V^{val}\) structural responses \({\mathbb {U}}_i\) (defined as in Eq. (1)), \(i=1,\ldots ,V^{val}\). According to the SBC approach, the datasets \({\mathbb {D}}^d_{train}\), \({\mathbb {D}}^l_{train}\), \({\mathbb {D}}^d_{val}\) and \({\mathbb {D}}^l_{val}\) are constructed by exploiting a simplified physics-based model of the structure. For any damage scenario \({\varvec{d}} \in {\mathcal {C}}_d\) and loading condition \({\varvec{l}} \in {\mathcal {C}}_l\) received as inputs, this numerical model—playing the role of digital twin of the structure to be monitored—returns a recorded displacement and/or acceleration time series \({\varvec{r}}_n \left( {\varvec{d}},{\varvec{l}} \right) \). Since the latter are deterministic, to make our data closer to real measurements \({\varvec{u}}_n \left( {\varvec{d}},{\varvec{l}} \right) \), we assume that each \({\varvec{r}}_n \left( {\varvec{d}},{\varvec{l}} \right) \) is affected by an additive measurement noise \(\varvec{\epsilon }_n\sim {\mathcal {N}} \left( \mathbf{0}, \varvec{\Sigma }_{\epsilon } \right) \), so that $$\begin{aligned} {\varvec{u}}_n = {\varvec{r}}_n \left( {\varvec{d}},{\varvec{l}} \right) + \varvec{\epsilon }_n, \qquad n=1,\ldots ,N_0 ~. \end{aligned}$$ Here we consider each \(\varvec{\epsilon }_n \) normally distributed, with zero mean and covariance matrix \(\varvec{\Sigma }_{\epsilon } \in {\mathbb {R}}^{N_0 \times N_0}\), as related to a real monitoring system [20]. Regarding the auto-correlation of the records (\(j=1,\ldots , L_0\)) of each sensor (\(n=1,\ldots , N_0\)) in time, we assume them to be independent and identically distributed. The background model providing \({\varvec{r}}_n \left( {\varvec{d}},{\varvec{l}} \right) \) is here assumed to be already tuned to accurately match the structural response in the undamaged case. When the response moves away from this baseline due to damage inception, the adopted supervised strategy assumes the possible damage scenarios to belong to a limited set, and for each of them the relevant numerical analyses are exploited to mimic the real structural response, as affected by all the possible uncertainty sources. It should also be added that Eq. (2) accounts for the noise in the structural response induced by sensor measurements only. Since damage is a smeared measure of different phenomena occurring at the local scale (including or accompanied by, e.g. cracking and plasticity), it stands as a variable giving a measure of the unresolved dofs in a Mori–Zwanzig formalism, see [21]. In a state-space formulation like the one adopted for Kalman filtering [22], a further source of noise can be added through the state or model error, which accounts for the uncertainties linked to the unresolved dynamics of the system. An issue may thus arise in discerning the two noise sources linked to the model inaccuracy on one side, and to the sensor output and operational conditions on the other side. This discussion is beyond the scope of this work, and interested readers may find relevant information in, e.g. [23, 24]. The classifiers \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) are based on a fully convolutional neural network architecture (that will be detailed in the following section).
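The noise corruption of Eq. (2) can be sketched in a few lines of NumPy; the way the noise standard deviation is deduced from a target signal-to-noise ratio is an assumption, since only the SNR levels to be reached are prescribed later in the paper.

```python
# Sketch of the noise corruption of Eq. (2): u_n = r_n + eps_n, with eps_n ~ N(0, sigma^2 I).
# The way sigma is deduced from a target SNR is an assumption (only the SNR levels of
# 15 dB and 10 dB are prescribed later in the paper).
import numpy as np

def add_measurement_noise(R, snr_db, rng=None):
    """R: (N0, L0) array of noise-free displacements; returns the noisy measurements u."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(R**2)                       # average signal power over all channels
    noise_power = signal_power / 10.0**(snr_db / 10.0)
    sigma = np.sqrt(noise_power)                       # Sigma_eps = sigma^2 * I
    return R + sigma * rng.standard_normal(R.shape)

# e.g. U_noisy = add_measurement_noise(R, snr_db=15) for the low-noise scenario
```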
The training of the network is supervised, and performed by feeding the FCN with multivariate time series \(\lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N_0}\) and associated labels (0 or 1 for \({\mathcal {G}}_{d}\), \(g \in \{0,1,\ldots , G\}\) for \({\mathcal {G}}_{l}\)). In this respect, hereon each multivariate time series \(\lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N_0}\) is referred to as an instance. In general, \(\lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N_0} = {\mathbb {U}}_i\); however, a single instance might be made up to W multivariate time series \({\mathbb {U}}_{iw}\), \(w=1,2,\ldots ,W\) of different lengths \(L_0^w\) to deal with the case of sensors recording time series of different length. Each component \({\mathcal {F}}^{n}_0 = {\varvec{u}}_{n}\) plays the role of input channel for the NN. The testing of the NN is done on instances \(\lbrace {\mathcal {F}}^{n}_{*} \rbrace _{n=1}^{N_0} = {\mathbb {U}}^*_i\), obtained through the numerical model as structural response $$\begin{aligned} {\mathbb {U}}^*_i = \left[ {\varvec{u}}_1 \left( {\varvec{d}}_i,{\varvec{l}}_i^* \right) \ | \ \ldots \ | \ {\varvec{u}}_{N_0} \left( {\varvec{d}}_i,{\varvec{l}}_i^* \right) \right] , \qquad i=1, \ldots , V^{test} \end{aligned}$$ to loading conditions \({\varvec{l}}^*_i \in {\mathcal {C}}_l\), \(i=1,\ldots , V^{test}\), unseen (that is, associated to testing values \(\varvec{\eta }_l\) from \({\mathcal {P}}_l\) not sampled) when building the datasets \({\mathbb {D}}^d_{train}\), \({\mathbb {D}}^l_{train}\), \({\mathbb {D}}^d_{val}\) and \({\mathbb {D}}^l_{val}\). All these instances are collected into two datasets \({\mathbb {D}}^d_{test}\) and \({\mathbb {D}}^l_{test}\). The testing is done by verifying the correct identification of the class (\(\{0,1\}\) for \({\mathcal {G}}_{d}\), \(\{0,1,\ldots , G\}\) for \({\mathcal {G}}_{l}\)) associated with the simulated signals. In concrete terms, a probability is estimated for each possible class, thus yielding the confidence level that the given class is assigned to the data, and the class with highest confidence is compared with the one associated to the simulated signal. No k-fold cross validation is used. Once tested, \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) can make a prediction once a new signal \(\lbrace {\mathcal {F}}^{n}_{*} \rbrace _{n=1}^{N_0} = {\mathbb {U}}^{*}\) is experimentally acquired from the real sensor network used to monitor the structure. SBC + FCN classifier. The offline phase is performed before the start of operations of the structure, while the online stage during its normal operations Let us now recap the procedure steps exploiting the schematic representation reported in Fig. 1. For the sake of convenience, we can split our procedure into: an offline phase, where, as first step, the loading conditions \({\mathcal {C}}_l \) (OFF-1#1) and the most probable damage scenarios \({\mathcal {C}}_d \) are evaluated (OFF-1#2). Accordingly, a sensor network with \(N_0\) sensors is designed (OFF-2). The datasets \({\mathbb {D}}^d_{train}\), \({\mathbb {D}}^l_{train}\), \({\mathbb {D}}^d_{val}\), \({\mathbb {D}}^l_{val}\), \({\mathbb {D}}^d_{test}\) and \({\mathbb {D}}^l_{test}\) are then constructed (OFF-3) by exploiting the physics-based digital twin of the structure. 
The classifiers \({\mathcal {G}}_{d}\) (OFF-4#1) and \({\mathcal {G}}_{l}\) (OFF-4#2) are therefore trained by using \({\mathbb {D}}^d_{train}\) and \({\mathbb {D}}^l_{train}\) and performing the validation using \({\mathbb {D}}^d_{val}\) and \({\mathbb {D}}^l_{val}\). Finally, the classification capacity of \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) is assessed by using numerically simulated signals \(\lbrace {\mathcal {F}}^{n}_{*} \rbrace _{n=1}^{N_0} = {\mathbb {U}}^*\) belonging to \({\mathbb {D}}^d_{test}\) and \({\mathbb {D}}^l_{test}\), respectively (OFF-5#1 and OFF-5#2); an online phase, in which for any new signal \(\lbrace {\mathcal {F}}^{n}_{*} \rbrace _{n=1}^{N_0} = {\mathbb {U}}^*\) acquired by the real monitoring system and provided to the classifiers (ON-1), damage detection (ON-2) is performed through \({\mathcal {G}}_{d}\), and damage localization is performed through \({\mathcal {G}}_{l}\) (ON-3). In lack of recordings coming from a real monitoring system, and having assumed the experimental signals \({\mathbb {U}}^*\) equal to the noise-corrupted output of the numerical model, steps OFF-5#1 and OFF-5#2 of the offline phase indeed coincide with steps ON-2 and ON-3 of the online procedure.Footnote 2 We highlight that only those damage scenarios \({\varvec{d}} \in {\mathcal {C}}_d\) that have been numerically simulated in the offline phase can be classified during the online phase. Moreover, damage is considered temporary frozen within a fixed observation interval, enabling to treat the structure as linear [2]. To model the effect of damage, we consider the stiffness degradation of each structural member; this assumption is acceptable if the rate of the evolving damage is sufficiently small with respect to the observation interval [25]. It is not possible to identify from the beginning the most suitable number of instances \(V^{train}\) to be used to train the network. The easiest procedure (even if time-consuming) would be to assess the performances of \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) for different sizes \(V^{train}\), aiming at finding a trade-off between the computational burden required to construct the dataset and train the NN, and the classification capabilities. Beyond a certain critical size, massive dataset enlargements might lead to small improvements in the NN performance, as shown in our numerical results. Finally, concerning the setting of the loading conditions \({\mathcal {C}}_l\), in this work we have (i) identified a set of possible loading scenarios that can significantly affect the response of the structure; (ii) subdivided this set into a certain number of subsets, representative of different possible dynamic effects of the applied load; (iii) sampled each subset almost the same number of times. Fully convolutional networks Neural network architecture We now describe the FCN architecture employed for the sake of classification. As discussed in the previous section, \(\lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N_0}\) are the inputs adopted during the training phase (for which we know the instance label associated), while \(\lbrace {\mathcal {F}}^{n}_{*} \rbrace _{n=1}^{N_0}\) are the inputs that we require the FCN to classify. FCN architecture in the case of a single data type. Here \(N_0\) represents the number of input channels and N represents the adopted number of filters. 
For sake of clarity, the dimensionality of the building blocks has been enhanced: a three-dimensional parallelepiped is used to depict the two-dimensional output of each convolutional layer; a two-dimensional rectangle is used to depict the one-dimensional output of the global pooling layer and of the softmax layer We have adopted a FCN stacking three convolutional layers \({\mathcal {L}}_i\), \(i=\lbrace 1,2,3 \rbrace \), with different filter sizes \(h_i\), followed by a global pooling layer and a softmax classifier (the choice of the NN hyperparameters will be discussed in the following). Each convolutional layer \({\mathcal {L}}_i\) has been used together with a Batch-Normalization (BN) layer \({\mathcal {B}}_i\) and a Rectified Linear Unit (ReLU) activation layer \({\mathcal {R}}_i\) [14, 19], see Fig. 2. When the input signals are made up by W multivariate time series with different length: $$\begin{aligned} \lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N^1_0 + ...+ N^i_0+... + N^W_0} = {\left\{ \begin{array}{ll} \lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N^1_0} = {\mathbb {U}}_{i1} \in {\mathbb {R}}^{N^1_0 \times L_0^1} \\ \lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=N^1_0 +1 }^{N^1_0 + N^2_0} = {\mathbb {U}}_{i2} \in {\mathbb {R}}^{N^2_0 \times L_0^2} \\ \vdots \\ \lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=N^1_0 + ... + N^{W-1}_0 +1 }^{N^1_0 + ... + N^W_0} = {\mathbb {U}}_{iW} \in {\mathbb {R}}^{N^W_0 \times L_0^W} \end{array}\right. } ~, \end{aligned}$$ for each one we first adopt the described convolutional architecture separately and then, through a concatenation layer, we perform data fusion on the extracted features. Classification is finally pursued through a softmax layer. The corresponding NN architecture is sketched in Fig. 3 in the case of time series with two different lengths \(L_0^1\) and \(L_0^2\), but can be easily generalised. Tensorflow [26] has been used for the sake of NN construction. FCN architecture in the case of two data types. Here \(N^1_0\) and \(N^2_0\) represent the number of input channels (possibly different) of the two NN branches; \(N^1\) and \(N^2\) represent the number of filters adopted. For sake of clarity, the dimensionality of the building blocks has been enhanced: a three-dimensional parallelepiped is used to depict the two-dimensional output of each convolutional layer; a two-dimensional rectangle is used to depict the one-dimensional output of the global pooling layer and of the softmax layer Use of convolutional layers Let us now show how convolutional layers can be adopted to extract features from multivariate time series. \(\lbrace {\mathcal {F}}^n_0 \rbrace _{n=1}^{N_0}\) are provided to the 1-st convolutional layer \({\mathcal {L}}_{1}\). The output of \({\mathcal {L}}_1\), \(\lbrace {\mathcal {F}}^{n}_{1} \rbrace _{n=1}^{N_1}\), still shaped as time series (of length \(L_1\)), do not represent displacement and/or acceleration any more. Indeed, they are features extracted from the input channels \(\lbrace {\mathcal {F}}^{n}_{0} \rbrace _{n=1}^{N_0}\). The following layers operate in the same manner: the outputs \(\lbrace {\mathcal {F}}^{n}_{i} \rbrace _{n=1}^{N_i}\) of the \((i-1)\)-th convolutional layer \({\mathcal {L}}_{i-1}\) are the inputs of the i-th convolutional layer \({\mathcal {L}}_{i}\) and become features of higher and higher level. 
In concrete terms, the tasks performed by the i-th convolutional layer \({\mathcal {L}}_{i}\) are: the subdivision of the inputs \(\lbrace {\mathcal {F}}^{n}_{i-1} \rbrace _{n= 1}^{N_{ i -1 }}\) into data sequences, whose length \(h_i\) determines the receptive field of \({\mathcal {L}}_{i}\); and the multiplication of each data sequence by a set of weights \({\varvec{w}}^{\left( i,m \right) }\) called filter, where the output \({\mathcal {F}}^{n}_{i}\) of each filter is called feature map. Mono-dimensional (1D) receptive field must be used in time series analysis, being each channel monodimensional. In Fig. 4 the fundamental architecture of \({\mathcal {L}}_i\) is depicted, linking the inputs \(\lbrace {\mathcal {F}}^{n}_{i-1} \rbrace _{n=1}^{N_{ i -1 }}\) and the outputs \(\lbrace {\mathcal {F}}^{m}_i \rbrace _{m=1}^{N_i}\) through: $$\begin{aligned} z^{\left( i,m \right) }_{h} = \sum _{q=0}^{h_i-1}\sum _{n=1}^{N_{i-1}} w^{\left( i,m \right) }_{q} x^{\left( i-1,n \right) }_{p} + b^{\left( i,m \right) } \qquad \text {with} \quad p=h + q ~, \end{aligned}$$ \(z^{\left( i,m \right) }_{h}\) is the h-th entry of \({\mathcal {F}}^{m}_{i}\); \( b^{\left( i,m \right) }\) is the bias of \({\mathcal {F}}^{m}_{i}\); \(x^{\left( i-1, n\right) }_{p}\) is the p-th entry of \({\mathcal {F}}^{n}_{i-1}\); \(w^{\left( i-1,n \right) }_{q}\) is the q-th connection weight of the m-th filter applied to the p-th input of \({\mathcal {F}}^{n}_{i-1}\). Sketch of a 1D convolutional layer. Here \(h_i\) specifies the kernel dimension. As a filter is associated to each feature map n, to represent it bars of different heights are used in relation to the amplitude of the filter weights As the goal of stacking several convolutional layers is to provide nonlinear transformations of \(\lbrace {\mathcal {F}}_0^n \rbrace _{n=1}^{N_0}\), their overall effect is to make the classes to be recognised linearly separable [27]. In this way, a linear classifier is suitable to carry out the final task. Every nonlinear transformation can be interpreted, as discussed, as an automatic extraction of features. Batch Normalization, ReLU activation, global pooling and softmax classifier The Batch Normalization (BN) layer \({\mathcal {B}}_i\) is introduced after each convolutional layer \({\mathcal {L}}_i\) to address the issue related to the vanishing/exploding gradients possibly experienced during the training of deep architectures [28]. It relies on normalization and zero-centering of the outputs \(\lbrace {\mathcal {F}}^{n}_{i} \rbrace _{n=1}^{N_i}\) of each layer \({\mathcal {L}}_{i}\). We express the output of \({\mathcal {B}}_i\) as \(\lbrace {\mathcal {F}}^{n}_{{\mathcal {B}} i} \rbrace _{n=1}^{N_i}\). For the same reason, the ReLU activation function is preferred instead of saturating ones [29]. The ReLU layer \({\mathcal {R}}_i\) transforms \(\lbrace {\mathcal {F}}^{n}_{{\mathcal {B}} i} \rbrace _{n=1}^{N_i}\), through $$\begin{aligned} {\mathcal {F}}^{n}_{{\mathcal {R}} i} \left( u \right) = \text {max} \left( 0,{\mathcal {F}}^{n}_{{\mathcal {B}} i} \left( u \right) \right) \qquad \text {with} \quad u=1,\ldots ,L_i ~. \end{aligned}$$ \({\mathcal {F}}^{n}_{{\mathcal {B}} i} \left( u \right) \) is the u-th entry of the n-th feature map of \({\mathcal {B}}_i\); \({\mathcal {F}}^{n}_{{\mathcal {R}} i} \left( u \right) \) is the u-th entry of the n-th feature map of \({\mathcal {R}}_i\). 
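As an illustration, a literal NumPy transcription of the convolution of Eq. (3) and of the rectification of Eq. (4) could read as follows; note that, as written in the text, the kernel weights are shared across the input channels, whereas standard Conv1D layers (such as those offered by TensorFlow) use channel-dependent kernels.

```python
# Literal NumPy transcription of the convolution of Eq. (3) and of the ReLU of Eq. (4).
# Note that, as written in the text, the kernel weights are shared across the input
# channels; standard Conv1D layers (e.g. in TensorFlow) use channel-dependent kernels.
import numpy as np

def conv1d_layer(X, W, b):
    """X: (N_prev, L_prev) input feature maps; W: (N_i, h_i) filters; b: (N_i,) biases.
    Returns Z of shape (N_i, L_prev - h_i + 1), i.e. no zero-padding and unit stride."""
    N_prev, L_prev = X.shape
    N_i, h_i = W.shape
    L_i = L_prev - h_i + 1
    Z = np.empty((N_i, L_i))
    for m in range(N_i):                       # loop over the filters / output feature maps
        for h in range(L_i):                   # p = h + q spans the receptive field
            Z[m, h] = np.sum(W[m, :, None] * X[:, h:h + h_i].T) + b[m]
    return Z

def relu(F):
    """Entry-wise rectification of Eq. (4)."""
    return np.maximum(0.0, F)
```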
In the adopted FCN architecture, the features to be used in the classification task are extracted from \(\lbrace {\mathcal {F}}_0^n \rbrace _{n=1}^{N_0}\) by the blocks \(\lbrace {\mathcal {L}}_i + {\mathcal {B}}_i + {\mathcal {R}}_i \rbrace _{i=1}^3\). The final number of features equals the number \(N_3\) of filters of the last convolutional layer. By next applying a global average pooling [30], the extracted features \(\lbrace {\mathcal {F}}^{n}_{{\mathcal {R}} 3} \rbrace _{n=1}^{N_3}\) are condensed into a single channel \({\varvec{b}}\in {\mathbb {R}}^{N_3}\). The softmax activation layer finally performs the classification task. First, the channel \({\varvec{b}}\) is mapped onto the target classes, by computing a score \(s_g\) $$\begin{aligned} s_g({\varvec{b}})=\varvec{\theta }^T_g \cdot {\varvec{b}}, \qquad g=1,\ldots ,G~, \end{aligned}$$ for each class g, where the vector \(\varvec{\theta }_g\in {\mathbb {R}}^{N_3}\) collects the weights related to the g-th class, G being the total number of classes. The softmax function is then used to estimate the probability \(p_g\in \left[ 0,1\right] \) that the input channels belong to the g-th class, according to: $$\begin{aligned} p_g=\frac{e^{s_g({\varvec{b}})}}{\sum _{j=1}^G e^{s_j({\varvec{b}})}} \quad g=1,\ldots , G~. \end{aligned}$$ The input channels \(\lbrace {\mathcal {F}}^{n}_0 \rbrace _{n=1}^{N_0}\) are then assigned to the class with associated label g featuring the highest estimated probability \(p_g\), which then represents the estimated confidence level that class g is assigned to the data. Neural Network training The NN training consists of tuning the weights \({\varvec{w}}^{\left( i,m \right) }\) and \(\varvec{\theta }_g\), respectively appearing in Eqs. (3) and (4), by minimizing a loss function depending on the data. In this respect, the Adam algorithm [31], a widespread stochastic gradient-based optimization method, has been used. For classification purposes, the most commonly adopted loss function is the cross entropy, defined for the classifier \({\mathcal {G}}_d\) as: $$\begin{aligned} J_d\left( {\varvec{Y}},{\varvec{p}}\right) =-\frac{1}{V^{train}}\sum ^{V^{train}}_{i=1}\sum ^G_{g=1}y_{i}^g \text {log}\left( p^g\right) ~, \end{aligned}$$ where: g is the label of the instance provided to the NN during the training; \(y_{i}^g\in \lbrace 0,1 \rbrace \) is the confidence that the i-th instance should be labelled as the g-th class, with $$\begin{aligned} y_{i}^g= {\left\{ \begin{array}{ll} 1 &{} \text {if for the }i\text {-th instance the }g\text {-th class is the target class}\\ 0 &{} \text {otherwise}; \end{array}\right. } \end{aligned}$$ \({\varvec{Y}} \in \lbrace 0, 1 \rbrace ^{V^{train} \times G}\) collects all the \(y_{i}^g\) confidence values; \({\varvec{p}}\in {\mathbb {R}}^{G}\) collects the estimated probabilities \(p^g\), see Eq. (5). The loss function \(J_l\left( {\varvec{Y}},{\varvec{p}}\right) \) for the classifier \({\mathcal {G}}_l\) is defined analogously. Regarding the employed datasets: \({\mathbb {D}}^d_{train}\) is used to train the NN by back-propagating the classification error; \({\mathbb {D}}^d_{val}\) is used to possibly interrupt the training in case of overfitting, but not to modify the NN weights; \({\mathbb {D}}^d_{test}\) is used to verify the prediction capabilities of the NN, after the training phase has been performed. The same splitting applies to the data used for training \({\mathcal {G}}_{l}\).
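A compact NumPy sketch of the classification head and of the loss of Eq. (6), consistent with the notation above, is reported below for illustration purposes.

```python
# Sketch of the classification head and of the loss of Eq. (6), following the notation
# in the text: global average pooling of the last feature maps, scores s_g = theta_g^T b,
# softmax probabilities p_g, and the cross entropy averaged over the training instances.
import numpy as np

def classification_head(F_R3, Theta):
    """F_R3: (N_3, L_3) feature maps after the last ReLU; Theta: (G, N_3) class weights."""
    b = F_R3.mean(axis=1)                 # global average pooling -> vector of size N_3
    s = Theta @ b                         # one score per class    -> vector of size G
    e = np.exp(s - s.max())               # numerically stable softmax
    return e / e.sum()

def cross_entropy(Y, P):
    """Y: (V, G) one-hot targets y_i^g; P: (V, G) predicted probabilities p^g."""
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
```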
In order to assess the offline phase of the proposed procedure, we have tested \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) on their respective test sets \({\mathbb {D}}_{test}^{d}\) and \({\mathbb {D}}_{test}^l\) (steps OFF-5#1 and OFF-5#2 of Fig. 1). The number of times \({\mathbb {D}}^d_{train}\) and \({\mathbb {D}}^l_{train}\) are evaluated during the training of \({\mathcal {G}}_d\) and \({\mathcal {G}}_l\) corresponds to the number of epochs: in this work, we have bounded the maximum number of allowed epochs to 1500. We have also provided the possibility of an early stop of the training when, after having performed at least 750 epochs, the validation loss has not decreased three times in a row. To control the training process, a learning rate \(\xi \) is usually introduced to scale the correction of the NN weights provided by back-propagating the classification error. In our case, the learning rate has been forced to decrease linearly with the number of epochs, moving from \(10^{-3}\) at the beginning of the training to \(10^{-4}\) at its end [32]. After having performed at least 750 epochs, an additional factor \(\zeta =1/\root 3 \of {2}\) is used to scale down the learning rate if the loss function \(J\left( {\varvec{Y}},{\varvec{p}}\right) \) is not reduced within the successive 100 epochs, as suggested in [32]. Random subsamples (also called minibatches) of the data points belonging to the training set are employed for the sake of gradient evaluation when running the Adam optimization method [27, 31]. Hyperparameters setting The setting of the NN hyperparameters, namely the dimensions of the kernels \(h_i\) and the number of feature maps \(N_i\), is done according to [14, 32]. In this work, we choose \(h_1=8\), \(h_2=5\), \(h_3=3\) as kernel dimensions for the three convolutional layers. Since no zero-padding has been employed, the dimension of the time series is progressively reduced when passing through the convolutional layer \({\mathcal {L}}_i\), from \(L_{i-1}\) to \(L_{i} = L_{i-1}-h_i+1\). Accordingly, considering the parameters and the length of the time series used in this work, the dimension reduction related to a single convolutional layer is on the order of \(1\%\). We have verified that the classification accuracy is barely affected by this reduction and, more generally, by the use of zero-padding. It is possible to further improve the NN performances by operating a (necessarily problem-dependent) finer tuning of the NN hyperparameters, but only at the cost of a time-consuming repeated evaluation of the NN outcomes. The number of filters \(N_i\) to be adopted depends on the complexity of the classification task: the more complex the classification, the higher the number of filters needed. However, increasing the number of filters beyond a certain threshold, which depends on the problem complexity and the task to be performed, has no effect on the prediction capabilities of the NN; indeed, the risk would be to increase computational costs, and to overfit the training dataset. Therefore, it looks convenient to initially employ a small number of filters, and then increase it if the NN performs poorly during the training phase. A possible choice, suggested in [14] independently of the dataset to be analysed, is \(N_1 = 128\), \(N_2 = 256\), and \(N_3 = 128\).
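A possible TensorFlow/Keras rendering of the training schedule described in the previous subsection (linear decay of the learning rate from \(10^{-3}\) to \(10^{-4}\), early stop on the validation loss activated only after 750 epochs) is sketched below; the additional \(\zeta\)-based reduction of the learning rate is omitted for brevity, and the callback code is a reconstruction rather than the authors' implementation.

```python
# A possible TensorFlow/Keras rendering of the schedule above; this is a reconstruction,
# not the authors' code, and the zeta-based reduction of the learning rate is omitted.
import tensorflow as tf

MAX_EPOCHS = 1500

def linear_decay(epoch, lr):
    # learning rate decreasing linearly from 1e-3 (first epoch) to 1e-4 (last epoch)
    return 1e-3 + (1e-4 - 1e-3) * epoch / (MAX_EPOCHS - 1)

class LateEarlyStopping(tf.keras.callbacks.Callback):
    """Stop when val_loss has not decreased `patience` times in a row,
    but only after at least `warmup` epochs have been performed."""
    def __init__(self, warmup=750, patience=3):
        super().__init__()
        self.warmup, self.patience = warmup, patience
        self.best, self.fails = float("inf"), 0
    def on_epoch_end(self, epoch, logs=None):
        val_loss = (logs or {}).get("val_loss")
        if val_loss is None:
            return
        if val_loss < self.best:
            self.best, self.fails = val_loss, 0
        else:
            self.fails += 1
        if epoch + 1 >= self.warmup and self.fails >= self.patience:
            self.model.stop_training = True

callbacks = [tf.keras.callbacks.LearningRateScheduler(linear_decay), LateEarlyStopping()]
# to be passed to model.fit(..., epochs=MAX_EPOCHS, callbacks=callbacks)
```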
Here we have kept the proportion \(N_1 = N\), \(N_2 = 2 N\), and \(N_3 = N\) as the filter sequence, and verified that increasing N beyond \(N=16\) does not affect the NN performances. To carry out the comparison of FCN architectures with one or two convolutional branches, we have kept \(N=16\) independently of the classification task. Linear elastic shear model of an eight-story building with constant story mass and constant story stiffness Numerical results Dataset construction The proposed methodology is now assessed through the numerical benchmark shown in Fig. 5, originally proposed in [33]. No real experimental measurements have been allowed for in the analysis; measurement noise has instead been introduced by corrupting the monitored structural response with uncorrelated random signals featuring different Signal to Noise Ratio (SNR) levels, to also assess the effect of sensor accuracy on the capability of the proposed approach. Further details are provided below. The considered structure is an idealised eight-story shear building model, featuring a constant floor mass of \(m = 625~\text {t}\) and a constant inter-story stiffness of \(k^{sh} = 10^6~\text {kN/m}\). The proposed SHM strategy has been designed to handle signals related to different types of damage-sensitive structural responses, characterized by different magnitude and sampling rate. Hence, in the following both the horizontal and the vertical motions of each story are allowed for and recorded. The longitudinal stiffness of the columns has been set to \(k^{ax} = 10^8~\text {kN/m}\), and a slenderness (given by the ratio between their length and thickness) of 10 has been assumed for the same columns. The numerical model employs \(M=16\) dofs (8 in the x direction and 8 in the z direction), and \(N_0 = 16\) virtual sensors are used to measure the noise-free displacements \({\varvec{r}}_n\) (collecting both the horizontal displacements \({\varvec{r}}_n\), \(n=1,\ldots ,8\), and the vertical displacements \({\varvec{r}}_n\), \(n=9,\ldots ,16\)) at all the story levels. Although a non-classical damping was originally proposed in [33], the relevant effect on system identification or model updating has been shown to be marginal if the structure is continuously excited during the monitoring stage, see e.g. [2, 34]. Therefore, in this feasibility study no damping has been taken into account. The dofs are numbered from 1 for the ground floor up to 8 for the eighth floor in both directions. Due to the building geometry, eight different damage scenarios \({\varvec{d}}(1), \ldots , {\varvec{d}}(8)\) can be considered, each one characterized by a reduction of \(25\%\) of one inter-story stiffness only, that is, $$\begin{aligned} {\varvec{d}}\left( g \right) = \left\{ \begin{array}{ll} 0.75 k_j &{} \quad \text{ if } j= g \ \text{ or } j = g +8 \\ k_j &{} \quad \text{ otherwise } \end{array} \right. \end{aligned}$$ $$\begin{aligned} k_j = \left\{ \begin{array}{ll} k^{sh} &{} \quad \text{ if } j= 1, \ldots , 8 \\ k^{ax} &{} \quad \text{ if } j= 9, \ldots , 16. \\ \end{array} \right. \end{aligned}$$ The label g is used to denote each damage scenario, ranging from 1 for the first floor up to 8 for the eighth floor; by convention, \({\varvec{d}}(0)\) refers to the undamaged case. Before assessing the classification capability of the NN, a parametric analysis has been carried out to check the sensitivity to damage of the vibration frequencies.
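A SciPy sketch of this parametric modal analysis is reported below: it assembles the mass and stiffness matrices of the shear frame (floor mass 625 t, inter-story stiffness \(10^6\) kN/m) and solves the generalized eigenvalue problem, for the undamaged configuration and for a 25% stiffness reduction at one chosen inter-story.

```python
# SciPy sketch of the parametric modal analysis behind Table 1: shear frequencies of the
# eight-story frame (floor mass 625 t, inter-story stiffness 1e6 kN/m), undamaged and with
# a 25% stiffness reduction at one chosen inter-story.
import numpy as np
from scipy.linalg import eigh

def shear_frequencies(k_stories, m_floor=625e3):
    """k_stories: the 8 inter-story stiffnesses in N/m; returns the frequencies in Hz."""
    n = len(k_stories)
    M = m_floor * np.eye(n)
    K = np.zeros((n, n))
    for j, kj in enumerate(k_stories):     # spring j links floor j to the floor below (or ground)
        K[j, j] += kj
        if j > 0:
            K[j - 1, j - 1] += kj
            K[j, j - 1] -= kj
            K[j - 1, j] -= kj
    w2 = eigh(K, M, eigvals_only=True)     # generalized eigenproblem K*phi = w^2 * M*phi
    return np.sqrt(w2) / (2.0 * np.pi)

k_sh = np.full(8, 1e9)                     # 1e6 kN/m expressed in N/m
f_undamaged = shear_frequencies(k_sh)
k_damaged = k_sh.copy()
k_damaged[0] *= 0.75                       # scenario d(1): 25% reduction at the first inter-story
f_damaged = shear_frequencies(k_damaged)
```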
Table 1 collects the results regarding the horizontal motion; for the analysed system, the axial frequencies can be obtained by scaling the reported frequencies by a factor 10. Any considered damage state reduces all the frequencies, though the variation is rather limited even with a stiffness reduction by \(25\%\), see Table 1. The capability to perform damage localization just by exploiting these data can be largely ineffective, since some trends in the table, such as the monotonic dependence of the frequencies of a vibration mode on the damage inter-story, can be hardly recognized. As proposed in [2, 25], the shape of the vibration modes—in particular that of the fundamental one in the case of a building featuring constant mass and stiffness at each story as for the case at hand—should be taken into account in the analysis, in order to localise and quantify damage. As previously remarked, employing FCN allows us not only to analyse separately each recorded signal, but also to exploit their interplay. Moreover, even if the sensitivity to damage of displacements in horizontal and vertical directions is the same, their joint use enabled by the FCN can lead to an improvement of the NN performances. Table 1 Shear vibration frequencies of the considered eight-story building, for the undamaged case (0) and under different damage scenarios, each one featuring a reduction by \(25\%\) of the stiffness at the inter-story corresponding to the scenario label Due to the different range of values of vibration frequencies in the case of horizontal or vertical excitation of the structure, the axial response turns out to be richer in high-frequency vibrations. To correctly record the signals, the sampling rates have been set to 66.7 Hz to monitor the horizontal vibrations, and 667 Hz to monitor the vertical vibrations. Even for the higher vibration frequencies, output signals are assumed to be not distorted by the accelerometers: the transfer function of the sensor itself has to be very close to 1 for frequencies up to the mentioned values, so that the amplitude of sensor output very well matches the real structural response to be locally measured. If the structural vibration frequencies or the sampling rates get too close to the internal resonance frequency of the sensor, for some specific applications different, ad-hoc designed devices will be selected. In the analysis, each instance is made up by two multivariate time series, one for each excitation type, referring to different time intervals: \(I=[0,10]s\) for the shear case and \(I=[0,1]s\) for the axial case, respectively. Accordingly, the time series lengths are equal to \(L^1_0 = 667\) and to \(L^2_0 = 667\) for both the displacements in x and z direction. This benchmark has been exploited to test the FCN architecture with either one convolutional branch or two convolutional branches (see Figs. 2 , 3). Indeed, what we are going to assess is the NN ability to perform the fusion of the information extracted through the concatenation layer, rather than the capacity to deal with time series of different lengths. 
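For illustration, the two-branch FCN assessed here can be sketched with TensorFlow/Keras as follows, using the hyperparameters reported above (kernel sizes 8, 5, 3; filters \(N\), \(2N\), \(N\) with \(N=16\); no zero-padding; global average pooling; softmax classifier); this is a reconstruction consistent with the description in the text, not the authors' released code.

```python
# TensorFlow/Keras sketch of the two-branch FCN, consistent with the description in the
# text (kernel sizes 8, 5, 3; filters N, 2N, N with N = 16; no zero-padding; global average
# pooling; softmax classifier); a reconstruction, not the authors' released code.
import tensorflow as tf
from tensorflow.keras import layers

def conv_branch(inputs, N=16):
    x = inputs
    for filters, kernel in zip((N, 2 * N, N), (8, 5, 3)):
        x = layers.Conv1D(filters, kernel, padding="valid")(x)   # L_i = L_{i-1} - h_i + 1
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return layers.GlobalAveragePooling1D()(x)

def build_two_branch_fcn(L1=667, C1=8, L2=667, C2=8, n_classes=9, N=16):
    in_sh = layers.Input(shape=(L1, C1), name="shear_displacements")
    in_ax = layers.Input(shape=(L2, C2), name="axial_displacements")
    fused = layers.Concatenate()([conv_branch(in_sh, N), conv_branch(in_ax, N)])
    out = layers.Dense(n_classes, activation="softmax")(fused)
    model = tf.keras.Model([in_sh, in_ax], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",   # Eq. (6) with integer labels
                  metrics=["accuracy"])
    return model

model = build_two_branch_fcn()             # n_classes = 9 for localization, 2 for detection
```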
Two load types have been considered: first, we have excited the structure with lateral and vertical loads applied at each story and characterised by narrow frequency ranges, randomly sampled from an interval including, but not limited to, the structural frequencies; then we have applied, once again at each story, a white noise, assessing both the case in which all the shear frequencies have been excited, and the one in which just some of them have been covered by the noise frequency spectrum. With these two load types, we have been able to assess the NN performances in two different cases: case 1 (sinusoidal load case), in which the applied load is characterized by only few (a priori, random) frequencies; case 2 (white noise load case), in which the applied load is characterized by a higher number of (a priori, random) frequencies, lying in a given range. This latter case corresponds to the one of random vibrations, for instance due to low-energy seismicity of natural or anthropic (urban) source [35], and is frequently adopted in literature, see e.g. [36]; the characteristic frequency range of seismic vibrations is site-dependent, being determined by the geographical and geological properties of the site. For example, in deep soft basins, the seismic vibrations are richer in low frequency components with respect to the ones in rock sites. For this reason, without any site characterization, it makes sense to assume more than a single frequency spectrum for the random vibrations. Case 1 (sinusoidal load case) In this first analysis, two different load combinations in the horizontal (x) and vertical (z) directions have been considered, to affect both the shear and axial vibration modes of the building. For each direction, the loads applied to the stories of the structure are given by the sum of two sinusoidal functions, whose amplitudes and time variations have been randomly generated. This expression for the load has been adopted to keep its description simple and, in comparison with single sinusoidal component case, to increase the set of frequencies that excite the structure. The applied load \({\varvec{l}} = [{\varvec{l}}^{sh}, {\varvec{l}}^{ax} ]\) reads: $$\begin{aligned} {l}^{sh}_{i} \left( t, \varvec{\eta }^{sh}_l \right)&= \sum _{j=1}^{2} F_i^{sh}\gamma ^{sh}_{i,j} \text {sin}(2 \pi f^{sh}_{j} t), \qquad i=1,\ldots ,8 ~, \\ {l}^{ax}_{i} \left( t, \varvec{\eta }^{ax}_l \right)&= \sum _{j=1}^{2} F_i^{ax}\gamma ^{ax}_{j} \text {sin}(2 \pi f^{ax}_{j} t), \qquad i=1,\ldots ,8 ~, \end{aligned}$$ where: \(l^{sh}_{i}\left( t, \varvec{\eta }^{sh}_l \right) \) and \(l^{ax}_{i}\left( t, \varvec{\eta }^{ax}_l\right) \) are the amplitudes of the horizontal and vertical loads acting on the i-th floor; \(F_i^{sh}=10^4\) kN and \(F_i^{ax}=10^3\) kN are scaling parameters used to set the magnitude of the applied loads; \(\varvec{\eta }^{sh}_l = [\gamma ^{sh}, f^{sh} ]\) and \(\varvec{\eta }^{ax}_l = [\gamma ^{ax}, f^{ax} ]\); \(\gamma ^{sh} \in {\mathbb {R}}\) and \(\gamma ^{ax} \in {\mathbb {R}}\) are random scaling factors; \(f^{sh}, f^{ax} >0\) set the frequencies of the sinusoidal components (see Table 2 for the adopted random generation rules). 
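A NumPy sketch of this load generation is given below; the sampling intervals used for the frequencies and scaling factors are placeholders standing in for the rules of Table 2, and the dependence of \(\gamma^{sh}\) on the dof is omitted for brevity.

```python
# NumPy sketch of the case-1 load histories (sum of two random sinusoids per direction).
# The sampling intervals for the frequencies and scaling factors are placeholders for the
# rules of Table 2, and the dependence of gamma^sh on the dof is omitted for brevity.
import numpy as np

def sinusoidal_load(t, F, rng, f_range=(0.5, 70.0), gamma_range=(-1.0, 1.0)):
    """One floor and one direction: l(t) = sum_j F * gamma_j * sin(2*pi*f_j*t), j = 1, 2."""
    f = rng.uniform(*f_range, size=2)            # frequencies of the two components
    gamma = rng.uniform(*gamma_range, size=2)    # random scaling factors
    return F * (gamma[:, None] * np.sin(2.0 * np.pi * f[:, None] * t)).sum(axis=0)

rng = np.random.default_rng(1)
t_sh = np.linspace(0.0, 10.0, 667)               # I = [0, 10] s sampled at about 66.7 Hz
t_ax = np.linspace(0.0, 1.0, 667)                # I = [0, 1] s sampled at about 667 Hz
l_sh = np.stack([sinusoidal_load(t_sh, 1e4, rng) for _ in range(8)])   # kN, one row per floor
l_ax = np.stack([sinusoidal_load(t_ax, 1e3, rng) for _ in range(8)])
```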
Table 2 Adopted random generation rules for the parameters \(\varvec{\eta }^{1sh}_l\) and \(\varvec{\eta }^{1ax}_l\) tuning the frequency and the magnitude of the applied sinusoidal load components in the x (a) and z (b) directions respectively The two sets of values adopted for the generation rule of \(f^{sh}\) and \(f^{ax}\) are chosen on the basis of the structural frequencies that could be excited both in the horizontal and vertical directions. At the same time, thanks to the adopted sampling rule, \(f^{sh}\) and \(f^{ax}\) may exceed these frequency ranges, producing instances in which the shear frequencies and/or the axial frequencies of the structure are not excited. Regarding the generation rule of the scaling parameter \(\gamma ^{sh}\), its dependency on the dofs of the structure through the factor \(\gamma ^{dof}\) has been introduced in Table 2 in order to mimic the load distribution usually considered in a preliminary design process, when the shear behaviour of a regular building is evaluated. Keeping in mind that our principal interest here is to assess the prediction capacities of the NN architecture, this choice has enabled us to obtain displacement time series similar to the ones expected during the monitoring of the structure, although adopting a very simple generation rule for the applied lateral loads. Some examples of the time evolutions of the generated loads, applied to the first floor of the structure (hence of \(l_1^{sh}\) and \(l_1^{ax}\)), are shown in Fig. 6. Examples of time evolutions of the loads (case 1) applied to the first floor of the building in the x (left column) and z (right column) directions. For the sake of visualisation, the sketched time interval for the loads applied in the x direction has been restricted to \(I=\left[ 0,2.5\right] s\) Through Eq. (2), we have added a measurement noise to mimic the output of a real monitoring system. For the sake of simplicity, the covariance matrix \(\varvec{\Sigma }_{\epsilon } \in {\mathbb {R}}^{16 \times 16}\) of such noise has been assumed to be diagonal, i.e. \(\varvec{\Sigma }_{\epsilon } = \sigma ^2 {\mathbb {I}}\) where \(\sigma ^2\) is the variance of the measurement error \(\epsilon \) in the horizontal and vertical directions for each floor, and \({\mathbb {I}} \in {\mathbb {R}}^{16 \times 16}\) is the identity matrix. Two sources of randomness have been assumed for the noise, due to environmental effects and to the transmission of the electrical signal. Their effects are superimposed in the covariance matrix with diagonal entries respectively amounting to \({\sigma }_{env}^2\) and \({\sigma }_{el}^2\). Example of time evolutions of x displacements for stories 1, 4, 8 with SNR\(=15\) dB (from a–f) and SNR\(=10\) dB (from g–l), undamaged state. Low-noise case: \(f_{1,2}^{sh}=\left( 21.1, 69.2 \right) \), \(\gamma _{1,2}^{sh}=\left( -0.058,-0.199\right) \). High-noise case: \(f_{1,2}^{sh}=\left( 14.5, 2.36 \right) \), \(\gamma _{1,2}^{sh}=\left( 0.025,-0.159\right) \). Orange lines represent \({\varvec{u}}\), whereas black lines stand for \({\varvec{r}}\), according to Eq. (2). On the right side, a closer view for each left side plot is reported Example of time evolutions of z displacements for stories 1, 4, 8 with SNR\(=15\) dB (from a–f) and SNR\(=10\) dB (from g–l), undamaged state. Low-noise case: \(f_{1,2}^{ax}=\left( 32.8, 28.2 \right) \), \(\gamma _{1,2}^{ax}=\left( 1.38,1.38\right) \). 
High-noise case: \(f_{1,2}^{ax}=\left( 15.5, 22.0 \right) \), \(\gamma _{1,2}^{ax}=\left( 1.133,-1.140\right) \). Orange lines represent \({\varvec{u}}\), whereas black lines stand for \({\varvec{r}}\), according to Eq. (2). On the right side, a closer view for each left side plot is reported The environmental noise has been assumed to induce vibrations of the same amplitude and/or to affect in the same way the converted electrical signals, independently of the building floor. Given that horizontal motions at the top of the building are in general greater than displacements at the lower levels, this assumption leads small-amplitude signals to be more affected, in relative terms, by the environmental noise. This is reasonable if we assume that the localised disturbances that arise because of the surrounding environment have the same magnitude independently of the building level. Example of time evolutions of displacements in the x direction of the 8-th story for SNR\(=10\) dB, with \(f_{1,2}^{sh}=\left( 14.5, 2.36 \right) \), \(\gamma _{1,2}^{sh}=\left( 0.025,-0.159\right) \), in the undamaged scenario (a) and all possible damage scenarios (b–i). Orange lines represent \({\varvec{u}}\), whereas black lines stand for \({\varvec{r}}\), according to Eq. (2). To show the effects of damage on the structural dynamics, the black dotted lines in b–i report the noise-free structural dynamics related to the undamaged scenario Examples of time evolutions of displacements in the z direction of the 8-th story for SNR\(=10\) dB, with \(f_{1,2}^{ax}=\left( 15.5, 22.0 \right) \), \(\gamma _{1,2}^{ax}=\left( 1.133,-1.140\right) \), in the undamaged scenario (a) and all possible damage scenarios (b–i). Orange lines represent \({\varvec{u}}\), whereas black lines stand for \({\varvec{r}}\), according to Eq. (2). To show the effects of damage on the structural dynamics, the black dotted lines in b–i report the noise-free structural dynamics related to the undamaged scenario Regarding the electrical disturbance, the same noise level has been assumed in both the x and z directions, in spite of the usually different technical specifications for sensors measuring displacements with different magnitude. This means that the electrical disturbances have the same effect, in statistical terms, on the measurement outcomes in the horizontal direction \({u}^{sh}_i\) and in the vertical direction \({u}^{ax}_i\). Figures 7 and 8 respectively show examples of time evolutions of horizontal and vertical displacements, to highlight the effects of the above assumptions on the structural signals. These displacement components always refer to the undamaged case, and to the load conditions specified in the captions. In line with the above assumptions, it can be noted that the displacements of the 8th story are less affected by noise than the ones of the 1st story. Due to the random generation of the applied load, different structural frequencies are excited in each simulation. To provide different scenarios also in terms of sensor accuracy (see also [37]), two levels of SNR of 15 dB and 10 dB have been adopted. The SNR is a summary indicator, referring to the overall level of noise corruption for the displacements in one direction. Still referring to Figs. 7 and 8, differences in terms of corruption levels between the two sensor accuracy scenarios can be appreciated. To build the dataset required for the NN training, the procedure described so far has been adopted for all the damage scenarios.
Figures 9 and 10 respectively show the effects of damage on \({u}^{sh}_8\) and \({u}^{ax}_8\), highlighting the sensitivity of this output to the handled damage state. To better highlight this sensitivity, the time evolutions in Figs. 9 and 10 are provided for \(I= \left[ 0,2.5\right] \)s and \(I= \left[ 0,0.25\right] \)s only, even though \(I= \left[ 0,10\right] \)s and \(I=\left[ 0,1\right] \)s have been adopted for the NN training. Drifts from the responses relevant to the undamaged case can be observed when the damage scenarios refer to the stiffness reduction of the lowest stories; however, it looks nearly impossible, in general, to perform any classification of the damage scenarios without an effectively trained classifier. White noise load case, \(f_{min}=15\) and \(f_{max}=17\) Hz. Time evolutions (left column) and Power Spectral Density (right column) of the forces applied to all the building stories in x (first row) and z direction (second row) White noise load case, \(f_{min} = 5\) and \(f_{max} = 7\) Hz. Time evolutions (left column) and Power Spectral Density (right column) of the forces applied to all the building stories in x (first row) and z direction (second row) Case 2 (white noise load case) In the second load case we have accounted for random vibrations caused e.g. by low-energy seismicity [36]. The applied loads \({\varvec{l}} = [{\varvec{l}}^{sh}, {\varvec{l}}^{ax}]\), with \(i=1,\ldots ,8\), at each floor and each time instant are obtained by first sampling the values from a normal distribution \({\mathcal {N}}\left( 0, 10^4 \right) \) and then low-pass filtering them with a "roll-off" set between frequencies \(f_{min}\) and \(f_{max}\). Two different scenarios have been considered for the frequency range of the applied excitations: \(f_{min}=15\) and \(f_{max}=17\) Hz; \(f_{min}=5\) and \(f_{max}=7\) Hz. In the first case all the shear modes and the first axial mode have been excited; in the second case, just the first three shear modes and no axial frequencies have been excited, see Table 1. Figures 11 and 12 respectively provide an overview of the simulated forces for the two cases. Dataset composition and NN training We now detail the construction of the employed datasets and the NN training phase. Each of the two classifiers has been trained on a different dataset, made of instances generated by evaluating the physics-based model for different loading and damage conditions. Each instance is made up of \(N_0 = 16\) time series recordings of displacements (in the two directions, for each of the 8 floors) of length \(L_0 = 667\). Due to the assumed shear-type behavior of the building, all the points belonging to each rigid floor share the same accelerations and displacements; hence, there is no need to plug in specific optimal strategies to locate sensors in the network, which could instead be of interest in case of very localized damage events breaking the validity of the rigid floor assumption. Two global datasets \({\mathbb {D}}^d\) and \({\mathbb {D}}^l\) made of \(V=4608\) instances each have been generated, and then split into a training, a validation and a testing set, thus yielding \({\mathbb {D}}^d = {\mathbb {D}}^d_{train} \cup {\mathbb {D}}^d_{val} \cup {\mathbb {D}}^d_{test}\) and \({\mathbb {D}}^l = {\mathbb {D}}^l_{train} \cup {\mathbb {D}}^l_{val} \cup {\mathbb {D}}^l_{test}\), with \(V = V^{train} + V^{val} + V^{test}\) in both cases.
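The band-limited load generation of case 2 described above could be realized, for instance, as in the following sketch; the Butterworth low-pass design and its order are assumptions, since the filter type implementing the prescribed roll-off is not specified.

```python
# One possible realization of the case-2 load histories: Gaussian white noise with variance
# 1e4, low-pass filtered with a roll-off between f_min and f_max. The Butterworth design and
# its order are assumptions, since the filter type is not specified in the text.
import numpy as np
from scipy import signal

def band_limited_noise(n_samples, fs, f_min, f_max, std=100.0, rng=None):
    """f_max only documents the intended end of the transition band; the filter order is
    chosen so that the response is strongly attenuated beyond it."""
    rng = rng or np.random.default_rng()
    white = std * rng.standard_normal(n_samples)        # samples from N(0, 1e4)
    sos = signal.butter(8, f_min, btype="low", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, white)

fs_sh = 66.7                                            # Hz, sampling rate of the shear records
load_15_17 = band_limited_noise(667, fs_sh, f_min=15.0, f_max=17.0)
load_5_7 = band_limited_noise(667, fs_sh, f_min=5.0, f_max=7.0)
```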
For the splitting of the dataset \({\mathbb {D}}^d\) into training \({\mathbb {D}}^d_{train}\), validation \({\mathbb {D}}^d_{val}\) and test \({\mathbb {D}}^d_{test}\) sets, no specific rules are available, and only some heuristics can be used – see, e.g., [27]. We have thus employed \(75\%\) of V to train and validate the NN (\(V^{train}\) and \(V^{val}\)), and the remaining \(25\%\) (\(V^{test}\)) to test it. Within the first subset, \(75\%\) of the instances have been in turn allocated for training, and the remaining \(25\%\) for validation. The final dataset subdivision then reads: \(V^{train}=56.25\% V\), \(V^{val}=18.75\% V\), and \(V^{test}=25\%V\). The splitting of \({\mathbb {D}}^l\) has been done identically. The large number of instances employed for validation and test has allowed us to perform a robust assessment of the NN generalization capabilities. This has been done without limiting the information content that can be employed for the NN training; in fact, the dataset dimensions can be arbitrarily enlarged, if necessary, through a synthetic generation of new instances, still keeping the same proportions. During the training, an equal number of instances \(V_g^{train} = V^{train} / G\) related to each damage scenario \(g=0,\ldots , 8\) (the undamaged case has been considered, too, in addition to the \( G=8\) possible cases of damage) has been provided to the NN, to avoid the construction of a biased dataset \({\mathbb {D}}^d_{train}\); the same has been done for \({\mathbb {D}}^l_{train}\). In this way, we indeed prevent the NN from being biased toward the class labels that have been more frequently presented in the training stage. There are no specific rules to set \(V_g^{train}\) (and, therefore, the overall dimension \(V_g = V/G\) of simulated cases for each damage scenario) a priori. Only a few theoretical studies have provided some recommendations for specific cases, see, e.g., [38]; however, they are not applicable to FCNs. In general, the problem complexity and the employed NN architecture must be taken into account on a case-by-case basis. For this reason, we have evaluated the \({\mathcal {G}}_{d}\) and \({\mathcal {G}}_{l}\) classifier accuracies \(A_d\) and \(A_{l}\) on the validation sets \({\mathbb {D}}^d_{val}\) and \({\mathbb {D}}^l_{val}\), and the training time, at varying \(V^{train}_g\). We have then chosen the best dataset size according to a tradeoff between the two aforementioned indicators, keeping in mind that the time required to generate a dataset and to train the NN both scale linearly with \(V^{train}_g\). The \({\mathcal {G}}_{d}\) classifier accuracy is defined as the ratio \(A_d = {V_{\star }^{val}}/{V^{val}}\), where \(V_{\star }^{val}\) is the number of instances of \({\mathbb {D}}^d_{val}\) which are correctly classified by \({\mathcal {G}}_{d}\); the \({\mathcal {G}}_{l}\) classifier accuracy \(A_l\) is defined in a similar way. Damage localization, case 1. Dependence on \(V_g\) of the accuracy \(A_l\) of the classifier \({\mathcal {G}}_l\) Let us now see how we have determined the overall dataset size V by applying the heuristic approach previously discussed. In Fig. 13, the accuracy \(A_{l}\) at varying values of \(V_g\) is reported, by considering the load case 1. By increasing \(V_g\) from 256 to 384, \(A_{l}\) is highly affected, while a further increase yields a smaller gain in accuracy.
The non-monotonic variation of \(A_{l}\) with respect to \(V_g\) is due to the randomness of the procedure, and in particular to the initialization of the weights of the convolutional filters. For the above reasons, we have adopted \(V_g = 512\) during the training phase. Treating the damage detection task for case 1, a total number of \(V = 9216\) instances have been generated. Half of the instances refer to the undamaged conditions, half to damaged conditions. Each damage scenario is equally represented (\(V_g = 512\) instances each). Regarding instead the damage localization task, \(V = 4608\) and \(V_g = 512\) (including the undamaged case \(g=0\)) have been adopted. Still adopting the discussed heuristic criterion for the determination of the overall dataset dimension, \(V = 4096\) has been used for the damage detection task when the white noise load case is treated. Once again, half of the instances refer to the undamaged conditions, half to damaged conditions. Each damage scenario is equally represented (\(V_g = 128\) instances each). Regarding the damage localization task, \(V = 4608\) and \(V_g = 128\) (including the undamaged case \(g=0\)) have been adopted. Classification outcomes We now report the numerical results obtained for the two load cases, and for the two required tasks of damage detection and damage localization. The obtained classification outcomes are affected by the NN architecture, either with one or two convolutional branches, depending on whether the horizontal and vertical sensing are both considered or not. In particular, when treating the damage localization task in the presence of the white noise load condition, we will also try to assess the impact of each input channel \({\mathcal {F}}^{n}_0\) on the overall NN accuracy. Useful indications about the quality of the training can be derived from the behavior of the loss functions \(J_d\left( {\varvec{Y}},{\varvec{p}}\right) \) and \(J_l\left( {\varvec{Y}},{\varvec{p}}\right) \)—see Eq. (6)—of \({\mathcal {G}}_d\) and \({\mathcal {G}}_l\), and of the accuracies \(A_d\) and \(A_l\) on the training and validation sets (\({\mathbb {D}}^d_{train}\) and \({\mathbb {D}}^d_{val}\) for \({\mathcal {G}}_d\); \({\mathbb {D}}^l_{train}\) and \({\mathbb {D}}^l_{val}\) for \({\mathcal {G}}_l\)) as a function of the number of iterations. The latter depends on both the number of epochs and the minibatch size chosen for the training.Footnote 3 To evaluate the NN performances, the adopted indices are still \(A_d\) and \(A_l\), yet evaluated on \({\mathbb {D}}_{test}^d\) and \({\mathbb {D}}^l_{test}\). These indices are always compared against the ones produced by a random guess, equal to 0.5 for \({\mathcal {G}}_d\), and to \(1/9=0.111\) for \({\mathcal {G}}_{l}\). For the damage localization case, the misclassification is measured by a confusion matrix in which the rows correspond to the target classes and the columns to the NN predictions. Damage detection and localization in case 1—sinusoidal load case In Table 3 the accuracies \(A_d\) of \({\mathcal {G}}_d\) on \({\mathbb {D}}^d_{test}\) for the two considered noise levels (SNR\(=15\) dB and SNR\(=10\) dB) are reported. NN architectures with both one and two convolutional branches have been tested. Table 3 Damage detection, case 1 The classifier \({\mathcal {G}}_d\) reaches \(A_d = 0.879\) for SNR\(=15\) dB and \(A_d = 0.775\) for SNR\(=10\) dB.
These outcomes, obtained on high-noise datasets, show the potential of the proposed approach in view of real engineering applications. Indeed, noise is a principal concern especially when pervasive and low-cost microelectromechanical systems (MEMS) sensor networks are employed [39], so that the possibility of handling it through FCNs may enhance the application of MEMS networks. Moreover, thanks to our procedure, we have been able to avoid the data pre-processing required by any ML approach based on problem-specific features. Figure 14 reports the evolution of the training and validation loss for the datasets with SNR\(=15\) dB and SNR\(=10\) dB. The iteration number accounts for the number of times the NN weights are modified during the training process. The depicted training and validation loss functions refer to the case in which a two-branch convolutional architecture has been employed to detect damage. The several spikes observed in both the loss and accuracy graphs are due to the stochastic nature of the training algorithm. During the early stages of the training, the NN displays the most significant gains in terms of classification accuracy, while further increasing the number of iterations only yields a limited effect on the generalization capabilities of the NN. Due to this lack of improvement, the early-stopping criterion has eventually terminated the training; a minimal sketch of such a training loop is reported below.
Fig. 14 Damage detection, case 1. Training and validation of the two-branch convolutional architecture: evolution of the loss \(J_d \left( {\varvec{Y}}, {\varvec{p}} \right) \) on \({\mathbb {D}}^d_{train}\) and \({\mathbb {D}}^d_{val}\) (left column), and of \({\mathcal {G}}_d\) accuracy \(A_d\) (right column) on \({\mathbb {D}}^d_{train}\) and on \({\mathbb {D}}^d_{val}\), both for the SNR\(=15\) dB case (top row) and for the SNR\(=10\) dB case (bottom row)
Moving to the damage localization task, Table 4 collects the outcomes of \({\mathcal {G}}_l\) on \({\mathbb {D}}^l_{test}\) obtained for the two considered noise levels.
Table 4 Damage localization, case 1
The results show that the NN performances benefit from the employment of a two-branch architecture: \(A_{l}\) increases, compared to the best outcome of the single convolutional branch architecture, from 0.769 to 0.812 for the SNR\(=15\) dB case, and from 0.654 to 0.707 for the SNR\(=10\) dB case. This means that the NN has succeeded in performing a data fusion of the extracted information for the sake of classification. The values of \(A_{d}\) and \(A_{l}\) are quite close, despite the greater complexity of the damage localization problem; this might be due to the intrinsic capability of the FCN to detect correlations between different sensor recordings, allowing a correct damage localization. Figure 15 reports the evolution of the training and validation loss functions on \({\mathbb {D}}^l_{train}\) and \({\mathbb {D}}^l_{val}\) for the datasets with SNR\(=15\) dB and SNR\(=10\) dB, in the case where a two-branch convolutional architecture has been employed. Compared with Fig. 14, a smaller difference between the training and validation curves, in terms of both loss and accuracy, can be highlighted. This is due to the greater complexity of the damage localization task, which requires the computational resources of the NN to be exploited entirely. Indeed, the same number of filters \(N_1\), \(N_2\) and \(N_3\) has been used for both classification tasks, in spite of their different complexity.
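The training protocol just described, with the validation loss monitored and the training stopped once it no longer improves, can be reproduced with TensorFlow/Keras (the library adopted in the paper [26]) as in the following sketch; the patience, the number of epochs and the minibatch size are placeholders of ours, not the values used by the authors.

```python
import tensorflow as tf

def train_classifier(model, train_data, val_data, epochs=250, batch_size=32):
    """Train a compiled tf.keras classifier with early stopping on the
    validation loss; `train_data` and `val_data` are (signals, labels) pairs.
    """
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=20, restore_best_weights=True)
    history = model.fit(
        train_data[0], train_data[1],
        validation_data=val_data,
        epochs=epochs, batch_size=batch_size,
        callbacks=[early_stop], verbose=0)
    # history.history stores the loss and accuracy curves of the kind
    # plotted in Figs. 14 and 15
    return history
```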
On the other hand, we expect that \(A_d\) on \({\mathbb {D}}^d_{test}\), reported in Table 3, would not be affected by reducing the number of filters. This conclusion can be reached by looking at Fig. 14 and observing that, during the last stages of the training, \(A_d\) on \({\mathbb {D}}^d_{train}\) is always greater than that obtained on \({\mathbb {D}}^d_{val}\).
Fig. 15 Damage localization, case 1. Training and validation of the two-branch convolutional architecture: evolution of the loss \(J_l \left( {\varvec{Y}}, {\varvec{p}} \right) \) on \({\mathbb {D}}^l_{train}\) and on \({\mathbb {D}}^l_{val}\) (left column), and of \({\mathcal {G}}_l\) accuracy \(A_l\) (right column) on \({\mathbb {D}}^l_{train}\) and on \({\mathbb {D}}^l_{val}\), both for the SNR\(=15\) dB case (top row) and for the SNR\(=10\) dB case (bottom row)
Fig. 16 Damage localization, case 1. Confusion matrices for the 15 dB (left picture) and 10 dB (right picture) SNR datasets
In Fig. 16, the confusion matrices related to the two datasets (SNR\(=15\) dB and SNR\(=10\) dB) are reported. Most of the errors concern the classification of the damage scenarios in which the inter-story stiffness of the highest floors has been reduced, as shown by the entries of the 7-th and 8-th rows and columns of the matrices. This outcome is not surprising if we consider that these damage scenarios only induce small variations in the shear vibration frequencies. Moreover, by looking at Figs. 9 and 10, we can remark that the time evolution of the structural motions under these damage scenarios cannot be easily distinguished from the undamaged case.
Damage detection and localization in case 2
We now consider the outcomes of the trained classifiers in the case where a random disturbance is applied to the structural system. Regarding the damage detection task, with this type of excitation the NN is able to distinguish between undamaged and damaged instances almost perfectly (see Table 5). Indeed, \(A_d=0.999\) and \(A_d=0.998\) have been reached by the two-branch convolutional architecture when \(f_{min}=15\) and \(f_{max}=17\) Hz, or \(f_{min}=5\) and \(f_{max}=7\) Hz, have been selected as frequency ranges for the applied lateral and vertical forces. We next consider the NN outcomes for the damage localization task. With this type of excitation, the NN is able to accomplish an extremely accurate classification of the damage scenarios, reaching \(A_{l}=0.986\) and \(A_{l}=0.993\) when \(f_{min}=15\) and \(f_{max}=17\) Hz or \(f_{min}=5\) and \(f_{max}=7\) Hz have been used, respectively. In the former case, the best classification performances have been obtained by the two-branch convolutional architecture, as shown in Table 6. In the latter case, the NN employing as input \({\mathcal {F}}_{*} = \{{u}^{sh}_i\}_{i=1}^8\) provides the best classification result. The better performance of the NN employing \({\mathcal {F}}_{*} = \{{u}^{sh}_i\}_{i=1}^8\) rather than \({\mathcal {F}}_{*} = \{{u}^{ax}_i\}_{i=1}^8\) is likely due to the fact that, in the latter case, no axial frequencies have been excited by the applied load, as remarked in Case 2 (white noise load case). However, this also shows that the data fusion operated by the two-branch convolutional architecture has been only partially able to select the most important information required for the damage localization task. Nevertheless, very good results have been reached by also employing \({\mathcal {F}}_{*} = \{{u}^{ax}_i\}_{i=1}^8\) (see Table 6).
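Confusion matrices of the kind reported in Fig. 16, with the target classes on the rows and the NN predictions on the columns, can be assembled as in the sketch below. The row-wise normalization is a choice made here only for readability and may differ from the one adopted in the figures.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def normalized_confusion(true_labels, predicted_labels, n_classes=9):
    """Entry (g, h) is the fraction of instances of damage scenario g
    (g = 0 denoting the undamaged case) classified as scenario h."""
    cm = confusion_matrix(true_labels, predicted_labels,
                          labels=list(range(n_classes)))
    row_sums = np.maximum(cm.sum(axis=1, keepdims=True), 1)
    return cm / row_sums
```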
We highlight the effect of each incoming signal on the classification outcomes (see Table 7), since the accuracy \(A_{l}\) on \({\mathbb {D}}^l_{test}\) changes for different numbers of input signals \(N_0\). The results refer to the case in which only some of the displacements \({u}^{sh}_i\), \(i=1,\ldots ,8\), have been considered, and \(f_{min}=5\) and \(f_{max}=7\) Hz. The corresponding confusion matrices are sketched in Fig. 17, showing that the classification error related to a damage scenario \(g\) is reduced when the corresponding \({u}^{sh}_g\), that is, the signal acquired on the floor whose inter-story stiffness has been reduced, is used as input for the NN.
Fig. 17 \({\mathcal {G}}_{l}\) confusion matrices for different numbers \(N_0\) of input channels \({\mathcal {F}}^{i}_{*}\). Case 2, \(f_{min}=5\) and \(f_{max}=7\) Hz
Conclusions
In this paper, we have investigated a new strategy for real-time structural health monitoring, treating damage detection and localization as classification tasks [3], and framing the proposed procedure in the family of SBC approaches [4]. We have proposed to employ fully convolutional networks to analyse time series coming from a set of sensors. Fully convolutional network architectures differing in the number of convolutional branches have been exploited to deal with datasets including time signals of different length and sampling rate. Convolutional layers have been shown to enable the automatic extraction of features to be used for the classification task at hand. The neural network architecture has been trained in a supervised manner on data generated through the numerical solution of a physics-based model of the monitored structure under different damage scenarios. In the considered numerical benchmarks, we have obtained extremely good performances concerning both damage detection and damage localization, even in the presence of noise, when the applied loads can be characterized either (i) in terms of a few (a priori, random) frequencies, or (ii) by a higher number of frequencies, within a given range. Especially in the second case, the outcomes of the NN classifier have shown the potential of the proposed procedure in view of the application to real-life cases. In future works, we aim to employ the proposed architecture to deal with data coming from real monitoring systems, tackling the main limitation of the proposed procedure, namely how faithfully the simulated data mimic the real structural response. This is a well-known problem in the machine learning community [40]. By coupling recurrent layer branches with the proposed convolutional ones, we expect to further increase the NN performances. As further steps, we will try to exploit model order reduction techniques for the dataset construction, to extend the proposed methodology to more complex structural configurations and damage scenarios, and to design the set of sensors according to a Bayesian optimization technique [20, 41, 42].
Both the numerical benchmark and the neural network architecture have been exhaustively described. The reader can verify the performance of the proposed method by running analogous numerical experiments.
For simplicity, the number E of elements coincides with the number M of degrees of freedom; however, the generalization to the case in which \(E \ne M\) is straightforward.
For this reason, the acquired signals are denoted with the same notation \({\mathbb {U}}^*\) employed for the recordings previously used to test the FCN to highlight that, for the time being, the experimental signals are taken as realizations of the noise-corrupted outputs of the numerical model. In other words, if the dataset is composed by 100 instances and a minibatch size of 10 instances is adopted, after the first epoch the iteration number is equal to 10. Chang PC, Flatau A, Liu SC. Health monitoring of civil infrastructure. Struct Health Monit. 2003;2(3):257–67. https://doi.org/10.1177/1475921703036169. Eftekhar Azam S, Mariani S. Online damage detection in structural systems via dynamic inverse analysis: a recursive bayesian approach. Eng Struct. 2018;159:28–45. https://doi.org/10.1016/j.engstruct.2017.12.031. Farrar CR, Doebling SW, Nix DA. Vibration-based structural damage identification. Philos Trans. 2001;359(1778):131–49. https://doi.org/10.1098/rsta.2000.0717. Article MATH Google Scholar Taddei T, Penn J, Yano M, Patera A. Simulation-based classification; a model-order-reduction approach for structural health monitoring. Arch Comput Methods Eng. 2018;25(1):23–45. Doebling SW, Farrar C, Prime M. A summary review of vibration-based damage identification methods. Shock Vibrat Digest. 1998;30:91–105. https://doi.org/10.1177/058310249803000201. Farrar C, Worden K. Structural health monitoring a machine learning perspective. Hoboken: Wiley; 2013. https://doi.org/10.1002/9781118443118. Sohn H, Worden K, Farrar CR. Statistical damage classification under changing environmental and operational conditions. J Intell Mater Syst Struct. 2002;13(9):561–74. https://doi.org/10.1106/104538902030904. Entezami A, Shariatmadar H. Damage localization under ambient excitations and non-stationary vibration signals by a new hybrid algorithm for feature extraction and multivariate distance correlation methods. Struct Health Monit. 2019;18(2):347–75. https://doi.org/10.1177/1475921718754372. Eftekhar AS. Online damage detection in structural systems. Cham: Springer; 2014. https://doi.org/10.1007/978-3-319-02559-9. Bouzenad AE, Mountassir M, Yaacoubi S, Dahmene F, Koabaz M, Buchheit L, Ke W. A semi-supervised based k-means algorithm for optimal guided waves structural health monitoring: a case study. Inventions. 2019;4:1. https://doi.org/10.3390/inventions4010017. Entezami A, Shariatmadar H. An unsupervised learning approach by novel damage indices in structural health monitoring for damage localization and quantification. Struct Health Monit. 2018;17(2):325–45. https://doi.org/10.1177/1475921717693572. Goldstein M, Uchida S. A comparative evaluation of unsupervised anomaly detection algorithms for multivariate data. PLOS ONE. 2016;11(4):1–31. https://doi.org/10.1371/journal.pone.0152173. Bigoni C, Hesthaven JS. Simulation-based anomaly detection and damage localization: an application to structural health monitoring. Comput Methods Appl Mech Eng. 2020;363:112896. https://doi.org/10.1016/j.cma.2020.112896. Wang Z, Yan W, Oates T. Time series classification from scratch with deep neural networks: a strong baseline. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), 14–19 May, Anchorage, 2017. p. 1578–85. https://doi.org/10.1109/IJCNN.2017.7966039. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504–7. https://doi.org/10.1126/science.1127647. Goodfellow I, Bengio Y, Courville A. Deep Learning. 
Boston: MIT Press; 2016. http://www.deeplearningbook.org. Pathirage CSN, Li J, Li L, Hao H, Liu W, Wang R. Development and application of a deep learning-based sparse autoencoder framework for structural damage identification. Struct Health Monit. 2019;18(1):103–22. https://doi.org/10.1177/1475921718800363. Choy WA. Structural health monitoring with deep learning. Lecture Notes in Engineering and Computer Science. In: Proceedings of The International MultiConference of Engineers and Computer Scientists. 2018. p. 557–60. Karim F, Majumdar S, Darabi H, Harford S. Multivariate LSTM-FCNs for time series classification. Neural Netw. 2019;116:237–45. https://doi.org/10.1016/j.neunet.2019.04.014. Capellari G, Chatzi E, Mariani S. Structural health monitoring sensor network optimization through bayesian experimental design. ASCE-ASME J Risk Uncertainty Eng Syst. 2018;4:04018016. https://doi.org/10.1061/AJRUA6.0000966. Wang Q, Ripamonti N, Hesthaven JS. Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism. J Comput Phys. 2020;410:109402. Eftekhar Azam S, Bagherinia M, Mariani S. Stochastic system identification via particle and sigma-point kalman filtering. Scientia Iranica. 2012;19:982–91. Teughels A, Maeck J, De Roeck G. Damage assessment by fe model updating using damage functions. Comput Struct. 2002;80:1869–79. Entezami A, Shariatmadar H. Damage localization under ambient excitations and non-stationary vibration signals by a new hybrid algorithm for feature extraction and multivariate distance correlation methods. Struct Health Monit. 2019;18:347–75. Eftekhar Azam S, Mariani S, Attari N. Online damage detection via a synergy of proper orthogonal decomposition and recursive bayesian filters. Nonlinear Dyn. 2017;89(2):1489–511. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org 2015. https://www.tensorflow.org/. Haykin S. Neural networks and learning machines. Upper Saddle River: Prentice Hall; 2009. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (ICML), 6-11 July, Lille, France 2015. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), 13–15 May, vol. 9. Chia Laguna Resort, Sardinia, Italy, 2010. p. 249–56. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: The IEEE conference on computer vision and pattern recognition (CVPR), 26 June–1 July, Boston, MA, 2015. p. 1–9. https://doi.org/10.1109/CVPR.2015.7298594. Kingma D, Ba J. Adam: A method for stochastic optimization. San Diego: University of Amsterdam; 2015. p. 1–13. Karim F, Majumdar S, Darabi H. Insights into lstm fully convolutional networks for time series classification. IEEE Access. 2019;7:67718–25. https://doi.org/10.1109/ACCESS.2019.2916828. 
De Callafon RA, Moaveni B, Conte JP, He X, Udd E. General realization algorithm for modal identification of linear dynamic systems. J Eng Mech. 2008;134(9):712–22. https://doi.org/10.1061/(ASCE)0733-9399(2008)134:9(712). Corigliano A, Mariani S. Parameter identification in explicit structural dynamics: performance of the extended kalman filter. Computer Methods Appl Mech Eng. 2004;193(36–38):3807–35. https://doi.org/10.1016/j.cma.2004.02.003. Bonnefoy-Claudet S, Cotton F, Bard P-Y. The nature of noise wavefield and its applications for site effects studies: a literature review. Earth-Sci Rev. 2006;79(3–4):205–27. Ivanovic SS, Trifunac MD, Todorovska M. Ambient vibration tests of structures-a review. ISET J Earthquake Technol. 2000;37(4):165–97. Capellari G, Chatzi E, Mariani S, Azam Eftekhar S. Optimal design of sensor networks for damage detection. Procedia Eng. 2017;199:1864–9. Raudys SJ, Jain AK. Small sample size effects in statistical pattern recognition: recommendations for practitioners. IEEE Trans Pattern Anal Mach Intell. 1991;13(3):252–64. Ribeiro RR, Lameiras RM. Evaluation of low-cost mems accelerometers for shm: frequency and damping identification of civil structures. Latin Am J Solids Struct. 2019;. https://doi.org/10.1590/1679-78255308. Ben-David S, Blitzer J, Crammer K, Kulesza A, Pereira F, Vaughan JW. A theory of learning from different domains. Mach Learn. 2010;79(1):151–75. https://doi.org/10.1007/s10994-009-5152-4. Capellari G, Chatzi E, Mariani S et al. An optimal sensor placement method for shm based on bayesian experimental design and polynomial chaos expansion. In: European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), June 5–10, Athens, Greece, 2016. p. 6272–82. Capellari G, Chatzi E, Mariani S. Cost-benefit optimization of structural health monitoring sensor networks. Sensors. 2018;18(7):2174. https://doi.org/10.3390/s18072174. The authors thank Andrea Opreni (Politecnico di Milano) for fruitful discussions about DL architectures. LR, SM and AC gratefully acknowledge the financial support from MIUR Project PRIN 15-2015LYYXA 8 "Multi-scale mechanical models for the design and optimization of microstructured smart materials and metamaterials". Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Piazza L. da Vinci 32, 20133, Milano, Italy Luca Rosafalco, Stefano Mariani & Alberto Corigliano MOX, Dipartimento di Matematica, Politecnico di Milano, Piazza L. da Vinci 32, 20133, Milano, Italy Andrea Manzoni Luca Rosafalco Stefano Mariani Alberto Corigliano The authors contributed equally to this work. All authors read and approved the final manuscript. Correspondence to Alberto Corigliano. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Rosafalco, L., Manzoni, A., Mariani, S. et al. Fully convolutional networks for structural health monitoring through multivariate time series classification. Adv. Model. and Simul. in Eng. Sci. 7, 38 (2020). https://doi.org/10.1186/s40323-020-00174-1
Keywords: Structural health monitoring, Damage localization, Time series analysis
Data Assimilation in Computational Mechanics – Recent Advances and New Trends
CommonCrawl
\begin{document} \title{ Riemann surfaces of second kind and effective finiteness theorems } \author{Burglind J\"oricke} \address{Max-Planck-Institute for Mathematics\\ Vivatsgasse 7, 53111 Bonn\\ Germany} \email{[email protected]} \keywords{finiteness theorems, Riemann surfaces of second kind, $3$-braids, torus bundles, Gromov's Oka principle.} \subjclass[2020]{Primary 32G13; Secondary 20F36, 32H35, 32Q56, 57Mxx} \begin{abstract} The Geometric Shafarevich Conjecture and the Theorem of de Franchis state the finiteness of the number of certain holomorphic objects on closed or punctured Riemann surfaces. The analog of these kind of theorems for Riemann surfaces of second kind is an estimate of the number of irreducible holomorphic objects up to homotopy (or isotopy, respectively). This analog can be interpreted as a quantitatve statement on the limitation for Gromov's Oka principle. For any finite open Riemann surface $X$ (maybe, of second kind) we give an effective upper bound for the number of irreducible holomorphic mappings up to homotopy from $X$ to the twice punctured complex plane, and an effective upper bound for the number of irreducible holomorphic torus bundles up to isotopy on such a Riemann surface. The bound depends on a conformal invariant of the Riemann surface. If $X_{\sigma}$ is the $\sigma$-neighbourhood of a skeleton of an open Riemann surface with finitely generated fundamental group, then the number of irreducible holomorphic mappings up to homotopy from $X_{\sigma}$ to the twice punctured complex plane grows exponentially in $\frac{1}{\sigma}$. \end{abstract} \maketitle \centerline \today \section{Introduction and statements of results}\label{sec:1} It seems that the oldest finiteness theorem for mappings between complex manifolds is the following theorem, which was published by de Franchis \cite{Fr} in 1913. \noindent {\bf Theorem A (de Franchis).} {\it For closed connected Riemann surfaces $X$ and $Y$ with $Y$ of genus at least $2$ there are at most finitely many non-constant holomorphic mappings from $X$ to $Y$.} There is a more comprehensive Theorem in this spirit. \noindent {\bf Theorem B (de Franchis-Severi).} {\it For a closed connected Riemann surface $X$ there are (up to isomorphism) only finitely many non-constant holomorphic mappings $f:X\to Y$ where $Y$ ranges over all closed Riemann surfaces of genus at least $2$.} A finiteness theorem which became more famous because of its relation to number theory was conjectured by Shafarevich \cite{Sh}. \noindent {\bf Theorem C (Geometric Shafarevich Conjecture.)} {\it For a given compact or punctured Riemann surface $X$ and given non-negative numbers $\textsf{g}$ and $\textsf{m}$ such that $2\textsf{g}-2+\textsf{m} >0$ there are only finitely many locally holomorphically non-trivial holomorphic fiber bundles over $X$ with fiber of type $(\textsf{g},\textsf{m})$.} A connected closed Riemann surface (or a smooth connected closed surface) is called of type $(\textsf{g},\textsf{m})$, if it has genus ${\sf{g}}$ and is equipped with ${\sf{m}}$ distinguished points. Recall that a closed Riemann surface with a finite number of points removed is called a punctured Riemann surface. The removed points are called punctures. Sometimes it is convenient to associate a punctured Riemann surface to a Riemann surface of type $(\textsf{g},\textsf{m})$ by removing the distinguished points. A Riemann surface is called finite if its fundamental group is finitely generated, and open if no connected component is compact. 
A finite connected Riemann surface is called of first kind, if it is a closed or a punctured Riemann surface, otherwise it is called of second kind. Each finite connected open Riemann surface $X$ is conformally equivalent to a domain (denoted again by $X$) on a closed Riemann surface $X^c$ such that each connected component of the complement $X^c \setminus X$ is either a point or a closed topological disc with smooth boundary \cite{Sto}. The connected components of the complement will be called holes. A finite Riemann surface $X$ is of first kind, if and only if all connected components of $X^c \setminus X$ are points. We will say that a connected finite open Riemann surface has only thick ends if all connected components of $X^c \setminus X$ are closed topological discs. Each finite Riemann surface whose universal covering is equal to the upper half-plane $\mathbb{C}_+$ (a finite hyperbolic Riemann surface for short) is conformally equivalent to the quotient of $\mathbb{C}_+$ by a Fuchsian group. The Riemann surface is of first kind if and only if the Fuchsian group is of first kind (\cite{Kra}, II, Theorem 3.2). We will not make use of Fuchsian groups here. Theorem C was conjectured by Shafarevich \cite{Sh} in the case of compact base and fibers of type $(\textsf{g},0)$. It was proved by Parshin \cite{Pa} in the case of compact base and fibers of type $(\textsf{g},0),\,\textsf{g}\geq 2,$ and by Arakelov \cite{Ar} for punctured Riemann surfaces as base and fibers of type $(\textsf{g},0)$. Imayoshi and Shiga \cite{IS} gave a proof of the quoted version using Teichm\"uller theory. The statement of Theorem C ''almost'' contains the so called Finiteness Theorem of Sections which is also called the Geometric Mordell conjecture (see \cite{Mc}), giving an important conceptional connection between geometry and number theory. For more details we refer to the surveys by C.McMullen \cite{Mc} and B.Mazur \cite{Ma}. Theorem A is a consequence of Theorem C, and Theorem A has analogs for the source $X$ and the target $Y$ being punctured Riemann surfaces. Indeed, we may associate to any holomorphic mapping $f:X\to Y$ of Theorem A the bundle over $X$ with fiber over $x\in X$ equal to $Y$ with distinguished point $\{f(x)\}$. Thus, the fibers are of type $(\textsf{g},1)$. A holomorphic self-isomorphism of a locally holomorphically non-trivial $(\textsf{g},1)$-bundle may lead to a new holomorphic mapping from $X$ to $Y$, but there are only finitely many different holomorphic self-isomorphisms. We will consider here analogs of Theorems A and C for the case when the base $X$ is a Riemann surface of second kind. Notice that finite hyperbolic Riemann surfaces of second kind are interesting from the point of view of spectral theory of the Laplace operator with respect to the hyperbolic metric (see also \cite{Bo}). There are interesting relations to scattering theory and (the Hausdoff dimension of) the limit set of the Fuchsian group defining $X$. The Theorems A and C do not hold literally if the base $X$ is of second kind. If the base is a Riemann surface of second kind the problem to be considered is the finiteness of the number of irreducible isotopy classes (homotopy classes, respectively) containing holomorphic objects. In case the base is a punctured Riemann surface this is equivalent to the finiteness of the number of holomorphic objects. For more detail see Sections \ref{sec:2} and \ref{sec:3}. We will prove finiteness theorems with effective estimates for Riemann surfaces of second kind. 
The estimates depend on a conformal invariant of the base manifold. To define the invariant we recall Ahlfors' definition of extremal length (see \cite{A1}). For an annulus $A=\{0\leq r<|z|<R\leq \infty\}$ (and for any open set that is conformally equivalent to $A$) the extremal length equals $\frac{2\pi}{\log\frac{R}{r}}$. For an open rectangle $R= \{z=x+iy:0<x<{\sf b},\,0<y<{\sf a}\,\}$ in the plane with sides parallel to the axes, and with horizontal side length $\textsf{b}$ and vertical side length $\textsf{a}$ the extremal length equals $\lambda(R)=\frac{\sf a}{\sf b}$. For a conformal mapping $\omega:R\to U$ of the rectangle $R$ onto a domain $U\subset \mathbb{C}$ the image $U$ is called a curvilinear rectangle, if $\omega$ extends to a continuous mapping on the closure $\bar R$, and the restriction to each (closed) side of $R$ is a homeomorphism onto its image. The images of the vertical (horizontal, respectively) sides of $R$ are called the vertical (horizontal, respectively) curvilinear sides of the curvilinear rectangle $\omega(R)$. The extremal length of the curvilinear rectangle $U$ equals the extremal length of $R$. (See \cite{A1}). Let $X$ be a connected open Riemann surface of genus $g\geq 0$ with $m+1$ holes, $m\geq 0$, equipped with a base point $q_0$. The fundamental group $\pi_1(X,q_0)$ of $X$ is a free group in $2g+m$ generators. We describe now the conformal invariant of the Riemann surface $X$ that will appear in the mentioned estimate. We take a bouquet of non-contractible circles $S$ in $X$ with base point $q_0$, such that $q_0$ is the only common point of any pair of circles in $S$. Moreover, $S$ is the union of simple closed oriented curves $\alpha_j,\,\beta_j$, $ j=1,\ldots ,g',$ and $\gamma_{k}$, $k=1,\ldots,m'$, with base point $q_0$ with the following property. Labeling the rays of the loops emerging from the base point $q_0$ by $\alpha_j^-,\,\beta_j^-$ $\gamma_j^-$, and the incoming rays by $\alpha_j^+,\,\beta_j^+$ $\gamma_j^+$, we require that when moving in counterclockwise direction along a small circle around $q_0$ we meet the rays in the order \begin{align*} \ldots, \alpha^-_j,\beta^-_j,\alpha^+_j,\beta^+_j,\ldots, \gamma^-_k,\gamma^+_k,\ldots\;. \end{align*} (See Figure \ref{fig8.1}.) \begin{figure} \caption{A standard bouquet of circles for a connected finite open Riemann surface} \label{fig8.1} \end{figure} We call such a bouquet of circles a standard bouquet of circles contained in $X$. If the collection $\mathcal{E}$ of elements of the fundamental group $\pi_1(X,q_0)$ represented by the collection of curves in $S$ is a system of generators of $\pi_1(X,q_0)$ (then in particular, $g'=g$, $m'=m$), we call $S$ a standard bouquet of circles for $X$, and say that the system $\mathcal{E }$ is associated to a standard bouquet of circles for $X$. The existence of a standard bouquet of circles for a connected finite open Riemann surface can be seen by looking at a fundamental polygon of the compact Riemann surface $X^c$ that contains a lift of each hole of $X$. The pairs of curves $\alpha_j$, $\beta_j$ correspond to the handles of $X^c$. Each curve $\gamma_k,\, k=1,\ldots,m$, surrounds a connected component $\mathcal{C}_k$ of $X^c\setminus X$ counterclockwise. More precisely, $\gamma_{k}$ is contractible in $X\cup \mathcal{C}_k$ and divides $X$ into two connected components, one of them containing $\mathcal{C}_k$. Moreover, moving along $\gamma_k$ we see $\mathcal{C}_k$ on the left. 
Vice versa, if a connected open Riemann surface $X$ contains a standard bouquet of circles consisting of $g$ pairs of curves $\alpha_j$,$\beta_j$, and $m$ curves $\gamma_k$ as above, that represent a system of generators of $\pi_1(X,q_0)$, then $X$ has genus $g$ and $m$ holes. To see this we cut the compact Riemann surface $X^c$ along the $\alpha_j$, $\beta_j$ and obtain a fundamental polygon which corresponds to a closed Riemann surface of genus $g$. The $\gamma_k$ are contractible in $X^c$, hence, each of them surrounds a hole. Label the generators $\mathcal{E}\subset \pi_1(X,q_0)$ of a standard bouquet of circles for $X$ as follows. The elements $e_{2j-1,0}\in \pi_1(X,q_0),$ $j=1,\ldots ,g,$ are represented by $\alpha_j$, the elements $e_{2j,0}\in \pi_1(X,q_0) , \, j=1,\ldots ,g,$ are represented by $\beta_j$, and the elements $e_{2g+k,0}\in \pi_1(X,q_0), \, k=1,\ldots ,m,$ of $\pi_1(X,q_0)$ are represented by $\gamma_k$. A standard bouquet of circles for a connected finite open Riemann surface is a deformation retract of $X$. We will fix the system of generators $\mathcal{E}$ of $\pi_1(X,q_0)$ throughout the paper. Let $\tilde X$ be the universal covering of $X$. For each element $e_0 \in \pi_1(X,q_0)$ we consider the subgroup $\langle e_0\rangle$ of $\pi_1(X,q_0)$ generated by $e_0$. Let $\sigma(e_0)$ be the covering transformation corresponding to $e_0$, and $\langle \sigma( e_0) \rangle$ the group generated by $\sigma(e_0)$. \begin{defn}\label{defn0} Denote by $\mathcal{E}_j,\, j=2,\ldots,10,$ the set of primitive elements of $\pi_1(X,q_0)$ which can be written as product of at most $j$ factors with each factor being either an element of $\mathcal{E}$ or an element of $\mathcal{E}^{-1}$, the set of inverses of elements of $\mathcal{E}$. Define $\lambda_j=\lambda_j(X)$ as the maximum over $e_0 \in \mathcal{E}_j$ of the extremal length of the annulus $\tilde{X} \diagup \langle \sigma( e_0) \rangle$. \end{defn} The quantity $\lambda_7(X)$ (for mappings to the twice punctured complex plane), or $\lambda_{10}(X)$ (for $(1,1)$-bundles) is the mentioned conformal invariant. Let $E$ be a finite subset of the Riemann sphere $\mathbb{P}^1$ which contains at least three points. Let $X$ be a finite open Riemann surface with non-trivial fundamental group. A continuous map $f:X \to \mathbb{P}^1\setminus E$ is reducible if it is homotopic (as a mapping to $\mathbb{P}^1\setminus E$) to a mapping whose image is contained in $D\setminus E$ for an open topological disc $D\subset \mathbb{P}^1$ with $E\setminus D$ containing at least two points of $E$. Otherwise the mapping is called irreducible. In the following theorem we take $E=\{-1,1,\infty\}$. We will often refer to $\mathbb{P}^1\setminus \{-1,1,\infty\}$ as the thrice punctured Riemann sphere or the twice punctured complex plane $\mathbb{C}\setminus \{-1,1\}$. Note that a continuous mapping from a Riemann surface to the twice punctured complex plane is reducible, iff it is homotopic to a mapping with image in a once punctured disc contained in $\mathbb{P}^1\setminus E$. (The puncture may be equal to $\infty$.) There are countably many non-homotopic reducible holomorphic mappings with target being the twice punctured complex plane and source being any finite open Riemann surface with only thick ends and non-trivial fundamental group (see the proof of Lemma 15 in \cite{Jo5}). On the other hand the following theorem holds. 
\begin{thm}\label{thm1} For each open connected Riemann surface $X$ of genus $g\geq0$ with $m+1\geq 1$ holes there are up to homotopy at most $3(\frac{3}{2}e^{24\pi \lambda_7(X)})^{2g+m}$ irreducible holomorphic mappings from $X$ into $Y\stackrel{def}=\mathbb{P}^1\setminus \{-1,1,\infty\}$. \end{thm} Notice that the Riemann surface $X$ is allowed to be of second kind. If $X$ is a torus with a hole, $\lambda_7(X)$ may be replaced by $\lambda_3(X)$. If $X$ is a planar domain, $\lambda_7(X)$ may be replaced by $\lambda_4(X)$ A holomorphic $(1,0)$-bundle is also called a holomorphic torus bundle. A holomorphic torus bundle equipped with a holomorphic section is also considered as a holomorphic $(1,1)$-bundle. The following lemma holds. \noindent {\bf Lemma D.} {\it A smooth $(0,1)$-bundle admits a smooth section. A holomorphic torus bundle is (smoothly) isotopic to a holomorphic torus bundle that admits a holomorphic section.} \noindent For a proof see \cite{Jo5}. \begin{thm}\label{thm2} Let $X$ be an open connected Riemann surface of genus $g\geq 0$ with $m+1\geq 1$ holes. Up to isotopy there are no more than $\big(2 \cdot 15^6\cdot\exp(36 \pi \lambda_{10}(X))\big)^{2g+m}$ irreducible holomorphic $(1,1)$-bundles over $X$. \end{thm} For the definition of irreducible $(\sf g,\sf m)$-bundles see Section \ref{sec:3} below. Since on each finite open Riemann surface with only thick ends and non-trivial fundamental group there are countably many non-homotopic reducible holomorphic mappings with target being the twice punctured complex plane, there are also countably many non-isotopic holomorphic $(1,1)$-bundles over each such Riemann surface (see Proposition \ref{prop1} below). We wish to point out that reducible $(\sf{g,m})$-bundles over finite open Riemann surfaces can be decomposed into irreducible bundle components, and each reducible bundle is determined by its bundle components up to commuting Dehn twists in the fiber over the base point. (For details see \cite{Jo5}.) Notice that Caporaso proved the existence of a uniform bound for the number of objects in Theorem C in case $X$ is a closed Riemann surface of genus $g$ with $m$ punctures, and the fibers are closed Riemann surfaces of genus $ \textsf{g} \geq 2$. The bound depends only on the numbers $g$, $\textsf{g}$ and $m$. Heier gave effective uniform estimates, but the constants are huge and depend in a complicated way on the parameters. Theorems \ref{thm1} and \ref{thm2} imply effective estimates for the number of locally holomorphically non-trivial holomorphic $(1,1)$-bundles over punctured Riemann surfaces, however, the constants depend also on the conformal type of the base. More precisely, the following corollaries hold. \begin{cor}\label{cor1a} There are no more than $3(\frac{3}{2}e^{24 \pi \lambda_7(X)})^{2g+m}$ non-constant holomorphic mappings from a Riemann surface $X$ of type $(g,m+1)$ to $\mathbb{P}^1\setminus \{-1,1,\infty\}$. \end{cor} \begin{cor}\label{cor1b} There are no more than $\big(2 \cdot 15^6\cdot\exp(36 \pi \lambda_{10}(X))\big)^{2g+m}$ locally holomorphically non-trivial holomorphic $(1,1)$-bundles over a Riemann surface $X$ of type $(g,m+1)$. \end{cor} The following examples demonstrate the different nature of the problem in the two cases, the case when the base is a punctured Riemann surface, and when it is a Riemann surface of second kind. \noindent {\bf Example 1.} There are no non-constant holomorphic mappings from a torus with one puncture to the twice punctured complex plane. 
Indeed, by Picard's Theorem each such mapping extends to a meromorphic mapping from the closed torus to the Riemann sphere. This implies that the preimage of the set $\{-1,1,\infty\}$ under the extended mapping must contain at least three points, which is impossible. The situation changes if $X$ is a torus with a large enough hole. Let $\alpha\geq 1$ and $\sigma \in (0,1)$. Consider the torus with a hole $T^{\alpha,\sigma}$ that is obtained from $\mathbb{\mathbb{C}}\diagup (\mathbb{Z} + i \alpha \mathbb{Z}),$ (with $\alpha\geq 1$ being a real number) by removing a closed geometric rectangle of vertical side length $\alpha-\sigma$ and horizontal side length $1-\sigma$ (i.e. we remove a closed subset that lifts to such a closed rectangle in $\mathbb{C}$). A fundamental domain for this Riemann surface is ''the golden cross on the Swedish flag'' turned by $\frac{\pi}{2}$ with width of the laths being $\sigma$ and length of the laths being $1$ and $\alpha$. \begin{prop}\label{prop1a} Up to homotopy there are at most $7 e^{3\cdot 2^4 \pi \frac{2\alpha +1}{\sigma}}$ irreducible holomorphic mappings from $T^{\alpha,\sigma}$ to the twice punctured complex plane. On the other hand, there are positive constants $c$, $C$, and $\sigma_0$ such that for any positive number $\sigma<\sigma_0$ and any $\alpha >1$ there are at least $ce^{C\frac{\alpha}{\sigma}}$ non-homotopic holomorphic mappings from $T^{\alpha,\sigma}$ to the twice punctured complex plane. \end{prop} \noindent {\bf Example 2.} There are only finitely many holomorphic maps from a thrice punctured Riemann sphere to another thrice punctured Riemann sphere. Indeed, after normalizing both, the source and the target space, by a M\"obius transformation we may assume that both are equal to $\mathbb{C} \setminus \{-1,1\}$. Each holomorphic map from $\mathbb{C} \setminus \{-1,1\}$ to itself extends to a meromorphic map from the Riemann sphere to itself, which maps the set $\{-1,1,\infty\}$ to itself and maps no other point to this set. By the Riemann-Hurwitz formula the meromorphic map takes each value exactly once. Indeed, suppose it takes each value $l$ times for a natural number $l$. Then each point in $\{-1,1,\infty\}$ has ramification index $l$. Apply the Riemann Hurwitz formula for the branched covering $ X=\mathbb{P}^1\to Y=\mathbb{P}^1$ of multiplicity $l$ $$ \chi(X)= l \cdot \chi(Y) -\sum_{x\in Y} (e_x-1). $$ Here $e_x$ is the ramification index at the point $x$. For the Euler characteristic we have $\chi(\mathbb{P}^1)=2$, and $\sum_{x\in Y} (e_x-1)\geq \sum_{x=-1,1,\infty} (e_x-1)=3\,(l-1)$. We obtain $2\leq 2\,l-3\,(l-1)$ which is possible only if $l=1$. We saw that each non-constant holomorphic mapping from $\mathbb{C}\setminus\{-1,1\}$ to itself extends to a conformal mapping from the Riemann sphere to the Riemann sphere that maps the set $ \{-1,1,\infty\}$ to itself. There are only finitely may such maps, each a M\"obius transformation commuting the three points. For Riemann surfaces of second kind the situation changes, as demonstrated in the following proposition. The proposition does not only concern the case when the Riemann surface equals $\mathbb{P}^1$ with three holes. We consider an open Riemann surface $X$ of genus $g$ with $m\geq 1$ holes. \begin{prop}\label{prop1b} Let $X$ be a connected finite open hyperbolic Riemann surface, that is equipped with a K\"ahler metric. Suppose $S$ is a standard bouquet of piecewise smooth circles in $X$ with base point $q_0$. 
We assume that $q_0$ is the only non-smooth point of the circles, and all tangent rays to circles in $S$ at $q_0$ divide a disc in the tangent space into equal sectors. Let $S_{\sigma}$ be the $\sigma$-neighbourhood of $S$ (in the K\"ahler metric on $X$). Then there exists a constant $\sigma_0>0$, and positive constants $C'$, $C''$, $c'$, $c''$, depending only on $X$, $S$ and the K\"ahler metric, such that for each positive $\sigma<\sigma_0$ the number $N_{S_{\sigma}}^{\mathbb{C}\setminus \{-1,1\}}$ of non-homotopic irreducible holomorphic mappings from $S_{\sigma}$ to the twice punctured complex plane satisfies the inequalities \begin{align}\label{eqabc} c'e^{\frac{c''}{\sigma}} \leq N_{S_{\sigma}}^{\mathbb{C}\setminus \{-1,1\}}\leq C'e^{\frac{C''}{\sigma}}\,. \end{align} \end{prop} The present results may be understood as quantitative statements with regard to limitations for Gromov's Oka principle. Gromov \cite{G} formulated his Oka principle as "an expression of an optimistic expectation with regard to the validity of the $h$-principle for holomorphic maps in the situation when the source manifold is Stein". Holomorphic maps $X\to Y$ from a complex manifold $X$ to a complex manifold $Y$ are said to satisfy the $h$-principle if each continuous map from $X$ to $Y$ is homotopic to a holomorphic map. We call a target manifold $Y$ a Gromov-Oka manifold if the $h$-principle holds for holomorphic maps from any Stein manifold to $Y$. Gromov \cite{G} gave a sufficient condition on a complex manifold $Y$ to be a Gromov-Oka manifold. The question of understanding Gromov-Oka manifolds received a lot of attention. It turned out to be fruitful to strengthen the requirement on the target $Y$ by combining the $h$-principle for holomorphic maps with a holomorphic approximation property. Manifolds $Y$ satisfying the stronger requirement are called Oka manifolds. For details and an account on modern development of Oka theory based on Oka manifolds see \cite{Forst}. The twice punctured complex plane $\mathbb{C}\setminus \{-1,1\}$ is not a Gromov-Oka manifold. Then the question becomes, what prevents a continuous map from a Stein manifold $X$ to $\mathbb{C}\setminus \{-1,1\}$ to be homotopic to a holomorphic map, and ''how many'' homotopy classes contain a holomorphic map? As for the first question in case the source manifold is a finite open Riemann surface $X$, Proposition \ref{prop2} below says that an irreducible map $X\to\mathbb{C}\setminus \{-1,1\}$ can only be homotopic to a holomorphic map, if the ''complexity'' of the monodromies of the map are compatible with conformal invariants of the source manifold. Theorem \ref{thm1} gives an upper bound related to the second question. Propositions \ref{prop1a} and \ref{prop1b} can be interpreted as statements related to the following question. Consider a family of Riemann surfaces $Y_{\sigma},\,\sigma\in (0,\sigma_0) $, obtained by continuously changing the conformal structure of a fixed Riemann surface. Determine the growth rate for $\sigma\to 0$ of the number of irreducible holomorphic mappings $X_\sigma\to \mathbb{C}\setminus\{-1,1\}$ up to homotopy. In Proposition \ref{prop1a} the family of Riemann surfaces depends also on a second parameter $\alpha$, and the growth rate is determined in $\alpha$ and $\sigma$. The proof of both propositions uses solutions of a $\overline{\partial}$-problem. The solution in the case of Proposition \ref{prop1a} uses a simple explicit formula. 
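As a simple illustration of how the bound of Theorem \ref{thm1} can be made explicit, consider the case where $X$ is the annulus $\{z\in\mathbb{C}: r<|z|<R\}$ with $0<r<R<\infty$, so that $g=0$ and $m=1$. The fundamental group $\pi_1(X,q_0)$ is generated by a single element $e_0$, represented by a circle separating the two boundary components, and the only primitive elements of $\pi_1(X,q_0)$ that can be written as a product of at most $7$ factors, each in $\mathcal{E}\cup\mathcal{E}^{-1}$, are $e_0$ and $e_0^{-1}$. Moreover, $\tilde{X}\diagup \langle \sigma(e_0)\rangle$ is conformally equivalent to $X$ itself, so that $\lambda_7(X)$ equals the extremal length $\frac{2\pi}{\log\frac{R}{r}}$ of the annulus. Theorem \ref{thm1} then bounds the number of homotopy classes of irreducible holomorphic mappings from $X$ to the twice punctured complex plane by $3\cdot\frac{3}{2}\,\exp\big(\tfrac{48\pi^2}{\log\frac{R}{r}}\big)$, a quantity that stays bounded as $\frac{R}{r}\to\infty$ and blows up as the annulus degenerates, i.e. as $\frac{R}{r}\to 1$.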
The author is grateful to B.Farb who suggested to use the concept of conformal module and extremal length for a proof of finiteness theorems, and to B.Berndtsson for proposing the kernel for solving the $\bar{\partial}$-problem that arises in the proof of Proposition \ref{prop1a}. The work on the paper was started while the author was visiting the Max-Planck-Institute and was finished during a stay at IHES. The author would like to thank these institutions for the support. The author is also indebted to Fanny Dufour for drawing the figures and to an anonymous referee whose critics helped to improve the overall quality of the paper. \section{preliminaries on mappings, coverings, and extremal length} \label{sec:1a} In this section we will prepare the proofs of the Theorems. \noindent {\bf The change of the base point.} Let $\mathcal{X}$ be a connected smooth open surface, and let $\alpha$ be an arc in $\mathcal{X}$ with initial point $x_0$ and terminating point $x$. Change the base point $x_0 \in \mathcal{X}$ along a curve $\alpha$ to the point $x \in \mathcal{X}$. This leads to an isomorphism $\mbox{Is}_{\alpha}: \pi_1(\mathcal{X},x_0) \to \pi_1(\mathcal{X},x)$ of fundamental groups induced by the correspondence $\gamma \to \alpha^{-1} \gamma \alpha$ for any loop $\gamma$ with base point $x_0$ and the arc $\alpha$ with initial point $x_0$ and terminating point $x$. We will denote the correspondence $\gamma \to \alpha^{-1} \gamma \alpha$ between curves also by $\mbox{Is}_{\alpha}$. We call two homomorphisms $h_j:G_1 \to G_2, \, j=1,2,\,$ from a group $G_1$ to a group $G_2$ conjugate if there is an element $g' \in G_2$ such that for each $g \in G_1$ the equality $h_2(g)= {g'}^{-1} h_1(g) g'$ holds. For two arcs $\alpha_1$ and $\alpha_2$ with initial point $x_0$ and terminating point $x$ we have $\alpha_2^{-1} \gamma \alpha_2= (\alpha_1^{-1}\alpha_2)^{-1} \alpha_1^{-1} \gamma \alpha_1 (\alpha_1^{-1}\alpha_2)$. Hence, the two isomorphisms $\mbox{Is}_{\alpha_1}$ and $\mbox{Is}_{\alpha_2}$ differ by conjugation with the element of $\pi_1(\mathcal{X},x)$ represented by $\alpha_1^{-1}\alpha_2$. Free homotopic curves are related by homotopy with fixed base point and an application of a homomorphism $\mbox{Is}_{\alpha}$ that is defined up to conjugation. Hence, free homotopy classes of curves can be identified with conjugacy classes of elements of the fundamental group $\pi_1(\mathcal{X},x_0)$ of $\mathcal{X}$. For two smooth manifolds $\mathcal{X}$ and $\mathcal{Y}$ with base points $x_0 \in \mathcal{X}$ and $y_0 \in \mathcal{Y}$ and a continuous mapping $F:\mathcal{X} \to \mathcal{Y}$ with $F(x_0)=y_0$ we denote by $F_*: \pi_1(\mathcal{X},x_0) \to \pi_1(\mathcal{Y},y_0) $ the induced map on fundamental groups. For each element $e_0\in \pi_1(\mathcal{X},x_0)$ the image $F_*(e_0)$ is called the monodromy along $e_0$, and the homomorphism $F_*$ is called the monodromy homomorphism corresponding to $F$. The homomorphism $F_*$ determines the homotopy class of $F$ with fixed base point in the source and fixed value at the base point. Consider a free homotopy $F_t, \, t \in (0,1)$, of homeomorphisms from $\mathcal{X}$ to $\mathcal{Y}$ such that the value $F_t(x_0)$ at the base point $x_0$ of the source space varies along a loop. Then the homomorphisms $(F_0)_*$ and $(F_1)_*$ are related by conjugation with the element of the fundamental group of $\mathcal{Y}$ represented by the loop. 
Using deformation retractions we see that each homomorphism $h: \pi_1(\mathcal{X},x_0) \to \pi_1(\mathcal{Y},y_0) $ equals $F_*$ for a continuous mapping $F:X\to Y$. Moreover, if two homomorphisms $h_j: \pi_1(\mathcal{X},x_0) \to \pi_1(\mathcal{Y},y_0),\, j=0,1, $ are related by conjugation, $h_1=e^{-1}h_2 e$ for an element $e\in \pi_1(\mathcal{Y},y_0)$, then there is a free homotopy $F_t$ of mappings $X\to Y$ such that $F_t(x_0)$ changes along a loop representing $e$ and $(F_0)_*=h_0$, $(F_1)_*=h_1$. Further, since the fundamental group $\pi_1(\mathcal{Y},y)$ with base point $y$ is related to the fundamental group $\pi_1(\mathcal{Y},y_0)$ with base point $y_0$ by an isomorphism determined up to conjugation we obtain the following theorem (see \cite{Ha},\cite{St}). \noindent {\bf Theorem E.} {\it The free homotopy classes of continuous mappings from $\mathcal{X}$ to $\mathcal{Y}$ are in one-to-one correspondence to the set of conjugacy classes of homomorphisms between the fundamental groups of $\mathcal{X}$ and $\mathcal{Y}$.} \noindent {\bf Extremal length.} The fundamental group $\pi_1 \stackrel{def}{=} \pi_1(\mathbb {C}\setminus \{-1,1\},0)$ is canonically isomorphic to the fundamental group $ \pi_1(\mathbb{C}\setminus \{-1,1\},q')$ for an arbitrary point $q' \in (-1,1)$. For the arc $\alpha$ defining the isomorphism we take the unique arc contained in $(-1,1)$ that joins $0$ and $q'$. The fundamental group $\pi_1(\mathbb{C}\setminus \{-1,1\},0)$ is a free group in two generators. We choose standard generators $a_1$ and $a_2$, where $a_1$ is represented by a simple closed curve with base point $0$ which surrounds $-1$ counterclockwise, and $a_2$ is represented by a simple closed curve with base point $0$ which surrounds $1$ counterclockwise. For $q' \in (-1,1)$ we also denote by $a_j$ the generator of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$ which is obtained from the respective standard generator of $\pi_1(\mathbb{C}\setminus\{-1,1\}, 0)$ by the standard isomorphism between fundamental groups with base point on $(-1,1)$. More detailed, $a_1$ is the generator of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$ which is represented by a loop with base point $q'$ that surrounds $-1$ counterclockwise, and $a_2$ is the generator of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$ which is represented by a loop with base point $q'$ that surrounds $1$ counterclockwise. We refer to $a_1$ and $a_2$ as to the standard generators of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$. Further, the group $ \pi_1(\mathbb {C}\setminus \{-1,1\},0)$ is canonically isomorphic to the relative fundamental group $\pi_1^{tr}(\mathbb {C}\setminus \{-1,1\}) \stackrel{def}{=}\pi_1(\mathbb {C}\setminus \{-1,1\},(-1,1))$ whose elements are homotopy classes of (not necessarily closed) curves in $\mathbb{C}\setminus \{-1,1\}$ with end points on the interval $(-1,1)$. We refer to $\pi_1^{tr}(\mathbb {C}\setminus \{-1,1\}) $ as fundamental group with totally real horizontal boundary values ($tr$-boundary values for short). For an element $w \in \pi_1(\mathbb{C}\setminus \{-1,1\}, q')$ with base point $q' \in (-1,1)$ we denote by $w_{tr}$ the element of the relative fundamental group $\pi_1^{tr}(\mathbb{C}\setminus \{-1,1\})$ with totally real boundary values, corresponding to $w$. For more details see \cite{Jo2}. Each element of a free group can be written uniquely as a reduced word in the generators. (A word is reduced if neighbouring terms are powers of different generators.) 
The degree (or word length) $d(w)$ of a reduced word $w$ in the generators of a free group is the sum of the absolute values of the powers of generators in the reduced word. If the word is the identity its degree is defined to be zero. We will identify elements of a free group with reduced words in generators of the group. For a rectangle $R$ let $f:R \to \mathbb{C} \setminus \{-1,1\}$ be a mapping which admits a continuous extension to the closure $\bar R$ (denoted again by $f$) which maps the (open) horizontal sides to $(-1,1)$. We say that the mapping $f$ represents an element $w_{tr} \in \pi_1^{tr}(\mathbb{C} \setminus \{-1,1\})$ if for each maximal vertical line segment contained in $R$ (i.e. $R$ intersected with a vertical line in $\mathbb{C}$) the restriction of $f$ to the closure of the line segment represents $w_{tr}$. The extremal length $\;\;\Lambda(w_{tr})\;\;$ of an element $\;\;w_{tr}\;\;$ in the relative fundamental group $\;\pi_1^{tr}(\mathbb{C}\setminus \{-1,1\})\;$ is defined as \begin{align}\label{eq1} \Lambda(w_{tr})\stackrel{def}{=}& \inf \{\lambda(R): R\, \mbox{ a rectangle which admits a holomorphic map to} \;\mathbb{C}\setminus \{-1,1\} \nonumber \\ & \,\mbox{ that represents}\; w_{tr}\}\,. \end{align} For an element $w\in \pi_1(\mathbb{C}\setminus \{-1,1\},q')$ and the associated element $w_{tr}$ we will also write $\Lambda_{tr}(w)$ instead of $\Lambda(w_{tr})$. Any reduced word $\;w\;$ in $\;\pi_1(\mathbb{C}\setminus \{-1,1\},q')\;$ can be uniquely decomposed into syllables. They are defined as follows. Each term $a_{j_i}^{k_i}$ with $|k_i|\geq 2$ is a syllable, and any maximal sequence of consecutive terms $a_{j_i}^{k_i}$, for which $|k_i|=1$ and all $k_i$ have the same sign, is a syllable (see \cite{Jo2}, \cite{Jo3}). Let $d_k$ be the degree of the $k$-th syllable from the left. (We consider each syllable as a reduced word in the elements of the fundamental group.) Put \begin{equation}\label{eq3+} \mathcal{L}_-(w)\stackrel{def}= \sum \log(3 d_k), \quad \mathcal{L}_+(w)\stackrel{def}= \sum \log(4 d_k)\,, \end{equation} where the sum runs over the degrees of all syllables of $w_{tr}$. Notice that $\mathcal{L}_{\pm}(w^{-1})=\mathcal{L}_{\pm}(w)$. We define $\mathcal{L}_-({\rm Id})=\mathcal{L}_+({\rm Id})=0$ for the identity ${\rm Id}$. We need the following theorem which is proved in \cite{Jo2} (see Theorem 1 there). \noindent {\bf Theorem F.} {\it If $w \in \pi_1(\mathbb{C}\setminus\{-1,1\},0)$ is not equal to a (trivial or non-trivial) power of $a_1$ or of $a_2$ then \begin{equation}\label{eq3} \frac{1}{ 2 \pi} \mathcal{L}_-(w)\leq \Lambda(w_{tr}) \leq 300 \mathcal{L}_+(w)\,. \end{equation} } \noindent {\bf Regular zero sets.} We will call a subset of a smooth manifold $\mathcal{X}$ a simple relatively closed curve if it is the connected component of a regular level set of a smooth real-valued function on $\mathcal{X}$. Let $\mathcal{X}$ be a connected finite open Riemann surface. Suppose the zero set $L$ of a non-constant smooth real valued function on $\mathcal{X}$ is regular. Each component of $L$ is either a simple closed curve or it can be parameterized by a continuous mapping $\ell:(-\infty,\infty)\to \mathcal{X}$. We call a component of the latter kind a simple relatively closed arc in $\mathcal{X}$. A relatively closed curve $\gamma$ in a connected finite open Riemann surface $\mathcal{X}$ is said to be contractible to a hole of $\mathcal{X}$, if the following holds. 
Consider $\mathcal{X}$ as domain $\mathcal{X}^c\setminus \cup\mathcal{C}_j$ on a closed Riemann surface $\mathcal{X}^c$. Here the $\mathcal{C}_j$ are the holes, each is either a closed topological disc with smooth boundary or a point. The condition is the following. For each pair $U_1$, $U_2$ of open subsets of $ \mathcal{X}^c$, $\cup \mathcal{C}_j\subset U_1\Subset U_2$, there exists a homotopy of $\gamma$ that fixes $\gamma\cap U_1$ and moves $\gamma$ into $U_2$. Taking for $U_2$ small enough neighbourhoods of $\cup \mathcal{C}_j$ we see that the homotopy moves $\gamma$ into an annulus adjacent to one of the holes. For each relatively compact domain $\mathcal{X}'\Subset \mathcal{X}$ in $\mathcal{X}$ there is a finite cover of $L\cap \overline{\mathcal{X}'}$ by open subsets $U_k$ of $\mathcal{X}$ such that each $L\cap U_k$ is connected. Each set $L\cap U_k$ is contained in a component of $L$. Hence, only finitely many connected components of $L$ intersect $\mathcal{X}'$. Let $L_0$ be a connected component of $L$ which is a simple relatively closed arc parameterized by $\ell_0:\mathbb{R}\to \mathcal{X}$. Since each set $L_0\cap U_k$ is connected it is the image of an interval under $\ell_0$. Take real numbers $t_0^-$ and $t_0^+$ such that all these intervals are contained in $(t_0^-,t_0^+)$. Then the images $\ell\big((-\infty,t_0^-)\big)$ and $\ell\big((t_0^+ ,+\infty)\big)$ are contained in $\mathcal{X}\setminus \mathcal{X}'$, maybe, in different components. Such parameters $t_0^-$ and $t_0^+$ can be found for each relatively compact deformation retract $\mathcal{X}'$ of $\mathcal{X}$. Hence for each relatively closed arc $L_0\subset L$ the set of limit points $L_0^+$ of $\ell_0(t)$ for $t\to \infty$ is contained in a boundary component of $\mathcal{X}$. Also, the set of limit points $L_0^-$ of $\ell_0(t)$ for $t\to -\infty$ is contained in a boundary component of $\mathcal{X}$. The boundary components may be equal or different. Moreover, if $\mathcal{X}'\Subset \mathcal{X}$ is a relatively compact domain in $\mathcal{X}$ which is a deformation retract of $\mathcal{X}$, and a connected component $L_0$ of $L$ does not intersect $\overline{\mathcal{X}'}$ then $L_0$ is contractible to a hole of $\mathcal{X}$. Indeed, $\mathcal{X}\setminus \overline{\mathcal{X}'}$ is the union of disjoint annuli, each of which is adjacent to a boundary component of $\mathcal{X}$, and the connected set $L_0$ must be contained in a single annulus. Further, denote by $L'$ the union of all connected components of $L$ that are simple relatively closed arcs. Consider those components $L_j$ of $L'$ that intersect $\mathcal{X}'$. There are finitely many such $L_j$. Parameterize each $L_j$ by a mapping $\ell_j:\mathbb{R}\to \mathcal{X}$. For each $j$ we let $[t_j^-,t_j^+]$ be a compact interval for which \begin{equation}\label{eq2d} \ell_j(\mathbb{R}\setminus [t_j^-,t_j^+]) \subset \mathcal{X}\setminus \overline{\mathcal{X}'}\,. \end{equation} Let $\mathcal{X}''$, $\mathcal{X}'\Subset \mathcal{X}''\Subset \mathcal{X}$, be a domain which is a deformation retract of $\mathcal{X}$ such that $\ell_j([t_j^-,t_j^+])\subset \mathcal{X}''$ for each $j$. Then all connected components of $L'\cap \mathcal{X}''$, that do not contain a set $\ell_j([t_j^-,t_j^+])$, are contractible to a hole of $\mathcal{X}''$. Indeed, each such component is contained in the union of annuli $\mathcal{X}''\setminus \overline{\mathcal{X}'}$. 
\noindent {\bf Some remarks on coverings.} By a covering $P:\mathcal{Y} \to \mathcal{X}$ we mean a continuous map $P$ from a topological space $\mathcal{Y}$ to a topological space $\mathcal{X}$ such that for each point $x \in \mathcal{X}$ there is a neighbourhood $V(x)$ of $x$ such that the mapping $P$ maps each connected component of the preimage of $V(x)$ homeomorphically onto $V(x)$. (Note that in function theory these objects are sometimes called unlimited unramified coverings to reserve the notion ``covering'' for more general objects.) Let $X$ be a connected finite open Riemann surface with base point $q_0$ and let ${\sf P}: \tilde X \to X$ be the universal covering map. Recall that a homeomorphism $\varphi:\tilde{X} \to \tilde{X}$ for which ${\sf P}\circ \varphi = {\sf P}$ is called a covering transformation (or deck transformation). The covering transformations form a group, denoted by ${\rm Deck}(\tilde{X},X)$. For each pair of points $\tilde{x}_1, \tilde{x}_2 \in \tilde{X}$ with ${\sf P}(\tilde{x}_1)={\sf P}(\tilde{x}_2)$ there exists exactly one covering transformation that maps $\tilde{x}_1$ to $\tilde{x}_2$. (See e.g. \cite{Fo}). Throughout the paper we will fix a base point $q_0\in X$ and a base point $\tilde{q}_0\in {\sf P}^{-1}(q_0)\subset \tilde{X}$. The group of covering transformations of $\tilde X$ can be identified with the fundamental group $\pi_1(X,q_0)$ of $X$ by the following correspondence. (See e.g. \cite{Fo}). Take a covering transformation $\sigma \in {\rm Deck}(\tilde{X},X)$. Let $\tilde{\gamma}_0$ be an arc in $\tilde X$ with initial point $\sigma(\tilde{q}_0)$ and terminating point $\tilde{q}_0$. Denote by ${\rm Is}^{\tilde{q}_0}(\sigma)$ the element of $\pi_1(X,q_0)$ represented by the loop ${\sf P}(\tilde{\gamma}_0)$. The mapping ${\rm Deck}(\tilde{X},X)\ni\sigma\to {\rm Is}^{\tilde{q}_0}(\sigma)\in \pi_1(X,q_0)$ is a group homomorphism. The homomorphism ${\rm Is}^{\tilde{q}_0}$ is injective and surjective, hence it is a group isomorphism. The inverse $({\rm Is}^{\tilde{q}_0})^{-1}$ is obtained as follows. Represent an element $e_0\in \pi_1(X,q_0)$ by a loop $\gamma_0$. Consider the lift $\tilde{\gamma}_0$ of $\gamma_0$ to $\tilde X$ that has terminating point $\tilde{q}_0$. Then $({\rm Is}^{\tilde{q}_0})^{-1}(e_0)$ is the covering transformation that maps $\tilde{q}_0$ to the initial point of $\tilde{\gamma}_0$. For another point $\tilde{q}$ of $\tilde X$ and the point $q\stackrel{def}={\sf P}(\tilde{q})\in X$ the isomorphism ${\rm Is}^{\tilde{q}}:{\rm Deck}(\tilde{X},X)\to\pi_1(X,q)$ assigns to each $\sigma \in{\rm Deck}(\tilde{X},X)$ the element of $\pi_1(X,q)$ that is represented by ${\sf P}(\tilde{\gamma})$ for a curve $\tilde{\gamma}$ in $\tilde X$ that joins $\sigma(\tilde{q})$ with $\tilde q$. The isomorphism ${\rm Is}^{\tilde{q}}$ is related to ${\rm Is}^{\tilde{q}_0}$ as follows. Let $\tilde{\alpha}$ be an arc in $\tilde X$ with initial point $\tilde{q}_0$ and terminating point $\tilde{q}$. Put $q={\sf P}(\tilde{q})$ and $\alpha={\sf P}(\tilde{\alpha})$. Then for the isomorphism ${\rm Is}_{\alpha}:\pi_1(X,q_0)\to \pi_1(X,q)$ the equation \begin{equation}\label{eq1''} {\rm Is}^{\tilde{q}}(\sigma)= {\rm Is}_{\alpha}\circ{\rm Is}^{\tilde{q}_0}(\sigma),\;\; \sigma \in {\rm Deck}(\tilde{X},X), \end{equation} holds, i.e. the diagram in Figure \ref{fig2} below is commutative.
\begin{figure} \caption{A commutative diagram related to the change of the base point} \label{fig2} \end{figure} Indeed, let $\tilde{\alpha}^{-1}$ denote the curve that is obtained from $\tilde{\alpha}$ by inverting the direction on $\tilde{\alpha}$, i.e. moving from $\tilde{q}$ to $\tilde{q}_0$. The curve $\sigma((\tilde{\alpha})^{-1})$ has initial point $\sigma(\tilde{q})$ and terminating point $\sigma(\tilde{q}_0)$. Hence, for a curve $\tilde{\gamma}_0$ in $\tilde X$ that joins $\sigma(\tilde{q}_0)$ with $\tilde{q}_0$, the curve $\sigma((\tilde{\alpha})^{-1}) \; \tilde{\gamma}_0\;\tilde{\alpha}$ in $\tilde X$ has initial point $\sigma(\tilde{q})$ and terminating point $\tilde{q}$. Therefore ${\sf P}(\sigma((\tilde{\alpha})^{-1}) \; \tilde{\gamma}_0 \;\tilde{\alpha})$ represents ${\rm Is}^{\tilde{q}}(\sigma)$. On the other hand \begin{equation}\label{eq1'} {\sf P}(\sigma(\tilde{\alpha}^{-1}) \; \tilde{\gamma}_0\;\tilde{\alpha})= {\sf P}(\sigma(\tilde{\alpha}^{-1}))\;{\sf P}(\tilde{\gamma}_0)\; {\sf P}(\tilde{\alpha})= \alpha^{-1} \gamma_0 \,\alpha \end{equation} represents ${\rm Is}_{\alpha}(e_0)$ with $e_0={\rm Is}^{\tilde q_0}(\sigma)$. In particular, if $\tilde{q}'_0\in {\sf P}^{-1}(q_0)$ is another preimage of the base point $q_0$ under the projection $\sf P$, then the associated isomorphisms onto the fundamental group $\pi_1(X,q_0)$ are conjugate, i.e. ${\rm Is}^{\tilde{q}'_0}(\sigma)=(e'_0)^{-1}\, {\rm Is}^{\tilde{q}_0}(\sigma)\, e'_0$ for each $\sigma\in{\rm Deck}(\tilde{X},X)$. The element $e'_0$ is represented by the projection of an arc in $\tilde X$ with initial point $\tilde{q}_0$ and terminating point $\tilde{q}'_0$. Keeping $\tilde{q}_0$ and $q_0$ fixed, we will say that a point $\tilde{q}\in \tilde X$ and a curve $\alpha$ in $X$ are compatible if the diagram in Figure \ref{fig2} is commutative, equivalently, if equation \eqref{eq1''} holds. We may also start with choosing a curve $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$. Then there is a point $\tilde{q}=\tilde{q}(\alpha)$, such that $\tilde{q}$ and $\alpha$ are compatible. Indeed, let $\tilde{\alpha}$ be the lift of $\alpha$ that has initial point $\tilde{q}_0$. Denote the terminating point of $\tilde{\alpha}$ by $\tilde{q}(\alpha)$, and repeat the previous arguments. Let $N$ be a subgroup of $\pi_1(X,q_0)$. Denote by $X(N)$ the quotient $\tilde X \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N)$. We obtain a covering $\omega^N_{{\rm Id}} :\tilde X \to X(N)$ with group of covering transformations isomorphic to $N$. The fundamental group of $X(N)$ with base point $(q_0)_N\stackrel{def}= \omega_{\rm Id}^N(\tilde{q}_0)$ can be identified with $N$. If $N_1$ and $N_2$ are subgroups of $\pi_1(X,q_0)$ and $N_1$ is a subgroup of $N_2$ (we write $N_1 \leq N_2$), then there is a covering map $\omega^{N_2}_{N_1} :\tilde X \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N_1) \to \tilde X \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N_2)$, such that $\omega^{N_2}_{N_1} \circ \omega^{N_1}_{\rm Id}=\omega^{N_2}_{\rm Id}$. Moreover, the diagram in Figure \ref{fig3} below is commutative. \begin{figure} \caption{A commutative diagram related to subgroups of the group of covering transformations} \label{fig3} \end{figure} Indeed, take any point $x_1 \in \tilde X \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N_1)$ and a preimage $\tilde x$ of $x_1$ under $\omega^{N_1}_ {\rm Id} $. There exists a neighbourhood $V(\tilde x)$ of $\tilde x$ in $\tilde X$ such that $V(\tilde x)\cap \sigma(V(\tilde x))=\emptyset$ for all covering transformations $\sigma\in {\rm Deck}(\tilde{X}, X)$ different from the identity.
Then for $j=1,2$ the mapping $\omega^{N_j,\tilde x} _{\rm Id} \stackrel{def}{=} \omega^{N_j}_{\rm Id} \mid V(\tilde x)$ is a homeomorphism from $V(\tilde x)$ onto its image denoted by $V_j$. Put $x_2=\omega^{N_2,\tilde x} _{\rm Id}(\tilde{x})$. The set $V_j\subset \tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N_j) $ is a neighbourhood of $x_j$ for $j=1,2$. For each preimage $\tilde{x}' \in (\omega^{N_1}_{\rm Id})^{-1}(x_1)$ there is a covering transformation $\varphi_{\tilde x,\tilde{x}'}$ in $({\rm Is}^{\tilde{q}_0})^{-1}(N_1)$ which maps a neighbourhood $V(\tilde {x}')$ of $\tilde{x}'$ conformally onto the neighbourhood $V(\tilde {x})$ of $\tilde{x}$ so that on $V(\tilde{x}')$ the equality $\omega^{N_1,\tilde{x}'} _{\rm Id} = \omega^{N_1,\tilde x}_{\rm Id} \circ \varphi _{\tilde x,\tilde{x}'}$ holds. Choose $\tilde{x} \in (\omega^{N_1} _{\rm Id})^{-1}(x_1)$ and define \begin{equation}\label{eq1a'} \omega^{N_2}_{N_1}(y) = \omega^{N_2, \tilde x}_{N_1}(y) \stackrel{def}= \omega^{N_2, \tilde x} _{\rm Id} ((\omega^{N_1,\tilde x} _{\rm Id} )^{-1}(y)) {\mbox{ \;for each\;}} y \in V_1\,. \end{equation} We get a well-defined mapping from $V_1$ onto $V_2$. Indeed, since $N_1$ is a subgroup of $N_2$, the covering transformation $\varphi_{\tilde x,\tilde{x}'}$ is contained in $({\rm Is}^{\tilde{q}_0})^{-1}(N_2)$, and we get for another point $\tilde{x}' \in (\omega^{N_1}_ {\rm Id})^{-1}(x_1)$ the equality $\omega^{N_2, \tilde{ x}'} _{\rm Id} = \omega^{N_2, \tilde x} _{\rm Id} \circ \varphi _{\tilde x,\tilde{x}'}$. Hence, for $y \in V_1$ \begin{equation}\label{eq1b'} \omega^{N_2, \tilde{ x}'} _{\rm Id} \circ(\omega^{N_1,\tilde{x}'} _{\rm Id})^{-1} (y) = (\omega^{N_2, \tilde x} _{\rm Id} \circ \varphi_{\tilde x,\tilde{x}'})\circ (\omega^{N_1,\tilde x} _{\rm Id} \circ \varphi_{\tilde x,\tilde{x}'})^{-1}(y)= \omega^{N_2,\tilde x} _{\rm Id} \circ(\omega^{N_1,\tilde x} _{\rm Id} )^{-1}(y). \end{equation} Since each mapping $\omega^{N_j,\tilde{x}}_ {\rm Id}$, $j=1,2,$ is a homeomorphism from $V(\tilde{x})$ onto its image, the mapping $\omega^{N_2} _{N_1}$ is a homeomorphism from $V_1$ onto $V_2$. The same holds for all preimages of $V_2$ under $\omega^{N_2} _{N_1}$. Hence, $\omega^{N_2} _{N_1}$ is a covering map. The commutativity of the part of the diagram that involves the mappings $\omega^{N_1} _{\rm Id} $, $\omega^{N_2} _{\rm Id} $, and $\omega_{N_1}^{N_2}$ follows from equation \eqref{eq1a'}. The existence of $\omega_{N_1}^{\pi_1(X,q_0)}$ and the equality ${\sf P}= \omega_{N_1}^{\pi_1(X,q_0)} \circ \omega^{N_1} _{\rm Id}$ follows by applying the above arguments with $N_2= \pi_1(X,q_0)$. The equality ${\sf P}= \omega_{N_2}^{\pi_1(X,q_0)} \circ \omega^{N_2} _{\rm Id}$ follows in the same way. Since \begin{align} {\sf P} = &\omega_{N_2}^{\pi_1(X,q_0)} \circ \omega_{N_1}^{N_2}\circ \omega^{N_1} _{\rm Id} \nonumber \\ = &\omega_{N_1}^{\pi_1(X,q_0)}\quad\quad\;\;\, \circ \omega^{N_1} _{\rm Id},\nonumber \end{align} we have \begin{equation}\nonumber \omega_{N_2}^{\pi_1(X,q_0)} \circ \omega_{N_1}^{N_2} = \omega_{N_1}^{\pi_1(X,q_0)}\,. \end{equation} We will also use the notation $\omega^N \stackrel{def}= \omega^N _{\rm Id}$ and $\omega_N \stackrel{def}= \omega_N ^{\pi_1(X,q_0)}$ for a subgroup $N$ of $\pi_1(X,q_0)$. Let again $N_1 \leq N_2$ be subgroups of $\pi_1(X,q_0)$. Consider the covering $\omega_{N_1}^{N_2} :\tilde X \diagup ({\rm Is}^{{\tilde q}_0})^{-1}(N_1) \to \tilde X \diagup ({\rm Is}^{{\tilde q}_0})^{-1}(N_2)$.
Let $\beta$ be a simple relatively closed curve in $\tilde X \diagup ({\rm Is}^{\tilde q_0})^{-1}(N_2)$. Then $(\omega_{N_1}^{N_2})^{-1}(\beta)$ is the union of simple relatively closed curves in $\tilde X \diagup ({\rm Is}^{{\tilde q}_0})^{-1}( N_1)$ and $\omega_{N_1}^{N_2}: (\omega_{N_1}^{N_2})^{-1}(\beta) \to \beta$ is a covering. Indeed, we cover $\beta$ by small discs $U_k$ in $\tilde X \diagup ({\rm Is}^{{\tilde q}_0})^{-1}( N_2)$ such that for each $k$ the restriction of $\omega_{N_1}^{N_2}$ to each connected component of $(\omega_{N_1}^{N_2})^{-1}(U_k)$ is a homeomorphism onto $U_k$, and $U_k$ intersects $\beta$ along a connected set. Take any $k$ with $U_k \cap \beta \neq \emptyset$. Consider the preimages $(\omega_{N_1}^{N_2})^{-1}(U_k)$. Restrict $\omega_{N_1}^{N_2}$ to the intersection of each connected component of $(\omega_{N_1}^{N_2})^{-1}(U_k)$ with $(\omega_{N_1}^{N_2})^{-1}(\beta)$. We obtain a homeomorphism onto $U_k \cap \beta$. It follows that the map $\omega_{N_1}^{N_2}$ is a covering from each connected component of $(\omega_{N_1}^{N_2})^{-1}(\beta)$ onto $\beta$. \noindent{\bf The extremal length of monodromies.} Let as before $X$ be a connected finite open Riemann surface with base point $q_0$, and $\tilde{q}_0$ a point in the universal covering $\tilde X$ for which ${\sf P}(\tilde{q}_0)=q_0$ for the covering map ${\sf P}:\tilde{X}\to X$. Recall that for an arbitrary point $q\in X$ the free homotopy class of an element $e$ of the fundamental group $\pi_1(X,q)$ can be identified with the conjugacy class of elements of $\pi_1(X,q)$ containing $e$ and is denoted by $\widehat{e}$. Notice that for $e_0\in \pi_1(X,q_0)$ and a curve $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$ the free homotopy classes of $e_0$ and of $e={\rm Is}_{\alpha}(e_0)$ coincide, i.e. $\widehat{e}=\widehat{e}_0$. Consider a simple smooth relatively closed curve $L$ in $X$. We will say that a free homotopy class of curves $\widehat {e_0}$ intersects $L$ if each representative of $\widehat{e_0} $ intersects $L$. Choose an orientation of $L$. The intersection number of $\widehat {e_0}$ with the oriented curve $L$ is the intersection number with $L$ of some (and, hence, of each) smooth loop representing $\widehat {e_0}$ that intersects $L$ transversally. This intersection number is the sum of the intersection numbers over all intersection points. The intersection number at an intersection point equals $+ 1$ if the orientation determined by the tangent vector to the curve representing $\widehat {e_0}$ followed by the tangent vector to $L$ is the orientation of $X$, and equals $-1$ otherwise. Let $A$ be an annulus equipped with an orientation (called positive orientation) of simple closed dividing curves in $A$. (A relatively closed curve $\gamma$ in a surface $X$ is called dividing if $X\setminus \gamma$ consists of two connected components.) A continuous mapping $\omega:A\to X$ is said to represent a conjugacy class $\widehat{ e}$ of elements of the fundamental group $\pi_1(X,q)$ for a point $q\in X$, if the composition $\omega\circ\gamma$ represents $\widehat{ e}$ for each positively oriented dividing curve $\gamma$ in $A$. Let $A$ be an annulus with base point $p$ with a chosen positive orientation of simple closed dividing curves in $A$. Let $\omega$ be a continuous mapping from $A$ to a finite Riemann surface ${X}$ with base point $q$ such that $\omega(p)=q$. We write $\omega:(A,p) \to ({X},q)$.
The mapping is said to represent the element $e$ of the fundamental group $ \pi_1({X},q)$ if $\omega\circ \gamma$ represents $e$ for some (and hence for each) positively oriented simple closed dividing curve $\gamma$ in $A$ with base point $p$. We associate to each element $e_0$ of the (free) group $\pi_1(X,q_0)$ the annulus $X(\langle e_0 \rangle)=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)$ with base point $q_{\langle e_0\rangle}= \omega_{\rm Id}^{\langle e_0 \rangle}(\tilde{q}_0)$ and the covering map $\omega_{\langle e_0 \rangle} \stackrel{def}=\omega^{\pi_1(X,q_0)}_{\langle e_0 \rangle}:X(\langle e_0 \rangle) \to X$. By the commutative diagram in Figure \ref{fig3} the equality $\omega_{\langle e_0 \rangle }(q_{\langle e_0 \rangle})= q_0$ holds. We choose the orientation of simple closed dividing curves in $X(\langle e_0 \rangle)=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)$ so that for a curve $\tilde \gamma$ in $\tilde X$ with terminating point $\tilde{q}_0$ and initial point $({\rm Is}^{\tilde{q}_0})^{-1}(e_0)(\tilde{q}_0)$ the curve $\gamma_{\langle e_0 \rangle}\stackrel{def}=\omega^{\langle e_0 \rangle}(\tilde{ \gamma})$ is positively oriented. The locally conformal mapping $\omega^{\pi_1(X,q_0)}_{\langle e_0 \rangle}:(X(\langle e_0 \rangle), q_{\langle e_0\rangle})\to (X,q_0)$ represents $e_0$. This follows from the equality $\omega_{\langle e_0 \rangle}(\gamma_{\langle e_0 \rangle })=\omega_{\langle e_0 \rangle}(\omega^{\langle e_0 \rangle}(\tilde{\gamma}))={\sf P}(\tilde{\gamma})$, since ${\sf P}(\tilde{\gamma})$ represents $e_0$. Take a curve $\alpha$ in $X$ that joins $q_0$ and $q$, and a point $\tilde{q}=\tilde{q}({\alpha})\in \tilde X$ such that $\alpha$ and $\tilde{q}$ are compatible, i.e. ${\rm Is}^{\tilde{q}}={\rm Is}_{\alpha}\circ {\rm Is}^{\tilde{q}_0}$ (see equation \eqref{eq1''}). Put $e={\rm Is}_{\alpha}(e_0)$. By equation \eqref{eq1''} $({\rm Is}^{\tilde{q}})^{-1}(e)=({\rm Is}^{\tilde{q}_0})^{-1}(e_0)$, hence, $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle)=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0\rangle)=X(\langle e_0\rangle)$. The locally conformal mapping $\omega_{\langle e_0\rangle}: X(\langle e_0\rangle)\to X$ takes the point $q_{\langle e\rangle}\stackrel{def}=\omega^{\langle e_0 \rangle}(\tilde{q})$ to $q\in X$. Moreover, $\omega_{\langle e_0\rangle}: (X(\langle e_0\rangle),q_{\langle e\rangle}) \to (X,q)$ represents $e\in\pi_1(X,q)$. This can be seen by repeating the previous arguments. Let $\alpha$ be an arbitrary curve in $X$ joining $q_0$ with $q$, and $\tilde{q}\in {\sf P}^{-1}(q)$ be arbitrary (i.e. $\alpha$ and $\tilde{q}$ are not required to be compatible). Let $e\in \pi_1(X,q)$. Denote the projection $\tilde{X}\to\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle)$ by $\omega^{\langle e\rangle,\tilde{q}}$, and the projection $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle)\to X$ by $\omega_{\langle e\rangle,\tilde{q}}$. Put $q_{\langle e\rangle,\tilde{q}}=\omega^{\langle e\rangle,\tilde{q}}(\tilde{q})$. For any such choice we choose the orientation of simple closed dividing curves on $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle)$ so that $\omega^{\langle e\rangle,\tilde{q}}$ maps any curve $\tilde{\gamma}$ in $\tilde X$ with initial point $({\rm Is}^{\tilde{q}})^{-1}(e)(\tilde{q})$ and terminating point $\tilde{q}$ to a positively oriented dividing curve.
We will call it the standard orientation of dividing curves in $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle)$. The mapping $\omega_{\langle e\rangle,\tilde{q}}: \big(\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e\rangle),q_{\langle e\rangle,\tilde{q}} \big)\to (X,q)$ represents $e$. Since the mapping $\omega_{\langle e_0 \rangle }:(X(\langle e_0 \rangle),q_{\langle e_0\rangle}) \to (X,q_0)$ represents $e_0$, the mapping $\omega_{\langle e_0 \rangle }:X(\langle e_0 \rangle) \to X$ represents the free homotopy class $\widehat{e_0}$. The following simple lemma will be useful. \begin{lemm}\label{lem0'} The annulus $X(\langle e_0 \rangle)$ has smallest extremal length among all annuli that admit a holomorphic mapping to $X$ representing the conjugacy class $\widehat{e_0}$. \end{lemm} \noindent In other words, $X(\langle e_0 \rangle)$ is the ``thickest'' annulus with the property stated in Lemma \ref{lem0'}. \noindent {\bf Proof}. Take an annulus $A$ with a choice of positive orientation of simple closed dividing curves. Suppose $A \overset{\omega}{-\!\!\!\longrightarrow}X$ is a holomorphic mapping that represents $\widehat{e_0}$. The annulus $A$ is conformally equivalent to a round annulus in the plane, hence, we may assume that $A$ has the form $A=\{z\in \mathbb{C}: \, r<|z|<R\}$ for $0\leq r<R\leq \infty$ and the positive orientation of dividing curves is the counterclockwise one. Take a positively oriented simple closed dividing curve $\gamma^{A}$ in $A$. Its image $\omega\circ \gamma^{A}$ under $\omega$ represents the class $\widehat{e_0}$. Choose a point $q^{A}$ in $\gamma^{A}$, and put $q=\omega(q^{A})$. Then $\gamma^{A}$ represents a generator of $\pi_1(A,q^{A})$ and $\gamma=\omega\circ\gamma^{A} $ represents an element $e$ of $\pi_1(X,q)$ in the conjugacy class $\widehat{e_0}$. Choose a curve $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$ such that ${\rm Is}_{\alpha}(e_0)=e$, and a point $\tilde q$ in $\tilde X$ so that $\alpha$ and $\tilde q$ are compatible, and, hence, the equality $({\rm Is}^{\tilde{q}_0})^{-1}(e_0)=({\rm Is}^{\tilde{q}})^{-1}(e)$ holds. Write $p\stackrel{def}=q^{A}$, and let $L$ be the relatively closed arc $\{p\cdot t: t>0\} \cap A$ in $A$; it contains $p$. After a homotopy of $\gamma^{A}$ with fixed base point, we may assume that its base point $q^{A}$ is the only point of $\gamma^{A}$ that is contained in $L$. The restriction $\omega|(A\setminus L)$ lifts to a mapping $\tilde{\omega}:(A\setminus L)\to \tilde X$ that extends continuously to the two strands $L_{\pm}$ of $L$. (Here $L_-$ contains the initial point of $\gamma^{A}$.) Let $p_{\pm}$ be the copies of $p$ on the two strands $L_{\pm}$. We choose the lift $\tilde{\omega}$ so that $\tilde{\omega}(p_+)=\tilde{ q}$. Since the mapping $\omega:(A,q^{A})\to (X,q)$ represents $e$, we obtain $\tilde{\omega}(p_-)=\sigma(\tilde{ q})$ for $\sigma= ({\rm Is}^{\tilde{q}})^{-1}(e)$. Then for each $z\in L$ the covering transformation $\sigma$ maps the point $\tilde{z}_+\in\tilde{\omega}(L_+)$ for which ${\sf P}(\tilde{z}_+)=z$ to the point $\tilde{z}_-\in\tilde{\omega}(L_-)$ for which ${\sf P}(\tilde{z}_-)=z$. Hence $\omega$ lifts to a holomorphic mapping $\iota: A\to X(\langle e_0 \rangle)$. By Lemma 7 of \cite{Jo2} $\lambda(A) \geq \lambda( X(\langle e_0 \rangle))$. $\Box$ For each point $q\in X$ and each element $e\in \pi_1(X,q)$ we denote by $A(\widehat {e})$ the conformal class of the ``thickest'' annulus that admits a holomorphic mapping into $X$ that represents $\widehat{e}$.
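The following standard computation (see \cite{A1}) is not used in the proofs; it only serves to keep the normalization of $\lambda$ for annuli in mind. As in the proof of Lemma \ref{lem2} below, $\lambda(A)$ is the extremal length in the sense of Ahlfors \cite{A1} of the family of closed curves in $A$ that are free homotopic to simple closed positively oriented dividing curves. For the round annulus $A_{r,R}=\{z\in\mathbb{C}:\, r<|z|<R\}$ with $0<r<R<\infty$ this gives
\begin{equation}\nonumber
\lambda(A_{r,R})=\frac{2\pi}{\log\frac{R}{r}}\,.
\end{equation}
In particular, the ``thicker'' the annulus, the smaller its extremal length, which is the sense in which $X(\langle e_0\rangle)$ is the ``thickest'' annulus in Lemma \ref{lem0'}.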
We saw that $\lambda(A(\widehat{e_0}))=\lambda(\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle))$ for $e_0\in \pi_1(X,q_0)$. By the same reasoning as before $\lambda(A(\widehat{e}))=\lambda(\tilde{X}\diagup ({\rm Is}^{\tilde{q}'})^{-1}(\langle e \rangle))$ for each $\tilde{q}' \in \tilde X$ and each element $e \in \pi_1(X,{\sf P}(\tilde{q}'))$. Hence, if $e_0$ and $e$ are conjugate, then $\lambda(\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle))=\lambda(\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e \rangle))$ for any $\tilde{q}_0\in {\sf P}^{-1}(q_0)\subset \tilde X$ and any $\tilde{q}\in {\sf P}^{-1}(q)$. Notice that $A(\reallywidehat{e^{-1}})=A(\widehat e)$ for each $e\in \pi_1(X,q),\, q\in X$. \section{Holomorphic mappings into the twice punctured plane} \label{sec:2} The following lemma will be crucial for the estimate of the $\mathcal{L}_-$-invariant of the monodromies of holomorphic mappings from a finite open Riemann surface to $\mathbb{C}\setminus \{-1,1\}$. \begin{lemm}\label{lem2} Let $f:X \to \mathbb{C}\setminus \{-1,1\}$ be a non-contractible holomorphic function on a connected finite open Riemann surface $X$, such that $0$ is a regular value of ${\rm Im}f$. Assume that $L_0$ is a simple relatively closed curve in $X$ such that $f(L_0)\subset (-1,1)$. Let $q \in L_0$ and $q'=f(q)$. If for an element $e\in \pi_1(X,q)$ the free homotopy class $\widehat e$ intersects $L_0$, then either the reduced word $f_*(e) \in \pi_1(\mathbb{C}\setminus \{-1,1\},q')$ is a non-zero power of a standard generator of $\pi_1(\mathbb{C}\setminus \{-1,1\}, q')$ or the inequality \begin{equation}\label{eq1a} \mathcal{L}_-(f_*(e)) \leq 2\pi \lambda(A(\widehat{e})) \end{equation} holds. \end{lemm} Notice that we make a normalization in the statement of the Lemma by requiring that $f$ maps $L_0$ into the interval $(-1,1)$, not merely into $\mathbb{R}\setminus \{-1,1\}$. Lemma \ref{lem2} will be a consequence of the following lemma. \begin{lemm}\label{lem1} Let $X$, $f$, $L_0$, $q\in L_0$ be as in Lemma \ref{lem2}, and $e\in \pi_1(X,q)$. Let $\tilde q$ be an arbitrary point in ${\sf P}^{-1}(q)$. Consider the annulus $A\stackrel{def}=\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle e \rangle)$ and the holomorphic projection $\omega_{A}\stackrel{def}=\omega_{\langle e\rangle,\tilde{q}}$. Put $q_{A}\stackrel{def}=\omega^{\langle e\rangle,\tilde{q}}(\tilde{q})$ and let $L_{A}\subset A$ be the connected component of $(\omega_{A})^{-1}(L_0)$ that contains $q_{A}$. Then the mapping $\omega_{A}:(A,q_{A})\to (X,q)$ represents $e$. If $\widehat e$ intersects $L_0$, then $L_{A}$ is a relatively closed curve in $A$ that has limit points on both boundary components of $A$, and the lift $f\circ \omega_{A}$ is a holomorphic function on $A$ that maps $L_{A}$ into $(-1,1)$. \end{lemm} \noindent{\bf Proof of Lemma \ref{lem1}.} Let $\gamma:[0,1]\to X$ be a curve with base point $q$ in $X$ that represents $e$, and let $\tilde{\gamma}$ be the lift of $\gamma$ to $\tilde X$ with terminating point $\tilde{\gamma}(1)$ equal to $\tilde q$. Put $\sigma\stackrel{def}= ({\rm Is}^{\tilde{q}})^{-1}(e)$. Then the initial point $\tilde{\gamma}(0)$ equals $\sigma(\tilde{q})$. All connected components of ${\sf P}^{-1}(L_0)$ are relatively closed curves in $\tilde{X}\cong \mathbb{C}_+$ (where $\mathbb{C}_+$ denotes the upper half-plane) with limit points on the boundary of $\tilde X$. Indeed, the lift $f\circ {\sf P}$ of $f$ to $\tilde X$ takes values in $(-1,1)$ on ${\sf P}^{-1}(L_0)$. 
Hence, $|\exp(\pm \,i\, f\circ {\sf P})|=1$ on ${\sf P}^{-1}(L_0)$. A compact connected component of ${\sf P}^{-1}(L_0)$ would bound a relatively compact topological disc in $\tilde{X}=\mathbb{C}_+$, and by the maximum principle $|\exp(\pm \,i\, f\circ {\sf P})|=1$ on the disc. This would imply that $ f\circ {\sf P}$ is constant on $\tilde X$, contrary to the assumptions. Let $\tilde{L}_{\tilde{q}}$ be the connected component of ${\sf P}^{-1}(L_0)$ that contains $\tilde{q}$. The point $\sigma(\tilde{q})$ cannot be contained in $\tilde{L}_{\tilde{q}}$. Indeed, assume the contrary. Then the arc $\tilde{\gamma '}$ on $\tilde{L}_{\tilde{q}}$ joining $\sigma(\tilde{q})$ and $\tilde{q}$ is homotopic in $\tilde X$ with fixed endpoints to $\tilde{\gamma}$. The projection $\gamma '={\sf P}(\tilde{\gamma '})$ is contained in $L_0$ and is homotopic in $X$ with fixed endpoints to $\gamma$. Since $\gamma$ represents $e$ and $e$ is a primitive element of the fundamental group $\pi_1(X,q)$, this is possible only if $L_0$ is compact and (after orienting it) $L_0$ represents $e$. A small translation of $\gamma'$ to a side of $L_0$ gives a curve in $X$ that does not intersect $L_0$ and represents the free homotopy class $\widehat e$ of $e$. This contradicts the fact that $\widehat{e}$ intersects $L_0$. Since $\sigma(\tilde{L}_{\tilde{q}})$ is also a connected component of ${\sf P}^{-1}(L_0)$ and $\sigma(\tilde{q})\notin \tilde{L}_{\tilde{q}}$, the curves $\tilde{L}_{\tilde{q}}$ and $\sigma(\tilde{L}_{\tilde{q}})$ are disjoint. Each of the two connected components $\tilde{L}_{\tilde{q}}$ and $\sigma(\tilde{L}_{\tilde{q}})$ divides $\tilde X$. Let $\Omega$ be the domain on $\tilde X$ that is bounded by $\tilde{L}_{\tilde{q}}$ and $\sigma(\tilde{L}_{\tilde{q}})$ and parts of the boundary of $\tilde X$. After a homotopy of $\tilde{\gamma}$ that fixes the endpoints we may assume that $\tilde{\gamma}((0,1))$ is contained in $\Omega$. Indeed, for each connected component of $\tilde{\gamma}((0,1))\setminus \Omega$ there is a homotopy with fixed endpoints that moves the connected component to an arc on $\tilde{L}_{\tilde{q}}$ or $\sigma(\tilde{L}_{\tilde{q}})$. A small perturbation yields a curve $\tilde{\gamma}'$ which is homotopic with fixed endpoints to $\tilde{\gamma}$ and has interior contained in $\Omega$. Notice that by the same reasoning as above, $\tilde{\gamma}'((0,1))$ does not meet any $\sigma^k(\tilde{L}_{\tilde{q}})$. The curve $\omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')$ is a closed curve on $A$ that represents a generator of the fundamental group of $A$ with base point $q_A$. Moreover, $\omega_A\circ \omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')= \omega_{\langle e\rangle,\tilde{q}}\circ \omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')={\sf P}(\tilde{\gamma}')$ represents $e$. Hence, the mapping $\omega_A:(A,q_A)\to (X,q)$ represents $e$. The curve $\omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')$ intersects $L_{A}=\omega^{\langle e\rangle,\tilde{q}}(\tilde{L}_{\tilde{q}})$ exactly once. Hence, $L_{A}$ has limit points on both boundary circles of $A$, for otherwise $L_{A}$ would intersect one of the components of $A\setminus \omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')$ along a set which is relatively compact in $A$, and $\omega^{\langle e\rangle,\tilde{q}}(\tilde{\gamma}')$ would have intersection number zero with $L_{A}$. It is clear that $f\circ \omega_A(L_A)\subset f(L_0)\subset (-1,1)$. The lemma is proved.
$\Box$ \noindent {\bf Proof of Lemma \ref{lem2}.} Let $\omega_A:(A,q_A)\to (X,q)$ be the holomorphic mapping from Lemma \ref{lem1} that represents $e$, and let $L_A\ni q_A$ be the relatively closed curve in $A$ with limit set on both boundary components of $A$. Consider a positively oriented dividing curve ${\gamma}_A:[0,1]\to A$ with base point $\gamma_A(0)=\gamma_A(1)=q_A$ such that $\gamma_A((0,1))\subset A\setminus L_A$. The curve $\gamma=\omega_A(\gamma_A)$ represents $e$. The mapping $f\circ \omega_A$ is holomorphic on $A$ and $f\circ \omega_A(\gamma_A)=f(\gamma)$ represents $f_*(e)\in \pi_1(\mathbb{C}\setminus\{-1,1\}, q')$ with $q'=f\circ \omega_A(q_A)=f(q)\in (-1,1)$. Hence, $f\circ \omega_A(\gamma_A)$ also represents the element $(f_*(e))_{tr}\in \pi_1^{tr}(\mathbb{C}\setminus\{-1,1\})$ in the relative fundamental group $\pi_1(\mathbb{C}\setminus \{-1,1\},(-1,1))=\pi_1^{tr}(\mathbb{C}\setminus\{-1,1\})$ corresponding to $f_*(e)$. We prove now that $\Lambda_{tr}(f_*(e))\leq \lambda(A)$. Let $A_0\Subset A$ be any relatively compact annulus in $A$ with smooth boundary such that $q_A\in A_0$. If $A_0$ is sufficiently large, then the connected component $L_{A_0}$ of $L_A\cap A_0$ that contains $q_A$ has endpoints on different boundary components of $A_0$. The set $A_0 \setminus L_{A_0}$ is a curvilinear rectangle. The open horizontal curvilinear sides are the strands of the cut that are reachable from the curvilinear rectangle moving counterclockwise, or clockwise, respectively. The open vertical curvilinear sides are obtained from the boundary circles of $A_0$ by removing an endpoint of the arc $L_{A_0}$. Since $f\circ\omega_A$ maps $L_{A}$ to $(-1,1)$, the restriction of $f\circ\omega_A$ to $A_0 \setminus L_{A_0}$ represents $(f_*(e))_{tr}$. Hence, \begin{equation}\label{eq2a} \Lambda_{tr}(f_*(e))\leq \lambda(A_0 \setminus L_{A_0})\,. \end{equation} Moreover, \begin{equation}\label{eq2b} \lambda(A_0 \setminus L_{A_0})\leq \lambda(A_0)\,. \end{equation} This is a consequence of the following facts. First, $\lambda(A_0 \setminus L_{A_0})$ is equal to the extremal length $\lambda(\Gamma(A_0 \setminus L_{A_0}))$ in the sense of Ahlfors \cite{A1} of the family $\Gamma(A_0 \setminus L_{A_0})$ of curves in the curvilinear rectangle $A_0 \setminus L_{A_0}$ that join the two horizontal sides of the curvilinear rectangle. Further, $\lambda(A_0)$ is equal to the extremal length $\lambda(\Gamma(A_0)) $ \cite{A1} of the family $\Gamma(A_0)$ of curves in $A_0$ that are free homotopic to simple closed positively oriented dividing curves in $A_0$. Finally, by \cite{A1}, Ch.1 Theorem 2, the inequality \begin{align}\label{eq2} \lambda(\Gamma(A_0 \setminus L_{A_0})) \leq \lambda(\Gamma(A_0))\, \end{align} holds. We obtain the inequality $\Lambda_{tr}(f_*(e)) \leq \lambda(A_0)\,$ for each such annulus $A_0\Subset A$, hence, since $\lambda(A)=\inf\{\lambda(A_0): A_0\Subset A \mbox{ is a deformation retract of } A\}$ and $A$ belongs to the class $A(\widehat{e})$ of conformally equivalent annuli, \begin{align}\label{eq2c} \Lambda_{tr}(f_*(e)) \leq \lambda(A(\widehat{e}))\,, \end{align} and the Lemma follows from Theorem F. $\Box$ \noindent {\bf The monodromies along two generators.} In the following Lemma we combine the information on the monodromies along two generators of the fundamental group $\pi_1(X,q)$. We allow the situation when the monodromy along one generator or along both generators of the fundamental group of $X$ is a power of a standard generator of $\pi_1(\mathbb{C}\setminus \{-1,1\},f(q))$.
\begin{lemm}\label{lem3} Let $f:X \to \mathbb{C} \setminus \{-1,1\}$ be a holomorphic function on a connected open Riemann surface $X$ such that $0$ is a regular value of the imaginary part of $f$. Suppose $f$ maps a simple relatively closed curve $L_0$ in $X$ to $(-1,1)$, and $q$ is a point in $L_0$. Let $e^{(1)}$ and $e^{(2)}$ be primitive elements of $\pi_1(X,q)$. Suppose that for each $e= e^{(1)},\;e= e^{(2)}$, and $e=e^{(1)}\,e^{(2)}$, the free homotopy class $\widehat e$ intersects $L_0$. Then either $f_*(e^{(j)}),\, j=1,2,\,$ are (trivial or non-trivial) powers of the same standard generator of $\pi_1(\mathbb{C} \setminus \{-1,1\},q')$ with $q'=f(q) \in (-1,1)$, or each of them is the product of at most two elements $w_1$ and $w_2$ of $\pi_1(\mathbb{C} \setminus \{-1,1\},q')$ with \begin{equation}\nonumber \mathcal{L}_-(w_j) \leq 2\pi \lambda_{e^{(1)},e^{(2)}},\, j=1,2, \end{equation} where \begin{equation}\nonumber \lambda_{e^{(1)},e^{(2)}} \stackrel{def}=\max\{\lambda(A(\reallywidehat{e^{(1)}})),\, \lambda(A(\reallywidehat{e^{(2)}})),\, \lambda(A(\reallywidehat{e^{(1)}\,e^{(2)}}))\}. \end{equation} Hence, \begin{equation}\label{eq3e'} \mathcal{L}_-(f_*(e^{(j)})) \leq 4 \pi \lambda_{e^{(1)},e^{(2)}}, \, j=1,2. \end{equation} \end{lemm} \noindent {\bf Proof.} If the monodromies $f_*(e^{(1)})$ and $f_*(e^{(2)})$ are not powers of a single standard generator (the identity is considered as the zeroth power of a standard generator), we obtain the following. At most two of the elements $f_*(e^{(1)})$, $f_*(e^{(2)})$, and $f_*(e^{(1)}\, e^{(2)})= f_*(e^{(1)})\, f_*(e^{(2)})$ are powers of a standard generator, and if two of them are powers of a standard generator, then they are non-zero powers of different standard generators. If two of them are non-zero powers of standard generators, then the third has the form $a_{\ell }^{k} a_{\ell '}^{k'}$ with $a_{\ell }$ and $a_{\ell ' }$ being different generators and $k$ and $k'$ being non-zero integers. By Lemma \ref{lem2} the $\mathcal{L}_-$ of the third element does not exceed $2\pi \lambda_{e^{(1)},e^{(2)}}$. On the other hand, it equals $\log(3|k|)+ \log(3|k'|)$. Hence, $\mathcal{L}_-(a_{\ell }^{k}) = \log(3|k|)\leq 2\pi \lambda_{e^{(1)},e^{(2)}}$ and $\mathcal{L}_-(a_{\ell' }^{k'}) = \log(3|k'|)\leq 2\pi \lambda_{e^{(1)},e^{(2)}}$. If two of the elements $f_*(e^{(1)})$, $f_*(e^{(2)})$, and $f_*(e^{(1)}\, e^{(2)})= f_*(e^{(1)})\, f_*(e^{(2)})$ are not powers of a standard generator, then the $\mathcal{L}_-$ of each of these two elements does not exceed $2\pi \lambda_{e^{(1)},e^{(2)}}$. Since the $\mathcal{L}_-$ of an element coincides with the $\mathcal{L}_-$ of its inverse, the third element is the product of two elements with $\mathcal{L}_-$ not exceeding $2\pi \lambda_{e^{(1)},e^{(2)}}$. Since for $x,x'\geq 2$ the inequality $\log(x+x')\leq \log x +\log x'$ holds (indeed, $x+x'\leq x\,x'$ for $x,x'\geq 2$), the $\mathcal{L}_-$ of the product does not exceed the sum of the $\mathcal{L}_-$ of the factors. Hence, the $\mathcal{L}_-$ of the third element does not exceed $4\pi \lambda_{e^{(1)},e^{(2)}}$. Hence, inequality \eqref{eq3e'} holds. $\Box$ The following proposition states the existence of suitable connected components of the zero set of the imaginary part of certain analytic functions on tori with a hole and on planar domains. For any subset $\mathcal{E}'$ of $\pi_1(X,q_0)$ we denote by $(\mathcal{E}')^{-1}$ the set of all elements that are inverse to elements in $\mathcal{E}'$.
Recall that $\mathcal{E}_j$ is the set of primitive elements of $ \pi_1(X,q_0)$ which can be written as a product of at most $j$ elements of $\mathcal{E}\cup (\mathcal{E})^{-1} $ for the set $\mathcal{E}$ of generators of $ \pi_1(X,q_0)$ chosen in the introduction. \begin{prop}\label{prop2a} Let $X$ be a torus with a hole or a planar domain with base point $q_0$ and fundamental group $\pi_1(X,q_0)$, and let $\mathcal{E}$ be a set of generators of $\pi_1(X,q_0)$ that is associated to a standard bouquet of circles for $X$. Let $f: X \to \mathbb{C}\setminus \{-1,1\}$ be a non-contractible holomorphic mapping such that $0$ is a regular value of ${\rm{Im}} f$. Then there exist a simple relatively closed curve $L_0\subset X$ such that $f(L_0) \subset \mathbb{R}\setminus \{-1,1\},$ and a set $\mathcal{E}_2' \subset \mathcal{E}_2 \subset\pi_1(X,q_0)$ of primitive elements of $\pi_1(X,q_0)$, such that the following holds. Each element $e_{j,0} \in \mathcal{E} \subset \pi_1(X,q_0)$ is the product of at most two elements of $\mathcal{E}_2'\cup (\mathcal{E}_2')^{-1} $. Moreover, for each $e_0 \in \pi_1(X,q_0)$ which is the product of one or two elements from $\mathcal{E}_2'$ the free homotopy class $\widehat{ e_0}$ has positive intersection number with $L_0$ (after suitable orientation of $L_0$). If $X$ is a torus with a hole or $X$ equals $\mathbb{P}^1$ with three holes, we may choose $\mathcal{E}_2'$ consisting of two elements, one of them contained in $\mathcal{E}$, the other either contained in $\mathcal{E}\cup \mathcal{E}^{-1}$ or a product of two elements of $\mathcal{E}$. \end{prop} Notice the following facts. By Theorem E a mapping $f:X\to \mathbb{C}\setminus \{-1,1\}$ is contractible if and only if for each $e_0\in \pi_1(X,q_0)$ the monodromy $f_*(e_0)$ is equal to the identity. The mapping $f$ is reducible if and only if the monodromy mapping $f_*:\pi_1(X,q_0)\to \pi_1(\mathbb{C}\setminus\{-1,1\},f(q_0))$ is conjugate to a mapping into a subgroup $\Gamma$ of $\pi_1(\mathbb{C}\setminus\{-1,1\},f(q_0))$ that is generated by a single element that is represented by a curve which separates one of the points $1,-1$ or $\infty$ from the other points. In other words, $\Gamma$ is (after identifying fundamental groups with different base point up to conjugacy) generated by a conjugate of one of the elements $a_1$, $a_2$ or $a_1a_2$ of $\pi_1(\mathbb{C}\setminus\{-1,1\},0)$. If $f$ is irreducible, then it is not contractible, and, hence, the preimage $f^{-1}(\mathbb{R})$ is not empty. Denote by $M_1$ a M\"obius transformation which permutes the points $-1,\,1,\, \infty$ and maps the interval $(-\infty,-1)$ onto $(-1,1)$, and let $M_2$ be a M\"obius transformation which permutes the points $-1,\,1,\, \infty$ and maps the interval $(1,\infty)$ onto $(-1,1)$. Let $M_0\stackrel{def}= \mbox {Id}$. The main step for the proof of Theorem \ref{thm1} is the following Proposition \ref{prop2}. Recall that $\lambda_j(X)$ was defined in the introduction. Since for $e_0\in \pi_1(X,q_0)$ the equality $\lambda(\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0\rangle))=\lambda(A(\widehat{e_0}))$ holds, $\lambda_j(X)$ is the maximum of $\lambda(A(\widehat{e_0}))$ over $e_0\in \mathcal{E}_j$. \begin{prop}\label{prop2} Let $X$ be a connected finite open Riemann surface with base point $q_0$, and let $\mathcal{E}$ be the set of generators of $\pi_1(X,q_0)$ that was chosen in Section 1.
Suppose $f:X \to \mathbb{C}\setminus \{-1,1\}$ is an irreducible holomorphic mapping, such that $0$ is a regular value of ${\rm{Im}}f$. Then for one of the functions $M_l \circ f,\, l=0,1,2,\,$ which we denote by $F$, there exist a point $q \in X$ (depending on $f$), such that the point $q' \stackrel{def}=F(q)$ is contained in $(-1,1)$, and a curve $\alpha$ in $X$ joining $q_0$ with $q$, such that the following holds. For each element $e_j\in\mbox{Is}_{\alpha}(\mathcal{E})$ the monodromy $F_*(e_j)$ is the product of at most four elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding $2 \pi \lambda_7(X)$ and, hence, \begin{equation}\label{eq3b} \mathcal{L}_-(F_*(e_j)) \leq 8\pi \lambda_7(X) \;\; \mbox{for each}\;\, j. \end{equation} If $X$ is a torus with a hole the proposition holds with $\lambda_7(X)$ replaced by $\lambda_3(X)$. If $X$ is a planar domain the proposition holds with $\lambda_4(X)$ instead of $\lambda_7(X)$. \end{prop} Notice that all monodromies of contractible mappings are equal to the identity, hence the inequality \eqref{eq3b} holds automatically for contractible mappings. \noindent We postpone the proof of the two propositions and first prove Theorem \ref{thm1}. \noindent {\bf Proof of Theorem \ref{thm1}.} Let $X$ be a connected finite open Riemann surface (possibly of second kind) with base point $q_0$. Consider an arbitrary open Riemann surface $X^0 \Subset X$ which is relatively compact in $X$ and is a deformation retract of $X$. Consider a free homotopy class of mappings from $X$ to $\mathbb{C}\setminus \{-1,1\}$, that is represented by an irreducible holomorphic mapping ${\sf f}:X\to \mathbb{C}\setminus \{-1,1\}$. Notice that the restriction ${\sf f}|X^0$ is also irreducible. Take a small enough positive number $\varepsilon$, such that the function $({\sf f} - i \varepsilon)\mid X^0$ takes values in $\mathbb{C}\setminus \{-1,1\}$ and $0$ is a regular value of its imaginary part. Put $f= ({\sf f} - i \varepsilon)\mid X^0$. If $\varepsilon$ is small enough, then the irreducible mapping $f$ on $X^0$ is free homotopic to ${\sf f}\mid X^0$. We identify the fundamental groups of $X$ and of $X^0$ by the inclusion mapping from $X^0$ to $X$. Proposition \ref{prop2} applied to the mapping $f:X^0\to \mathbb{C}\setminus\{-1,1\}$ provides a M\"obius transformation $M_l$ that maps one of the components of $\mathbb{R}\setminus \{-1,1\}$ onto $(-1,1)$, and further a point $q\in X^0$ and a curve $\alpha$ in $X^0$ with initial point $q_0$ and terminating point $q$, such that for the mapping $F=M_l \circ f$ the point $q'\stackrel{def}= F(q)$ is contained in $(-1,1)$, and for the generators $e_j\stackrel{def}={\rm Is}_{\alpha}(e_{j,0}),\; e_{j,0}\in \mathcal{E},$ of $\pi_1(X^0,q)$ the inequalities \eqref{eq3b} hold. After identifying the fundamental groups $\pi_1(X,q)$ and $\pi_1(X,q_0)$ with different base point by an isomorphism that is defined up to conjugation, Theorem E states that the free homotopy class of $F$ corresponds to a conjugacy class of homomorphisms $$ \pi_1(X,q_0)\cong \pi_1(X,q)\to \pi_1(\mathbb{C}\setminus \{-1,1\},q')\,, $$ that is represented by a homomorphism $h$ for which $\mathcal{L}_-(h(e_{j,0})) \leq 8\pi \lambda_7(X^0)$ for each $e_{j,0}\in \mathcal{E}$.
More explicitly, there exists a smooth mapping $\tilde{F}:X^0\to \mathbb{C}\setminus\{-1,1\}$, that is free homotopic to $F$, maps $q_0$ to $q'$, and satisfies the inequality \begin{equation}\label{eq3b+} \mathcal{L}_-(\tilde{ F}_*(e_{j,0})) \leq 8\pi \lambda_7(X^0) \end{equation} for each $e_{j,0}\in \mathcal{E}$. The existence of the smooth mapping $\tilde{F}$ can be seen explicitly as follows. Write $e=\mbox{Is}_{\alpha}(e_{0}) \in \pi_1(X,q)$ for each $e_{0}\in \pi_1(X,q_0)$. Parameterise $\alpha$ by the interval $[0,1]$. The image of $\alpha$ under the mapping $F$ is the curve $\beta= F \circ \alpha$ in $\mathbb{C} \setminus \{-1,1\}$ with initial point $F(q_0)$ and terminating point $F(q)=q'$. Then $F_*(e_{0})=(\mbox{Is}_{\beta})^{-1}(F_*(e))$. Choose a homotopy $F_t,\, t \in [0,1]$, that joins the mapping $F_0 \stackrel{def}=F$ with a (smooth) mapping $F_1$ denoted by $\tilde F$, so that $F_t(q_0)=\beta(t),\, t\in [0,1]$. The value $\beta(t)$ moves from the point $\beta(0)=F(q_0)$ to $\beta(1)=q'$ along the curve $\beta$. Then $\tilde F(q_0)=q'$ and $\tilde F_*(e_{0})=F_*(e)$ for each $e_0 \in \pi_1(X,q_0)$. Indeed, denote by $\beta_t$ the curve that runs from $\beta(t)$ to $\beta(1)$ along $\beta$. Then $\beta_0=\beta$ and $\beta_1$ is a constant curve. Let $\gamma_0$ be a curve that represents $e_0$. The base point of the curve $F_t(\gamma_0)$ equals $F_t(q_0)=\beta(t)$. Hence, we obtain a continuous family of curves $\beta_t^{-1} F_t(\gamma_0) \beta_t$ with base point $\beta(1)=F(q)$. For $t=1$ the curve is equal to $F_1(\gamma_0)=\tilde{F}(\gamma_0)$, for $t=0$ the curve is equal to $\beta^{-1} F_0(\gamma_0) \beta=F_0(\alpha^{-1}\, \gamma_0\,\alpha)=F_0(\gamma)$, where $\gamma\stackrel{def}=\alpha^{-1}\, \gamma_0\,\alpha$ represents $e={\rm Is}_{\alpha}(e_0)$. Since the two curves $F_1(\gamma_0)$ and $F_0(\gamma)$ are homotopic and $F_1=\tilde F$, $F_0=F$, we obtain $\tilde{F}_*(e_0)=F_*(e)$. The inequalities \eqref{eq3b} imply the inequalities \eqref{eq3b+}. For each irreducible holomorphic mapping ${\sf f}:X\to \mathbb{C}\setminus \{-1,1\}$ we found a M\"obius transformation $M_l$ and a mapping $\tilde F :X^0\to \mathbb{C}\setminus \{-1,1\}$ that satisfies the condition ${\tilde F} (q_0)=q'\in (-1,1)$ and the inequalities \eqref{eq3b+}, and is free homotopic on $X^0$ to $M_l(({\sf f}-i \varepsilon)|X^0)$ for a small number $\varepsilon$, and, hence, is free homotopic to $M_l({\sf f}|X^0)$. Using a deformation retraction we obtain a mapping $\tilde{F}^X:X\to \mathbb{C}\setminus \{-1,1\}$ that is free homotopic on $X^0$ to $\tilde{F}$ and, hence, to $M_l({\sf f}|X^0)$. Identifying the fundamental groups $\pi_1(X^0,q_0)$ and $\pi_1(X,q_0)$ by the homomorphism induced by the inclusion and applying Theorem E, we obtain for each irreducible holomorphic mapping ${\sf f}:X\to \mathbb{C}\setminus \{-1,1\}$ a M\"obius transformation $M_l$ and a smooth mapping $\tilde{F}^X:X\to \mathbb{C}\setminus \{-1,1\}$ that is free homotopic to $M_l(\sf{ f})$ on $X$, and satisfies the condition ${\tilde F}^X (q_0)=q'\in (-1,1)$ and the inequalities \eqref{eq3b+}. If ${\sf f}:X\to \mathbb{C}\setminus \{-1,1\}$ is a contractible mapping, it is free homotopic to the function $\tilde{ F}^X\equiv 0$ on $X$, and the inequalities \eqref{eq3b+} are automatically satisfied for the monodromies of $\tilde{F}^X$. The number of free homotopy classes of mappings $X\to \mathbb{C}\setminus\{-1,1\}$ that contain a smooth mapping $\tilde{ F}^X$ which satisfies the condition $\tilde{F}^X(q_0)=q'\in (-1,1)$ and the inequalities \eqref{eq3b+} is estimated from above as follows.
By Lemma 1 of \cite{Jo4} there are at most $\frac{1}{2}e^{24\pi \lambda_7(X^0)}+1\leq \frac{3}{2}e^{24\pi \lambda_7(X^0)}$ different reduced words $w \in \pi_1(\mathbb{C}\setminus\{-1,1\},0)$ (including the identity) with $\mathcal{L}_-(w)\leq 8\pi \lambda_7(X^0)$. Identify standard generators of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$ with standard generators of $\pi_1(\mathbb{C}\setminus \{-1,1\},0)$ by the canonical isomorphism. We saw that there are at most $(\frac{3}{2}e^{24\pi \lambda_7(X^0)})^{2g+m}$ different homomorphisms $h:\pi_1(X^0,q_0)\cong \pi_1(X,q_0) \to \pi_1(\mathbb{C}\setminus \{-1,1\},q') \cong \pi_1(\mathbb{C}\setminus \{-1,1\},0)$ with $\mathcal{L}_-(h(e))\leq 8\pi \lambda_7(X^0)$ for each element $e$ of the set of generators $\mathcal{E}$ of $\pi_1(X^0,q_0)$. By Theorem E there are at most $(\frac{3}{2}e^{24\pi \lambda_7(X^0)})^{2g+m}$ different free homotopy classes of mappings $X\to \mathbb{C}\setminus \{-1,1\}$, that contain a smooth mapping $\tilde{ F}^X$ which satisfies the condition $\tilde{F}^X(q_0)=q'\in (-1,1)$ and the inequalities \eqref{eq3b+}. For each irreducible or contractible mapping $\sf{f}$ on $X$ one of the mappings $M_l\circ \sf{f}$, $l=0,1,2,$ is free homotopic to a mapping $\tilde{ F}^X$ which satisfies the condition $\tilde{F}^X(q_0)=q'\in (-1,1)$ and the inequalities \eqref{eq3b+}. The mappings $(M_l)^{-1}\circ\tilde{ F}^X$ represent at most $3(\frac{3}{2}e^{24 \pi \lambda_7(X^0)})^{2g+m}$ free homotopy classes of irreducible or contractible mappings $X\to \mathbb{C}\setminus \{-1,1\}$. Theorem \ref{thm1} is proved with the upper bound $3(\frac{3}{2}e^{24 \pi \lambda_7(X^0)})^{2g+m}$ for an arbitrary relatively compact domain $X^0\subset X$ that is a deformation retract of $X$. It remains to prove that $\lambda_7(X)= \inf\{\lambda_7(X^0): X^0 \Subset X \; \mbox{is\, a\, deformation\, retract\, of\,} X\,\}$. We have to prove that for each $e_0 \in \pi_1(X,q_0)$ the quantity $\lambda(A(\widehat{e_0}))=\lambda(\widetilde{X} \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle))$ is equal to the infimum of $\lambda( \widetilde{X^0}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle))$ over all $X^0$ being open relatively compact subsets of $X$ which are deformation retracts of $X$. Here $\widetilde{X^0}$ is the universal covering of $X^0$, and the fundamental groups of $X$ and $X^0$ are identified. $\widetilde{X^0}$ ($\widetilde{X}$, respectively) can be defined as the set of homotopy classes of arcs in $X^0$ (in $X$, respectively) joining $q_0$ with a point $q\in X^0$ (in $X$, respectively), equipped with the complex structure induced by the projection to the endpoint of the arcs, and the point $\tilde{q}_0$ corresponds to the class of the constant curve. The isomorphism $({\rm Is}^{\tilde{q}_0})^{-1}$ from $\pi_1(X^0,q_0)$ to the group of covering transformations on $\widetilde{X^0}$ is defined in the same way as it was done for $X$ instead of $X^0$. These considerations imply that there is a holomorphic mapping from $\widetilde{X^0} \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)$ into $\widetilde{X} \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)$. Hence, the extremal length of the first set is not smaller than the extremal length of the second set. Vice versa, take any annulus $A^0$ which is a relatively compact subset of $A(\widehat{e_0})$ and is a deformation retract of $A(\widehat{e_0})$. Its projection to $X$ is relatively compact in $X$, hence, it is contained in a relatively compact deformation retract $X^0$ of $X$.
Hence, $A^0$ can be considered as a subset of $\widetilde{X^0} \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)$, and, hence, $\lambda(\widetilde{X^0} \diagup ({\rm Is}^{\tilde{q}_0})^{-1}(\langle e_0 \rangle)) \leq \lambda(A^0)$. Since $\lambda (A(\widehat{e_0})) = \inf \{\lambda (A^0): A^0 \Subset A(\widehat{e_0}) \mbox{\,is\, a\, deformation\, retract\, of\,}A(\widehat{e_0}) \} $, we are done. $\Box$ We proved a slightly stronger statement, namely, the number of homotopy classes of mappings $X\to \mathbb{C}\setminus\{-1,1\}$ that contain a contractible holomorphic mapping or an irreducible holomorphic mapping does not exceed $(\frac{3}{2}e^{24\pi \lambda_7(X)})^{2g+m}$. \noindent {\bf Proof of Proposition \ref{prop2a}.} Denote the zero set $\{x\in X: \mbox{Im}f(x)=0\}$ by $L$. Since $f$ is not contractible, $L\neq \emptyset$. \noindent {\bf 1. A torus with a hole.} Assume first that $X$ is a torus with a hole with base point $q_0$. For notational convenience we denote by $e_0$ and ${e'_0}$ the two elements of the set of generators $\mathcal{E}$ of $\pi_1(X,q_0)$ that is associated to a standard bouquet of circles for $X$. We claim that there is a connected component $L_0$ of $L$ such that (after suitable orientation of $L_0$) the intersection number of the free homotopy class of one of the elements of $\mathcal{E}$, say of $\widehat{e_0}$, with $L_0$ is positive, and the intersection number of one of the classes $\reallywidehat{e'_0}$, $\reallywidehat{(e'_0)^{-1}}$, or $\reallywidehat{e_0 \,e'_0}$ with $L_0$ is also positive. The claim is easy to prove in the case when there is a component $L_0$ of $L$ which is a simple closed curve that is not contractible and not contractible to the hole of $X$. Indeed, consider the inclusion of $X$ into a closed torus $X^c$ and the homomorphism on fundamental groups $\pi_1(X,q_0)\to \pi_1(X^c,q_0)$ induced by the inclusion. Denote by $e_0^c$ and ${e'_0}^c$ the images of $e_0$ and $e_0'$ under this homomorphism. Notice that $e_0^c$ and ${e'_0}^c$ commute. The (image under the inclusion of the) curve $L_0$ is a simple closed non-contractible curve in $X^c$. It represents the free homotopy class of an element $(e_0^c)^j({e'_0}^c)^k$ for some integers $j$ and $k$ which are not both equal to zero. Hence, $L_0$ is not null-homologous in $X^c$, and by the Poincar\'{e} Duality Theorem for one of the generators, say for $e_0^c$, the representatives of the free homotopy class $\reallywidehat{e_0^c}$ have non-zero intersection number with $L_0$. After suitable orientation of $L_0$, we may assume that this intersection number is positive. There is a representative of the class $\reallywidehat{e_0^c}$ which is contained in $X$, hence, $\widehat{e_0}$ has positive intersection number with $L_0$. Suppose all compact connected components of $L$ are contractible or contractible to the hole of $X$. Consider a relatively compact domain ${X}''\Subset {X}$ in ${X}$ with smooth boundary which is a deformation retract of ${X}$ such that for each connected component of $L$ at most one component of its intersection with ${X}''$ is not contractible to the hole of ${X}''$. (See the paragraph on ``Regular zero sets''.) There is at least one component of $L\cap {X}''$ that is not contractible to the hole of ${X}''$. Indeed, otherwise the free homotopy class of each element of $\mathcal{E}$ could be represented by a loop avoiding $L$, and, hence, the monodromy of $f$ along each element of $\mathcal{E}$ would be conjugate to the identity, and, hence, equal to the identity, i.e.
contrary to the assumption, $f:X\to \mathbb{C}\setminus \{-1,1\} $ would be free homotopic to a constant. Take a component $L_0''$ of $L\cap {X}''$ that is not contractible to the hole of ${X}''$. There is an arc of $\partial {X}''$ between the endpoints of $L_0''$ such that the union $\tilde{L}_0$ of the component $L_0''$ with this arc is a closed curve in ${X}$ that is not contractible and not contractible to the hole. Hence, for one of the elements of $\mathcal{E}$, say for $e_0$, the intersection number of the free homotopy class $\widehat{e_0}$ with the closed curve $\tilde{L}_0$ is positive after orienting the curve $\tilde{L}_0$ suitably. We may take a representative $\gamma$ of $\widehat{e_0}$ that is contained in ${X}''$. Then $\gamma$ has positive intersection number with $L_0''$. Denote the connected component of $L$ that contains $L_0''$ by $L_0$. All components of $L_0\cap {X}''$ different from $L_0''$ are contractible to the hole of ${X}''$. Hence, $\gamma$ has intersection number zero with each of these components. Hence, $\gamma$ has positive intersection number with $L_0$ since ${\gamma}\subset {X}''$. We proved that the class $\widehat{e_0}$ has positive intersection number with $L_0$. If $\reallywidehat{e'_0}$ also has non-zero intersection number with $L_0$, we define $e''_0=(e'_0)^{\pm 1}$ so that the intersection number of $\reallywidehat{e''_0}$ with $L_0$ is positive. If $\reallywidehat{e'_0}$ has zero intersection number with $L_0$ we put $e''_0=e_0 \,e'_0$. Then again the intersection number of $\reallywidehat{e''_0}$ with $L_0$ is positive. Also, the intersection number of $\reallywidehat{e_0 \, e''_0}$ with $L_0$ is positive. The set $\mathcal{E}_2'\stackrel{def}=\{e_0,e_0''\}$ satisfies the condition required in the proposition. This proves Proposition \ref{prop2a} for a torus with a hole. \noindent {\bf 2. A planar domain.} Let $X$ be a planar domain. The domain $X$ is conformally equivalent to a disc with $m$ smoothly bounded holes, equivalently, to the Riemann sphere with $m+1$ smoothly bounded holes, $\mathbb{P}^1 \setminus \bigcup_{j=1}^{m+1}\mathcal{C}_j$, where $\mathcal{C}_{m+1}$ contains the point $\infty$. As before the base point of $X$ is denoted by $q_0$, and for each $j=1,\ldots,m,$ the generator $e_{j,0}\in \mathcal{E}\subset \pi_1(X,q_0)$ is represented by a curve that surrounds $\mathcal{C}_j$ once counterclockwise. Since $f$ is not contractible, there must be a connected component of $L$ that has limit points on the boundary of some $\mathcal{C}_j$ with $j\leq m$. Indeed, otherwise the free homotopy class of each generator could be represented by a curve that avoids $L$. This would imply that all monodromies are equal to the identity. We claim that there exists a component $L_0$ of $L$ with limit points on two boundary components $\partial {\mathcal{C}}_{j'}$ and $\partial {\mathcal{C}}_{j''}$ for some $j', j'' \in \{1,\ldots,m+1\}$ with $j''\neq j'$. Indeed, assume the contrary. Then, if a component of $L$ has limit points on a component $\partial {\mathcal{C}}_{j},\, j\leq m,$ then all its limit points are on $\partial {\mathcal{C}}_{j}$. Take a smoothly bounded simply connected domain ${\mathcal{C}}'_{j}\Subset X\cup {{\mathcal{C}}_{j}}$ that contains the closed set ${\mathcal{C}}_{j}$, so that its boundary $\partial {\mathcal{C}}'_{j}$ represents $\reallywidehat{e_{j,0}}$. Then all components $L'_k$ of $L\setminus {\mathcal{C}}'_{j} $ with an endpoint on $\partial \mathcal{C}'_{j}$ have another endpoint on this curve.
The two endpoints of $L_k'$ on $\partial {\mathcal{C}}'_{j}$ divide $\partial {\mathcal{C}}'_{j}$ into two connected components. The union of $\overline{L_k'}$ with each of the two components of $\partial {\mathcal{C}}'_{j}\setminus \overline{L_k'}$ is a simple closed curve in $\mathbb{C}$, and, hence, by the Jordan Curve Theorem it bounds a relatively compact topological disc in $\mathbb{C}$. One of these discs contains $ {\mathcal{C}}'_{j}$, the other does not. Assign to each component $L_k'$ of $L \setminus {\mathcal{C}}'_{j} $ with both endpoints on $\partial \mathcal{C}'_{j} $ the closed arc $\alpha_k$ in $\partial \mathcal{C}'_{j}$ with the same endpoints as $L_k'$, whose union with $L_k'$ bounds a relatively compact topological disc in $\mathbb{C}$ that does not contain $\mathcal{C}'_{j}$. These discs are partially ordered by inclusion, since the $L_k'$ are pairwise disjoint. Hence, the arcs $\alpha_k$ are partially ordered by inclusion. For an arc $\alpha_k$ which contains no other of the arcs (a minimal arc) the curve $f \circ \alpha_k$, except its endpoints, is contained in $\mathbb{C}\setminus \mathbb{R}$. Moreover, the endpoints of $f \circ \alpha_k$ lie on ${f(\overline{L_k'})}$, which is contained in one connected component of $\mathbb{R}\setminus \{-1,1\}$, since $\overline{L_k'}$ is connected. Hence, the curve $f \circ \alpha_k$ is homotopic in $\mathbb{C}\setminus \{-1,1\}$ (with fixed endpoints) to a curve in $\mathbb{R} \setminus \{-1,1\}$. The function $f$ either maps all points on $\partial \mathcal{C}'_{j}\setminus \alpha_k$ that are close to $\alpha_k$ to the open upper half-plane or maps them all to the open lower half-plane. (Recall that zero is a regular value of $\mbox{Im}f$.) Hence, for an open arc $\alpha'_k \subset \partial \mathcal{C}'_{j} $ that contains $\alpha_k$ the curve $f \circ \alpha'_k $ is homotopic in $\mathbb{C}\setminus \{-1,1\}$ (with fixed endpoints) to a curve in $\mathbb{C}\setminus \mathbb{R}$. Consider the arcs $\alpha_k$ with the following property. For an open arc $\alpha'_k$ in $\partial \mathcal{C}'_{j} $ which contains the closed arc $\alpha_k$ the mapping $f \circ \alpha'_k$ is homotopic in $\mathbb{C}\setminus \{-1,1\}$ (with fixed endpoints) to a curve contained in $\mathbb{C}\setminus \mathbb{R}$. Induction on the arcs by inclusion shows that this property is satisfied for all maximal arcs among the $\alpha_k$ and, hence, $f \mid \partial \mathcal{C}'_{j} $ is contractible in $\mathbb{C}\setminus \{-1,1\}$. Hence, if the claim were not true, then for each hole $\mathcal{C}_j,\, j\leq m$, whose boundary contains limit points of a connected component of $L$, the monodromy along the curve $\partial\mathcal{C}'_j$ (with any base point), which represents $\reallywidehat{e_{j,0}}$, would be trivial. Then all monodromies would be trivial, which contradicts the fact that the mapping is not contractible. The contradiction proves the claim. With $j'$ and $j''$ being the numbers of the claim and $j'\leq m$ we consider the set $\mathcal{E}_2' \subset \mathcal{E}_2$ which consists of the following primitive elements: $e_{j',0}$, the element $(e_{j'',0})^{-1}$ provided $j'' \neq m+1$, and $e_{j',0}\, e_{j,0}$ for all $j=1,\ldots,m,\, j\neq j', j \neq j''$. The free homotopy class of each element of $\mathcal{E}_2'$ has intersection number $1$ with $L_0$ after suitable orientation of the curve $L_0$. Each product of at most two different elements of $\mathcal{E}_2'$ is a primitive element of $\pi_1(X,q_0)$ and is contained in $\mathcal{E}_4$.
Moreover, the intersection number with $L_0$ of the free homotopy class of each product of at most two different elements of $\mathcal{E}_2'$ equals $1$ or $2$. Each element of $\mathcal{E}$ is the product of at most two elements of $\mathcal{E}_2'\cup (\mathcal{E}_2')^{-1}$. The proposition is proved for the case of planar domains $X$. $\Box$ \noindent {\bf Proof of Proposition \ref{prop2}.} \noindent {\bf 1. A torus with a hole.} Consider the curve $L_0$ and the set $\mathcal{E}_2' \subset \pi_1(X,q_0)$ obtained in Proposition \ref{prop2a}. For one of the functions $M_l \circ f$, denoted by $F$, the image $F (L_0)$ is contained in $(-1,1)$. Let $e_0$, $e_0'$ be the two elements of $\mathcal{E}$. Move the base point $q_0$ to a point $q\in L_0$ along a curve $\alpha$ in $X$, and consider the generators $e=\mbox{Is}_{\alpha} (e_0)$ and $e'=\mbox{Is}_{\alpha} ({e'_0})$ of $ \pi_1(X,q)$, and the set $\mbox{Is}_{\alpha}(\mathcal{E}_2')\subset \pi_1(X,q)$. Then $e$ and $e'$ are products of at most two elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$. Since the free homotopy class of an element of $\pi_1(X,q_0)$ coincides with the free homotopy class of the element of $\pi_1(X,q)$ obtained by applying $\mbox{Is}_{\alpha}$, the free homotopy class of each product of one or two elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ intersects $L_0$. We may assume as in the proof of Proposition \ref{prop2a} that $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ consists of the elements $e$ and $e''$, where $e''$ is either equal to ${e'}^{\pm 1}$, or equals the product of $e$ and $e'$. Lemma \ref{lem3} applies to the pair $e$, $e''$, the function $F$, and the curve $L_0$. Since $F$ is irreducible, the monodromies of $F$ along $e$ and $e''$ are not powers of a single standard generator of the fundamental group $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$. Hence, the monodromy along each of the $e$ and $e''$ is the product of at most two elements of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding $2 \pi\lambda_{e,e''}$. Therefore, the monodromy of $F$ along each of the $e$ and $e''$ has $\mathcal{L}_-$ not exceeding $4 \pi\lambda_{e,e''}$. Notice that $\lambda_{e,e''}=\lambda_{e_0,e_0''}\leq \lambda_3(X)$, since $e_0''$ is the product of at most two factors, each an element of $\mathcal{E}\cup \mathcal{E}^{-1}$. Since $e'$ is the product of at most two different elements among the $e$ and $e''$ and their inverses, we obtain Proposition \ref{prop2} for $e$ and $e'$, in particular $\mathcal{L}_-(F_*(e))$ and $\mathcal{L}_-(F_*( e'))$ do not exceed $8\pi \lambda_3(X)$. Proposition \ref{prop2} is proved for tori with a hole. \noindent {\bf 2. A planar domain.} Consider the curve $L_0$ and the set $\mathcal{E}_2'$ of Proposition \ref{prop2a}. Move the base point $q_0$ along an arc $\alpha$ to a point $q \in L_0$. Then $f(q) \in \mathbb{R}\setminus \{-1,1\}$ and for one of the mappings $f$, $M_1 \circ f$, or $M_2 \circ f$, denoted by $F$, the inclusion $F(L_0)\subset (-1,1)$ holds, hence, $q'\stackrel{def}=F(q)$ is contained in $(-1,1)$. Denote $e_j = \mbox{Is}_{\alpha}(e_{j,0})$ for each element $e_{j,0}\in \mathcal{E}$. The $e_j$ form the basis $\mbox{Is}_{\alpha}(\mathcal{E})$ of $\pi_1(X,q)$. The set $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ consists of primitive elements of $\pi_1(X,q)$ such that the free homotopy class of each product of one or two elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ intersects $L_0$.
Moreover, each element of $\mbox{Is}_{\alpha}(\mathcal{E})$ is the product of one or two elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2') \cup (\mbox{Is}_{\alpha}(\mathcal{E}_2'))^{-1}$. By the condition of the proposition not all monodromies $F_*(e),\, e \in \mbox{Is}_{\alpha}(\mathcal{E}_2'),$ are (trivial or non-trivial) powers of the same standard generator of $\pi_1(\mathbb{C}\setminus \{-1,1\},q')$. Apply Lemma \ref{lem3} to all pairs of elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ whose monodromies are not (trivial or non-trivial) powers of the same standard generator of $\pi_1(\mathbb{C} \setminus \{-1,1\},q')$. Since the product of at most two different elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ is contained in $\mbox{Is}_{\alpha}(\mathcal{E}_4)$, Lemma \ref{lem3} shows that the monodromy $F_*(e)$ along each element $e \in \mbox{Is}_{\alpha}(\mathcal{E}_2')$ is the product of at most two factors, each with $\mathcal{L}_-$ not exceeding $2 \pi \lambda_4(X)$. Since each element of $\mbox{Is}_{\alpha}(\mathcal{E}) $ is a product of at most two factors in $\mbox{Is}_{\alpha}(\mathcal{E}_2') \cup (\mbox{Is}_{\alpha}(\mathcal{E}_2'))^{-1}$, the monodromy $F_*(e_j)$ along each generator $e_j$ of $\pi_1(X,q)$ is the product of at most $4$ factors with $\mathcal{L}_-$ not exceeding $2 \pi \lambda_4(X)$, and, hence, each monodromy $F_*(e_j)$ has $\mathcal{L}_-$ not exceeding $8 \pi \lambda_4(X)$. Proposition \ref{prop2} is proved for planar domains. \noindent {\bf 3.1. The general case. Diagrams of coverings.} We will use diagrams of coverings to reduce this case to the case of a torus with a hole or to the case of the Riemann sphere with three holes. Let, as before, $\tilde{q}_0$ be the point in $\tilde X$ with ${\sf P}(\tilde{q}_0)=q_0$ chosen in Section \ref{sec:1a}. Let $N$ be a subgroup of the fundamental group $\pi_1(X,q_0)$ and let $\omega^N:\tilde{X}\to \tilde{X}\diagup({\rm Is}^{\tilde{q}_0})^{-1}(N)=X(N)$ be the projection defined in Section \ref{sec:1a}. Put $(q_0)_N \stackrel{def}= \omega^N(\tilde{q}_0)$. For an element $e_0\in N\subset \pi_1(X,q_0)$ we denote by $(e_0)_N$ the element of $\pi_1(X(N),(q_0)_N)$ that is obtained as follows. Take a curve $\gamma$ in $X$ with base point $q_0$ that represents $e_0\in N$. Let $\tilde \gamma$ be its lift to $\tilde X$ with terminating point $\tilde{q}_0$. Then $\gamma_N\stackrel{def}=\omega^N(\tilde{\gamma})$ is a closed curve in $X(N)=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N) $ with base point $(q_0)_N$. The element of $\pi_1(X(N), (q_0)_N)$ represented by $\gamma_N$ is the required element $(e_0)_N$. All curves $\gamma'_N$ representing $(e_0)_N$ have the form $\omega^N(\tilde{\gamma}')$ for a curve $\tilde{\gamma}'$ in $\tilde X$ with terminating point $\tilde{q}_0$ and initial point $({\rm Is}^{\tilde{q}_0})^{-1}(e_0)(\tilde{q}_0)$. Since $\omega_N\circ\omega^N={\sf P}$, the curve $\omega_N(\gamma'_N)={\sf P}(\tilde{\gamma}')={\gamma}'$ represents $e_0$ for each curve $\gamma'_N$ in $X(N)$ that represents $(e_0)_N$. We obtain $(\omega_N)_*((e_0)_N)=e_0$. For two subgroups $N_1\leq N_2$ of $\pi_1(X,q_0)$ we obtain $(\omega^{N_2}_{N_1})_*((e_0)_{N_1})=(e_0)_{N_2}$, $e_0\in N_1$ (see the commutative diagram Figure \ref{fig3}). Let $\tilde q$ be another base point of $\tilde X$ and let $\tilde \alpha$ be a curve in $\tilde X$ with initial point $\tilde{q}_0$ and terminating point $\tilde q$. Let again $N$ be a subgroup of $\pi_1(X,q_0)$. Put $q_N\stackrel{def}=\omega^N(\tilde{q})$.
The curve $\alpha_N=\omega^N(\tilde{\alpha})$ in $X(N)$, and the base point $\tilde q$ of $\tilde X$ are compatible, hence, $({\rm Is}^{\tilde{q}_0})^{-1}(N)=({\rm Is}^{\tilde{q}})^{-1}({\rm Is}_{\alpha_N}(N))$ and $X({\rm Is}_{\alpha_N}(\sigma) )=\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}({\rm Is}_{\alpha_N}(\sigma)),\, \sigma\in N,$ is canonically isomorphic to $X(N)=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N) $. We will use the previous notation $\omega^{N_2}_{N_1}$ also for the projection $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}({\rm Is}_{\alpha_{N_1}}(N_1))\to \tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}({\rm Is}_{\alpha_{N_2}}(N_2)) $, $N_1\leq N_2$ being subgroups of $\pi_1(X,q_0)$ ($N_1$ may be the identity and $N_2$ may be $\pi_1(X,q_0)$.) Put $\alpha\stackrel{def}={\sf P}(\tilde{\alpha})$. For an element $e_0\in \pi_1(X,q_0)$ we put $e\stackrel{def}={\rm Is}_{\alpha}(e_0)\in \pi_1(X,q)$ and denote by $e_N$ the element of $\pi_1(X(N),q_N)$, that is represented by $\omega^N(\tilde{\gamma})$ for a curve $\tilde \gamma$ in $\tilde X$ with terminating point $\tilde q$ and projection ${\sf P}(\tilde{\gamma})$ representing $e$. Again $(\omega^{N_2}_{N_1})_*(e_{N_1})=e_{N_2}$ for subgroups $N_1\leq N_2$ of $\pi_1(X,q)$ and $e\in N_1$, in particular $( \omega_{N})_*(e_{N})=e$ for a subgroup $N$ of $\pi_1(X,q)$ and $e\in N$. \noindent {\bf 3.2. The estimate for a chosen pair of monodromies.} Since the mapping $f:X\to \mathbb{C}\setminus\{-1,1\}$ is irreducible, there exist two elements $e_0',\, e_0''\in \pi_1(X,q_0)$ such that the monodromies $f_*(e_0')$ and $f_*(e_0'')$ are not powers of a single conjugate of a power of one of the elements $a_1$, $a_2$ or $a_1a_2$. The fundamental group of the Riemann surface $X(\langle e_0',e_0''\rangle)$ is a free group in the two generators $(e'_0)_{\langle e_0',e_0''\rangle}$ and $(e''_0)_{\langle e_0',e_0''\rangle}$, hence, $X(\langle e_0',e_0''\rangle)$ is either a torus with a hole or is equal to $\mathbb{P}^1$ with three holes. Moreover, the system $\mathcal{E}_{\langle e_0',e_0''\rangle}=\{(e'_0)_{\langle e_0',e_0''\rangle}, \,(e''_0)_{\langle e_0',e_0''\rangle}\}$ of generators of the fundamental group $\pi_1(X(\langle e_0',e_0''\rangle),(q_0)_{\langle e_0',e_0''\rangle})$ is associated to a standard bouquet of circles for $X(\langle e_0',e_0''\rangle)$. This can be seen as follows. The set of generators $\mathcal{E}$ of $\pi_1(X,q_0)$ is associated to a standard bouquet of circles for $X$. For each $e_0\in \mathcal{E}$ we denote the circle of the bouquet that represents $e_0$ by $\gamma_{e_0}$. For each $e_0\in\mathcal{E}$ we lift the circle $\gamma_{e_0}$ with base point $q_0$ to an arc $\widetilde{\gamma_{e_0}}$ in $\tilde X$ with terminating point $\tilde{q_0}$. Let $D$ be a small disc in $X$ around $q_0$, and $\widetilde{D_0}$, $\widetilde{D_{e_0}}, e_0\in \mathcal{E},$ be the preimages of $D$ under the projection ${\sf P}:\tilde{X}\to X$, that contain $\tilde{q_0}$, or the initial point of $\widetilde{\gamma_{e_0}}$, respectively. We assume that $D$ is small enough so that the mentioned preimages of $D$ are pairwise disjoint. Put $D_{\langle e'_0,e''_0\rangle}=\omega^{\langle e'_0,e''_0\rangle}(\widetilde{D_0})$. 
For $e_0\neq e'_0,e''_0$ the image $\omega^{\langle e'_0,e''_0\rangle}(\widetilde{D_0}\cup \widetilde{\gamma_{e_0}}\cup \widetilde{D_{e_0}} )$ is the union of an arc $\omega^{\langle e'_0,e''_0\rangle} (\widetilde{\gamma_{e_0}})$ in $X(\langle e'_0,e''_0\rangle)$ with two disjoint discs, each containing an endpoint of the arc and one of them equal to $D_{\langle e'_0,e''_0\rangle}$. For $e_0=e'_0,e''_0$ the image $\omega^{\langle e'_0,e''_0\rangle}(\widetilde{D_0}\cup \widetilde{\gamma_{e_0}}\cup \widetilde{D_{e_0}} )$ is the union of $D_{\langle e'_0,e''_0\rangle}$ with the loop $(\gamma_{e_0})_{\langle e'_0,e''_0\rangle}\stackrel{def}= \omega^{\langle e'_0,e''_0\rangle}( \widetilde{\gamma_{e_0}})$. For $e_0=e'_0,e''_0$ the loop $(\gamma_{e_0})_{\langle e'_0,e''_0\rangle}$ in $X(\langle e'_0,e''_0\rangle)$ has base point $(q_0)_{\langle e'_0,e''_0\rangle}=\omega^{\langle e'_0,e''_0\rangle}(\tilde{q_0})$ and represents the generator $(e_0)_{\langle e'_0,e''_0\rangle}$ of the fundamental group $\pi_1(X(\langle e'_0,e''_0\rangle),(q_0)_{\langle e'_0,e''_0\rangle})$. Since the bouquet of circles $\cup_{e_0\in \mathcal{E}}\, \gamma_{e_0}$ is a standard bouquet of circles for $X$, the union $(\gamma_{e'_0})_{\langle e'_0,e''_0\rangle} \cup (\gamma_{e''_0})_{\langle e'_0,e''_0\rangle}$ is a standard bouquet of circles in $X(\langle e'_0,e''_0\rangle)$. This can be seen by looking at the intersections of the loops with a circle that is contained in $D_{\langle e'_0,e''_0\rangle}$ and surrounds $(q_0)_{\langle e'_0,e''_0\rangle}$. By the commutative diagram of coverings the intersection behaviour is the same as for the images of these objects under $\omega_{\langle e'_0,e''_0\rangle}$. Hence, since $(\gamma_{e'_0})_{\langle e'_0,e''_0\rangle}$ and $(\gamma_{e''_0})_{\langle e'_0,e''_0\rangle}$ represent the generators $({e'_0})_{\langle e'_0,e''_0\rangle}$ and $({e''_0})_{\langle e'_0,e''_0\rangle}$ of $\mathcal{E}_{\langle e_0',e_0''\rangle}$, the union $(\gamma_{e'_0})_{\langle e'_0,e''_0\rangle} \cup (\gamma_{e''_0})_{\langle e'_0,e''_0\rangle}$ is a standard bouquet of circles for $X(\langle e'_0,e''_0\rangle)$. The set $X(\langle e_0',e_0''\rangle)$ is either a torus with a hole or is equal to $\mathbb{P}^1$ with three holes. Apply Proposition \ref{prop2a} to the Riemann surface $X(\langle e_0',e_0''\rangle)$ with base point $(q_0)_{\langle e'_0,e''_0\rangle}$, the holomorphic mapping $f_{\langle e_0',e_0''\rangle}=f\circ\omega_{\langle e_0',e_0''\rangle}$ into $\mathbb{C}\setminus \{-1,1\}$, and the set of generators $\mathcal{E}_{\langle e_0',e_0''\rangle}$ of the fundamental group $\pi_1(X(\langle e'_0,e''_0\rangle),(q_0)_{\langle e'_0,e''_0\rangle})$. We obtain a relatively closed curve $L_{\langle e_0',e_0''\rangle}$ on which the function $f_{\langle e_0',e_0''\rangle}$ is real, and a set $(\mathcal{E}_{\langle e_0',e_0''\rangle})_2'=\{({\sf e}_0')_{\langle e_0',e_0''\rangle},({\sf e}_0'')_{\langle e_0',e_0''\rangle}\}$ which contains one of the elements of $\mathcal{E}_{\langle e_0',e_0''\rangle}$. The second element of $(\mathcal{E}_{\langle e_0',e_0''\rangle})_2'$ is either equal to the other element of $\mathcal{E}_{\langle e_0',e_0''\rangle}$ or to its inverse, or to the product of the two elements (in any order) of $\mathcal{E}_{\langle e_0',e_0''\rangle}$.
(We will usually refer to the product $ ({ e}_0')_{\langle e_0',e_0''\rangle}\,({e}_0'') _{\langle e_0',e_0''\rangle}=({ e}_0'\,{ e}_0'')_{\langle e_0',e_0''\rangle} $, but we may change the product $({ e}_0'\,{ e}_0'')_{\langle e_0',e_0''\rangle} $ to the product $({ e}_0''\,{ e}_0')_{\langle e_0',e_0''\rangle} $, without changing the arguments and the estimate of the $\mathcal{L}_-$ of the monodromies of the elements of $\mathcal{E}_2'$.) The free homotopy classes $\reallywidehat{ ({\sf e}_0')_{\langle e_0',e_0''\rangle}}$, $\reallywidehat{ ({\sf e}_0'')_{\langle e_0',e_0''\rangle}}$, and $\reallywidehat{ ({\sf e}_0')_{\langle e_0',e_0''\rangle}\,({\sf e}_0'') _{\langle e_0',e_0''\rangle}}=\reallywidehat{({\sf e}_0'\,{\sf e}_0'')_{\langle e_0',e_0''\rangle}} $ intersect $L_{\langle e_0',e_0''\rangle}$. Choose a point $q_{\langle e_0',e_0''\rangle}\in L_{\langle e_0',e_0''\rangle}$ and a point $\tilde{q}\in\tilde{X}$ with $\omega^{\langle e_0',e_0''\rangle}(\tilde{q})= q_{\langle e_0',e_0''\rangle}$. Let $\tilde{\alpha}$ be a curve in $\tilde X$ with initial point $\tilde{q}_0$ and terminating point $\tilde q$, and $\alpha_{\langle e_0',e_0''\rangle}= \omega^{\langle e_0',e_0''\rangle}(\tilde{\alpha})$. Put ${\sf e}'_{\langle e_0',e_0''\rangle}={\rm Is}_{\alpha_{\langle e_0',e_0''\rangle}}(({\sf e}_0')_{\langle e_0',e_0''\rangle})$ and ${\sf e}''_{\langle e_0',e_0''\rangle}={\rm Is}_{\alpha_{\langle e_0',e_0''\rangle}}(({\sf e}_0'')_{\langle e_0',e_0''\rangle})$. For one of the three M\"obius transformations $M_l$ the mapping $F_{\langle e_0',e_0''\rangle}=M_l\circ f_{\langle e_0',e_0''\rangle}=M_l\circ f\circ\omega_{\langle e_0',e_0''\rangle}$ takes $L_{\langle e_0',e_0''\rangle}$ to $(-1,1)$, and hence $F_{\langle e_0',e_0''\rangle}$ takes a value $q'=F_{\langle e_0',e_0''\rangle}(q_{\langle e_0',e_0''\rangle})\in (-1,1)$ at $q_{\langle e_0',e_0''\rangle}$. By Lemma \ref{lem3} each of the $(F_{\langle e_0',e_0''\rangle})_*({\sf e}'_{\langle e_0',e_0''\rangle})$ and $(F_{\langle e_0',e_0''\rangle})_*({\sf e}''_{\langle e_0',e_0''\rangle})$ is the product of at most two elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding $2\pi \lambda_3(X(\langle e_0',e_0''\rangle))$, hence, \begin{align*} \mathcal{L}_-&((F_{\langle e_0',e_0''\rangle})_*({\sf e}'_{\langle e_0',e_0''\rangle}))\leq 4\pi\lambda_3(X(\langle e_0',e_0''\rangle)),\\ \mathcal{L}_-&((F_{\langle e_0',e_0''\rangle})_*({\sf e}''_{\langle e_0',e_0''\rangle}))\leq 4\pi\lambda_3(X(\langle e_0',e_0''\rangle))\,. \end{align*} It follows that each of the $(F_{\langle e_0',e_0''\rangle})_*({ e}'_{\langle e_0',e_0''\rangle})$ and $(F_{\langle e_0',e_0''\rangle})_*({ e}''_{\langle e_0',e_0''\rangle})$ is the product of at most four elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding $2\pi \lambda_3(X(\langle e_0',e_0''\rangle))$, hence, \begin{align*} \mathcal{L}_-&((F_{\langle e_0',e_0''\rangle})_*({ e}'_{\langle e_0',e_0''\rangle}))\leq 8\pi\lambda_3(X(\langle e_0',e_0''\rangle)),\\ \mathcal{L}_-&((F_{\langle e_0',e_0''\rangle})_*({ e}''_{\langle e_0',e_0''\rangle}))\leq 8\pi \lambda_3(X(\langle e_0',e_0''\rangle))\,. \end{align*} It remains to take into account that for a subgroup $N$ of $\pi_1(X,q_0)$ the equation $(F_N)_*({e}_{N})=F_*({e})$ holds for each ${e}\in {\rm Is}_{\alpha}(N)$, and $\lambda_j(X(N))\leq \lambda_j(X)$ for each natural number $j$. Hence, $\mathcal{L}_-(F_*({e}'))\leq 8\pi\lambda_3(X)$ and $\mathcal{L}_-(F_*({e}''))\leq 8\pi\lambda_3(X)$ for the elements ${e}'={\rm Is}_{\alpha}(e_0')$ and ${e}''={\rm Is}_{\alpha}(e_0'')$ of $\pi_1(X,q)$. \noindent {\bf 3.3. Other generators.
Intersection of free homotopy classes with a component of the zero set.} Take any element $e\in {\rm Is}_{\alpha}(\mathcal{E})$ that is not in $\langle e',e''\rangle$. Then the monodromy $F_*(e)$ is either equal to the identity, or one of the pairs $(F_*(e),F_*({\sf e}')) $ or $(F_*(e),F_*({\sf e}'')) $ consists of two elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ that are not powers of the same standard generator $a_j$, $j=1$ or $2$. Interchanging if necessary ${\sf e}'$ and ${\sf e}''$, we may suppose this option holds for the pair $(F_*(e),F_*({\sf e}')) $. Moreover, changing if necessary ${\sf e}'$ to its inverse $({\sf e}')^{-1}$, we may assume that ${\sf e}'$ is either an element of ${\rm Is}_{\alpha}(\mathcal{E})$ or it is a product of two elements of ${\rm Is}_{\alpha}(\mathcal{E})$. The quotient $X(\langle e,{\sf e}'\rangle)=\tilde{X}\diagup ({\rm Is}^{\tilde q})^{-1}(\langle e, {\sf e}'\rangle)$ is a Riemann surface whose fundamental group is a free group in two generators. Hence $X(\langle e,{\sf e}'\rangle)$ is either a torus with a hole or is equal to $\mathbb{P}^1$ with three holes. We consider a diagram of coverings as follows. Let first $X(\langle {\sf e}'\rangle)= \tilde{X}\diagup ({\rm Is}^{\tilde q})^{-1}(\langle {\sf e}'\rangle)$ be the annulus with base point $q_{\langle {\sf e}'\rangle}=\omega^{\langle {\sf e}'\rangle}(\tilde{q})$, that admits a mapping $\omega_{\langle {\sf e}'\rangle}: X(\langle {\sf e}'\rangle)\to X$ that represents ${\sf e}'$. By Lemma \ref{lem1} the connected component $L_ {\langle {\sf e}'\rangle}$ of $(\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', {\sf e}''\rangle} )^{-1}(L_{\langle {\sf e}', {\sf e}''\rangle})$ that contains $q_{\langle {\sf e}'\rangle}=\omega^{\langle {\sf e}'\rangle}(\tilde{q})$ is a relatively closed curve in $X(\langle {\sf e}'\rangle)$ with limit points on both boundary components. The free homotopy class of the generator ${\sf e}'_{\langle {\sf e}'\rangle}$ of $\pi_1(X(\langle {\sf e}'\rangle),q_{\langle {\sf e}'\rangle})$ intersects $L_ {\langle {\sf e}'\rangle}$. The mapping $F_{\langle {\sf e}'\rangle}=M_l \circ f\circ \omega_{\langle {\sf e}'\rangle} $ maps $L_ {\langle {\sf e}'\rangle}$ into $(-1,1)$, and $F_{\langle {\sf e}'\rangle}(q_{\langle {\sf e}'\rangle})= F\circ{\sf P}(\tilde{q})=q'$. Next we consider the quotient $X(\langle {\sf e}', e \rangle)= \tilde{X}\diagup ({\rm Is}^{\tilde q})^{-1}(\langle {\sf e}',e\rangle)$ whose fundamental group is again a free group in two generators. The image $L_{\langle {\sf e}', e\rangle}\stackrel{def}= \omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}(L_{\langle {\sf e}'\rangle})$ is a connected component of the preimage of $(-1,1)$ under $ F_{\langle {\sf e}', e\rangle}$. Indeed, $L_{\langle {\sf e}', e\rangle}$ is connected as image of a connected set under a continuous mapping, and $F_{\langle {\sf e}', e\rangle}(\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}(L_{\langle {\sf e}'\rangle}))= F\circ \omega_{\langle {\sf e}',e\rangle}\circ \omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}(L_{\langle {\sf e}'\rangle})= F_{\langle {\sf e}'\rangle}(L_{\langle {\sf e}'\rangle})\subset (-1,1)$. Moreover, since the mapping $\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}:X({\langle {\sf e}'\rangle})\to X({\langle {\sf e}',e\rangle})$ is a covering, its restriction $({\rm Im}F\circ \omega_{\langle {\sf e}'\rangle})^{-1}(0)\to ({\rm Im}F\circ \omega_{\langle {\sf e}', e\rangle})^{-1}(0)$ is a covering. 
Hence the image under $\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}$ of a connected component of $({\rm Im}F\circ \omega_{\langle {\sf e}'\rangle})^{-1}(0)$ is open and closed in $({\rm Im}F\circ \omega_{\langle {\sf e}', e\rangle})^{-1}(0)$. Hence, $L_{\langle {\sf e}', e\rangle}= \omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}(L_{\langle {\sf e}'\rangle})$ is a connected component of the preimage of $(-1,1)$ under $ F_{\langle {\sf e}', e\rangle}$. Put $q_{\langle {\sf e}', e\rangle}=\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}(q_{\langle {\sf e}'\rangle})=\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}', e\rangle}\circ \omega^{\langle {\sf e}'\rangle}(\tilde{q})=\omega^{\langle {\sf e}', e\rangle}(\tilde{q})$. Note that $F_{\langle {\sf e}', e\rangle}(q_{\langle {\sf e}', e\rangle})=F\circ\omega_{\langle {\sf e}', e\rangle}(q_{\langle {\sf e}', e\rangle})= F(q)=q'$. The free homotopy class $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle}}$ in $X(\langle {\sf e}',e \rangle)$ that is related to ${\sf e}'$ intersects $L_{\langle {\sf e}',e \rangle} $. Indeed, consider any loop $\gamma'_{\langle{\sf e}',e \rangle}$ in $X(\langle {\sf e}',e \rangle)$ with some base point $q'_{\langle {\sf e}',e \rangle}$ that represents $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle }}$. There exists a loop ${\gamma}'_{\langle {\sf e}' \rangle}$ in $ X(\langle {\sf e}' \rangle)$ which represents $\reallywidehat{{\sf e}'_{\langle{\sf e}' \rangle}}$ such that $\omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}',e\rangle }({\gamma}'_{\langle {\sf e}'\rangle})=\gamma'_{\langle {\sf e}',e\rangle}$. Such a curve ${\gamma}'_{\langle {\sf e}' \rangle}$ can be obtained as follows. There is a loop $\gamma''_{\langle {\sf e}',e\rangle}$ in $X(\langle {\sf e}',e\rangle )$ with base point ${q}_{\langle {\sf e}',e\rangle}$ that represents $({\sf e}' )_{\langle {\sf e}',e\rangle}$, and a curve $\alpha'_{\langle {\sf e}',e\rangle}$ in $X(\langle{\sf e}',e\rangle)$, such that $\gamma'_{\langle {\sf e}',e\rangle}$ is homotopic with fixed base point to $(\alpha'_{\langle {\sf e}',e\rangle})^{-1}\, \gamma''_{\langle{\sf e}',e\rangle}\, \alpha'_{\langle {\sf e}',e\rangle}$. Consider the lift $\tilde{\gamma}''$ of $\gamma''_{\langle {\sf e}',e\rangle}$ to $\tilde X$ with terminating point $\tilde{q}$, and the lift $\tilde{\alpha}'$ of $\alpha'_{\langle {\sf e}',e\rangle}$ with initial point $\tilde{q}$. The initial point of $\tilde{\gamma}''$ equals $\sigma(\tilde{q})$ for the covering transformation $\sigma= ({\rm Is}^{\tilde{q}})^{-1}({\sf e}' )= ({\rm Is}^{\tilde{q}_0})^{-1}({\sf e}' _0)$. (See equation \eqref{eq1''}.) The initial point of the curve $\sigma((\tilde{\alpha}')^{-1}) \tilde{\gamma}''\tilde{\alpha}'$ is obtained from its terminating point by applying the covering transformation $\sigma$. Hence, $\omega^{\langle {\sf e}'\rangle}(\sigma((\tilde{\alpha}')^{-1}) \tilde{\gamma}''\tilde{\alpha}')$ is a closed curve in $X(\langle{\sf e}' \rangle )$ that represents $\reallywidehat{{\sf e}' _{\langle {\sf e}'\rangle}}$ and projects to $(\alpha'_{\langle {\sf e}',e\rangle})^{-1}\, \gamma''_{\langle {\sf e}',e\rangle}\, \alpha'_{\langle {\sf e}',e\rangle}$ under $\omega_{\langle {\sf e}' \rangle}^{\langle {\sf e}',e\rangle }$.
Since $\gamma'_{\langle {\sf e}',e \rangle}$ is homotopic to $(\alpha'_{\langle {\sf e}',e \rangle})^{-1}\, \gamma''_{\langle {\sf e}',e\rangle}\, \alpha'_{\langle {\sf e}',e\rangle}$ with fixed base point, it also has a lift to $X(\langle {\sf e}'\rangle)$ which represents $\reallywidehat{{\sf e}' _{\langle {\sf e}'\rangle}}$. Since $\reallywidehat{{\sf e}'_{\langle {\sf e}' \rangle}}$ intersects $L_{\langle {\sf e}' \rangle}$, the loop ${\gamma}'_{\langle {\sf e}' \rangle}$ has an intersection point $p'_{\langle {\sf e}' \rangle}$ with $L_{\langle {\sf e}'\rangle}$. The point $p'_{\langle {\sf e}',e \rangle }= \omega_{\langle {\sf e}'\rangle}^{\langle {\sf e}',e\rangle }(p'_{\langle {\sf e}'\rangle} )$ is contained in $\gamma'_{\langle {\sf e}',e \rangle }$ and in $L_{\langle {\sf e}',e\rangle}$. We proved that the free homotopy class $\reallywidehat{{\sf e}'_{\langle {\sf e}',e\rangle } }$ in $X(\langle {\sf e}',e \rangle )$ intersects $L_{\langle {\sf e}',e\rangle }$. \noindent {\bf 3.4. A system of generators associated to a standard bouquet of circles.} We claim that the system of generators $\;{\sf e}'_{\langle {\sf e}',e\rangle }, \;\; e_{\langle {\sf e}',e\rangle }\;$ of $\;\pi_1(X(\langle {\sf e}',e\rangle),q_{\langle {\sf e}',e\rangle })$ is associated to a standard bouquet of circles for $X(\langle {\sf e}',e\rangle)$. If ${\sf e}'\in \mathcal{E}$ the claim can be obtained as in paragraph 3.2. Suppose ${\sf e}'=e'e''$ for $e', e''\in \mathcal{E}$. Consider the system $\mathcal{E}'$ of generators of $\pi_1(X,q)$ that is obtained from $\mathcal{E}$ by replacing $e'$ by $e'e''$. If $e'$ and $e''$ correspond to a handle of $X$, then $\mathcal{E}'$ is also associated to a standard bouquet of circles for $X$, see Figure 4a for the case when $e'$ is represented by an $\alpha$-curve and $e''$ is represented by a $\beta$-curve. The situation when $e'$ is represented by a $\beta$-curve and $e''$ is represented by an $\alpha$-curve is similar. The claim is obtained as in paragraph 3.2. \begin{figure} \caption{Standard bouquets of circles} \label{fig4} \end{figure} Suppose one of the pairs $(e,e')$ or $(e,e'')$ corresponds to a handle of $X$. We assume that $e$ corresponds to an $\alpha$-curve and $e'$ corresponds to a $\beta$-curve of a handle of $X$ (see Figure 4b). The remaining cases are treated similarly, maybe, after replacing $e'e''$ by $e''e'$ (see paragraph 3.2). With our assumption $\mathcal{E}'$ is associated to a bouquet of circles that is a deformation retract for $X$, but it is not a standard bouquet of circles. Nevertheless, the pair $(e_{\langle {\sf e}',e\rangle }, {\sf e}'_{\langle {\sf e}',e\rangle })$ with ${\sf e}'=e'e''$ is associated to a standard bouquet of circles for $X(\langle {\sf e}',e\rangle)$. This can be seen as before. Consider the bouquet of circles corresponding to $\mathcal{E}'$ and take its union with a disc $D$ around $q$. Lift this set to $\tilde X$. We obtain the union of a collection of arcs in $\tilde X$ with terminating point $\tilde q$, with a collection of discs in $\tilde X$ around $\tilde q$ and around the initial points of the arcs. Take the union of the arcs and the discs. The image in $X(\langle {\sf e}',e\rangle)$ of this union under the projection $\omega^{\langle {\sf e}',e\rangle }$ is the union of the two loops $(\gamma_{e})_{\langle {\sf e}',e\rangle}\cup (\gamma_{{\sf e}'})_{\langle {\sf e}',e\rangle}$, the disc $D_{\langle {\sf e}',e\rangle }$ and a set, that is contractible to $D_{\langle {\sf e}',e\rangle }$. 
Looking at the intersection of the two loops with a small circle contained in $D_{\langle {\sf e}',e\rangle }$ and surrounding $q_{\langle {\sf e}',e\rangle }$, we see as before that $(\gamma_{e})_{\langle {\sf e}',e\rangle}\cup (\gamma_{{\sf e'}})_{\langle {\sf e}',e\rangle}$ is a standard bouquet of circles for $X(\langle {\sf e}',e\rangle)$. In this case $X(\langle {\sf e}',e\rangle)$ is a torus with a hole. In the remaining case no pair of generators among $e$, $e'$, and $e''$ corresponds to a handle. In this case again $\mathcal{E}'$ does not correspond to a standard bouquet of circles for $X$. But $\{e_{\langle {\sf e}',e\rangle } , (e'e'')_{\langle {\sf e}',e\rangle }\}$ (maybe, after changing $e'e''$ to $e''e'$) corresponds to a standard bouquet of circles for $X(\langle {\sf e}',e\rangle )$. (See Figure 4c for the case when walking along a small circle around $q$ counterclockwise, we meet the incoming and outgoing rays of representatives of the three elements of $\mathcal{E}$ in the order $e,e',e''$. If the order is different the situation is similar, maybe, after replacing $e'e''$ by $e''e'$.) In this case $X(\langle {\sf e}',e\rangle )$ is a planar domain. \noindent{\bf 3.5. End of the proof.} Consider first the case when $X(\langle {\sf e}',e \rangle )$ is a torus with a hole. Since $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle }}$ intersects $L_{\langle {\sf e}',e \rangle }$, we see, as in the proof when $X$ itself is a torus with a hole, that the curve $L_{\langle {\sf e}',e \rangle }$ cannot be contractible or contractible to the hole, and the intersection number must be different from zero. Then the intersection number with $L_{\langle {\sf e}',e \rangle }$ of the free homotopy class of one of the choices $e_{\langle {\sf e}',e \rangle }^{\pm 1}$ or $({\sf e}'e)_{\langle {\sf e}',e \rangle }$, denoted by ${\sf e}'''_{\langle {\sf e}',e \rangle }$, is not zero and has the same sign as the intersection number of $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle }}$ with $L_{\langle {\sf e}',e \rangle }$. By Lemma \ref{lem3} each of the $(F_{\langle {\sf e}',e \rangle })_*({\sf e}'_{\langle {\sf e}',e \rangle })$ and $(F_{\langle {\sf e}',e \rangle })_*({\sf e}'''_{\langle {\sf e}',e \rangle })$ is the product of at most two elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding \begin{equation}\label{eq++++} 2\pi\lambda_{{\sf e}'_{\langle {\sf e}',e \rangle },{\sf e}'''_{\langle {\sf e}',e \rangle }}\leq 2\pi \lambda_5(X), \end{equation} since ${\sf e}'$ is the product of at most two elements of $\mathcal{E}\cup \mathcal{E}^{-1}$ and ${\sf e}'''$ is the product of at most three elements of $\mathcal{E}\cup \mathcal{E}^{-1}$. The element $e$ is the product of at most two different elements among the ${\sf e}'$ and ${\sf e}'''$ or their inverses. Hence, the monodromy $F_*(e)=(F _{\langle {\sf e}',e \rangle })_*(e_{\langle {\sf e}',e \rangle })$ is the product of at most four elements with $\mathcal{L}_-$ not exceeding the bound in \eqref{eq++++}. Hence, \begin{equation}\label{eq+++1} \mathcal{L}_-(F_*(e))\leq 8\pi \lambda_5(X)\,. \end{equation} Consider now the case when $X(\langle {\sf e}',e \rangle)$ equals $\mathbb{P}^1$ with three holes. Since ${\sf e}'_{\langle {\sf e}',e \rangle}$ and $e_{\langle {\sf e}',e \rangle}$ correspond to a standard bouquet of circles for $X(\langle {\sf e}',e \rangle)$, the curves representing ${\sf e}'_{\langle {\sf e}',e \rangle}$ surround counterclockwise one of the holes, denoted by $\mathcal{C}'$, and the curves representing $e_{\langle {\sf e}',e \rangle}$ surround counterclockwise another hole, denoted by $\mathcal{C}''$.
After applying a M\"obius transformation we may assume that the remaining hole, denoted by $\mathcal{C}_{\infty}$, contains the point $\infty$. There are several possibilities for the behaviour of the curve $L_{\langle {\sf e}',e \rangle}$. Since $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle}}$ intersects $L_{\langle {\sf e}',e \rangle}$, the curve $L_{\langle {\sf e}',e \rangle}$ must have limit points on $\partial\mathcal{C}'$. The first possibility is that $L_{\langle {\sf e}',e \rangle}$ has limit points on $\partial\mathcal{C}'$ and $\partial\mathcal{C}''$; the second possibility is that $L_{\langle {\sf e}',e \rangle}$ has limit points on $\partial\mathcal{C}'$ and $\partial\mathcal{C}_{\infty}$; the third possibility is that $L_{\langle {\sf e}',e \rangle}$ has all its limit points on $\partial\mathcal{C}'$, and $\mathcal{C}''$ is contained in the bounded connected component of $\mathbb{C}\setminus ( L_{\langle {\sf e}',e \rangle}\cup \mathcal{C}')$. In the first case the free homotopy classes $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle}}$ and $\reallywidehat{e_{\langle {\sf e}',e \rangle}^{-1}}$ have positive intersection number with the suitably oriented curve $L_{\langle {\sf e}',e \rangle}$. In the second case the free homotopy classes $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle}}$ and $\reallywidehat{ ({{\sf e}'e})_{\langle {\sf e}',e \rangle} }$ have positive intersection number with the suitably oriented curve $L_{\langle {\sf e}',e \rangle}$. In the third case the free homotopy classes of ${\sf e}'_{\langle {\sf e}',e \rangle}$, $ ({{\sf e}'}^2 e)_{\langle {\sf e}',e \rangle} $ and of their product intersect $L_{\langle {\sf e}',e \rangle}$. The first two cases were treated in paragraph 2 of this section. The statement concerning the third case is proved as follows. Any closed curve that is contained in the complement of $\mathcal{C}'\cup L_{\langle {\sf e}',e \rangle} $ has either winding number zero around $\mathcal{C}'$ (as a curve in the complex plane $\mathbb{C}$), or its winding number around $\mathcal{C}'$ coincides with the winding number around $\mathcal{C}''$. On the other hand, the representatives of the free homotopy class of ${\sf e}'_{\langle {\sf e}',e \rangle} $ have winding number $1$ around $\mathcal{C}'$ and winding number $0$ around $\mathcal{C}''$. The representatives of the free homotopy class of $({{\sf e}'}^2 e)_{\langle {\sf e}',e \rangle}$ have winding number $2$ around $\mathcal{C}'$, and winding number $1$ around $\mathcal{C}''$. By the same argument the free homotopy class of the product of ${\sf e}'_{\langle {\sf e}',e \rangle}$ and $({{\sf e}'}^2 e)_{\langle {\sf e}',e \rangle}$ intersects $L_{\langle {\sf e}',e \rangle}$. We let ${\sf e}'''_{\langle {\sf e}',e \rangle}$ be equal to $e_{\langle {\sf e}',e \rangle}^{-1}$ in the first case, equal to $ ({{\sf e}'e})_{\langle {\sf e}',e \rangle} $ in the second case, and equal to $({{\sf e}'}^2 e)_{\langle {\sf e}',e \rangle}$ in the third case.
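To record the count behind the third case explicitly (the pair notation for winding numbers, first around $\mathcal{C}'$ and then around $\mathcal{C}''$, is introduced only for this remark): representatives of $\reallywidehat{{\sf e}'_{\langle {\sf e}',e \rangle}}$, of $\reallywidehat{({{\sf e}'}^2 e)_{\langle {\sf e}',e \rangle}}$, and of the free homotopy class of their product have winding number pairs
\[
(1,0),\qquad (2,1),\qquad (3,1),
\]
respectively, while a closed curve contained in the complement of $\mathcal{C}'\cup L_{\langle {\sf e}',e \rangle}$ has a pair of the form $(0,k)$ or $(k,k)$ for an integer $k$. None of the three listed pairs is of this form, hence each of the three free homotopy classes intersects $L_{\langle {\sf e}',e \rangle}$.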
By Lemma \ref{lem3} each of the $(F_{\langle {\sf e}',e \rangle })_*({\sf e}'_{\langle {\sf e}',e \rangle })$ and $(F_{\langle {\sf e}',e \rangle })_*({\sf e}'''_{\langle {\sf e}',e \rangle })$ is the product of at most two elements of $\pi_1(\mathbb{C}\setminus\{-1,1\},q')$ with $\mathcal{L}_-$ not exceeding \begin{equation}\label{eq++++'} 2\pi\lambda_{{\sf e}'_{\langle {\sf e}',e \rangle },{\sf e}'''_{\langle {\sf e}',e \rangle }}\leq 2\pi \lambda_7(X), \end{equation} where we used that ${\sf e}'$ is the product of at most two elements of $\mathcal{E}\cup \mathcal{E}^{-1}$, $e\in \mathcal{E}\cup \mathcal{E}^{-1}$, and ${\sf e}'''$ is the product of at most five elements of $\mathcal{E}\cup \mathcal{E}^{-1}$. Since $e$ is the product of at most two different elements among the $({\sf e}')^{\pm 1}$ and $({\sf e}''')^{\pm 1}$, the monodromy $F_*(e)=(F _{\langle {\sf e}',e \rangle })_*(e_{\langle {\sf e}',e \rangle })$ is the product of at most four elements with $\mathcal{L}_-$ not exceeding the bound in \eqref{eq++++'}. Hence, \begin{equation}\label{eq+++1'} \mathcal{L}_-(F_*(e))\leq 8\pi \lambda_7(X)\,. \end{equation} The proposition is proved. $\Box$ \section{($\sf{g},\sf{m}$)-bundles over Riemann surfaces}\label{sec:3} We will consider bundles whose fibers are smooth surfaces or Riemann surfaces of type $(\sf{g},\sf{m})$. \begin{defn}\label{def9.1}{\rm (Smooth oriented $(\textsf{g},\textsf{m})$ fiber bundles.)} Let $X$ be a smooth oriented manifold of dimension $k$, let ${\mathcal X}$ be a smooth (oriented) manifold of dimension $k+2$ and ${\mathcal P} : {\mathcal X} \to X$ an orientation preserving smooth proper submersion such that for each point $x \in X$ the fiber ${\mathcal P}^{-1} (x)$ is a closed oriented surface of genus $\sf{g}$. Let $\mathbold{E}$ be a smooth submanifold of $\mathcal{X}$ that intersects each fiber $\mathcal{P}^{-1}(x)$ along a set $E_x$ of $\sf{m}$ distinguished points. Then the tuple ${\mathfrak F}_{\textsf{g},\textsf{m}} = ({\mathcal X} , {\mathcal P} , \mathbold{E}, X)$ is called a smooth (oriented) fiber bundle over $X$ with fiber a smooth closed oriented surface of genus $\textsf{g}$ with $\textsf{m}$ distinguished points (for short, a smooth oriented $(\textsf{g},\textsf{m})$-bundle). \end{defn} If ${\sf m}=0$ the set $\mathbold E$ is the empty set and we will often denote the bundle by $({\mathcal X} , {\mathcal P} , X)$. If ${\sf{m}}>0$ the mapping $x\to E_x$ locally defines $\sf m$ smooth sections. $(\textsf{g},0)$-bundles will also be called genus $\textsf{g}$ fiber bundles. For $\textsf{g}=1$ and $\textsf{m}=0$ the bundle is also called an elliptic fiber bundle. \index{fiber bundle ! elliptic} In the case when the base manifold is a Riemann surface, a holomorphic $(\textsf{g}$,$\textsf{m})$ fiber bundle over $X$ is defined as follows. \begin{defn}\label{def2} Let $X$ be a Riemann surface, let ${\mathcal X}$ be a complex surface, and ${\mathcal P}$ a holomorphic proper submersion from ${\mathcal X}$ onto $X$, such that each fiber $\mathcal{P}^{-1}(x)$ is a closed Riemann surface of genus $\sf g$. Suppose $\mathbold{E}$ is a complex one-dimensional submanifold of $\mathcal{X}$ that intersects each fiber $\mathcal{P}^{-1}(x)$ along a set $E_x$ of ${\sf m}$ distinguished points. Then the tuple ${\mathfrak F}_{\textsf{g},\textsf{m}} = ({\mathcal X} , {\mathcal P}, \mathbold{E}, X)$ is called a holomorphic $(\textsf{g}$,$\textsf{m})$ fiber bundle over $X$. \end{defn} Notice that for ${\sf{m}}>0$ the mapping $x\to E_x$ locally defines $\sf{m}$ holomorphic sections.
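One way to make the last remark precise is the following sketch (not contained in the text; it tacitly uses that $\mathbold{E}$ is closed in $\mathcal{X}$, and the section labels $s_1,\ldots,s_{\sf m}$ are ours). The restriction $\mathcal{P}\!\mid\!\mathbold{E}:\mathbold{E}\to X$ is a proper holomorphic mapping of complex one-dimensional manifolds all of whose fibers consist of exactly $\sf m$ distinct points. Comparing with the number of preimages counted with multiplicity, which is the same for all points of $X$, one sees that $\mathcal{P}\!\mid\!\mathbold{E}$ has no critical points, hence is an unramified covering with $\sf m$ sheets. Over a sufficiently small disc $U\subset X$ the set $\mathbold{E}\cap \mathcal{P}^{-1}(U)$ therefore splits into $\sf m$ disjoint pieces, each mapped biholomorphically onto $U$, i.e.
\[
\mathbold{E}\cap \mathcal{P}^{-1}(U)=\bigcup_{k=1}^{\sf m} s_k(U),\qquad E_x=\{s_1(x),\ldots,s_{\sf m}(x)\},\quad x\in U,
\]
for holomorphic mappings $s_k:U\to\mathcal{X}$ with $\mathcal{P}\circ s_k={\rm id}$.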
Two smooth oriented (holomorphic, respectively) $(\textsf{g},\,\textsf{m})$ fiber bundles, ${\mathfrak F}^0 = ({\mathcal X}^0 , {\mathcal P}^0 , \mathbold{E}^0, X^0)$ and ${\mathfrak F}^1 = ({\mathcal X}^1 , {\mathcal P}^1 ,\mathbold{E}^1, X^1)$, are called smoothly isomorphic (holomorphically isomorphic, respectively) if there are smooth (holomorphic, respectively) homeomorphisms $\Phi: \mathcal{X}^0\to \mathcal{X}^1$ and $\phi:X^0\to X^1$ such that for each $x \in X^0$ the mapping $\Phi$ maps the fiber $(\mathcal{P}^0 )^{-1}(x)$ onto the fiber $(\mathcal{P}^1 )^{-1}(\phi(x))$ and the set of distinguished points in $(\mathcal{P}^0 )^{-1}(x)$ to the set of distinguished points in $(\mathcal{P}^1 )^{-1}(\phi(x))$. Holomorphically isomorphic bundles will be considered as the same holomorphic bundle. Two smooth (oriented) $(\textsf{g},\textsf{m})$ fiber bundles over the same oriented smooth base manifold $X$, ${\mathfrak F}^0 = ({\mathcal X}^0 , {\mathcal P}^0 ,\mathbold{E}^0, X)$, and ${\mathfrak F}^1 = ({\mathcal X}^1 , {\mathcal P}^1 ,\mathbold{E}^1, X)$, are called (free) isotopic if for an open interval $I$ containing $[0,1]$ there is a smooth $(\textsf{g},\textsf{m})$ fiber bundle $({\mathcal Y} , {\mathcal P} , \mathbold{E}, X \times I)$ over the base $X \times I$ (called an isotopy) with the following property. For each $t \in [0,1]$ we put ${\mathcal Y}^t = {\mathcal P}^{-1} (X \times \{t\})$ and $\mathbold{E}^t= \mathbold{E}\cap {\mathcal P}^{-1} (X \times \{t\})$. The bundle ${\mathfrak F}^0\,$ is equal to $\,\left( {\mathcal Y}^0 , {\mathcal P} \mid {\mathcal Y}^0 , \mathbold{E}^0, X \times \{0\} \right)\,$, and the bundle $\,{\mathfrak F}^1\,$ is equal to $\,\bigl( {\mathcal Y}^1 , {\mathcal P} \mid {\mathcal Y}^1 , \mathbold{E}^1, X \times \{1\} \bigr)\,$. Two smooth $({\sf g},{\sf m})$-bundles are smoothly isomorphic if and only if they are isotopic (see \cite{Jo5}). \index{$\mathfrak{F}_{{\sf g},{\sf m}}$} \index{fiber-bundle ! ${\sf g},{\sf m}$-fiber bundle} Denote by $S$ a reference surface of genus $\sf g$ with a set $E\subset S$ of $\sf{m}$ distinguished points. By Ehresmann's Fibration Theorem each smooth $(\sf{g},\sf{m})$-bundle ${\mathfrak F}_{\textsf{g},\textsf{m}} = ({\mathcal X} , {\mathcal P} , \mathbold{E}, X)$ with set of distinguished points $E_x \stackrel{def}=\mathbold{E}\cap{\mathcal P}^{-1}(x) $ in the fiber over $x$ is locally smoothly trivial, i.e. each point in $X$ has a neighbourhood $U\subset X$ such that the restriction of the bundle to $U$ is isomorphic to the trivial bundle $\big(U\times S, {\rm {pr}}_1, U\times E,U\big)$ with set $\{x\}\times E$ of distinguished points in the fiber $\{x\}\times S$ over $x$. Here ${\rm pr}_1: U\times S\to U $ is the projection onto the first factor. The idea of the proof of Ehresmann's Theorem is the following. Choose smooth coordinates on $U$ by a mapping from a rectangular box to $U$. Consider smooth vector fields $v_j$ on $U$, which form a basis of the tangent space at each point of $U$. Take smooth vector fields $V_j$ on ${\mathcal P}^{-1} (U)$ that are tangent to $\mathbold{E}$ at points of this set and are mapped to $v_j$ by the differential of ${\mathcal P}$. Such vector fields can easily be obtained locally. To obtain the globally defined vector fields $V_j$ on ${\mathcal P}^{-1} (U)$ one uses partitions of unity. The required trivializing diffeomorphism $\varphi_U$ is obtained by composing the flows of these vector fields (in any fixed order).
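The gluing step mentioned above can be written out as follows (a minimal sketch; the cover $\{W_i\}$, the local lifts $V_j^{(i)}$, and the partition of unity $\{\chi_i\}$ are our notation and do not appear in the text). Choose a cover of ${\mathcal P}^{-1}(U)$ by open sets $W_i$ on each of which there is a smooth vector field $V_j^{(i)}$ that is mapped to $v_j$ by the differential of ${\mathcal P}$ and is tangent to $\mathbold{E}$ at points of $\mathbold{E}$, and let $\{\chi_i\}$ be a smooth partition of unity subordinate to this cover. Then
\[
V_j\ \stackrel{def}{=}\ \sum_i \chi_i\, V_j^{(i)}
\qquad\text{satisfies}\qquad
d{\mathcal P}\big(V_j(p)\big)=\sum_i \chi_i(p)\, v_j\big({\mathcal P}(p)\big)=v_j\big({\mathcal P}(p)\big),
\]
and at each point $p\in\mathbold{E}$ the vector $V_j(p)$ is a convex combination of vectors tangent to $\mathbold{E}$ at $p$, hence itself tangent to $\mathbold{E}$.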
In this way a trivialization of the bundle can be obtained over any simply connected domain. Let $q_0$ be a base point in $X$ and $\gamma(t),\, t \in [0,1],$ a smooth curve in $X$ with base point $q_0$ that represents an element $e$ of the fundamental group $\pi_1(X,q_0)$. Let $\varphi^t:\mathcal{P}^{-1}(q_0) \to \mathcal{P}^{-1}(\gamma(t)),\, t \in [0,1]$, $\varphi^0=\mbox{Id}$, be a smooth family of diffeomorphisms that map the set of distinguished points in $\mathcal{P}^{-1}(q_0)$ to the set of distinguished points in $\mathcal{P}^{-1}(\gamma(t))$. To obtain such a family we may restrict the bundle to the closed curve given by $\gamma$ and lift the restriction to a bundle over the real axis $\mathbb{R}$. The family of diffeomorphisms may be obtained by considering the vector field provided by the proof of Ehresmann's Theorem for the lifted bundle and taking the flow of this vector field. The mapping $\varphi^1$ obtained for $t=1$ is an orientation preserving self-homeomorphism of the fiber over $q_0$ that preserves the set of distinguished points. Its isotopy class depends only on the homotopy class of the curve and the isotopy class of the bundle. The isotopy class of its inverse $(\varphi^1)^{-1}$ is called the monodromy of the bundle along $e$. Assign to each element $e\in \pi_1(X,q_0)$ the monodromy of the bundle along $e$. We obtain a homomorphism from $\pi_1(X,q_0)$ to the group of isotopy classes of self-homeomorphisms of the fiber over $q_0$ that preserve the set of distinguished points. The modular group ${\rm{Mod}}(\textsf{g},\textsf{m})$ is the group of isotopy classes of self-homeomorphisms of a reference Riemann surface of genus $\textsf{g}$ that map a reference set of $\textsf{m}$ distinguished points to itself. The following theorem holds (see e.g. \cite{FaMa} and \cite{Jo5}). \noindent {\bf Theorem G.} {\it Let $X$ be a connected finite open smooth oriented surface. The set of isotopy classes of smooth oriented $(\sf{g},\sf{m})$ fiber bundles over $X$ is in one-to-one correspondence with the set of conjugacy classes of homomorphisms from the fundamental group $\pi_1 (X,q_0)$ into the modular group ${\rm{Mod}}(\textsf{g},\textsf{m})$.} Let $2{\sf g}-2 + {\sf m}>0$. A holomorphic $({\sf g},{\sf m})$-bundle is called locally holomorphically trivial if it is locally holomorphically isomorphic to the trivial $({\sf g},{\sf m})$-bundle. All fibers of a locally holomorphically trivial $({\sf g},{\sf m})$-bundle ${\mathfrak{F}}=({\mathcal{X}}, {\mathcal{P}}, {\mathbold{E}}, {X})$ are conformally equivalent to each other. For a locally holomorphically trivial $({\sf g},{\sf m})$-bundle $\mathfrak{F}$ there exists a finite unramified covering $\hat{\sf P} :\hat{X}\to X$ and a lift $\hat{\mathfrak{F}}=(\hat{\mathcal{X}}, \hat{\mathcal{P}}, \hat{\mathbold{E}}, \hat{X})$ of $\mathfrak{F}$ to $\hat X$ such that $\hat{\mathfrak{F}}$ is holomorphically isomorphic to the trivial bundle. This can be seen as follows. Consider the lift $\tilde{\mathfrak{F}}$ of the bundle $\mathfrak{F}$ to the universal covering ${\sf P}:\tilde X\to X$ of $X$, i.e. $\tilde{\mathfrak{F}}=(\tilde{\mathcal{X}}, \tilde{\mathcal{P}}, \tilde{\mathbold{E}}, \tilde{X})$, where the fiber $\tilde{\mathcal{P}}^{-1}(\tilde{x})$ with distinguished points $\tilde{\mathbold{E}}\cap \tilde{\mathcal{P}}^{-1}(\tilde{x})$ is conformally equivalent to the fiber ${\mathcal{P}}^{-1}({x})$ with distinguished points ${\mathbold{E}}\cap {\mathcal{P}}^{-1}({x})$ with $x={\sf P} (\tilde{x})$. The bundle $\tilde{\mathfrak{F}}$ is locally holomorphically trivial.
Since $\tilde X$ is simply connected, $\tilde{\mathfrak{F}}$ is holomorphically trivial on $\tilde X$, hence, there is a biholomorphic mapping $\Phi: \tilde{\mathcal{X}}\to \tilde{X}\times S$ that maps $\tilde{\mathcal{P}}^{-1}(\tilde{x})$ to $\{\tilde{x}\}\times S$ for each $\tilde{x}\in\tilde{X}$. Here $S$ is the fiber $\tilde{\mathcal{P}}^{-1}(\tilde{q}_0)$ over a chosen point $\tilde{q}_0$ over the base point $q_0\in X$. The mapping $\Phi^{-1}$ provides a uniquely determined holomorphic family of conformal mappings $ \varphi_{\tilde{x}}: S=\tilde{\mathcal{P}}^{-1}(\tilde{q}_0) \to \tilde{\mathcal{P}}^{-1}(\tilde{x})$, $\tilde{x}\in \tilde{X}$, that map the set of distinguished points in one fiber to the set of distinguished points in the other fiber, such that the total space ${\mathcal{X}}$ of the bundle $\mathfrak{F}$ is holomorphically equivalent to the quotient of $\tilde{\mathcal{X}}$ by the following equivalence relation $\sim$. A point $\zeta_1$ in the fiber $\tilde{\mathcal{P}}^{-1}(\tilde{x}_1)$ is equivalent by the relation $\sim$ to a point $\zeta_2$ in the fiber $\tilde{\mathcal{P}}^{-1}(\tilde{x}_2)$ if ${\sf P}(\tilde{x}_1)={\sf P}(\tilde{x}_2)$ and $\zeta_2=\varphi_{\tilde{x}_2}\varphi_{\tilde{x}_1}^{-1}(\zeta_1)$. Under the identification of the fibers $\tilde{\mathcal{P}}^{-1}(\tilde{x}_1)$ and $\tilde{\mathcal{P}}^{-1}(\tilde{x}_2)$ with the fiber ${\mathcal{P}}^{-1}({\sf P}(\tilde{x}_1))$ of $\mathfrak{F}$, the mapping $\varphi_{\tilde{x}_2}\varphi_{\tilde{x}_1}^{-1}$ is a holomorphic self-homeomorphism of this fiber. The set of such self-homeomorphisms is finite. Consider the set $N$ of elements $e\in \pi_1(X,q_0)$ for which $\varphi_{({\rm Is}^{\tilde{q}_0})^{-1}(e)(\tilde{q}_0)}$ is the identity. As before $({\rm Is}^{\tilde{q}_0})^{-1}$ is the isomorphism from the fundamental group to the group of covering transformations. The set $N$ is a normal subgroup of the fundamental group. It is of finite index, since two cosets $e_1 \,N$ and $e_2\, N$ are equal if $\varphi_{({\rm Is}^{\tilde{q}_0})^{-1}(e_2 e_1^{-1})(\tilde{q}_0)}={\rm Id}$, and there are only finitely many distinct holomorphic self-homeomorphisms of $\tilde{\mathcal{P}}^{-1}(\tilde{q}_0)$. Hence, $\hat{X}\stackrel{def}=\tilde{X}\diagup ({\rm Is}^{\tilde{q}_0})^{-1}(N)$ is a finite unramified covering of $X$ and the lift of the bundle $\mathfrak{F}$ to $\hat{X}$ has the required property. Vice versa, if for a holomorphic $({\sf g},{\sf m})$-bundle $\mathfrak{F}$ there exists a finite unramified covering $\hat{\sf P}:\hat{X}\to X$, such that the lift $\hat{\mathfrak{F}}=(\hat{\mathcal{X}}, \hat{\mathcal{P}}, \hat{\mathbold{E}}, \hat{X})$ of $\mathfrak{F}$ to $\hat X$ is holomorphically isomorphic to the trivial bundle, then $\mathfrak{F}$ is locally holomorphically trivial. A smooth (holomorphic, respectively) bundle is called isotrivial if its lift to some finite unramified covering of the base is smoothly (holomorphically, respectively) isomorphic to the trivial bundle. If all monodromy mapping classes of a smooth bundle are periodic, then the bundle is isotopic (equivalently, smoothly isomorphic) to an isotrivial bundle. This can be seen by the same arguments as above. We explain now the notion of irreducible smooth $(\textsf{g},\textsf{m})$-bundles. It is based on Thurston's notion of irreducible surface homeomorphisms. Let $S$ be a connected finite smooth oriented surface. It is either closed or homeomorphic to a surface with a finite number of punctures. We will assume from the beginning that $S$ is either closed or punctured.
A finite non-empty set of mutually disjoint Jordan curves $\{ C_1 , \ldots , C_{\alpha}\}$ on a connected closed or punctured oriented surface $S$ is called admissible if no $C_i$ is homotopic to a point in $S$, or to a puncture, or to a $C_j$ with $i \ne j$. Thurston calls an isotopy class $\mathfrak{m}$ of self-homeomorphisms of $S$ (in other words, a mapping class on $S$) reducible if there is an admissible system of curves $\{ C_1 ,\ldots , C_{\alpha}\}$ on $S$ such that some (and, hence, each) element in $\mathfrak{m}$ maps the system to an isotopic system. In this case we say that the system $\{ C_1 , \ldots , C_{\alpha}\}$ reduces $\mathfrak{m}$. A mapping class which is not reducible is called irreducible. Let $S$ be a closed or punctured surface with set $E$ of distinguished points. We say that $\varphi$ is a self-homeomorphism of $S$ with distinguished points $E$ if $\varphi$ is a self-homeomorphism of $S$ that maps the set of distinguished points $E$ to itself. Notice that each self-homeomorphism of the punctured surface $S\setminus E$ extends to a self-homeomorphism of the surface $S$ with set of distinguished points $E$. We will sometimes identify self-homeomorphisms of $S\setminus E$ and self-homeomorphisms of $S$ with set $E$ of distinguished points. For a (connected oriented closed or punctured) surface $S$ and a finite subset $E$ of $S$ a finite non-empty set of mutually disjoint Jordan curves $\{ C_1 , \ldots , C_{\alpha}\}$ in $S\setminus E$ is called admissible for $S$ with set of distinguished points $E$ if it is admissible for $S \setminus E$. An admissible system of curves for $S$ with set of distinguished points $E$ is said to reduce a mapping class $\mathfrak{m}$ on $S$ with set of distinguished points $E$ if the induced mapping class on $S\setminus E$ is reduced by this system of curves. Conjugacy classes of reducible mapping classes can be decomposed in some sense into irreducible components, and they can be recovered from the irreducible components up to products of commuting Dehn twists. Conjugacy classes of irreducible mapping classes have been classified and studied. A Dehn twist \index{Dehn twist} about a simple closed curve $\gamma$ in an oriented surface $S$ is a mapping that is isotopic to the following one. Take a tubular neighbourhood of $\gamma$ and parameterize it as a round annulus $A=\{ e^{-\varepsilon} < \vert z \vert < 1\}$ so that $\gamma$ corresponds to $|z|=e^{-\frac{\varepsilon}{2}}$. The mapping is an orientation preserving self-homeomorphism of $S$ which is the identity outside $A$ and is equal to the mapping $e^{-\varepsilon s +2\pi i t}\mapsto e^{-\varepsilon s +2\pi i (t+s) }$ for $e^{-\varepsilon s +2\pi i t}\in A$, i.e. $s\in (0,1)$ (see the remark below for the check that this glues with the identity on $S\setminus A$). Here $\varepsilon$ is a small positive number. Thurston's notion of reducible mapping classes carries over to families of mapping classes on a surface of type $({\sf{g}},{\sf{m}})$, and therefore to $({\sf{g}},{\sf{m}})$-bundles. Namely, an admissible system of curves on a (connected oriented closed or punctured) surface $S$ with set of $\sf m$ distinguished points $E$ is said to reduce a family of mapping classes $\mathfrak{m}_j \in \mathfrak{M}(S; \emptyset, E) $ if it reduces each $\mathfrak{m}_j$.
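Remark (the one-line check announced above, in the coordinates used in the definition of the Dehn twist): on the annulus $A$ we have
\[
\lim_{s\to 0}e^{-\varepsilon s+2\pi i (t+s)}=e^{2\pi i t},\qquad
\lim_{s\to 1}e^{-\varepsilon s+2\pi i (t+s)}=e^{-\varepsilon+2\pi i (t+1)}=e^{-\varepsilon+2\pi i t},
\]
so on both boundary circles of $A$ the mapping agrees with the identity and therefore extends continuously by the identity to $S\setminus A$.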
Similarly, a $({\sf{g}},{\sf{m}})$-bundle with fiber $S$ over the base point $q_0$ and set of distinguished points $E\subset S$ is called reducible if there is an admissible system of curves in the fiber over the base point that reduces all monodromy mapping classes simultaneously. Otherwise the bundle is called irreducible. Reducible bundles can be decomposed into irreducible bundle components and the reducible bundle can be recovered from the irreducible bundle components up to commuting Dehn twists in the fiber over the base point. Let $X$ be a finite open connected Riemann surface. By a holomorphic (smooth, respectively) $(0,n)$-bundle with a section over $X$ we mean a holomorphic (smooth, respectively) $(0,n+1)$-bundle $(\mathcal{X},\mathcal{P}, \mathbold{E}, X)$, such that the complex manifold (smooth manifold, respectively) $\mathbold{E}\subset \mathcal{X}$ is the disjoint union of two complex manifolds (smooth manifolds, respectively) $\mathring{\mathbold{E}}$ and $\mathbold{s}$, where $\mathring{\mathbold{E}}\subset \mathcal{X}$ intersects each fiber $\mathcal{P}^{-1}(x)$ along a set $\mathring{E}_x$ of $n$ points, and $\mathbold{s}\subset \mathcal{X}$ intersects each fiber $\mathcal{P}^{-1}(x)$ along a single point $s_x$. We will also say, that the mapping $x\to s_x,\, x\in X$, is a holomorphic (smooth, respectively) section of the $(0,n)$-bundle with set of distinguished points $\mathring{E}_x$ in the fiber over $x$. A special $(0,n+1)$-bundle is a bundle over $X$ of the form $(X\times \mathbb{P}^1, {\rm pr}_1, \mathbold{E}, X)$, where ${\rm pr}_1 : X\times \mathbb{P}^1 \to X$ is the projection onto the first factor, and the smooth submanifold $\mathbold{E}$ of $X\times \mathbb{P}^1$ is equal to the disjoint union $\mathring{\mathbold{E}}\cup \mathbold{s}_{\infty}$ where $\mathbold{s}_{\infty}$ intersects each fiber $\{x\}\times\mathbb{P}^1$ along the point $\{x\}\times\{\infty\}$, and the set $\mathring{\mathbold{E}}$ intersects each fiber along $n$ points. A special $(0,n+1)$-bundle is, in particular, a $(0,n)$-bundle with a section. Two smooth $(0,n)$-bundles with a section (in particular, two special $(0,n+1)$-bundles) are called isotopic if they are isotopic as $(0,n+1)$-bundles with an isotopy that joins the sections of the bundles. A holomorphic (smooth, respectively) $(0,n)$-bundle with a section is isotopic to a holomorphic (smooth, respectively) special $(0,n+1)$-bundle over $X$ (see \cite{Jo5}). Theorem \ref{thm2} is a consequence of the following theorem on $(0,3)$-bundles with a section. \begin{thm}\label{thm3} Over a connected Riemann surface of genus $g$ with $m+1$ holes there are up to isotopy no more than $(15 \exp( 6 \pi \lambda_{10}(X)))^{6(2g+m)}$ irreducible holomorphic $(0,3)$-bundles with a holomorphic section. \end{thm} For a reducible $(0,4)$-bundle the fiber of each irreducible bundle component is a thrice-punctured Riemann sphere. Hence each irreducible bundle component of a reducible $(0,4)$-bundle is isotopic to an isotrivial bundle. For more details see \cite{Jo5}. Theorem \ref{thm1} (with a weaker estimate) is a consequence of Theorem \ref{thm3}. Indeed, consider holomorphic (smooth, respectively) bundles whose fiber over each point $x\in X$ equals $\mathbb{P}^1$ with set of distinguished points $\{-1,1,f(x),\infty\}$ for a function $f$ which depends holomorphically (smoothly, respectively) on the points $x\in X$ and does not take the values $-1$ and $1$. Then we are in the situation of Theorem \ref{thm1}. 
It is not hard to see that the mapping $f$ is reducible, iff this $(0,4)$-bundle is reducible (see also Lemma 7 of \cite{Jo5}). The relation between Theorems \ref{thm2} and \ref{thm3} is given in Proposition \ref{prop1} below. A holomorphic $(1,1)$-bundle $\mathfrak{F}=\big(\mathcal{X},\mathcal{P}, \mathbold{s},X\big)$ is called a double branched covering of the special holomorphic $(0,4)$-bundle $\;\big(X\times \mathbb{P}^1,{\rm pr}_1,\mathbold{E} ,\,X\big)\;$ if there exists a holomorphic mapping $\;P:\mathcal{X}\to X\times\mathbb{P}^1$ that maps each fiber $\mathcal{P}^{-1}(x)$ of the $(1,1)$-bundle onto the fiber $\{x\}\times \mathbb{P}^1$ of the $(0,4)$-bundle over the same point $x$, such that the restriction ${{P}}: \mathcal{P}^{-1}(x) \to \{x\}\times \mathbb{P}^1$ is a holomorphic double branched covering with branch locus being the set $\{x\}\times (\mathring{E}_x\cup \{\infty\})=\mathbold{E}\cap (\{x\}\times \mathbb{P}^1) $ of distinguished points in the fiber $\{x\}\times \mathbb{P}^1$, and ${P}$ maps the distinguished point $s_x$ in the fiber $\mathcal{P}^{-1}(x)$ over $x$ to the point $\{x\}\times \{\infty\}$ in $\{x\}\times \mathbb{P}^1$. We will also denote $(X\times \mathbb{P}^1 , {\rm pr}_1 ,\mathbold{E}, X)$ by ${{P}}((\mathcal{X},\mathcal{P},\mathbold{s},X))$, and call the bundle $(\mathcal{X},\mathcal{P},\mathbold{s}, X)$ a lift of $(X\times\mathbb{P}^1, {\rm pr}_1 ,\mathbold{E} ,X)$. Let the fiber of the $(1,1)$-bundle over the base point $q_0\in X$ be $Y$ with distinguished point $s$, and let the fiber of the $(0,4)$-bundle over $q_0$ be $\mathbb{P}^1$ with distinguished points $\mathring{E}\cup\{\infty\}$ for a set $\mathring{E}\subset C_3(\mathbb{C})\diagup \mathcal{S}_3$. Then the monodromy mapping class $\mathfrak{m}_1\in \mathfrak{M}(\mathbb{P}^1;\infty,\mathring{E})$ of the $(0,4)$-bundle along any generator of the fundamental group of $X$ is the projection of the monodromy mapping class $\mathfrak{m}\in \mathfrak{M}(Y;s,\emptyset)$ of the $(1,1)$-bundle along the same generator. This means that there are representing homeomorphisms $\varphi\in \mathfrak{m}$ and $\varphi_1\in \mathfrak{m}_1$ such that $\varphi_1({ P}(\zeta))= {P}(\varphi(\zeta)),\, \zeta \in Y$. We will also say that $\mathfrak{m}$ is a lift of $\mathfrak{m}_1$. The lifts of a mapping class $\mathfrak{m}_1\in \mathfrak{M}(\mathbb{P}^1;\infty,\mathring{E})$ differ by the involution of $Y$, that interchanges the sheets of the double branched covering. Hence, each class $\mathfrak{m}_1\in \mathfrak{M}(\mathbb{P}^1;\infty,\mathring{E})$ has exactly two lifts. \begin{prop}\label{prop1} Let $X$ be a Riemann surface of genus $ g$ with ${ m}+1\geq 1$ holes with base point $q_0$ and curves $\gamma_j$ representing a set of generators $e_j\in \pi_1(X,q_0)$.\\ \noindent (1) Each holomorphic $(1,1)$-bundle over $X$ is holomorphically isomorphic to the double branched covering of a special holomorphic $(0,4)$-bundle over $X$.\\ \noindent (2) Vice versa, for each special holomorphic $(0,4)$-bundle over $X$ and each collection $\mathfrak{m}^j$ of lifts of the $2{ g} +{ m} $ monodromy mapping classes $\mathfrak{m}_1^j$ of the bundle along the $\gamma_j$ there exists a double branched covering by a holomorphic $(1,1)$-bundle with collection of monodromy mapping classes equal to the $\mathfrak{m}^j$. Each special holomorphic $(0,4)$-bundle has exactly $2^{2g+m}$ non-isotopic holomorphic lifts.\\ \noindent (3) A lift of a special $(0,4)$-bundle is reducible if and only if the special $(0,4)$-bundle is reducible. 
\end{prop} The proof of the proposition uses the fact that a holomorphic $(1,1)$-bundle over $X$ is holomorphically isomorphic to a holomorphic bundle whose fiber over each point $x$ is a quotient $\mathbb{C}\diagup \Lambda_x$ of the complex plane by a lattice $\Lambda_x$ with distinguished point $0\diagup \Lambda_x$. The lattices depend holomorphically on the point $x$. To represent the fibers as branched coverings depending holomorphically on the points in $X$ we use embeddings of punctured tori into $\mathbb{C}^2$ by suitable versions of the Weierstraß $\wp$-function. For a detailed proof of Proposition \ref{prop1} see \cite{Jo5}. \noindent {\bf Preparation of the proof of Theorem \ref{thm3}.} The proof will be given in terms of braids. Let $C_n(\mathbb{C})=\{(z_1,\ldots,z_n)\in \mathbb{C}^n: z_j\neq z_k \;\mbox{for}\; j\neq k\}$ be the $n$-dimensional configuration space. The symmetrized configuration space is its quotient $C_n(\mathbb{C})\diagup \mathcal{S}_n$ by the diagonal action of the symmetric group $\mathcal{S}_n$. We write points of $C_n(\mathbb{C})$ as ordered $n$-tuples $(z_1,\ldots,z_n)$ of points in $\mathbb{C}$, and points of $C_n(\mathbb{C})\diagup \mathcal{S}_n$ as unordered tuples $\{z_1,\ldots,z_n\}$ of points in $\mathbb{C}$. We regard geometric braids on $n$ strands with base point $E_n$ as loops in the symmetrized configuration space $C_n(\mathbb{C})\diagup \mathcal{S}_n$ with base point $E_n$, and braids on $n$ strands ($n$-braids for short) with base point $E_n\in C_n(\mathbb{C})\diagup \mathcal{S}_n$ as homotopy classes of loops with base point $E_n$ in $C_n(\mathbb{C})\diagup \mathcal{S}_n$, equivalently, as elements of the fundamental group $\pi_1(C_n (\mathbb {C}) \diagup {\mathcal S}_n, E_n)$ of the symmetrized configuration space with base point $E_n$. Each smooth mapping $\;\;F:X \to C_n ({\mathbb C}) \diagup {\mathcal S}_n\;\;$ defines a smooth special $\;(0,n+1)$-bundle $\;\;\;(X\times \mathbb{P}^1, {\rm pr}_1, \mathbold{E},X)\,,$ where $\mathbold{E}\cap (\{x\}\times \mathbb{P}^1)=\{x\}\times ( F(z) \cup \{\infty\})$. Vice versa, for each smooth special $(0,n+1)$-bundle $(X\times \mathbb{P}^1, {\rm pr}_1, \mathbold{E},X)$ the mapping that assigns to each point $x \in X$ the set of finite distinguished points in the fiber over $x$ defines a smooth mapping $F:X \to C_n ({\mathbb C}) \diagup{\mathcal S}_n$. The mapping $F$ is holomorphic iff the bundle is holomorphic. It is called irreducible iff the bundle is irreducible. Choose a base point $q_0\in X$. The restriction of the mapping $F$ to each loop with base point $q_0$ defines a geometric braid with base point $F(q_0)$. The braid represented by it is called the monodromy of the mapping $F$ along the element of the fundamental group represented by the loop. The monodromy mapping classes of a special $(0,n+1)$-bundle are isotopy classes of self-homeomorphisms of the fiber $\mathbb{P}^1$ over the base point $q_0$ which map the set of finite distinguished points $E_n=F(q_0)$ in this fiber onto itself, and fix $\infty$. 
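For instance (the following concrete loop is chosen only for illustration and is not taken from \cite{Jo2} or \cite{Jo5}), with base point $E_3=\{-1,0,1\}\in C_3(\mathbb{R})\diagup \mathcal{S}_3$ the loop
\begin{equation}\nonumber
t\;\longmapsto\;\Big\{-\tfrac{1}{2}-\tfrac{1}{2}\,e^{\pi i t},\;-\tfrac{1}{2}+\tfrac{1}{2}\,e^{\pi i t},\;1\Big\},\qquad t\in[0,1],
\end{equation}
interchanges the points $-1$ and $0$ by a half-turn and keeps the point $1$ fixed. It represents one of the standard generators $\sigma_1^{\pm 1}$ of $\mathcal{B}_3$ (which of the two depends on the chosen identification of $\pi_1(C_3(\mathbb{C})\diagup \mathcal{S}_3, E_3)$ with $\mathcal{B}_3$), and its square represents the pure braid $\sigma_1^{\pm 2}$.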
Two smooth mappings $F_1$ and $F_2$ from $X$ to $C_n ({\mathbb C}) \diagup {\mathcal S}_n$ that have equal value $E_n\in C_n(\mathbb{C})\diagup\mathcal{S}_n$ at the base point $q_0$ define special $(0,n+1)$-bundles that are isotopic with an isotopy which fixes the fiber over $q_0$ and the set of distinguished points in this fiber, if and only if their restrictions to each curve in $X$ with base point $q_0$ define braids that differ by an element of the center $\mathcal{Z}_n$ of the braid group $\mathcal{B}_n$ on $n$ strands (in other words, by a power of a full twist). Indeed, the braid group on $n$ strands modulo its center $\mathcal{B}_n\diagup \mathcal{Z}_n$ is isomorphic to the group of mapping classes of $\mathbb{P}^1$ that fix $\infty$ and map $E_n$ to itself. Note that for the group $\mathcal{PB}_3$ of pure braids on three strands the quotient $\mathcal{PB}_3\diagup \mathcal{Z}_3$ is isomorphic to the fundamental group of $\mathbb{C}\setminus \{-1,1\}$. The isomorphism maps the generators $\sigma_j^2\diagup \langle \Delta_3^2 \rangle, \, j=1,2$, of $\mathcal{PB}_3\diagup \mathcal{Z}_3$ to the standard generators $a_j,\, j=1,2$, of the fundamental group $\pi_1(\mathbb{C}\setminus \{-1,1\},0)$. Here $\langle \Delta_3^2 \rangle$ denotes the group generated by $\Delta_3^2$, which is equal to the center $\mathcal{Z}_3$. The proof of Theorem \ref{thm3} now goes along the same lines as the proof of Theorem \ref{thm1}, with some modifications. Lemma H, Lemmas \ref{lem4} and \ref{lem3a}, and Theorem I below are given in terms of braids rather than in terms of elements of $\mathcal{B}_3\diagup \mathcal{Z}_3$. The following lemma and the following theorem were proved in \cite{Jo2}. \noindent {\bf Lemma H.} {\it Any braid $b\in \mathcal{B}_3$ which is not a power of $\Delta_3$ can be written in a unique way in the form \begin{equation}\label{eq2'} \sigma_j^k \, b_1 \, \Delta_3^{\ell}\, \end{equation} where $j=1$ or $j=2$, $k\neq 0$ is an integer, $\ell$ is a (not necessarily even) integer, and $b_1$ is a word in $\sigma_1^2$ and $\sigma_2^2$ in reduced form. If $b_1$ is not the identity, then the first term of $b_1$ is a non-zero even power of $\sigma_2$ if $j=1$, and the first term of $b_1$ is a non-zero even power of $\sigma_1$ if $j=2$.} For an integer $j\neq 0$ we put $q(j)=j$ if $j$ is even, and for odd $j$ we denote by $q(j)$ the even integer neighbour of $j$ that is closest to zero. In other words, $q(j)=j$ if $j\neq 0$ is even, and $q(j)= j -{\mbox{sgn}}(j)$ for each odd integer $j$, where ${\mbox{sgn}}(j)$ for a non-zero integer $j$ equals $1$ if $j$ is positive, and $-1$ if $j$ is negative. For a braid $b$ in the form \eqref{eq2'} we put $\vartheta(b) \stackrel{def}{=}\sigma_j^{q(k)} \, b_1$. If $b$ is a power of $\Delta_3$ we put $\vartheta(b) \stackrel{def}{=}{\rm Id}$. Let $C_n(\mathbb{R})\diagup \mathcal{S}_n$ be the totally real subspace of $C_n(\mathbb{C})\diagup \mathcal{S}_n$. It is defined in the same way as $C_n(\mathbb{C})\diagup \mathcal{S}_n$ by replacing $\mathbb{C}$ by $\mathbb{R}$. Take a base point $E_n \in C_n(\mathbb{R})\diagup \mathcal{S}_n$. The fundamental group $\;\pi_1(\,C_n (\mathbb {C}) \diagup {\mathcal S}_n\,,\; E_n\,)\;$ with this base point is isomorphic to the relative fundamental group $\; \pi_1(\,C_n (\mathbb {C}) \diagup {\mathcal S}_n\,,\; C_n (\mathbb {R}) \diagup {\mathcal S}_n \,)\,.
$ The elements of the latter group are homotopy classes of arcs in $\,C_n (\mathbb {C}) \diagup {\mathcal S}_n\,$ with endpoints in the totally real subspace $\,C_n (\mathbb {R})\diagup{\mathcal S}_n\,$ of the symmetrized configuration space. Let $b \in \mathcal{B}_n$ be a braid. Denote by $b_{tr}$ the element of the relative fundamental group $\pi_1(\,C_n (\mathbb {C}) \diagup {\mathcal S}_n\,,\; C_n (\mathbb {R}) \diagup {\mathcal S}_n\, )$ that corresponds to $b$ under the mentioned group isomorphism. For a rectangle $R$ in the plane with sides parallel to the axes we let $f:R \to \,C_n (\mathbb {C}) \diagup {\mathcal S}_n\,$ be a mapping which admits a continuous extension to the closure $\bar R$ (denoted again by $f$) which maps the (open) horizontal sides into $\,C_n (\mathbb {R}) \diagup {\mathcal S}_n\,$. We say that the mapping represents $b_{tr}$ if for each maximal vertical line segment contained in $R$ (i.e. $R$ intersected with a vertical line in $\mathbb{C}$) the restriction of $f$ to the closure of the line segment represents $b_{tr}$. The extremal length of a $3$-braid with totally real horizontal boundary values is defined as \begin{align} \Lambda_{tr}(b)=& \inf \{\lambda(R): R\, \mbox{ a rectangle which admits a holomorphic map to} \nonumber \\ &C_n (\mathbb {C}) \diagup {\mathcal S}_n \,\mbox{ that represents}\; b_{tr}\}\,.\nonumber \end{align} (see \cite{Jo2}.) The following theorem holds (see \cite{Jo2}). \noindent {\bf Theorem I.} {\it Let $b \in \mathcal{B}_3$ be a (not necessarily pure) braid which is not a power of $\Delta_3$, and let $w$ be the reduced word representing the image of $\vartheta(b)$ in $\mathcal{PB}_3 \diagup \langle \Delta_3^2\rangle$. Then $$ \Lambda_{tr}(b) \geq \frac{1}{2\pi}\cdot \mathcal{L}_-(w) \,, $$ except in the case when $b=\sigma_j^{k}\,\Delta_3^{\ell}$, where $j=1$ or $j=2$, $k\neq 0$ is an integer number, and $\ell$ is an arbitrary integer. In this case $\Lambda_{tr}(b)=0$.} The set \begin{align} \mathcal{H} \stackrel{def}= & \{ \{z_1,z_2,z_3\} \in C_3(\mathbb{C})\diagup \mathcal{S}_3: \mbox{the three points}\; z_1,z_2, z_3 \nonumber\\ & \mbox{are contained in a real line in the complex plane}\} \end{align} is a smooth real hypersurface of $C_3(\mathbb{C})\diagup \mathcal{S}_3$. Indeed, let $\{z_1^0,z_2^0,z_3^0\}$ be a point of the symmetrized configuration space. Introduce coordinates near this point by lifting a neighbourhood of the point to $C_3(\mathbb{C})$ with coordinates $(z_1,z_2,z_3)$. Since the linear map $M(z)\stackrel{def}= \frac{z-z_1}{z_3-z_1},\, z \in \mathbb{C},$ maps the points $z_1$ and $z_3$ to the real axis, the three points $z_1, \, z_2,$ and $z_3$ lie on a real line in the complex plane iff the imaginary part of $z'_2\stackrel{def}=M(z_2)= \frac{z_2-z_1}{z_3-z_1}$ vanishes. The equation $\mbox{Im}\frac{z_2-z_1}{z_3-z_1}=0$ in local coordinates $(z_1,z_2,z_3)$ defines a local piece of a smooth real hypersurface. For each complex affine self-mapping $M$ of the complex plane we consider the diagonal action $M\big((z_1,z_2,z_3)\big)=\big(M(z_1),M(z_2),M(z_3)\big)$ on points $(z_1,z_2,z_3)\in C_3(\mathbb{C})$, and the diagonal action $M\big(\{z_1,z_2,z_3\}\big)=\{M(z_1),M(z_2),M(z_3)\}$ on points $\{z_1,z_2,z_3\}\in C_3(\mathbb{C})\diagup \mathcal{S}_3$. The following two lemmas replace Lemma \ref{lem2} in the case of $(0,3)$-bundles with a section. \begin{lemm}\label{lem4} Let $A$ be an annulus with an orientation of simple closed dividing curves. 
Suppose $F:A \to C_3(\mathbb{C})\diagup \mathcal{S}_3$ is a holomorphic mapping whose image is not contained in $\mathcal{H}$. Suppose $L_A$ is a simple relatively closed curve in $A$ with limit points on both boundary circles of $A$, and $F(L_A) \subset \mathcal{H}$. Moreover, for a point $q_A\in L_A$ the value $F(q_A)$ is in the totally real subspace $ C_3(\mathbb{R})\diagup \mathcal{S}_3$. Let $e_A\in \pi_1(A,q_A)$ be the positively oriented generator of the fundamental group of $A$ with base point $q_A$. If the braid $b\stackrel{def}=F_*(e_A) \in \mathcal{B}_3 $ is different from $\sigma_j^k \, \Delta_3^{2 \ell'} $ with $j$ equal to $1$ or $2$, and $k\neq 0$ and $\ell'$ being integers, then \begin{equation}\label{eq+} \mathcal{L}_-(\vartheta(b)) \leq 2\pi \lambda(A). \end{equation} \end{lemm} Notice that the braids $\sigma_j^k \, \Delta_3^{\ell} $ for odd $\ell$ are exceptional for Theorem I, but not exceptional for Lemma \ref{lem4}. The reason is that the braid in Lemma \ref{lem4} is related to a mapping of an annulus, not merely to a mapping of a rectangle. For $t\in [0,\infty)$ we put $\log_+ t\stackrel{def}= \begin{cases} \log t, & t \in [1,\infty), \\ 0, & t \in [0,1). \end{cases}$ \begin{lemm}\label{lem3a} If the braid in Lemma {\rm \ref{lem4}} equals $b = \sigma_j^k \, \sigma_{j'}^{k'} \,\Delta_3^{\ell}$ with $j$ and $j'$ equal to $1$ or to $2$, $j'\neq j$, and $k$ and $k'$ being non-zero integers, and $\ell$ an even integer, then \begin{equation}\label{eq++} \log_+(3[\frac{|k|}{2}]) + \log_+(3[\frac{|k'|}{2}])\leq {\pi} \lambda(A). \end{equation} \end{lemm} \noindent Here for a non-negative number $x$ we denote by $[x]$ the largest integer not exceeding $x$. \noindent {\bf Proof of Lemma \ref{lem4}.} By the same argument as in the proof of Lemma \ref{lem2} we may assume that the annulus $A$ has smooth boundary, the mapping $F$ extends continuously to the closure $\overline{A}$, and the curve $L_A$ is a smooth (connected) curve in $\overline{A}$ whose endpoints are on different boundary components of $A$. Possibly after applying the diagonal action of a fixed M\"obius transformation to each point of $C_3(\mathbb{C})\diagup \mathcal{S}_3$, we may also assume that the value of $F$ at the point $q_A \in L_A$ is equal to the unordered triple $\{-1,q',1\}\in C_3(\mathbb{R})\diagup \mathcal{S}_3$ for a number $q' \in \mathbb{R}\setminus\{-1,1\}$. We restrict the mapping $F$ to $A\setminus L_A$. Let $\tilde R$ be a lift of $A\setminus L_A$ to the infinite strip $\widetilde{\overline{A}}$ that covers $\overline{A}$. We consider $\tilde R$ as a curvilinear rectangle with horizontal sides being the two different lifts of $L_A$ and vertical sides being the lifts of the two boundary circles cut at the endpoints of $L_A$. Take a closed curve $\gamma_A:[0,1]\to A$ in $A$ with base point $q_A\in L_A$ that intersects $L_A$ only at the base point and represents the element $e_A\in\pi_1(A,q_A)$. Let $\tilde{\gamma}_A$ be the lift of $\gamma_A$ to $\widetilde{\overline{A}}$ for which $\tilde{\gamma}_A((0,1))$ is contained in $\tilde R$, and let $\tilde F=(\tilde {F}_1,\tilde {F}_2,\tilde {F}_3):{\tilde R} \to C_3(\mathbb{C})$ be a lift of $F$ to a mapping from ${\tilde R}$ to the configuration space $C_3(\mathbb{C})$. The continuous extension of $\tilde F$ to $\overline{\tilde R}$ is also denoted by $\tilde F$. We may choose the lift so that the value of $\tilde F$ at the copy of $q_A$ on the lower horizontal side of $\tilde R$ equals $(-1,q',1)$.
For each $z\in \overline{\tilde R}$ we consider the complex affine mapping $\mathfrak{A}_z(\zeta)=a(z)\zeta+b(z) \stackrel{def} = -1 + 2\frac{\zeta-\tilde{F}_1(z)}{\tilde{F}_3(z)-\tilde{F}_1(z)},\, \zeta \in \mathbb{C}$. Denote by $\hat{F}(z)\stackrel{def}=\mathfrak{A}_{z}(\tilde{F}(z))=\mathfrak{A}_{z} \big(\tilde{F}_1(z),\tilde{F}_2(z),\tilde{F}_3(z)\big),\, z \in \overline{\tilde{R}}, $ the result of applying $\mathfrak{A}_z$ to each of the three points of $\tilde{F}(z)$. The mapping $\hat{F}(z)=(\hat{F}_1(z),\hat{F}_2(z),\hat{F}_3(z))=(-1,\hat{F}_2(z),1)$ is holomorphic on $\tilde R$. Let $\tilde{F}_{\rm sym}\stackrel{def}=\{\tilde{F}_1,\tilde{F}_2,\tilde{F}_3\}$ ($\hat{F}_{\rm sym}\stackrel{def}=\{\hat{F}_1,\hat{F}_2,\hat{F}_3\}$, respectively) be the projection of $\tilde{F}$ ($\hat{F}$, respectively) to a mapping from $\overline{\tilde{R}}$ to the symmetrized configuration space $C_3(\mathbb{C})\diagup \mathcal{S}_3$. Since $F(L_A)\subset \mathcal{H}$ the mapping $\hat{F}_{\rm sym}$ takes the horizontal sides of $\tilde R$ to the totally real subspace $C_3(\mathbb{R})\diagup \mathcal{S}_3$ of the symmetrized configuration space. Moreover, $\hat{F}_{\rm sym}$ maps the copy of $q_A$ on the lower side of $\tilde R$ to $\{-1,q',1\}$. Recall that also $\tilde{F}_{\rm sym}$ takes the value $\{-1,q',1\}$ at the copy of $q_A$ on the lower side of $\tilde R$. The restrictions of $\tilde{F}_{\rm sym}$ and of ${\hat{F}}_{\rm sym}$ to the curve $\tilde{\gamma}_A$ represent elements of the relative fundamental group $\pi_1(C_3(\mathbb{C})\diagup \mathcal{S}_3,C_3(\mathbb{R})\diagup \mathcal{S}_3)$. The represented elements of the relative fundamental group differ by a finite number of half-twists. Indeed, for each $z$, the lifts to $C_3(\mathbb{C})$, $\tilde F(z)$ and ${\hat{F}}(z)$, differ by a complex affine mapping. Hence, ${\hat{F}}(\tilde {\gamma}_A(t))=b(t)+a(t) \tilde F(\tilde{\gamma}_A(t))$ for continuous functions $a$ and $b$ on $[0,1]$ with $a$ nowhere vanishing, $b(0)=0$, $a(0)=1$, and $b(1)$ and $a(1)$ real valued. Then the function $b:[0,1]\to \mathbb{C}$ is homotopic with endpoints in $\mathbb{R}$ to the function that is identically equal to zero. The mapping $a:[0,1]\to \mathbb{C}\setminus \{0\}$ is homotopic with endpoints in $\mathbb{R}$ to $\frac{a}{|a|}$. Hence, the mappings $\hat{F}(\tilde{\gamma}_A(t))$ and $\frac{a(t)}{|a(t)|}\tilde{F}(\tilde{\gamma}_A(t))$ from $[0,1]$ to $C_3(\mathbb{C})\diagup \mathcal{S}_3$ are homotopic with endpoints in $C_3(\mathbb{R})\diagup \mathcal{S}_3$. The statement follows. Let $\omega:\tilde R\to R$ be the conformal mapping of the curvilinear rectangle onto the rectangle of the form $R=\{z=x+i y:x \in (0,1),\, y \in (0,{\sf{a}})\}$ that maps the lower curvilinear side of $\tilde R$ to the lower side of $R$. (Note that the number $\sf{a}$ is uniquely defined by $\tilde R$.) For $i'\in \mathbb{Z}$ we put $ \mathring{F}_{i'}(z) \stackrel{def}= e^{-i' \frac{\pi}{\sf a}\omega(z)} \hat{F}_{\rm sym}(z)$. Then, for some choice of $i'$, the restrictions $\tilde{F}_{\rm sym}\mid \tilde{\gamma}_A$ and $\mathring{F}_{i'}\mid \tilde{\gamma}_A$ represent the same element of $\pi_1(C_3(\mathbb{C})\diagup \mathcal{S}_3,C_3(\mathbb{R})\diagup \mathcal{S}_3)$, namely $b_{tr}$. We represented $b_{tr}$ by the holomorphic map $\mathring{F}_{i'}$ from the curvilinear rectangle $\tilde R$ into $C_3(\mathbb{C})\diagup \mathcal{S}_3$ that maps the horizontal sides into $C_3(\mathbb{R})\diagup \mathcal{S}_3$.
Hence, \begin{equation}\label{eq100} \Lambda_{tr}(b)\leq \lambda(\tilde R) = \lambda(A\setminus L_A)\leq \lambda(A)\,. \end{equation} For $b\neq \sigma_j^k \, \Delta_3^{\ell}$ with $j$ equal to $1$ or $2$, and $k\neq 0$ and $\ell$ being integers, the statement of Lemma \ref{lem4} follows from Theorem I in the same way as Lemma \ref{lem2} follows from Theorem F. For $b=\sigma_j^k \, \Delta_3^{\ell}$ with $k=0$ the statement is trivial since then $b$ is a power of $\Delta_3$, $\vartheta(b)={\rm Id}$, and $\mathcal{L}_-({\rm Id})=0$. To obtain the statement in the remaining case $b = \sigma_j^k \, \Delta_3^{2 \ell' +1} $ with $j$ equal to $1$ or $2$, and $k$ and $\ell'$ being integers, we use Lemma \ref{lem3a}. Notice that $\sigma_1\, \Delta_3 = \Delta_3 \, \sigma_2$ and $\sigma_2\, \Delta_3 = \Delta_3 \, \sigma_1$. Hence, $b^2 = \sigma_j^k\, \sigma_{j'}^k \,\Delta_3^{4 \ell'+2}$ with $\sigma_j\,\neq \sigma_{j'}$. Let $\omega_2:A^2\to A$ be the two-fold unbranched covering of $A$ by an annulus $A^2$. The equality $\lambda(A^2)=2 \lambda(A)$ holds. Let ${q}_{A^2}$ be a point in $\omega_2^{-1}(q_A)$, and let ${L}_{q_{A^2}}$ be the lift of $L_A$ to $A^2$ that contains ${q}_{A^2}$. Denote by ${\gamma}_{A^2}$ the loop $\omega_2^{-1}(\gamma_A)$ with base point $q_{A^2}$. Then $ {F}\circ \omega_2 \mid {\gamma}_{A^2}$ represents $b^2$ and $(b^2)_{tr}$. Lemma \ref{lem3a} applied to $ \sigma_j^k\, \sigma_{j'}^k \,\Delta_3^{4 \ell'+2}$ gives the estimate $2\log_+(3[\frac{|k|}{2}]) \leq \pi \lambda(A^2)=2\pi\lambda(A)$. Since $\vartheta(b)=\sigma_j^{2[\frac{|k|}{2}]\mbox{sgn}(k)}$, the inequality \eqref{eq+} follows. The lemma is proved. $\Box$ \noindent {\bf Proof of Lemma \ref{lem3a}.} By \cite{Jo2}, Lemma 1 and Proposition 6, statement 2, \begin{align}\label{eq17} \Lambda_{tr}( \sigma_j^k \, \sigma_{j'}^{k'} \,\Delta_3^{\ell}) \geq \frac{1}{\pi} (\log_+(3[\frac{|k|}{2}])+ \log_+(3[\frac{|k'|}{2}])). \end{align} Since by \eqref{eq100} the inequality $\Lambda_{tr}(\sigma_j^k \, \sigma_{j'}^{k'} \,\Delta_3^{\ell}) \leq \lambda(A)$ holds, the lemma is proved. $\Box$ We want to emphasize that periodic braids are not non-zero powers of a $\sigma_j$ (modulo the center), so Lemma \ref{lem4} is true also for periodic braids. For each periodic braid $b$ of the form $\sigma_1 \sigma_2= \sigma_2 ^{-1} \, \Delta_3$, $(\sigma_1 \sigma_2)^2=\sigma_1 \, \Delta_3$, $\sigma_2 \sigma_1= \sigma_1 ^{-1} \, \Delta_3$, $(\sigma_2 \sigma_1)^2=\sigma_2 \, \Delta_3$, and $\Delta_3$ the quantity $\mathcal{L}_-(\vartheta(b))$ vanishes. However, for instance for the conjugate $\sigma_1^{-2k} \Delta_3 \sigma_1^{2k}= \sigma_1^{-2k} \sigma_2^{2k} \Delta_3$ of $\Delta_3$ we have $\mathcal{L}_-(\vartheta(\sigma_1^{-2k} \Delta_3 \sigma_1^{2k}))= 2 \log(3|k|)$. As another example, for the conjugate $\sigma_2^{-2k}\, \sigma_1 \sigma_2\, \sigma_2^{2k}$ of $\sigma_1 \sigma_2$ we have \begin{align}\nonumber \sigma_2^{-2k}\, \sigma_1 \sigma_2\, \sigma_2^{2k} = \sigma_2^{-2k-1}\, \Delta_3\, \sigma_2^{2k} = \sigma_2^{-2k-1}\, \sigma_1^{2k} \,\Delta_3\,, \end{align} and $\mathcal{L}_-(\vartheta(\sigma_2^{-2k}\, \sigma_1 \sigma_2\, \sigma_2^{2k})) $ equals $2 \log(3|k|)$. Notice that the lemmas and Theorem I descend to statements on elements of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ rather than on braids. For an element $\textsf{b}$ of the quotient $\mathcal{B}_3 \diagup \mathcal{Z}_3$ we put $\vartheta(\textsf{b})= \vartheta(b)$ for any representative $b \in \mathcal{B}_3$ of $\textsf{b}$. Lemma \ref{lem8a} below is an analog of Lemma \ref{lem3}.
It follows from Lemma \ref{lem4} in the same way as Lemma \ref{lem3} follows from Lemma \ref{lem2}. \begin{lemm}\label{lem8a} Let $X$ be a connected finite open Riemann surface, and $F:X \to C_3(\mathbb{C}) \diagup \mathcal{S}_3$ be a holomorphic map that is transverse to the hypersurface $\mathcal{H}$ in $ C_3(\mathbb{C}) \diagup \mathcal{S}_3$. Suppose $L_0$ is a simple relatively closed curve in $X$ such that $F(L_0)$ is contained in $\mathcal{H}$, and for a point $q \in L_0$ the point $F(q)$ is contained in the totally real space $C_3(\mathbb{R}) \diagup \mathcal{S}_3$. Let $e^{(1)}$ and $e^{(2)}$ be primitive elements of $\pi_1(X,q)$. Suppose that for $e=e^{(1)}$, $e=e^{(2)}$, and $e=e^{(1)}e^{(2)}$ the free homotopy class $\widehat e$ intersects $L_0$. Then either the two monodromies of $F$ modulo the center $F_*(e^{(j)})\diagup \mathcal{Z}_3,\, j=1,2,\,$ are powers of the same element $\sigma_j\diagup \mathcal{Z}_3$ of $\mathcal{B}_3\diagup \mathcal{Z}_3$, or each of them is the product of at most two elements $\textsf{b}_1$ and $\textsf{b}_2$ of $\mathcal{B}_3\diagup \mathcal{Z}_3$ with \begin{equation}\label{eq101} \mathcal{L}_-(\vartheta(\textsf{b}_j)) \leq 2\pi \lambda_{e^{(1)},e^{(2)}},\, j=1,2, \end{equation} where \begin{equation}\nonumber \lambda_{e^{(1)},e^{(2)}} \stackrel{def}=\max\{\lambda(A(\reallywidehat{e^{(1)}})),\, \lambda(A(\reallywidehat{e^{(2)}})),\, \lambda(A(\reallywidehat{e^{(1)}\,e^{(2)}}))\}. \end{equation} \end{lemm} \noindent{\bf Proof.} Suppose for an element $e\in \pi_1(X,q)$ the free homotopy class $\widehat e$ intersects $L_0$. By Lemma \ref{lem1} there exists an annulus $A$, a point $q_A\in A$, and a holomorphic map $\omega_A:(A,q_A)\to (X,q)$ that represents $e$. Moreover, the connected component of $(\omega_A)^{-1}(L_0)$ that contains $q_A$ has limit points on both boundary components of $A$. Put $F_A=F\circ\omega_A$. By the conditions of Lemma \ref{lem8a} $F_A(L_A)=F(L_0)\subset \mathcal{H}$ and $F_A(q_A)\in C_3(\mathbb{R})\diagup\mathcal{S}_3$. Let $e_A$ be the generator of $\pi_1(A,q_A)$ for which $\omega_A(e_A)=e$. The mapping $F_A:A\to C_3(\mathbb{C})\diagup \mathcal{S}_3$, the point $q_A$ and the curve $L_A$ satisfy the conditions of Lemma \ref{lem4}. Notice that the equality $(F_A)_*(e_A)= F_*(e)$ holds. Hence, if $F_*(e)$ is not a power of a $\sigma_j$ then inequality \eqref{eq+} holds for $F_*(e)$. Suppose the two monodromies modulo center $F_*(e^{(j)})\diagup \mathcal{Z}_3,\, j=1,2,\,$ are not (trivial or non-trivial) powers of the same element $\sigma_j \diagup \mathcal{Z}_3$ of $\mathcal{B}_3\diagup \mathcal{Z}_3$. Then at most two of the elements, $F_*(e^{(1)})\diagup \mathcal{Z}_3$, $F_*(e^{(2)})\diagup \mathcal{Z}_3$, and $F_*(e^{(1)}e^{(2)})\diagup \mathcal{Z}_3=F_*(e^{(1)})\diagup \mathcal{Z}_3\cdot F_*(e^{(2)})\diagup \mathcal{Z}_3 $, are powers of an element of the form $\sigma_j\diagup \mathcal{Z}_3$. If the monodromies modulo center along two elements among $e^{(1)}$, $e^{(2)}$, and $e^{(1)}e^{(2)}$ are not (zero or non-zero) powers of a $\sigma_j\diagup \mathcal{Z}_3$ then by Lemma \ref{lem4} for each of these two monodromies modulo center inequality \eqref{eq101} holds, and the third monodromy modulo center is the product of two elements of $\mathcal{B}_3\diagup \mathcal{Z}_3$ for which inequality \eqref{eq101} holds. 
If the monodromies modulo center along two elements among $e^{(1)}$, $e^{(2)}$, and $e^{(1)}e^{(2)}$ have the form $\sigma_{j}^{k} \diagup \mathcal{Z}_3$ and $\sigma_{j'}^{k'}\diagup\mathcal{Z}_3$, then the $\sigma_j$ and the $\sigma_{j'}$ are different and $k$ and $k'$ are non-zero. The third monodromy modulo center has the form $\sigma_{j }^{\pm k} \sigma_{j'}^{\pm k'} \diagup \mathcal{Z}_3$ (or the order of the two factors interchanged). Lemma \ref{lem3a} gives the inequality $\log_+(3[\frac{|k|}{2}]) + \log_+(3[\frac{|k'|}{2}])\leq \pi \lambda_{e^{(1)},e^{(2)}}$. Since $\mathcal{L}_-(\vartheta(\sigma_{j}^{\pm k}))= \log_+(3[\frac{|k|}{2}])$ and $\mathcal{L}_-(\vartheta(\sigma_{j'}^{\pm k'})) =\log_+(3[\frac{|k'|}{2}])$, inequality \eqref{eq101} follows for the other two monodromies. The lemma is proved. $\Box$ The following lemma holds. \begin{lemm}\label{lem7} Let $X$ be a connected finite open Riemann surface, and $F:X\to C_3(\mathbb{C})\diagup \mathcal{S}_3$ a smooth mapping. Suppose for a base point $q_1$ of $X$ each element of $\pi_1(X,q_1)$ can be represented by a curve with base point $q_1$ whose image under $F$ avoids $\mathcal{H}$. Then all monodromies of $F$ are powers of the same periodic braid of period $3$. \end{lemm} \noindent {\bf Proof.} Take the monodromy of $F$ along any curve with base point $q_1$. It has a power that is a pure $3$-braid $b$, and a representative of $b$ avoids $\mathcal{H}$. Then for some integer $l$ the first and the last strand of $b \,\Delta_3^{2l}$ are fixed, and a representative of $b \,\Delta_3^{2l}$ avoids $\mathcal{H}$. Hence, $b\,\Delta_3^{2l}={\rm Id}$ and $b=\Delta_3^{-2l}$. We saw that the monodromy of $F$ along each element $e\in \pi_1(X,q_1)$ is a periodic braid. If a representative $f:[0,1]\to C_3(\mathbb{C})\diagup \mathcal{S}_3,\, f(0)=f(1),$ of a $3$-braid $b$ avoids $\mathcal{H}$, then the associated permutation $\tau_3(b)$ cannot be a transposition. Indeed, assume the contrary. Then there is a lift $\tilde f$ of $f$ to $C_3(\mathbb{C})$, for which $(\tilde{f}_1(1),\tilde{f}_2(1),\tilde{f}_3(1))=(\tilde{f}_3(0),\tilde{f}_2(0),\tilde{f}_1(0))$. Let $L_t$ be the line in $\mathbb{C}$ that contains $\tilde{f}_1(t)$ and $\tilde{f}_3(t)$, and is oriented so that running along $L_t$ in positive direction we meet first $\tilde{f}_1(t)$ and then $\tilde{f}_3(t)$. The point $\tilde{f}_2(0)$ is not on $L_0$. Assume without loss of generality that it is on the left of $L_0$ with the chosen orientation of $L_0$. Since for each $t\in [0,1]$ the three points $\tilde{f}_1(t),\,\tilde{f}_2(t)$ and $\tilde{f}_3(t)$ in $\mathbb{C}$ are not on a real line, the point $\tilde{f}_2(t)$ is on the left of $L_t$ with the chosen orientation. But the unoriented lines $L_0$ and $L_1$ coincide, and their orientations are opposite. This implies $\tilde{f}_2(1)\neq\tilde{f}_2(0)$, which is a contradiction. We proved that all monodromies are periodic with period $3$. There is a smooth homotopy $F_s,\, s\in[0,1],$ of $F$, such that $F_0=F$, each $F_s$ differs from $F$ only on a small neighbourhood of $q_1$, each $F_s$ avoids $\mathcal{H}$ on this neighbourhood of $q_1$, and $F_1(q_1)$ is the set of vertices of an equilateral triangle with barycenter $0$. Since $F$ and $F_1$ are freely homotopic, their monodromy homomorphisms are conjugate, and it is enough to prove the statement of the lemma for $F_1$.
For notational convenience we will keep the notation $F$ for the new mapping and assume that $F(q_1)$ is the set of vertices of an equilateral triangle with barycenter $0$. The monodromy $F_*(e)$ along each element $e\in \pi_1(X,q_1)$ is a periodic braid of period $3$. Hence, $\tau_3(F_*(e))$ is a cyclic permutation. Consider the braid $g$ with base point $F(q_1)$ that corresponds to rotation by the angle $\frac{2\pi}{3}$, i.e. it is represented by the geometric braid $t\to e^{\frac{2\pi i t}{3}}F(q_1),\, t\in [0,1],$ that avoids $\mathcal{H}$. There exists an integer $k$ such that $F_*(e)\, g^k$ is a pure braid that is represented by a mapping that avoids $\mathcal{H}$. Hence, $F_*(e)\, g^k$ equals $\Delta_3^{2l}$ for some integer $l$. We proved that for each $e\in \pi_1(X,q_1)$ the monodromy $F_*(e)$ is represented by rotation of $F(q_1)$ around the origin by the angle $\frac{2\pi j}{3}$ for some integer $j$. The lemma is proved. $\Box$ As before, let $X$ be a finite open connected Riemann surface. The following proposition is the main ingredient of the proof of Theorem \ref{thm3}. As before, let $\mathcal{E}\subset \pi_1(X,q_0)$ be the system of generators of the fundamental group with base point $q_0\in X$ that was chosen in Section 1. \begin{prop}\label{prop4} Let $(X\times \mathbb{P}^1,{\rm pr}_1,\mathbold{E},X)$ be an irreducible holomorphic special $(0,4)$-bundle over a finite open Riemann surface $X$ that is not isotopic to a locally holomorphically trivial bundle. Let $F(x),\, x \in X,$ be the set of finite distinguished points in the fiber over $x$. Assume that $F$ is transverse to $\mathcal{H}$. Then there exists a complex affine mapping $M$ and a point $q\in X$ such that $M\circ F(q)$ is contained in $C_3(\mathbb{R})\diagup \mathcal{S}_3$, and for an arc $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$ and each element $e_j\in {\rm Is}_{\alpha}(\mathcal{E})$ the monodromy modulo center $(M\circ F)_*(e_j)\diagup \mathcal{Z}_3$ can be written as a product of at most $6$ elements $\textsf{b}_{j,k},\, k=1,2,3,4,5,6,$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ with \begin{equation}\label{eq7} \mathcal{L}_-(\vartheta(\textsf{b}_{j,k})) \leq 2\pi \lambda_{10}(X). \end{equation} If $X$ is a torus with a hole the monodromy along each $e_j$ is the product of at most $4$ elements with $\mathcal{L}_-(\vartheta(\textsf{b}_{j,k})) \leq 2\pi \lambda_3(X)$, and in case of a planar domain the monodromy along each $e_j$ is the product of at most $6$ elements with $\mathcal{L}_-(\vartheta(\textsf{b}_{j,k})) \leq 2\pi \lambda_8(X)$. \end{prop} \noindent {\bf Proof of Proposition \ref{prop4}.} Since the bundle is not isotopic to a locally holomorphically trivial bundle, it is not possible that all monodromies are powers of the same periodic braid, and by Lemma \ref{lem7} the set \begin{align}\label{eq7a} L\stackrel{def}= & \{ z \in X: F(z) \in\mathcal{H}\} \end{align} is not empty. \noindent {\bf 1. A torus with a hole.} Let $X$ be a torus with a hole and let $\mathcal{E}=\{e'_0 ,\,e''_0\}$ be a set of generators of $\pi_1(X,q_0)$ that is associated to a standard bouquet of circles for $X$. There exists a connected component $L_0$ of $L$ which is not contractible and not contractible to the hole.
Indeed, otherwise there would be a base point $q_1$ and a curve $\alpha_{q_1}$ that joins $q_0$ with $q_1$, such that for both elements of $\mbox{Is}_{\alpha_{q_1}}(\mathcal{E})$ there would be representing loops with base point $q_1$ which do not meet $L$, and hence, by Lemma \ref{lem7} the monodromies along both elements would be powers of a single periodic braid of period $3$. Hence, as in the proof of Proposition \ref{prop2} there exists a component $L_0$ of $L$, which is a simple smooth relatively closed curve in $X$, such that the free homotopy class of one of the elements of $\mathcal{E}$, say of $e'_0$, has positive intersection number with $L_0$. Put ${\sf e}'_0 =e'_0$. Moreover, the intersection number with $L_0$ is positive for the free homotopy class of one of the elements ${e_0''}^{\pm 1}$ or $e'_0 e''_0$. Denote this element by ${\sf e}_0''$. (Since $\reallywidehat{e'_0 e''_0}=\reallywidehat{e''_0 e'_0}$ we may also put ${\sf e}_0''=e''_0 e'_0$ if the free homotopy class of $e'_0 e''_0$ intersects $L_0$.) Put $\mathcal{E}_2'= \{{\sf e}_0', {\sf e}_0''\}$. The free homotopy class of each element of $\mathcal{E}_2'$ and of the product of its two elements intersects $L_0$. One of the ${\sf e}_0'$ and ${\sf e}_0''$ is an element of $\mathcal{E}$, the other is in $\mathcal{E}\cup \mathcal{E}^{-1}$ or is the product of two elements of $\mathcal{E}$. Each element of $\mathcal{E}$ is the product of at most two elements of $\mathcal{E}'_2\cup {\mathcal{E}'_2}^{-1}$. Move the base point $q_0$ to a point $q \in L_0$ along a curve $\alpha$, and consider the respective generators ${\sf e}'=\mbox{Is}_{\alpha}({\sf e}'_0)$ and ${\sf e}''=\mbox{Is}_{\alpha}({\sf e}''_0)$ of the fundamental group $\pi_1(X,q)$ with base point $q$. Since $F(L_0)\subset \mathcal{H}$ there is a complex affine mapping $M$ such that $M\circ F(q)\in C_3(\mathbb{R})\diagup \mathcal{S}_3$. Since $F$ is irreducible, the monodromy maps modulo center $(M\circ F)_*({\sf e}')\diagup \mathcal{Z}_3$ and $(M\circ F)_*({\sf e}'')\diagup \mathcal{Z}_3$ are not powers of a single standard generator $\sigma_j\diagup \mathcal{Z}_3$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ (see Lemma 7 of \cite{Jo5}). Hence, the second option of Lemma \ref{lem8a} occurs. We obtain that each of the $(M\circ F)_*({\sf e}')\diagup \mathcal{Z}_3$ and $(M\circ F)_*({\sf e}'')\diagup \mathcal{Z}_3$ is a product of at most two elements ${\sf{b}}_j$ of $\mathcal{B}_3\diagup \mathcal{S}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}_j))\leq 2\pi\lambda_3(X)$. Hence, $(M\circ F)_*(e')\diagup \mathcal{Z}_3$ and $(M\circ F)_*(e'')\diagup \mathcal{Z}_3$ are products of at most $4$ elements of $\mathcal{B}_3\diagup \mathcal{Z}_3$ with this property. The proposition is proved for tori with a hole. \noindent {\bf 2. A planar domain.} Let $X$ be a planar domain. Maybe, after applying a M\"obius transformation, we represent $X$ as the Riemann sphere with holes $\mathcal{C}_j$, $j=1,\ldots,m+1,$ such that $\mathcal{C}_{m+1}$ contains $\infty$. Recall, that the set $\mathcal{E}$ of generators $e_{j,0}, \, j=1,\ldots,m,$ of the fundamental group $\pi_1(X,q_0)$ with base point $q_0$ is chosen so that $e_{j,0}$ is represented by a loop with base point $q_0$ that surrounds $\mathcal{C}_j$ counterclockwise and does not surround any other hole. There is a connected component $L_0$ of $L$ of one of the following kinds. 
Either $L_0$ has limit points on the boundary of two different holes (one of them may contain $\infty$) (first kind), or a component $L_0$ has limit points on a single hole $\mathcal{C}_j,\, j\leq m+1,$ and $\mathcal{C}_j\cup L_0$ divides the plane $\mathbb{C}$ into two connected components each of which contains a hole (maybe, only the hole containing $\infty$) (second kind), or there is a compact component $L_0$ that divides $\mathbb{C}$ into two connected components each of which contains at least two holes (one of them may contain $\infty$). Indeed, suppose each non-compact component of $L$ has boundary points on the boundary of a single hole and the union of the component with the hole does not separate the remaining holes of $X$, and for each compact component of $L$ one of the connected components of its complement in $X$ contains at most one hole. Then there exists a base point $q_1$, a curve $\alpha_{q_1}$ in $X$ with initial point $q_0$ and terminating point $q_1$, and a representative of each element of $\mbox{Is}_{\alpha_{q_1}}(\mathcal{E})\subset\pi_1(X,q_1)$ that avoids $L$. Lemma \ref{lem7} implies that all monodromies modulo center are powers of a single periodic element of $\mathcal{B}_3\diagup \mathcal{Z}_3$ which is a contradiction. If there is a component $L_0$ of the first kind we may choose the same set of primitive elements $\mathcal{E}_2'\subset \mathcal{E}_2 \subset \pi_1(X,q_0)$ as in the proof of Proposition \ref{prop2a} in the planar case. The free homotopy class of each element of $\mathcal{E}_2'$ and of the product of two such elements intersects $L_0$. Moreover, each element of $\mathcal{E}$ is the product of at most two elements of $\mathcal{E}_2'$. Let $\alpha_q$ be a curve in $X$ with initial point $q_0$ and terminating point $q$, and $M$ a complex affine mapping, such that $(M\circ F)(q)\in C_3(\mathbb{R})\diagup \mathcal{S}_3$. Since $M\circ F$ is irreducible, the monodromies modulo center of $M\circ F$ along the elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ are not (trivial or non-trivial) powers of a single element $\sigma_j\diagup \mathcal{Z}_3$. Hence, for each element of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ there exists another element of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ so that the second option of Lemma \ref{lem8a} holds for this pair of elements of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$. Therefore, the monodromy modulo center of $M\circ F$ along each element of $\mbox{Is}_{\alpha}(\mathcal{E}_2')$ is the product of at most two elements ${\sf b}_j\in\mathcal{B}_3\diagup \mathcal{Z}_3$ of $\mathcal{L}_-$ not exceeding $2\pi \lambda_4(X)$, and the monodromy modulo center of $M\circ F$ along each element $\mbox{Is}_{\alpha}(\mathcal{E})$ is the product of at most $4$ elements of $\mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf b}_j))$, each not exceeding $2\pi \lambda_4(X)$. Suppose there is no component of the first kind but a component $L_0$ of the second kind. Assume first that all limit points of $L_0$ are on the boundary of a hole $\mathcal{C}_j$ that does not contain $\infty$. Put $\mathcal{E}_3'=\{e_{j,0}\} \cup_{1\leq k\leq m,\, k\neq j}\{ e_{j,0}^2 e_{k,0}\}$. Each element of $\mathcal{E}_3'$ is a primitive element and is the product of at most three generators contained in the set $\mathcal{E}$. Further, each element of $\mathcal{E}$ is the product of at most three elements of $\mathcal{E}_3'\cup {\mathcal{E}_3'}^{-1}$. 
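(The last statement can be verified directly: $e_{j,0}$ itself belongs to $\mathcal{E}_3'$, and for each $k\neq j$
\begin{equation}\nonumber
e_{k,0}\;=\;e_{j,0}^{-1}\,e_{j,0}^{-1}\,\big(e_{j,0}^{2}\,e_{k,0}\big)
\end{equation}
is the product of three elements of $\mathcal{E}_3'\cup {\mathcal{E}_3'}^{-1}$.)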
The free homotopy class of each element of $\mathcal{E}_3'$ and of each product of two different elements of $\mathcal{E}_3'$ intersects $L_0$. Indeed, any curve that is contained in the complement of $\mathcal{C}_j\cup L_0$ has either winding number zero around $\mathcal{C}_j$ (as a curve in the complex plane $\mathbb{C}$), or its winding number around $\mathcal{C}_j$ coincides with the winding number around each of the holes in the bounded connected component of $\mathcal{C}_j$. On the other hand the representatives of the free homotopy class of $e_{j,0}$ have winding number $1$ around $\mathcal{C}_j$ and winding number $0$ around each other hole that does not contain $\infty$. The representatives of the free homotopy class of $e_{j,0}^2 e_{k,0}$, $k\leq m,\, k\neq j$, have winding number $2$ around $\mathcal{C}_j$, winding number $1$ around $\mathcal{C}_k$, and winding number zero around each other hole $\mathcal{C}_l,\, l\leq m$. The argument for products of two elements of $\mathcal{E}_3'$ is the same. Choose a point $q\in L_0$, a curve $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$, and a complex affine mapping $M$ such that $M\circ F(q)\in C_3(\mathbb{R})\diagup \mathcal{S}_3$. Lemma \ref{lem8a} finishes the proof for this case in the same way as in the case when there is a component of first kind. In the present case each $(M\circ F)_*(\tilde{e})\diagup \mathcal{Z}_3$, $\tilde{e}\in {\rm Is}_{\alpha}(\mathcal{E}_3')$, can be written as a product of at most $2$ factors ${\sf{b}}\in \mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}))\leq 2\pi \lambda_6(X)$. Hence, each $(M\circ F)_*(e_j)\diagup \mathcal{Z}_3$, $e_j=\mbox{Is}_{\alpha}(e_{j,0})$, can be written as a product of at most $6$ factors ${\sf{b}}\in \mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}))\leq 2\pi \lambda_6(X)$. Assume that the limit points of $L_0$ are on the boundary of the hole $\mathcal{C}_{\infty}$ that contains $\infty$. Let $\mathcal{C}_{j_0}$ and $\mathcal{C}_{k_0}$ be holes that are contained in different components of $X \setminus (L_0 \cup \mathcal{C}_{\infty})$, and let $e _{j_0,0}$ and $e _{k_0,0}$ be the elements of $\mathcal{E}$ whose representatives surround $\mathcal{C}_{j_0}$, and $\mathcal{C}_{k_0}$ respectively. Denote by $\mathcal{E}'_3$ the set that consists of the elements $e _{j_0,0}e _{k_0,0}\,$, $\;e _{j_0,0}^2e _{k_0,0}\,$, and all elements $ e _{j_0,0}e _{k_0,0} \tilde {e}_0$ with $\tilde{e}_0$ running over $ \mathcal{E}\setminus\{ e _{j_0,0},e _{k_0,0}\}$. Each element of $\mathcal{E}_3'$ is the product of at most $3$ elements of $\mathcal{E}$, and each element of $\mathcal{E}$ is the product of at most $3$ elements of $\mathcal{E}'_3\cup (\mathcal{E}'_3)^{-1}$. Each element of $\mathcal{E}'_3$ and each product of at most two different elements of $\mathcal{E}'_3$ intersects $L_0$. Indeed, if a closed curve is contained in one of the components of $X\setminus (L_0\cup \mathcal{C}_{\infty})$ then its winding number around each hole contained in the other component is zero. But for all mentioned elements there is a hole in each component of $X\setminus (L_0 \cup \mathcal{C}_{\infty})$ such that the winding number of the free homotopy class of the element around the hole does not vanish. Lemma \ref{lem8a} applies with the same meaning of $q$, $\alpha$, and $M$ as before. 
Again, each $(M\circ F)_*(e_j)\diagup \mathcal{Z}_3$, $e_j=\mbox{Is}_{\alpha}(e_{j,0})$, can be written as a product of at most $6$ factors ${\sf{b}}\in \mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}))\leq 2\pi \lambda_6(X)$. Notice that in case of $m+1=3$ holes only these two possibilities for the curve $L_0$ may occur. Hence in this case we find a set $\mathcal{E}_3'=\{{\sf e}'_0,{\sf e}''_0\} \subset \pi_1(X,q_0)$, such that one of the elements of $\mathcal{E}_3'$ is the product of at most two elements of $\mathcal{E}\cup \mathcal{E}^{-1}$, and each of the monodromies $F_*({\sf e}'_0)$ and $F_*({\sf e}''_0)$ is the product of at most two elements ${\sf{b}}\in \mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}))\leq 2\pi \lambda_5(X)$. Moreover, $e$ and $e'$ are products of at most three factors, each an element of $\mathcal{E}_3'\cup \mathcal{E}_3'^{-1}$. Suppose there are no components of $L$ of the first or the second kind, but there is a connected component $L_0$ of $L$ of the third kind. Let $\mathcal{C}_{j_0}$ be a hole contained in the bounded component of the complement of $L_0$, and let $\mathcal{C}_{k_0},\, k_0\leq m,$ be a hole that is contained in the unbounded component of $X \setminus L_0$. Let $e _{j_0,0}$ and $e _{k_0,0}$ be the elements of $\mathcal{E}$ whose representatives surround $\mathcal{C}_{j_0}$, and $\mathcal{C}_{k_0}$ respectively. Consider the set $\mathcal{E}'_4$ consisting of the following elements: $e_{j_0,0} e_{k_0,0}$, $e_{j_0,0}^2 e_{k_0,0}$, and $e_{j_0,0}^2 e_{k_0,0} \tilde{ e}_0$ for each $\tilde{e}_0\in \mathcal{E}$ different from $e_{j_0,0}$ and $e_{k_0,0}$. Each element of $\mathcal{E}'_4$ is the product of at most $4$ elements of $\mathcal{E}$ and each element of $\mathcal{E}$ is the product of at most $3$ elements of $\mathcal{E}_4'\cup (\mathcal{E}_4')^{-1}$. The product of two different elements of $\mathcal{E}'_4$ is contained in $\mathcal{E}'_8$. The free homotopy classes of each element of $\mathcal{E}'_4$ and of each product of two different elements of $\mathcal{E}'_4$ intersects $L_0$. Indeed, if a loop is contained in the bounded connected component of $X\setminus L_0$, its winding number around the holes $\mathcal{C}_j\,, j\leq m,$ contained in the unbounded component is zero. If a loop is contained in the unbounded connected component of $X\setminus L_0$, its winding numbers around all holes contained in the bounded connected component are equal. But the winding number of $e_{j_0,0} e_{k_0,0}$ and $e_{j_0,0}^2 e_{k_0,0}$ around the hole $\mathcal{C}_{j_0}$ is positive and the winding number around the other holes that are contained in the bounded connected component of $X\setminus L_0$ vanishes, hence the representatives of these two elements cannot be contained in the unbounded component of $X\setminus L_0$. Since the winding number of representatives of these elements around $\mathcal{C}_ {k_0}$ is positive, the representatives cannot be contained in the bounded component of $X\setminus L_0$. For representatives of the elements $e_{j_0,0}^2 e_{k_0,0} \tilde{e}_0$ the winding number around $\mathcal{C}_{j_0}$ equals $2$, the winding number around any other hole in the bounded component of $X\setminus L_0$ is at most $1$, and the winding number around $\mathcal{C}_{k_0}$ equals $1$. Hence, the free homotopy classes of the mentioned elements must intersect both components of $X \setminus L_0$, hence they intersect $L_0$. 
Representatives of any product of two elements of $\mathcal{E}_4'$ have winding number around $\mathcal{C}_{j_0}$ at least $3$, the winding number around any other hole in the bounded component of $X\setminus L_0$ is at most $1$, and the winding number around $\mathcal{C}_{k_0}$ equals $2$. Hence, the free homotopy classes of these elements intersect $L_0$. For a point $q\in L_0$, a curve $\alpha$ in $X$ joining $q_0$ and $q$, and a complex affine mapping $M$ for which $M\circ F(q)\in C_3( \mathbb{R})\diagup \mathcal{S}_3$, an application of Lemma \ref{lem8a} proves that in this case each $(M\circ F)_*(e_j)\diagup \mathcal{Z}_3$, $e_j=\mbox{Is}_{\alpha}(e_{j,0})$, can be written as a product of at most $6$ factors ${\sf{b}}\in \mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf{b}}))\leq 2\pi \lambda_8(X)$. Proposition \ref{prop4} is proved in the planar case with a slightly better constant. \noindent {\bf 3. The general case.} Since not all monodromies are powers of a single element of $\mathcal{B}_3\diagup \mathcal{Z}_3$ that is either periodic or reducible, there exists a pair of generators $e_0'$, $e_0''$ in $\mathcal{E}$, such that the monodromies along them are not powers of a single periodic or reducible element. Consider the projection $\omega^{\langle e_0', e_0''\rangle}: \tilde{X}\to X(\langle e_0', e_0''\rangle)$. By the proof for tori with a hole or for $\mathbb{P}^1$ with three holes there exist a relatively closed curve $L_{\langle e_0', e_0''\rangle}$ in $X(\langle e_0', e_0''\rangle)$ and a M\"obius transformation $M$, such that for $F=M\circ f$ the mapping $F_{\langle e_0', e_0''\rangle} =F \circ \omega_{\langle e_0', e_0''\rangle}$ takes $L_{\langle e_0', e_0''\rangle}$ into $\mathcal{H}$, and takes a chosen point $q_{\langle e_0', e_0''\rangle}\in L_{\langle e_0', e_0''\rangle}$ to a point in $C_3(\mathbb{R})\diagup \mathcal{S}_3$. Choose a point $\tilde{q}\in \tilde X$, for which $\omega^{\langle e'_0, e''_0\rangle}(\tilde{q})=q_{\langle e_0, e'_0\rangle}$. Let $\tilde{\alpha}$ be a curve in $\tilde X$ with initial point $\tilde{q}_0$ and terminating point $\tilde{q}$. Then $\alpha_{\langle e_0', e''_0\rangle}\stackrel{def}=\omega^{\langle e'_0, e''_0\rangle}(\tilde{\alpha})$ is a curve in $X(\langle e'_0, e''_0\rangle)$ with initial point ${q_0}_{\langle e'_0, e''_0\rangle}$ and terminating point $ q_{\langle e'_0, e''_0\rangle}$, and the curve $\alpha_{\langle e'_0, e''_0\rangle}$ in $X(\langle e'_0, e''_0\rangle)$ and the point $\tilde{q}_0$ in the universal covering $\tilde X$ of $X(\langle e'_0, e''_0\rangle)$ are compatible. Put $\alpha=\omega_{\langle e'_0, e''_0\rangle}(\alpha_{\langle e'_0, e''_0\rangle})$, and for each $e_0\in \pi_1(X,q_0)$ we denote as before the element ${\rm Is}_{\alpha}({ e}_0)$ by $e$. As in the case of a torus with a hole or $\mathbb{P}^1$ with three holes there are elements ${\sf e}_0'$ and ${\sf e}_0''$, one of them contained in $\mathcal{E}$ or equal to the product of at most two factors among the $e'_0$ and $e''_0$, the second either contained in $\mathcal{E}\cup \mathcal{E}^{-1}$, or equal to the product of at most three factors among the $e'_0$ and $e''_0$, such that the free homotopy classes of $({\sf e}_0')_{\langle e'_0, e''_0\rangle}$, of $({\sf e}''_0)_{\langle e'_0, e''_0\rangle}$, and of their product intersect $ L_{\langle e'_0, e''_0\rangle}$. (For the definitions of $({\sf e}_0')_{\langle e'_0, e''_0\rangle}$ and of $({\sf e}''_0)_{\langle e'_0, e''_0\rangle}$ see paragraph 3.1.) 
Moreover, $e'_0$ and $e''_0$ are products of at most three factors, each being either $({\sf e}'_0)^{\pm 1}$ or $({\sf e}''_0)^{\pm 1}$. Put ${\sf e}'_{\langle e'_0, e''_0\rangle}={\rm Is}_{\alpha_{\langle e'_0, e''_0\rangle} } (({\sf e}_0')_{\langle e'_0, e''_0\rangle})$, ${\sf e}''_{\langle e'_0, e''_0\rangle}={\rm Is}_{\alpha_{\langle e'_0, e''_0\rangle} } ( ({\sf e}_0'')_{\langle e'_0, e''_0\rangle})$. By Lemma \ref{lem8a} each monodromy $(F_{\langle e'_0, e''_0\rangle})_*({\sf e}'_{\langle e'_0, e''_0\rangle})= F_*({\sf e}')$ and $(F_{\langle e'_0, e''_0\rangle})_*({\sf e}''_{\langle e'_0, e''_0\rangle})= F_*({\sf e}'')$ is the product of at most two elements ${\sf b}_j\in\mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf b}_j))\leq 2\pi \lambda_5(X)$. Since $e'$ and $e''$ are products of at most three elements among $({\sf e}')^{\pm 1}$ and $({\sf e}'')^{\pm 1}$, each of the monodromies $F_*(e')$ and $F_*(e'')$ is the product of at most $6$ elements ${\sf b}_j\in\mathcal{B}_3\diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta({\sf b}_j))\leq 2\pi \lambda_5(X)$. Take any element $e_0\in \mathcal{E}\setminus\{e'_0,e''_0\}$. Either the pair of monodromies ($F_*({\sf e}')$, $F_*(e)$) or the pair of monodromies ($F_*({\sf e}'')$, $F_*(e)$) does not consist of two powers of the same element of $\mathcal{B}_3\diagup \mathcal{S}_3$ that is either periodic or reducible. Suppose this is so for the pair ($F_*({\sf e}')$, $F_*(e)$). Let $L_{\langle {\sf e}'\rangle}$ be the connected component of $(\omega^{\langle {\sf e}', {\sf e}''\rangle}_{\langle {\sf e}'\rangle})^{-1}(L_{\langle {\sf e}',{\sf e}''\rangle})$ that contains $\omega^{\langle {\sf e}'\rangle}(\tilde{q})$. By Lemma \ref{lem1}, applied to the holomorphic projection $\tilde{X}\diagup ({\rm Is}^{\tilde{q}})^{-1}(\langle {\sf e}'\rangle) \to X(\langle {\sf e}', {\sf e}'' \rangle)$, the free homotopy class $\reallywidehat{{\sf e}'_{\langle {\sf e}'\rangle}}$ intersects $L_{\langle {\sf e}'\rangle}$. (For the definition of ${\sf e}'_{\langle {\sf e}'\rangle}$ see paragraph 3.1.) As in the proof of Proposition \ref{prop2} we consider the Riemann surface $X(\langle e, {\sf e}'\rangle)$ and the curve $L_{\langle e, {\sf e}'\rangle}= \omega^{\langle e, {\sf e}'\rangle}_ {\langle {\sf e}'\rangle}(L_{\langle {\sf e}'\rangle}) $ (see paragraph 3.3. of the proof of proposition \ref{prop2}). As there we see that the free homotopy class $\reallywidehat{ {\sf e}'_{\langle e, {\sf e}'\rangle}}$ intersects $L_{\langle e, {\sf e}'\rangle}$. The system $e_{\langle e, {\sf e}'\rangle}, {\sf e}'_{\langle e, {\sf e}'\rangle}$ is associated to a standard bouquet of circles for $X(\langle e, {\sf e}'\rangle)$ (though the representing curves of ${\sf e}'$ in $X$ may not be simple closed curves or may intersect representing curves of $e$). This can be seen in the same way as in the proof of Proposition \ref{prop2}. Apply the arguments, used for $X(\langle e',e''\rangle )$ and the generators $e'_{\langle e', e''\rangle},e''_{\langle e', e''\rangle}$ of the fundamental group $\pi_1(X(\langle e', e''\rangle), q_{\langle e', e''\rangle})$, to $X(\langle e, {\sf e}'\rangle)$ and the generators $e_{\langle e, {\sf e}'\rangle}, {\sf e}'_{\langle e, {\sf e}'\rangle}$ of the fundamental group $\pi_1(X(\langle e, {\sf e}'\rangle), q_{\langle e, {\sf e}'\rangle})$. 
In the case when $X(\langle e, {\sf e}'\rangle)$ is a torus with a hole, the intersection number of $\reallywidehat{{\sf e}'_{\langle e, {\sf e}'\rangle} }$ with $L_{\langle e, {\sf e}'\rangle}$ is non-zero. Put $\mathfrak{e}'={\sf e}'$. For one of the choices $e^{\pm 1}$, or ${\sf e}'\, e$, denoted by $\mathfrak{e}''$, the free homotopy classes of $\mathfrak{e}'_{\langle e, {\sf e}'\rangle}$, $\mathfrak{e}''_{\langle e, {\sf e}'\rangle}$, and of their product intersect $L_{\langle e, {\sf e}'\rangle}$. Moreover, $e$ is the product of at most two factors, each being $(\mathfrak{e}')^{\pm 1}$, or $(\mathfrak{e}'')^{\pm 1}$. In case $X(\langle e, {\sf e}'\rangle)$ is planar, the curve $L_{\langle e, {\sf e}'\rangle}$ must have limit points on the hole that corresponds to the generator ${\sf e}'_{\langle e, {\sf e}'\rangle}$ of the fundamental group $\pi_1(X(\langle e, {\sf e}'\rangle), q_{\langle e, {\sf e}'\rangle})$. We find elements $\mathfrak{e}'$ and $\mathfrak{e}''$ such that $\mathfrak{e}'= {\sf e}' $ and $\mathfrak{e}''$ is either equal to $e^{-1}$, or to the product of at most three factors, one being equal to $e$ and the others equal to ${\sf e}' $, and the free homotopy classes of $\mathfrak{e}'_{\langle e, {\sf e}'\rangle}$, $\mathfrak{e}''_{\langle e, {\sf e}'\rangle}$, and their product intersect $L_{\langle e, {\sf e}'\rangle}$. Moreover, $e$ is the product of at most $3$ factors, each being equal to $(\mathfrak{e}'')^{\pm 1}$ or $(\mathfrak{e}')^{\pm 1}$. In both cases for $X(\langle e, {\sf e}'\rangle)$ the element $\mathfrak{e}' \mathfrak{e}''$ is the product of at most $10$ elements of $\mathcal{E} \cup \mathcal{E}^{-1}$. Lemma \ref{lem8a} implies, that $F_*(\mathfrak{e})$ and $F_*(\mathfrak{e}')$ are products of at most two factors $\sf b$ with $\mathcal{L}_-(\vartheta({\sf b}))$ not exceeding $2\pi \lambda_{10}(X)$. Hence, $F_*(e)$ is the product of at most $6$ factors $\sf b$ with $\mathcal{L}_-(\vartheta({\sf b}))$ not exceeding $2\pi \lambda_{10}(X)$. We obtain the statement of Proposition \ref{prop4} in the general case. Proposition \ref{prop4} is proved. $\Box$ \noindent {\bf Proof of Theorem \ref{thm3}.} Let $X$ be a connected Riemann surface of genus $\sf g$ with ${\sf m}+1\geq 1$ holes. Since each holomorphic $(0,3)$-bundle with a holomorphic section on $X$ is isotopic to a holomorphic special $(0,4)$-bundle, we need to estimate the number of isotopy classes of irreducible smooth special $(0,4)$-bundles on $X$, that contain a holomorphic bundle. By Lemma 4 of \cite{Jo5} the monodromies of such a bundle are not powers of a single element of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ which is conjugate to a $\sigma_j\diagup \mathcal{Z}_3$, but they may be powers of a single periodic element of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ (equivalently, the isotopy class may contain a locally holomorphically trivial holomorphic bundle). Consider an irreducible special holomorphic $(0,4)$-bundle on $X$ which is not isotopic to a locally holomorphically trivial bundle. Let $F(x),\, x \in X,$ be the set of finite distinguished points in the fiber over $x$. By the Holomorphic Transversality Theorem \cite{KZ} the mapping $F:X\to C_3(\mathbb{C})\diagup \mathcal{S}_3$ can be approximated on relatively compact subsets of $X$ by holomorphic mappings that are transverse to $\mathcal{H}$. 
Similarly as in the proof of Theorem \ref{thm1} we will therefore assume in the following (after slightly shrinking $X$ to a deformation retract of $X$ and approximating $F$) that $F$ is transverse to $\mathcal{H}$. By Proposition \ref{prop4} there exists a complex affine mapping $M$ and a point $q\in X$ such that $M\circ F(q)$ is contained in $C_3(\mathbb{R})\diagup \mathcal{S}_3$, and for an arc $\alpha$ in $X$ with initial point $q_0$ and terminating point $q$ and each element $e_j\in {\rm Is}_{\alpha}(\mathcal{E})$ the monodromy $(M\circ F)_*(e_j)\diagup \mathcal{Z}_3$ of the bundle can be written as product of at most $6$ elements $\textsf{b}_{j,k},\, k=1,2,3,4,5,6,$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ with \begin{equation}\label{eq7} \mathcal{L}_-(\vartheta(\textsf{b}_{j,k})) \leq 2\pi \lambda_{10}(X). \end{equation} \noindent The mappings $F$ and $M\circ F$ from $X$ into the symmetrized configuration space are free homotopic. Consider an isotopy class of special $(0,4)$-bundles that corresponds to a conjugacy class of homomorphisms $\pi_1(X,q_0)\to \mathcal{B}_3\diagup \mathcal{Z}_3$ whose image is generated by a single periodic element of $\mathcal{B}_3\diagup \mathcal{Z}_3$. Up to conjugacy we may assume that this element is one of the following: ${\rm Id},\, \Delta_3\diagup \mathcal{Z}_3,\, (\sigma_1 \sigma_2 )\diagup \mathcal{Z}_3 , \,(\sigma_1 \sigma_2)^{-1} \diagup \mathcal{Z}_3$. For each of these elements $\textsf{b}$ the equality $\mathcal{L}_-(\vartheta(\textsf{b}))=0$ holds. Hence, in this case the isotopy class contains a smooth mapping $\tilde F$ such that for each $e_{j,0}\in \mathcal{E}$ the monodromy $(M\circ F)_*(e_{j,0})\diagup \mathcal{Z}_3$ of the bundle can be written as product of at most $6$ elements $\textsf{b}_{j,k},\, k=1,2,3,4,5,6,$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ satisfying inequality \eqref{eq7}. The same argument as in the proof of Theorem \ref{thm1} shows the following fact. Each irreducible free homotopy class of mappings $X\to C_3(\mathbb{C})\diagup \mathcal{S}_3$ that contains a holomorphic mapping contains a smooth mapping $\tilde F$ such that for each $e_{j,0}\in \mathcal{E}$ the monodromy $\tilde{F}_*(e_{j,0})\diagup \mathcal{Z}_3$ of the bundle can be written as product of at most $6$ elements $\textsf{b}_{j,k},\, k=1,2,3,4,5,6,$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ satisfying inequality \eqref{eq7}. Using Lemma 1 of \cite{Jo3} the number of elements of $ \textsf{b}\in \mathcal{B}_3 \diagup \mathcal{Z}_3$ (including the identity), for which $\mathcal{L}_-(\vartheta(\textsf{b}))\leq 2\pi \lambda_{10}(X)$, is estimated as follows. The element ${\sf w}\stackrel{def}=\vartheta({\sf b}) \in \mathcal{PB}_3 \diagup \mathcal{Z}_3$ can be considered as a reduced word in the free group generated by $a_1=\sigma_1^2\diagup \mathcal{Z}_3$ and $a_2=\sigma_2^2\diagup \mathcal{Z}_3$. By Lemma 1 of \cite{Jo3} there are no more than $ \frac{1}{2}\exp(6 \pi \lambda_{10}(X))+1\leq \frac{3}{2}\exp(6 \pi \lambda_{10}(X))$ reduced words $\sf w$ in $a_1$ and $a_2$ (including the identity) satisfying the inequality $\mathcal{L}_-(\textsf{w})\leq 2\pi \lambda_{10}(X)$. For a given element ${\sf w}\in \mathcal{PB}_3 \diagup \mathcal{Z}_3$ (including the identity) we describe now all elements ${\textsf{b}}$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ with $\vartheta(\textsf{b})=\textsf{w}$. If ${\textsf{w}}\neq \mbox{Id}$ these are the following elements. 
If the first term of ${\sf w}$ equals $a_j^k$ with $k\neq 0$, then the possibilities are $\textsf{b}={\sf w} \cdot (\Delta_3^{\ell}\diagup \mathcal{Z}_3)$ with $\ell=0$ or $1$, $\textsf{b}=(\sigma_j^{{\rm sgn}\, k}\diagup\mathcal{Z}_3)\cdot {\sf w} \cdot (\Delta_3^{\ell}\diagup \mathcal{Z}_3)$ with $\ell=0$ or $1$, or $\textsf{b}=(\sigma_{j'}^{\pm 1}\diagup \mathcal{Z}_3) \cdot {\sf w} \cdot (\Delta_3^{\ell}\diagup \mathcal{Z}_3)$ with $\ell=0$ or $1$ and $\sigma_{j'}\neq \sigma_{j}$. Hence, for $\textsf{w}\neq \mbox{Id}$ there are $8$ possible choices of elements ${\textsf{b}}\in \mathcal{B}_3 \diagup \mathcal{Z}_3$ with $\vartheta({\textsf{b}})= {\sf w} $. If $\textsf{w}= \mbox{Id}$ then the choices are $\Delta_3^{\ell}\diagup \mathcal{Z}_3$ and $(\sigma_j^{\pm 1}\Delta_3^{\ell})\diagup \mathcal{Z}_3$ for $j=1,2,$ and $\ell=0$ or $\ell=1$. These are $10$ choices. Hence, there are no more than $15 \exp( 6 \pi \lambda_{10}(X))$ different elements $\textsf{b}\in \mathcal{B}_3 \diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta(\textsf{b}))\leq 2 \pi \lambda_{10}(X)$. Each monodromy is the product of at most six elements ${\sf{b}}_j$ of $\mathcal{B}_3 \diagup \mathcal{Z}_3$ with $\mathcal{L}_-(\vartheta(\textsf{b}_j))\leq 2 \pi \lambda_{10}(X)$. Hence, for each monodromy there are no more than $(15 \exp( 6 \pi \lambda_{10}(X)))^{6}$ possible choices. We proved that there are up to isotopy no more than $(15 \exp( 6 \pi \lambda_{10}(X)))^{6(2g+m)}$ irreducible holomorphic $(0,3)$-bundles with a holomorphic section over $X$. Theorem \ref{thm3} is proved. $\Box$ Notice that we proved a slightly stronger statement, namely, over a Riemann surface of genus $g$ with $m+1\geq 1$ holes there are no more than $(15 \exp( 6 \pi \lambda_{10}(X)))^{6(2g+m)}$ isotopy classes of smooth $(0,3)$-bundles with a smooth section that contain a holomorphic bundle with a holomorphic section that is either irreducible or isotopic to the trivial bundle. \noindent {\bf Proof of Theorem \ref{thm2}.} Proposition \ref{prop1} and Theorem \ref{thm3} imply Theorem \ref{thm2} as follows. Suppose an isotopy class of smooth $(1,1)$-bundles over a finite open Riemann surface $X$ contains a holomorphic bundle. By Proposition \ref{prop1} the class contains a holomorphic bundle which is the double branched covering of a holomorphic special $(0,4)$-bundle. If the $(1,1)$-bundle is irreducible then also the $(0,4)$-bundle is irreducible. There are up to isotopy no more than $\big(15(\exp(6 \pi \lambda_{10}(X)))\big)^{6(2g+m)}$ holomorphic special $(0,4)$-bundles over $X$ that are either irreducible or isotopic to the trivial bundle. By Theorem G and Theorem \ref{thm3} there are no more than $\big(15(\exp(6 \pi \lambda_{10}(X)))\big)^{6(2g+m)}$ conjugacy classes of monodromy homomorphisms that correspond to a special holomorphic $(0,4)$-bundle over $X$ that is either irreducible or isotopic to the trivial bundle. Each monodromy homomorphism of the holomorphic double branched covering is a lift of the respective monodromy homomorphism of the holomorphic special $(0,4)$-bundle. Different lifts of a monodromy mapping class of a special $(0,4)$-bundle differ by involution, and the fundamental group of $X$ has $2g+m$ generators.
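As a purely numerical aside (not part of the argument), the following short Python sketch evaluates the logarithm of the bound just obtained and checks the elementary identity $2^{2g+m}\big(15\exp(6\pi\lambda_{10}(X))\big)^{6(2g+m)}=\big(2\cdot 15^6\cdot\exp(36\pi\lambda_{10}(X))\big)^{2g+m}$ used in the next sentence; the values of $g$, $m$ and $\lambda_{10}(X)$ below are merely illustrative and are not taken from the text.
\begin{verbatim}
import math

def log_bound_thm3(g, m, lam):
    # natural logarithm of (15*exp(6*pi*lam))^(6*(2g+m));
    # we work with logarithms because the bound itself overflows a float
    return 6 * (2 * g + m) * (math.log(15.0) + 6.0 * math.pi * lam)

def log_bound_thm2(g, m, lam):
    # natural logarithm of 2^(2g+m) * (15*exp(6*pi*lam))^(6*(2g+m))
    return (2 * g + m) * math.log(2.0) + log_bound_thm3(g, m, lam)

g, m, lam = 2, 3, 0.7   # illustrative values only
lhs = log_bound_thm2(g, m, lam)
rhs = (2 * g + m) * (math.log(2.0) + 6.0 * math.log(15.0) + 36.0 * math.pi * lam)
print(lhs, rhs, abs(lhs - rhs) < 1e-9)   # prints two equal numbers and True
\end{verbatim}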
Using Theorem G for $(1,1)$-bundles, we see that there are no more than $2^{2g+m}\big(15(\exp(6 \pi \lambda_{10}(X)))\big)^{6(2g+m)}=\big(2 \cdot 15^6\cdot\exp(36 \pi \lambda_{10}(X))\big)^{2g+m}$ isotopy classes of $(1,1)$-bundles that contain a holomorphic bundle that is either irreducible or isotopic to the trivial bundle. Theorem \ref{thm2} is proved. $\Box$ \noindent For convenience of the reader we give the short proofs of the Corollaries \ref{cor1a} and \ref{cor1b}. Such statements are known in principle, but the case considered here is especially simple. \noindent {\bf Proof of Corollary \ref{cor1a}.} We will prove that on a punctured Riemann surface there are no non-constant reducible holomorphic mappings to the twice punctured complex plane and that any homotopy class of mappings from a punctured Riemann surface to the twice punctured complex plane contains at most one holomorphic mapping. This implies the corollary. Recall that a holomorphic mapping $f$ from any punctured Riemann surface $X$ to the twice punctured complex plane extends by Picard's Theorem to a meromorphic function $f^c$ on the closed Riemann surface $X^c$. Suppose now that $X$ is a punctured Riemann surface and that the mapping $f:X \to \mathbb{C}\setminus \{-1,1\}$ is reducible, i.e. it is homotopic to a mapping into a punctured disc contained in $\mathbb{C}\setminus \{-1,1\}$. Perhaps after composing $f$ with a M\"obius transformation we may suppose that this puncture equals $-1$. Then the meromorphic extension $f^c$ omits the value $1$. Indeed, if $f^c$ was equal to $1$ at some puncture of $X$, then $f$ would map the boundary of a sufficiently small disc on $X^c$ that contains the puncture to a loop in $\mathbb{C}\setminus \{-1,1\}$ with non-zero winding number around $1$ , which contradicts the fact that $f$ is homotopic to a mapping into a disc punctured at $-1$ and contained in $\mathbb{C}\setminus \{-1,1\}$. Hence, $f^c$ is a meromorphic function on a compact Riemann surface that omits a value, and, hence $f$ is constant. Hence, on a punctured Riemann surface there are no non-constant reducible holomorphic mappings to $\mathbb{C}\setminus \{-1,1\}$. Suppose $f_1$ and $f_2$ are non-constant homotopic holomorphic mappings from the punctured Riemann surface $X$ to the twice punctured complex plane. Then for their meromorphic extensions $f^c_1$ and $f^c_2$ the functions $f^c_1-1$ and $f^c_2-1$ have the same divisor on the closed Riemann surface $X^c$. Indeed, suppose, for instance, that $f^c_1-1$ has a zero of order $k >0$ at a puncture $p$. Then for the boundary $\gamma$ of a small disc in $X^c$ around $p$ the curve $(f_1-1) \circ \gamma$ in $\mathbb{C}\setminus \{-2,0\}$ has index $k$ with respect to the origin. Since $f_2-1$ is homotopic to $f_1-1$ as mapping to $\mathbb{C}\setminus \{-2,0\}$, the curve $(f_2-1) \circ \gamma$ is free homotopic to $(f_1-1) \circ \gamma$ . Hence, $f_2-1$ has a zero of order $k$ at $p$. Applying the same arguments with $0$ replaced by $\infty$, we obtain that $f^c_1-1$ and $f^c_2-1$ have the same divisor. Hence, $f^c_1-1$ and $f^c_2-1$ differ by a non-zero multiplicative constant. Since the functions are non-constant they must take the value $-2$. By the same reasoning as above the functions are equal to $-2$ simultaneously. Hence, the multiplicative constant is equal to $1$. We proved that non-constant homotopic holomorphic maps from punctured Riemann surfaces to $\mathbb{C}\setminus \{-1,1\}$ are equal. 
$\Box$ \noindent {\bf Proof of Corollary \ref{cor1b}.} We need the following fact. For each special $(0,4)$-bundle $\mathfrak{F}=(X\times \mathbb{P}^1, {\rm pr}_1, {\mathbold E}, X)$ there is a finite unramified covering $\hat{{\sf P}}:\hat X\to X$ of $X$, such that $\mathfrak{F}$ lifts to a special $(0,4)$-bundle $(\hat{X}\times\mathbb{P}^1,{\rm pr}_1,\hat{\mathbold{E}},X)$, for which the complex curve $\hat{\mathbold{E}}$ is the union of four disjoint complex curves $\hat{\mathbold{E}}^k,\, k=1,2,3,4,$ each intersecting each fiber $\{\hat{x}\}\times\mathbb{P}^1$ along a single point $(\hat{x},\,\hat{g}^k(\hat{x}))$. This can be seen as follows. Let $q_0$ be the base point of $X$. The monodromy mapping class along each element $e$ of $\pi_1(X,q_0)$ takes the set of distinguished points ${\mathbold{E}}\cap (\{q_0\}\times\mathbb{P}^1)$ onto itself, permuting them by a permutation $\sigma(e)$. Consider the set $ N$ of elements $e\in \pi_1(X,q_0)$ for which $\sigma(e)$ is the identity. The set $N$ is a normal subgroup of $\pi_1(X,q_0)$. Its index is finite, since two left cosets $e_1\,N$ and $e_2 \, N$ are equal if $ \sigma(e_2\,e_1^{-1}) = \sigma(e_2)\sigma(e_1)^{-1}={\rm Id}$, and there are only finitely many distinct permutations of points of ${\mathbold{E}}\cap (\{q_0\}\times\mathbb{P}^1)$. The quotient $\hat{X}\stackrel{def}=\tilde {X}\diagup {\rm Is}^{\tilde{q}_0}(N)$ of the universal covering of $X$ by the group of covering transformations corresponding to $N$ and the canonical projection $\hat{X}\to X$ define the required covering. To prove the corollary, we have to show, that any reducible holomorphic $(1,1)$-bundle over a punctured Riemann surface $X$ is locally holomorphically trivial, and that two isotopic (equivalently, smoothly isomorphic) holomorphically non-trivial holomorphic $(1,1)$-bundles over $X$ are holomorphically isomorphic. The second fact is obtained as follows. Suppose the holomorphically non-trivial holomorphic $(1,1)$-bundles $\mathfrak{F}_j,\,j=1,2,$ have conjugate monodromy homomorphisms. By Proposition \ref{prop1} each $\mathfrak{F}_j$ is holomorphically isomorphic to a double branched covering of a special holomorphic $(0,4)$-bundle $(X\times\mathbb{P}^1,{\rm pr}_1,\mathbold{E}_j,X)\stackrel{def}= {P}(\mathfrak{F}_j)$. The bundles ${P}(\mathfrak{F}_j)$ are isotopic, since they have conjugate monodromy homomorphisms. There is a finite unramified covering $\hat{{\sf P}}:\hat X\to X$ of $X$, such that the bundles $ P(\mathfrak{F}_j)$ have isotopic lifts $(\hat{X}\times\mathbb{P}^1,{\rm pr}_1,\hat{\mathbold{E}}_j,X)$ to $\hat X$, and for each $j$ the complex curve $\hat{\mathbold{E}}_j$ is the union of four disjoint complex curves $\hat{\mathbold{E}}_l^k,\, k=1,2,3,4,$ each intersecting each fiber $\{\hat{x}\}\times\mathbb{P}^1$ along a single point $(\hat{x},\,\hat{g}_j^k(\hat{x}))$. The lifted bundles are not isotopic to the trivial bundle. The mappings $\hat{X}\ni \hat{x}\to \hat{g}_j^k(\hat{x})$ are holomorphic. We may assume that $\hat{g}_j^4(\hat{x})=\infty$ for each $\hat{x}$. Define for $j=1,2,$ a holomorphic isomorphism of the bundle $(\hat{X}\times\mathbb{P}^1,{\rm pr}_1,\hat{\mathbold{E}}_j,X)$ by $$ \{\hat{x}\}\times \mathbb{P}^1\ni (\hat{x},\,\zeta)\to \Big(\hat{x},\,-1 + 2\frac{\hat{g}_j^1(\hat{x})-\zeta}{\hat{g}_j^1(\hat{x})-\hat{g}_j^2(\hat{x}) }\Big)\,. 
$$ The image $\hat{\mathbold{E}}'_j$ of $\hat{\mathbold{E}}_j$ under the $j$-th isomorphism intersects the fiber over each $\hat{x}\in \hat X$ along the four points $(\hat{x},-1),\,(\hat{x},1),\,(\hat{x},\infty),$ and $(\hat{x},\mathring{g}_j(\hat{x}))$ for a holomorphic function $\mathring{g}_j$ on $\hat X$ that avoids $-1,\,1$ and $\infty$. The functions $\;\mathring{g}_j\;$, $j=1,2,\;$ are homotopic, since the bundles are isotopic. They are not homotopic to a constant function since the bundles are not isotopic to the trivial bundle. By Corollary \ref{cor1a} the functions $\;\mathring{g}_1\;$ and $\;\mathring{g}_2\;$ coincide. Hence, the bundles $\;\;(\hat{X}\times\mathbb{P}^1,{\rm pr}_1,\hat{\mathbold{E}}_j,X)\;\;$ are holomorphically isomorphic. This means that there is a nowhere vanishing holomorphic function $\hat{\alpha}$ on $\hat X$, such that for each $\hat{x}\in \hat{X}$ the equality $\{\hat{x}\}\times\hat{E}_2(\hat{x})=\{\hat{x}\}\times\hat{\alpha}(\hat{x}) \hat{E}_1(\hat{x})$ holds. Here $\hat{E}_j(\hat{x})$ is defined by the equality $\hat{\mathbold{E}}_j\cap (\{\hat{x}\}\times \mathbb{P}^1)=\{\hat{x}\}\times \hat{E}_j(\hat{x})$. Define also $E_j(x)$ by the equality $\{x\}\times E_j(x)=\mathbold{E}_j\cap (\{{x}\}\times \mathbb{P}^1)$. For a point $x\in X$ and $\hat{x}_1,\hat{x}_2\in {\hat{{\sf P}}}^{-1}(x)$ the equalities $\hat{{E}}_j(\hat{x}_1)= \hat{{E}}_j(\hat{x}_2)=E_j(x),\,j=1,2,$ hold. Hence, $E_2(x)=\hat{\alpha}(\hat{x}_1){E}_1({x})=\hat{\alpha}(\hat{x}_2) {E}_1({x})$. For a set $E\subset C_3(\mathbb{C})\diagup \mathcal{S}_3$ and a complex number $\alpha$ the equality $E=\alpha E$ is possible only if $\alpha=1$, or $\alpha= -1$ and $E$ is obtained from $\{-1,0,1\}$ by multiplication with a non-zero complex number, or $\alpha=e^{\pm \frac{2\pi i}{3}}$ and $E$ is obtained from the set of vertices of an equilateral triangle with barycenter $0$ by multiplication with a non-zero complex number. For $x$ in a small open disc on $X$ and $x\to\hat{x}_j(x),\,j=1,2,$ being two local inverses of $\hat{{\sf P}}$ the functions $x\to \hat{\alpha}(\hat{x}_j(x))$ are two analytic functions whose ratio is contained in a finite set, hence the ratio is equal to a constant. If the constant was different from one, then all fibers of ${ P}(\mathfrak{F}_1)$ would be conformally equivalent to each other and, hence, ${ P}(\mathfrak{F}_1)$ would be locally holomorphically trivial. Since the bundles $\mathfrak{F}_j$, and, hence, also the ${P}(\mathfrak{F}_j)$, are locally holomorphically non-trivial, the ratio of the two functions equals $1$. We saw that for each pair of points $\hat{x}_1,\hat{x}_2\in \hat{X}$, that project to the same point $x\in X$, $\hat{\alpha}(\hat{x}_1)=\hat{\alpha}(\hat{x}_2)$. Put $\alpha(x)=\hat{\alpha}(\hat{x}_j)$ for any point $\hat{x}_j\in (\hat{{\sf P}})^{-1}(x)$. We obtain ${E}_2({x})={\alpha}({x}){E}_1({x})$, that means, the bundles ${P}(\mathfrak{F}_j)$ are holomorphically isomorphic. Since the bundles $\mathfrak{F}_j$, $j=1,2,$ are double branched coverings of the ${P}(\mathfrak{F}_j)$ and have conjugate monodromy homomorphism, they are holomorphically isomorphic. The first fact is obtained as follows. After a holomorphic isomorphism we may assume that the reducible holomorphic $(1,1)$-bundle is a double branched covering of a reducible special $(0,4)$-bundle $P(\mathfrak{F})=(X\times \mathbb{P}^1, {\rm pr}_1, \mathring{\mathbold{E}}\cup \mathbold{s}^{\infty}, X)$. 
After a further isomorphism the bundle $P(\mathfrak{F})$ lifts to a holomorphic bundle $\reallywidehat{P(\mathfrak{F})}= (\hat{X} \times \mathbb{P}^1, {\rm pr}_1, \hat{\mathring{\mathbold{E}}}\cup \widehat{\mathbold{s}^{\infty}},\hat{ X}) $, such that $\hat{\mathring{\mathbold{E}}}$ intersects each fiber $\{\hat{x}\}\times\mathbb{P}^1$ along a set of the form $\{\hat{x}\}\times \{-1,1,\mathring{g}(\hat{x})\}$. Since $\mathfrak{F}$ is reducible, so are $P(\mathfrak{F})$ and $\reallywidehat{P(\mathfrak{F})}$, and hence the mapping $\mathring{g}$ is constant by Corollary \ref{cor1a}. Hence, all fibers of $\reallywidehat{P(\mathfrak{F})}$ are conformally equivalent, and, hence, all fibers of $P(\mathfrak{F})$ are conformally equivalent. Since $\mathfrak{F}$ is the double branched covering of $P(\mathfrak{F})$, all fibers of $\mathfrak{F}$ are conformally equivalent. The first fact is proved. $\Box$ \noindent {\bf Proof of Proposition \ref{prop1a}.} Denote by $S^{\alpha}$ a skeleton of $T^{\alpha,\sigma} \subset T^{\alpha}$ which is the union of two circles each of which lifts under the covering ${\sf P}:\mathbb{C}\to T^{\alpha}$ to a straight line segment which is parallel to an axis in the complex plane. Denote the intersection point of the two circles by $q_0$. Note that $S^{\alpha}$ is a standard bouquet of circles for $T^{\alpha,\sigma}$ with base point $q_0$, and ${\sf P}^{-1}(T^{\alpha,\sigma})$ is the $\frac{\sigma}{2}$-neighbourhood of ${\sf P}^{-1}(S^{\alpha})$. Denote by $e$ the generator of $\pi_1(T^{\alpha,\sigma},q_0)$ that lifts to a vertical line segment and by $e'$ the generator of $\pi_1(T^{\alpha,\sigma},q_0)$ that lifts to a horizontal line segment. Put $\mathcal{E}=\{e,e'\}$. We show first the inequality \begin{equation}\label{eq20} \lambda_3(T^{\alpha,\sigma}) \leq \frac{4(2\alpha+1)}{\sigma}\;. \end{equation} For this purpose we take any primitive element $e''$ of the fundamental group $\pi_1(T^{\alpha,\sigma},q_0)$ which is the product of at most three factors, each of the factors being an element of $\mathcal{E}$ or the inverse of an element of $\mathcal{E}$. We represent the element $e''$ by a piecewise $C^1$ mapping $f_1$ from an interval $[0,l_1]$ to the skeleton $S^{\alpha}$. We may consider $f_1$ as a piecewise $C^1$ mapping from the circle $\mathbb{R}\diagup (x \sim x+l_1)$ to the skeleton, and assume that for all points $t'$ of the circle where $f_1$ is not smooth, $f_1(t')=q_0$. Let $t_0\in [0,l_1]$ be a point for which $f_1(t_0)\neq q_0$. Let $\tilde{f}_1$ be a piecewise smooth mapping from $[t_0,t_0+l_1]$ to the universal covering $\mathbb{C}$ of $T^{\alpha}\supset T^{\alpha,\sigma}$ which projects to $f_1$. We may take $f_1$ so that the equality $|\tilde{f}_1'|=1$ holds. The mapping may be chosen so that $l_1\leq 2 \alpha + 1$. (Recall that $\alpha\geq 1$ and the element $e''$ is primitive.) Take any $t'$ for which $f_1$ is not smooth. We may assume that $f_1$ is chosen so that the direction of $\tilde{f}_1'$ changes by the angle $\pm \frac{\pi}{2}$ at each such point. Hence, there exists a neighbourhood $I(t')$ of $t'$ on $(t_0,t_0+l_1)$, such that the restriction $\tilde{f}_1| I(t')$ covers two sides of a square of side length $\frac{\sigma}{2}$. Denote by $\tilde{q}'_0$ the common vertex $\tilde{f}_1(t')$ of these sides, and by $\tilde{q}''_0$ the vertex of the square that is not a vertex of one of the two sides.
Replace the union of the two sides of the square that contain $\tilde{q}'_0$ by a quarter-circle of radius $\frac{\sigma}{2}$ with center at the vertex $\tilde{q}''_0$, and parameterize the latter by $t \to \frac{\sigma}{2} e^{\pm i\frac{2}{\sigma}t}$ so that the absolute value of the derivative equals $1$. Notice that the quarter-circle is shorter than the union of the two sides. Proceed in this way with all such points $t'$. After a reparameterization we obtain a $C^1$ mapping $\tilde f$ of the interval $[0,l]$ of length $l$ not exceeding $2 \alpha +1$ whose image is contained in the union of ${\sf P}^{-1}(S^{\alpha})$ with some quarter-circles, such that $|\tilde{f}'|=1$. The distance of each point of the image of $\tilde f$ to the boundary of ${\sf P}^{-1}(T^{\alpha,\sigma})$ is not smaller than $\frac{\sigma}{2}$. The mapping $\tilde f$ is piecewise of class $C^2$. The normalization condition $|\tilde{f}'|=1$ implies $|\tilde{f}''|\leq \frac{2}{\sigma}$. The projection $f={\sf P}\circ\tilde{f} $ can be considered as a mapping from the circle $\mathbb{R}\diagup (x\sim x +l)$ of length $l$ not exceeding $2 \alpha + 1$ to $T^{\alpha,\sigma}$, that represents the free homotopy class $\reallywidehat{e''}$ of the chosen element of the fundamental group. Consider the mapping $ x+iy \to \tilde{F}(x+iy) \stackrel{def}=\tilde{f}(x) + i \tilde{f}'(x) y \in \mathbb{C}$, where $x+iy$ runs along the rectangle $R_l= \{x+iy \in \mathbb{C}: x\in [0,l], |y| \leq \frac{\sigma}{4}\}$. The image of this mapping is contained in the closure of ${\sf P}^{-1}(T^{\alpha,\sigma})$. Since $2\frac{\partial}{\partial z} \tilde{F} (x+iy)= 2 \tilde{f}'(x) + i \tilde{f}''(x) y $ and $2\frac{\partial}{ \partial \bar z} \tilde{F}(x+iy)= i \tilde{f}''(x) y $, the Beltrami coefficient $\mu _{\tilde{F}}(x+iy)= \frac{\frac{\partial}{\partial \bar z} \tilde{F} (x+iy)}{\frac{\partial}{\partial z} \tilde{F} (x+iy)}$ of $\tilde{F}$ satisfies the inequality $|\mu _{\tilde{F}}(x+iy) | \leq \frac{1}{3}$. Indeed, $|\tilde{f}''(x)\, y|\leq \frac{2}{\sigma}\cdot\frac{\sigma}{4}=\frac{1}{2}$, while $|2 \tilde{f}'(x) + i \tilde{f}''(x) y|\geq 2-\frac{1}{2}=\frac{3}{2}$. Hence, for $K=\frac{1+\frac{1}{3}}{1-\frac{1}{3}}=2$ the mapping $\tilde{F}$ descends to a $K$-quasiconformal mapping $F$ from the annulus $A_l\stackrel{def}=R_l\diagup (x\sim x+l)$ to $T^{\alpha,\sigma}$ of extremal length $\lambda(A_l)=\frac{l}{\frac{\sigma}{2}}\leq \frac{2(2 \alpha +1)}{\sigma}$ that represents the free homotopy class of the element $e''$ of the fundamental group $\pi_1(T^{\alpha,\sigma} ,q_0)$. Realize $A_l$ as an annulus in the complex plane. Let $\varphi$ be the solution of the Beltrami equation on $\mathbb{C}$ with Beltrami coefficient $\mu_{\tilde{F}}$ on $A_l$ and zero elsewhere. Then the mapping $g=F \circ \varphi^{-1}$ is a holomorphic mapping of the annulus $\varphi(A_l)$ of extremal length not exceeding $K \lambda(A_l) \leq \frac{4(2\alpha +1 )}{\sigma} $ into $T^{\alpha,\sigma}$ that represents the chosen element of the fundamental group $\pi_1(T^{\alpha,\sigma},q_0)$. Inequality \eqref{eq20} is proved. By Theorem \ref{thm1} for tori with a hole there are up to homotopy no more than $3(\frac{3}{2}e^{24 \pi \lambda_3(T^{\alpha,\sigma})})^2\leq \frac{27}{4}e^{3\cdot 2^4 \pi \frac{2\alpha +1}{\sigma}}< 7e^{3\cdot 2^4 \pi \frac{2\alpha +1}{\sigma}} $ non-constant irreducible holomorphic mappings from $T^{\alpha,\sigma}$ to the twice punctured complex plane. We give now the proof of the lower bound. Let $\delta=\frac{1}{10}$. We consider the annulus $A^{\alpha,\delta}\stackrel{def}=\{z\in \mathbb{C}:|\mbox{Re}z|< \frac{5\delta}{2}\}\diagup (z\sim z+\alpha i)$. The extremal length of the annulus equals $\frac{\alpha}{5\delta}=2\alpha$.
For any natural number $j$ we consider all elements of $\pi_1(\mathbb{C}\setminus \{-1,1\},0)$ of the form \begin{equation}\label{eq21} a_1^{\pm 2}a_2^{\pm 2} \ldots a_1^{\pm 2}a_2^{\pm 2} \end{equation} containing $2j$ terms, each of the form $a_j^{\pm 2}$. The choice of the sign in the exponent of each term is arbitrary. There are $2^{2j}$ elements of this kind. By \cite{Jo2} there is a relatively compact domain $G$ in the twice punctured complex plane $\mathbb{C}\setminus \{-1,1\}$ and a positive constant $C$ such that the following holds. For each $j$, each element of the fundamental group of the form \eqref{eq21}, and for each annulus of extremal length at least $2Cj$ there exists a base point $q$ in the annulus, and a holomorphic mapping from the annulus to $G$ that maps $q$ to $0$ and represents the element. Put $j=[\frac{\alpha}{10C\delta}]$, where $[x]$ is the largest integer not exceeding a positive number $x$. Then each element of the form \eqref{eq21} with this number $j$ can be represented by a holomorphic map $\mathring{g}$ from the annulus $A^{\alpha,\delta}$ to $G$. There is a constant $C_1$ that depends only on $G$ such that the mapping $\mathring{g}$ satisfies the inequality $|\mathring{g}|<C_1$. Let $g$ be the lift of $\mathring{g}$ to a mapping from the strip $\{z\in \mathbb{C}:|\mbox{Re}z|< \frac{5\delta}{2}\}$ to $G$. On the thinner strip $\{|\mbox{Re}z|< \frac{3\delta}{2}\}$ the derivative of $g$ satisfies the inequality $|g'|\leq \frac{C_1}{\delta}$. We will associate to the holomorphic mapping $\mathring{g}$ on the annulus a smooth mapping $g_1$ from $T^{\alpha,\delta}\subset T^{\alpha}$ to $G$, such that (with $\sf P$ being the projection ${\sf P}:\mathbb{C}\to T^{\alpha}$) the monodromy along the circle ${\sf P}(\{\mbox{Re}z=0\})$ with base point ${\sf P}(0)$ is equal to \eqref{eq21}, and the monodromy along ${\sf P}(\{\mbox{Im}z=0\}$ with the same base point equals the identity. This is done as follows. Let $F_{\alpha}=[-\frac{1}{2},\frac{1}{2})\times [-\frac{\alpha}{2},\frac{\alpha}{2})\subset \mathbb{C}$ be a fundamental domain for the projection ${\sf P}:\mathbb{C}\to T^{\alpha}$. Put $\Delta^{\alpha,\delta}=F_{\alpha}\cap {\sf P}^{-1}(T^{\alpha,\delta})$. Let $\chi_0:[0,1]\to \mathbb{R}$ be a non-decreasing function of class $C^2$ with $\chi_0(0)=0,\, \chi_0(1)=1$, $\chi'_0(0)=\chi'_0(1)=0$ and $|\chi_0'(t)|\leq \frac{3}{2}$. Define $\chi:[\frac{-3\delta}{2},\frac{+3\delta}{2}]\to [0,1]$ by \begin{equation}\label{eq22} \chi(t) = \begin{cases} \chi_0(\frac{1}{\delta}t+ \frac{3}{2})& \; t \in [\frac{-3\delta}{2},\frac{-\delta}{2}] \\ 1 & \; t \in [\frac{-\delta}{2},\frac{+\delta}{2}]\\ \chi_0(-\frac{1}{\delta}t+ \frac{3}{2})& \; t \in [\frac{\delta}{2},\frac{3\delta}{2}]\,. \end{cases} \end{equation} Notice that $\chi$ is a $C^2$-function that vanishes at the endpoints of the interval $[\frac{-3\delta}{2},\frac{+3\delta}{2}]$ together with its first derivative, is non-decreasing on $[\frac{-3\delta}{2},\frac{-\delta}{2}]$, and non-increasing on $[\frac{\delta}{2},\frac{3\delta}{2}]$. Put $g_1(z)= \chi (\mbox{Re}z)\; g(z) + (1-\chi (\mbox{Re}z))\; g(0)$ for $z$ in the intersection of $\Delta^{\alpha,\delta}$ with $\{|\mbox{Re}z|<\frac{3\delta}{2}\}$, and $g_1(z)=g(0)$ for $z$ in the rest of $\Delta^{\alpha,\delta}$. Put $\varphi(z)= \frac{\partial}{\partial \bar z} g_1(z)$ on $\Delta^{\alpha,\delta}$. 
Since $\frac{\partial}{\partial \bar z} \chi (\mbox{Re} z)=0$ for $|\mbox{Re} z|<\frac{\delta}{2}$ and for $|\mbox{Re} z|>\frac{3\delta}{2}$, the function $\varphi(z)$ vanishes on $\Delta^{\alpha,\delta}\setminus Q$ with $Q\stackrel{def}=([ -\frac{3\delta}{2}, +\frac{3\delta}{2}] \times [-\frac{\delta}{2},\frac{\delta}{2}])$. On $Q \cap \Delta^{\alpha,\delta}$ the inequality \begin{equation}\label{eq23} |\varphi(z)|\leq\frac{1}{2} |\chi'(\mbox{Re} z)|\, |g(z)-g(0)|\leq \frac{3}{4\delta} \cdot \frac{C_1}{\delta}|z|< \frac{3}{4\delta^2} \cdot C_1 \cdot 2\delta= \frac{3}{2} \frac{C_1}{\delta}\, \end{equation} holds. Notice that the functions $g_1$ and $\varphi$ extend to ${\sf P}^{-1}( T^{\alpha,\delta})$ as continuous doubly periodic functions. Hence, we may consider them as functions on $T^{\alpha,\delta}$. \begin{figure} \caption{A fundamental domain for a torus with a hole and the poles of the kernel for the $\overline{\partial}$-equation} \label{fig8.4} \end{figure} We want to find a small positive number $\varepsilon$ that depends on $C$ and $C_1$, but not on $\alpha$, such that the following holds. For $\sigma\stackrel{def}= \varepsilon \delta$ there exists a solution $f$ of the equation $\frac{\partial}{\partial \bar z}f(z)=\varphi(z)$ on $T^{\alpha,\sigma}$ such that for each $z$ the value $|f(z)|$ is smaller than the Euclidean distance in $\mathbb{C}$ of $\pm 1$ to $\overline G$. Then $g_1-f$ is a holomorphic mapping from $T^{\alpha,\sigma}$ to $\mathbb{C}\setminus \{-1,1\}$ whose class has monodromies equal to \eqref{eq21}, and to the identity, respectively. This gives $\,2^{2 [\frac{\alpha}{10C\delta}]}\geq 2^{2 \frac{\alpha}{10C\delta}-2}=\frac{1}{4}e^{\frac{2\varepsilon\log 2}{10C}\frac{\alpha}{\sigma}}$ different homotopy classes of mappings from $T^{\alpha,\sigma}$ to $\mathbb{C}\setminus \{-1,1\}$, and, hence proves the lower bound. To solve the $\bar{\partial}$-problem on $T^{\alpha,\varepsilon\delta}= T^{\alpha,\sigma}$, we consider an explicit kernel function which mimics the Weierstraß $\wp$-function. The author is grateful to Bo Berndtsson who suggested to use this kernel function. Recall that the Weierstraß $\wp$-function related to the torus $T^{\alpha}$ is the doubly periodic meromorphic function $$ \wp_{\alpha}(\zeta)=\frac{1}{\zeta^2}+ \underset{(n,m) \in \mathbb{Z}^2\setminus (0,0)}{\sum} \Big(\frac{1}{(\zeta-n- i m \alpha)^2 }- \frac{1}{(n+ i m \alpha)^2}\Big)\, $$ on $\mathbb{C}$. It defines a meromorphic function on $T^{\alpha}$ with a double pole at the projection of the origin and no other pole. Put $\nu=\frac{1}{2} + \frac{\alpha i}{2}$. Since for $\zeta\not\in (\mathbb{Z}+i\alpha \mathbb{Z})\cup (\nu+\mathbb{Z}+i\alpha \mathbb{Z})$ the equality \begin{align*} \frac{1}{(\zeta-n- i m \alpha) } - \frac{1}{(\zeta-n- i m \alpha-\nu) }+ \frac{\nu}{(\zeta-n- i m \alpha)^2 } = \frac{-\nu^2}{(\zeta -n- i m \alpha)^2 (\zeta - n- i m \alpha-\nu) }\, \end{align*} holds, and the series with these terms converges uniformly on compact sets not containing poles, the expression $$ \wp_{\alpha}^{\nu}(\zeta)= \frac{1}{\zeta}-\frac{1}{\zeta-\nu} + \, \underset{(n,m) \in \mathbb{Z}^2\setminus (0,0)}{\sum} \Big(\frac{1}{(\zeta-n- i m \alpha) } - \frac{1}{(\zeta-n- i m \alpha-\nu) } + \frac{\nu}{(n+ i m \alpha)^2}\Big)\, $$ defines a doubly periodic meromorphic function on $\mathbb{C}$ with only simple poles. The function descends to a meromorphic function on $T^{\alpha}$ with two simple poles and no other pole. Recall that the support of $\varphi$ is contained in $Q$. 
The set $Q$ is contained in the $2\delta$-disc in $\mathbb{C}$ (in the Euclidean metric) around the origin. If $\zeta$ is contained in the $2\delta$-disc around the origin and $z \in \Delta^{\alpha,\delta}$, then the point $\zeta-z$ is contained in the ${2\delta}$-neighbourhood (in $\mathbb{C}$) of $\Delta^{\alpha,\delta}$. By the choice of $\delta$ the distance of any such point $\zeta-z$ to any lattice point $n+i \alpha m$ except $0$ is larger than $\frac{1}{2}-2\delta> \frac{1}{4}$. Further, for $z\in \Delta^{\alpha,\delta}$ and $\zeta$ in the $2\delta$-disc around the origin the distance of the point $\zeta-z$ to any point $n+i \alpha m +\nu$ (including the point $\nu$) is not smaller than $ \frac{1}{2}-\frac{5\delta}{2}= \frac{1}{4}$. Put $Q_{\varepsilon}\stackrel{def}= Q\cap \Delta^{\alpha,\varepsilon\delta}= ([ -\frac{3\delta}{2}, +\frac{3\delta}{2}] \times [-\frac{\varepsilon\delta}{2},+\frac{\varepsilon\delta}{2}]) \bigcup ([ -\frac{\varepsilon\delta}{2}, +\frac{\varepsilon\delta}{2}] \times [-\frac{\delta}{2},+\frac{\delta}{2}])$. Then the function \begin{align}\label{eq24} f(z)= - \frac{1}{\pi}\iint_{Q_{\varepsilon}} \varphi(\zeta) \wp_{\alpha}^{\nu} (\zeta-z) dm_2(\zeta)\;, \end{align} for $z$ in $\Delta^{\alpha,\varepsilon\delta}$ is holomorphic outside $Q_{\varepsilon}$ and satisfies the equation $\frac{\partial}{\partial \bar z}f=\varphi$ on $Q_{\varepsilon}$. It extends continuously to a doubly periodic function on ${\sf P}^{-1}(T^{\alpha,\varepsilon\delta})$ and hence descends to a continuous function on $T^{\alpha,\varepsilon\delta}$. It remains to estimate the supremum norm of the function $f$ on $\Delta^{\alpha,\sigma}=\Delta^{\alpha,\varepsilon\delta}$. The following inequality holds for $z \in \Delta^{\alpha,\sigma}$ \begin{align}\label{eq26} |\iint_{Q_{\varepsilon}} \varphi(\zeta) \wp_{\alpha}^{\nu} (\zeta-z) dm_2(\zeta)|= |& \frac{1}{\pi}\iint_{Q_{\varepsilon}} \varphi(\zeta)\Big(\frac{1}{\zeta-z} + (\wp_{\alpha}^{\nu}(\zeta-z) -\frac{1}{\zeta-z})\Big) dm_2(\zeta)| \nonumber\\ \leq & \frac{1}{\pi}\iint_{Q_{\varepsilon}} \frac{3C_1}{2\delta}\Big(|\frac{1}{\zeta-z}|+ C_2\Big) dm_2(\zeta) \,. \end{align} We used the upper bound for $\varphi$ and the fact that for $z \in \Delta^{\alpha,\sigma}$ and $\zeta$ in $Q_{\varepsilon}$ the expression $|\wp_{\alpha}^{\nu}(\zeta-z) -\frac{1}{\zeta-z}|$ is bounded by a universal constant $C_2$. The integral of the second term on the right hand side does not exceed $\frac{3 C_1}{2 \delta}\cdot C_2 \cdot 4\varepsilon \delta^2=6 C_1 C_2 \varepsilon \delta$. The integral $\quad \iint_{Q_{\varepsilon}}|\frac{1}{\zeta-z}|dm_2(\zeta)\; $ does not exceed the sum of the two integrals $\quad I_1=\iint_{(- \frac{3}{2} \delta, \frac{3}{2} \delta)\times (- \frac{1}{2}\varepsilon\delta, \frac{1}{2}\varepsilon\delta)}\; \mid \frac{1}{\zeta-z}\mid dm_2(\zeta)\;, \;\; $ and $\;\; I_2=\iint_{(-\frac{1}{2}\varepsilon\delta, \frac{1}{2}\varepsilon\delta)\times (-\frac{1}{2}\delta,\frac{1}{2}\delta)}\; \mid \frac{1}{\zeta-z}\mid dm_2(\zeta)\;$. The first integral $I_1$ is largest when $z= 0 $. Hence, it does not exceed \begin{align}\label{eq27} \iint_{|\zeta|< (\sqrt{2})^{-1} \varepsilon\delta} \; \mid \frac{1}{\zeta} \mid dm_2(\zeta) + 2\varepsilon \delta \int_{\frac{1}{2}\varepsilon\delta}^{\frac{3}{2}\delta}\, \frac{1}{\eta}\, d\eta \leq \sqrt{2}\pi\varepsilon\delta + 2\varepsilon\delta \log{\frac{3}{\varepsilon}}\,. \end{align} The second integral $I_2$ is smaller. 
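As a quick numerical sanity check (not needed for the proof), the following Python/numpy sketch estimates $I_1$ by crude Monte Carlo for the value $\delta=\frac{1}{10}$ fixed above and an illustrative value of $\varepsilon$ (the actual $\varepsilon$ is only chosen later), and compares the estimate with the bound $\sqrt{2}\pi\varepsilon\delta+2\varepsilon\delta \log\frac{3}{\varepsilon}$ of \eqref{eq27} at the worst point $z=0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta, eps = 0.1, 0.05    # delta as in the text, eps purely illustrative
# I_1 = integral of 1/|zeta| over (-3*delta/2, 3*delta/2) x (-eps*delta/2, eps*delta/2), z = 0
x = rng.uniform(-1.5 * delta, 1.5 * delta, 4_000_000)
y = rng.uniform(-0.5 * eps * delta, 0.5 * eps * delta, 4_000_000)
area = (3.0 * delta) * (eps * delta)
I1 = area * np.mean(1.0 / np.hypot(x, y))
bound = np.sqrt(2.0) * np.pi * eps * delta + 2.0 * eps * delta * np.log(3.0 / eps)
print(I1, bound, I1 <= bound)   # the estimate stays below the bound
\end{verbatim}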
We obtain the estimate \begin{equation}\label{eq28} |f(z)|\leq \frac{6 C_1 C_2 \varepsilon \delta}{\pi} + 3\frac{C_1}{\pi\delta}(\sqrt{2}\pi\varepsilon\delta + 2\varepsilon\delta \log{\frac{3}{\varepsilon}})\,. \end{equation} Recall that we have chosen $\delta=\frac{1}{10}$. We may choose $\varepsilon_0>0$ depending only on $C_1$ (and, hence, only on the domain $G$) so that if $\varepsilon< \varepsilon_0$ the supremum norm of $f$ is less than the distance of $\pm 1$ to $\overline G$. The proposition is proved. $\Box$ \noindent {\bf Proof of Proposition \ref{prop1b}.} Let $\ell_0$ be the length in the K\"ahler metric of the longest circle in the bouquet. For each natural number $k$ and each positive $\sigma<\sigma_0$ the value $\lambda_k(S_{\sigma})$ satisfies the inequalities \begin{align} C_1' \frac{\ell_0}{\sigma}\leq \lambda_k(S_{\sigma})\leq C_1'' \frac{\ell_0}{\sigma} \end{align} for constants $C_1'$ and $C_1''$ depending on $k$, $X$, $S$ and the K\"ahler metric. This can be seen by the argument used in the proof of Proposition \ref{prop1a}. The upper bound in inequalities \eqref{eqabc} follows from Theorem \ref{thm1}. The proof of the lower bound in \eqref{eqabc} follows along the same lines as the proof of Proposition \ref{prop1a}. It leads to a $\overline{\partial}$-problem on an open Riemann surface, for which H\"ormander's $L^2$-method can be used. The case of open Riemann surfaces is easier to treat as the general case of pseudo-convex domains. The needed results for Riemann surfaces are explicitly formulated in \cite{Na}. To obtain the lower bound we consider for each positive number $\delta<\sigma_0$ the $\delta$-neighbourhood of the longest circle $\gamma_0$ of the bouquet. Consider a curvilinear rectangle $R^X_{\delta}$, that is contained in the $\delta$-neighbourhood of the largest circle $\gamma_0$, whose ''vertical curvilinear sides'' are contained in the boundary of $S_{\delta}$ and whose open ''horizontal curvilinear sides'' are contained in $S_{\delta}$. Choosing $\sigma_0$ small enough, we may choose $R^X_{\delta}$ so that for its extremal length the inequality $\lambda(R^X_{\delta})>c \frac{\ell_0}{\delta} +4$ holds for a number $c>0$ that depends only on $X$, $S$ and the K\"ahler metric. For any positive $\delta<\sigma_0$ we denote by $R_{\delta}$ the true rectangle $R_{\delta}\stackrel{def}=\{x+iy: x\in (-\delta,\delta), y \in (-c\ell_0-2\delta,c\ell_0+2\delta)\} $ in the complex plane. Shrinking perhaps $R^X_{\delta}$, we may assume that $R^X_{\delta}$ is conformally equivalent to $R_{\delta}$. Denote by $\omega$ the conformal mapping $R^X_{\delta}\to R_{\delta}$ for which the orientation of the curve $\gamma_0$ corresponds to the positive orientation of the imaginary axis. Let $G\subset \mathbb{C}\setminus \{-1,1\}$ be the same relatively compact domain as in the proof of Proposition \ref{prop1a}, and let $\mathring{R}_{\delta}\subset R_{\delta} $ be the rectangle in the complex plane with the same center and horizontal side length as $R_{\delta} $, and with vertical side length $2c\ell_0$. There is an absolute constant $C>0$ such that for $j=[\frac{c}{C} \frac{\ell_0}{\delta}]$ and any word of the form $a_1^{\pm 1}\, a_2^{\pm}\ldots a_2^{\pm 1}$ in the relative fundamental group $\pi_1(\mathbb{C}\setminus\{-1,1\}, (-1,1))$ with $2j$ terms there exists a holomorphic mapping $g:\mathring{R}_{\delta}\to G\subset \mathbb{C}\setminus \{-1,1\}$ that represents this word and vanishes at $ \pm i c\ell_0$ (see Theorem 1 of \cite{Jo2}). 
The function $g$ extends by reflection through the horizontal sides of $\mathring{R}_{\delta}$ to a holomorphic function on $R_{\delta}$, that we also denote by $g$. Since $|g|\leq C_1$ on $\mathring{R}_{\delta}$ and, hence, $|g|\leq C_1$ also on $R_{\delta}$, for any positive $\alpha<1$ the inequality $|g'|\leq \frac{C_1}{\delta(1-\alpha)}$ holds for the derivative of the mapping $g$ on the smaller rectangle $R_{\delta\alpha}$ (defined as $R_\delta$ with $\delta$ replaced by $\alpha\delta$). This fact implies that $|g|\leq \frac{\sqrt{2} C_1\alpha}{1-\alpha}$ on $Q_{\alpha\delta}^{\pm}\stackrel{def}= \{x+iy: x\in (-{\alpha\delta},{\alpha\delta}), \pm y\in (c\ell_0, c\ell_0 + \alpha \delta)\}$. We took into account that $g(\pm i c \ell_0)=0$. We take $\alpha$ so that $\frac{\sqrt{2} C_1\alpha}{1-\alpha} =\frac{1}{2}$. With the same function $\chi_0$ as in the proof of Proposition \ref{prop1a} we define \begin{equation}\label{eq22+} \chi(t) = \begin{cases} 1 & \; \;t \;\in [-c\ell_0,c\ell_0]\\ \chi_0( \frac{c\ell_0+ \alpha\delta -|t|}{\alpha\delta}) &\; |t| \in (c\ell_0, c\ell_0+ \alpha \delta)\,,\\ 0&\; \;t\; \in \mathbb{R}\setminus[-c\ell_0-\alpha\delta,c\ell_0+ \alpha\delta]\, . \end{cases} \end{equation} Consider the function $\tilde{g}(z)=g(z)\cdot\chi({\rm Im}(z))$ and the continuous $(0,1)$-form $\varphi\stackrel{def}= \bar{\partial}{\tilde{g}}$ on $R_{\alpha\delta}$. The form $\varphi$ vanishes outside $Q_{\alpha\delta}^{\pm}$. Let $\varepsilon$ be a small positive number that will be chosen later. Consider the measurable $(0,1)$-form $\varphi_{\varepsilon}$ on $R_{\delta}$ that equals $\varphi$ on $Q_{\alpha\delta,\varepsilon}^{\pm}\stackrel{def}= \{x+iy: x\in (-{\alpha\delta\varepsilon},{\alpha\delta\varepsilon}), \pm y\in (c\ell_0, c\ell_0 + \alpha \delta)\}$ and vanishes outside this set. Extend its pullback under the conformal mapping $\omega: R^X_{\delta}\to R_{\delta}$ to a measurable $(0,1)$-form on $X$ by putting it equal to zero outside $R^X_{\delta}$. Denote the obtained form by $\varphi_{\varepsilon}^X$. By Corollary 2.14.2 of \cite{Na} there exists a strictly subharmonic exhaustion function $\psi$ on $X$. The $L^2$-norm of $\varphi_{\varepsilon}$ with respect to the Euclidean metric on the complex plane does not exceed $C_2 \sqrt{\varepsilon}$ for an absolute constant $C_2$. Hence, the weighted $L^2$-norm on $X$ of $\varphi^X_\varepsilon$ with respect to the K\"ahler metric and the weight $e^{-\psi}$ (see Definition 2.6.1 of \cite{Na}) does not exceed $C_3 C_2 \sqrt{\varepsilon}$ for a constant $C_3$ that depends on $\psi$ and on the K\"ahler metric on a the compact subset $R^X_{\delta}$ of $X$. By Corollary 2.12.6 of \cite{Na} there exists a function $f^X$ with $\bar{\partial} f^X=\varphi_X$ in the weighted $L^2$-space on $X$ with respect to the K\"ahler metric and the weight $e^{-\psi}$ (see Definition 2.6.1 of \cite{Na}), whose norm in this space does not exceed $C_4 C_3 C_2 \sqrt{\varepsilon}$ for a constant $C_4$ depending only on $X$, $\psi$, and the K\"ahler metric. Let $(Q_{\alpha\delta,\varepsilon}^{\pm})^X$ be the preimages of $Q_{\alpha\delta,\varepsilon}^{\pm}$ under $\omega$. The function $f^X$ is holomorphic on $X\setminus \big((Q_{\alpha\delta,\varepsilon}^{+})^X\cup (Q_{\alpha\delta,\varepsilon}^{-})^X\big)$. Put $\tilde{Q}_{\delta}^{\pm}\stackrel{def}=\{x+iy\in R_{\delta}: \pm y\in (c\ell_0 -\delta, c\ell_0 + 2\delta)\}$, and $(\tilde{Q}_{\delta}^{\pm})^X=\omega^{-1}(\tilde{Q}_{\delta}^{\pm})$. 
Then $(Q_{\alpha\delta,\varepsilon}^{\pm})^X$ is relatively compact in $(\tilde{Q}_{\delta}^{\pm})^X$. On a relatively compact open subset of $X$, containing the closed subset $\overline{S_{\delta_0}} \setminus ((\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X)$ of $X$, the supremum norm of $|f^X|$ is estimated by its weighted $L^2$-norm: $|f^X|< C_5 \sqrt{\varepsilon}$ in a neighbourhood of $\overline{S_{\delta_0}} \setminus ((\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X)$ for a constant $C_5$ that depends on the K\"ahler metric, on $\psi$ and on the constants chosen before (see Theorem 2.6.4 of \cite{Na}). On the other hand the classical Cauchy-Green formula on the complex plane provides a solution $\tilde f$ of the equation $\overline{\partial}\tilde{f}=\varphi_{\varepsilon}$ on the set $\tilde{Q}_{\delta}^+ \cup \tilde{Q}_{\delta}^-$. The supremum norm of the function $\tilde{f}$ is estimated by $C_6\sqrt{\varepsilon}$ for an absolute constant $C_6$. Let $\tilde{f}^X$ be the pullback of $\tilde f$ to $(\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X$. The function $f^X - \tilde{f}^X$ is holomorphic on $(\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X$ and satisfies the inequality $|f^X - \tilde{f}^X|<(C_5+C_6)\sqrt{\varepsilon}$ at all points of the set $(\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X$, that are close to its boundary. Hence, the inequality is satisfied on $(\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X$. As a consequence, $|f^X|<(C_5+2C_6)\sqrt{\varepsilon}$ on $(\tilde{Q}_{\delta}^+)^X \cup (\tilde{Q}_{\delta}^-)^X$. Choose $\varepsilon$ depending on $C_5$ and $C_6$, so that \begin{align}\label{eqabcd'} |f^X|<\min\Big\{{\rm dist}(G,\{-1,1\}),\,\frac{1}{2}\Big\} \;\mbox{ on} \; S_{\sigma_0}\,. \end{align} Put $\sigma=\varepsilon\alpha \delta$. Consider the smooth function $g_{\sigma}^X$ on $S_{\alpha\delta\varepsilon }=S_{\sigma}$ which equals $\tilde{g}^X$ on $R_{\delta}^X\cap S_{\sigma }$ and vanishes on the rest of $S_{\sigma}$. Hence, it vanishes on all circles of the bouquet except $\gamma_0$, and therefore, the monodromy of its homotopy class along each such circle is the identity. The restriction of $g_{\sigma}^X$ to $\mathring{R}_{\delta}^X\cap S_{\sigma}$ represents the element $a_1^{\pm} a_2^{\pm}\ldots a_2^{\pm}\in\pi_1(\mathbb{C}\setminus\{-1,1\}, (-1,1))$. Moreover, on $\big({R}_{\delta}^X\setminus \mathring{R}_{\delta}^X\big)\cap S_{\sigma} $ the inequality $|g_\sigma^X|<\frac{1}{2} $ holds, and on $S_{\sigma}\setminus {R}_{\delta}^X$ the mapping $g_{\sigma}^X$ vanishes. Hence, the monodromy of the homotopy class of $g_{\sigma}^X$ along $\gamma_0$ equals $a_1^{\pm} a_2^{\pm}\ldots a_2^{\pm}$. By the inequality \eqref{eqabcd'} the monodromies of the homotopy class of $g_{\sigma}^X -f^X$ along all circles of the bouquet coincide with those of the homotopy class of $g_{\sigma}^X$. The function $g_{\sigma}^X -f^X$ is holomorphic on $S_{\sigma }$. We put $C_7= \frac{2 \log 2 c\alpha\varepsilon}{C}$. For each positive $\sigma< \epsilon\alpha\sigma_0$ we found no less than $\frac{1}{4} 2^{\frac{C_7 \ell_0}{\sigma}}$ irreducible non-homotopic holomorphic mappings from $S_{\sigma}$ to $\mathbb{C}\setminus\{-1,1\}$. The proposition is proved. $\Box$ \end{document}
\begin{document} \title{A note on recovering the Brownian motion component from a L\'evy process} \footnotetext[1]{School of Mathematics and Statistics, The University of Melbourne, Parkville 3010, Australia; e-mail: [email protected].} \begin{abstract} Gonz\'alez C\'azares and Ivanovs~(2021) suggested a new method for ``recovering'' the Brownian motion component from the trajectory of a L\'evy process that required sampling from an independent Brownian motion process. We show that such a procedure works equally well without any additional source of randomness if one uses normal quantiles instead of the ordered increments of the auxiliary Brownian motion process. {\it Key words and phrases:} Brownian motion, L\'evy process, quantile function, high frequency sampling. {\em AMS Subject Classification:} 60F99, 60G51, 60J65. \end{abstract} \section{Introduction and main results} The present note is complementing recent interesting paper~\cite{GoIv21} (see also related paper~\cite{FoGoIv21}) presenting an original idea on how to ``recover'' the Brownian motion component from the observed L\'evy process \[ X_t=Y_t + \sigma W_t, \quad t\in [0,1], \] where the standard Brownian motion $W$ is independent of the pure jump process~$Y$ and $\sigma>0.$ More precisely, the authors of~\cite{GoIv21} recovered the path of the Brownian bridge $\{W_t - t W_1\}_{t\in [0,1]}$ (noting that it is not possible to consistently ``extract'' the linear drift from the Brownian motion process trajectory, due to the equivalence of the distributions of Brownian motion processes with different linear drifts). Apart from being an interesting mathematical result by itself, such a separation can be useful in some applications as well. For instance, if the Brownian component is interpreted as noise, it enables one to recover (up to a linear drift) the signal~$Y$ from the observed process~$X$. Further comments (and some relevant references) on how one can benefit from such a separation in statistical problems can be found in~\cite{GoIv21}. The first method used in~\cite{GoIv21} for recovering the Brownian motion component was based on a construction that curiously required randomization. To describe that method, we need to introduce some notations. For an $n$-tuple $\mathcal X=(x_1, \ldots, x_n)$ of real numbers that are all different from each other (this will a.s.\ be the case for all the random samples considered in this note, so without loss of generality we will assume in what follows that they all do have this property, omitting ``a.s.'' in the respective relations), denote by $R_k (\mathcal X):= \sum_{j=1}^n {\bf 1} (x_j\le x_k)$ the rank of $x_k$ in~$\mathcal X$, $k= 1,\ldots, n$, and by $(\mathcal X)_j$ and $[ \mathcal X]_j$ the $j$th component of $\mathcal X$ and $j$th order statistic for $ \mathcal X$, respectively, so that $(\mathcal X)_{k}=x_k $ and $[\mathcal X]_{R_k (\mathcal X)}=x_k $. For a random process $\{V_t\}_{t\in [0,1]}$ and $n\ge 1,$ we set $ \Delta_{n } V:=(V_{1/n}-V_{0}, V_{2/n}-V_{1/n},\ldots ,V_{1 }-V_{(n-1)/n})\in \mathbb{R}^n. $ Now assume that $\{W'_t\}_{t\in [0.1]}$ is a standard Brownian motion process that is independent of~$X$ and put \[ W^{(n)}_t:= \sum_{i=1}^{\lfloor nt\rfloor } [\Delta_n W' ]_{R_i (\Delta_n X)}, \quad t\in [0,1]. 
\] In words, we first re-order one-step increments of $W'$ on the grid $k/n,$ $k=0,1,\ldots, n,$ such that the sequence of their ranks is the same as the sequence of the ranks of the increments of~$X$ on the same grid, and then form $W^{(n)}$ as the process of the partial sums of that re-ordered sequence of the increments of $W'.$ The following theorem re-states the first part of the main result in~\cite{GoIv21}. Set $\beta^*:=\inf\big\{p>0:\int_{(-1,1)}|x|^p\Pi (dx)<\infty \big\},$ where $\Pi$ is the jump measure of~$Y$. \begin{theo} \label{T1} For any $p\in (\beta^*,2]\cup \{2\},$ as $n\to\infty,$ \begin{align} \label{ConvT1} \sup_{t\in [0,1]} \big|W_t - W^{(n)}_t - (W_1 - W^{(n)}_1)t \big| =o_P (n^{-1/2+p/4}) . \end{align} \end{theo} The natural question that arises here is about the role of the independently sampled process~$W'$: why do we need this auxiliary random object? Does it necessarily need to be a standard Brownian motion? It it possible to modify the suggested method to avoid using any auxiliary independent random processes? Simulations showed that the described scheme still works when $W'$ is an independent fractional Brownian motion process with an arbitrary Hurst parameter $H\in (0,1)$ (one just needs to scale the process so that the marginal distributions of the components of $\Delta_n W'$ would be the same as for $\Delta_n W$). This observation suggested that the true role of the auxiliary process $W'$ is just to provide approximations to the normal quantiles and that one can ``recover'' $W$ using this kind of approach without sampling any independent random process. We show in the present note that this is the case indeed. Denote by $\Phi$ the standard normal distribution function, by $\overline{\Phi} :=1 - \Phi $ the distribution tail of~$\Phi,$ by $\varphi$ the density of~$\Phi$, and by $Q:=\Phi^{-1}$ the standard normal quantile function. For $ u_{n,k}:=\frac{k}{n+1},$ $ n\ge 1,$ $1\le k\le n,$ set \begin{align} \label{TildeW} \widetilde W^{(n)}_t:= n^{-1/2}\sum_{i=1}^{\lfloor nt\rfloor }Q(u_{n,R_i (\Delta_n X)}), \quad t\in [0,1]. \end{align} Our main result is the following theorem. \begin{theo} \label{T2} The assertion of Theorem~\ref{T1} remains true if the process $W^{(n)}$ is replaced in it with~$\widetilde W^{(n)}.$ \end{theo} \section{Proofs} \begin{proof}[Proof of Theorem~\ref{T2}] As in the proof of Theorem~\ref{T1} in~\cite{GoIv21}, we will start with the observation that \begin{align} \label{grid} \sup_{t\in [0,1]} |W_t - W_{\lfloor nt\rfloor/n}| =O_P \big(n^{-1/2}(\ln n)^{1/2}\big), \end{align} which is an immediate consequence of the L\'evy's modulus of continuity theorem. Therefore, in the problem of bounding the error in the version of~\eqref{ConvT1} with $\widetilde W^{(n)},$ we only need to consider the maximum of the absolute deviations on the grid $t= \frac{i}n,$ $1\le i\le n .$ To this end, we observe that $\widetilde W^{(n)}_1=0$ since $Q(\frac12+h)+Q(\frac12-h)=0,$ $h\in [0,\frac12),$ and hence, letting \begin{align*} \eta_{n,k}: =(\Delta_n W)_k- n^{-1/2} Q(u_{n,R_k (\Delta_n X)}) -n^{-1}W_1, \end{align*} one has \begin{align} \label{WW} W_{i/n} -\widetilde W^{(n)}_{i/n} - (W_{1} -\widetilde W^{(n)}_{1} )i/n = \sum_{k=1}^ i \eta_{n,k}. 
\end{align} Next, similarly to the decomposition of $\xi_{ni}$ on p.\,2422 in~\cite{GoIv21}, we write $\eta_{n,k}= \widetilde\eta_{n,k}+\widehat \eta_{n,k},$ where \begin{align*} \widetilde \eta_{n,k}:& = [\Delta_n W]_{ R_k (\Delta_n X)} - n^{-1/2} Q(u_{n,R_k (\Delta_n X)}) -n^{-1}W_1, \\ \widehat \eta_{n,k}:& =(\Delta_n W)_k - [\Delta_n W]_{ R_k (\Delta_n X)} \end{align*} That \begin{align} \label{hat} \max_{1\le i\le n} \bigg|\sum_{k=1}^i \widehat\eta_{n,k}\bigg| = o_P (n^{-1/2+p/4}) \quad\mbox{as \ } n\to\infty \end{align} was proved on p.\,2425 in~\cite{GoIv21}. To complete the proof of our theorem, we will now show that \begin{align} \label{tilde} \max_{1\le i\le n} \bigg|\sum_{k=1}^i \widetilde \eta_{n,k}\bigg| = O_P \big( n^{-1/2 }(\ln\ln n)^{1/2 }\big) \quad\mbox{as \ } n\to\infty. \end{align} First note that it is not hard to verify that~\eqref{tilde} is equivalent to the assertion that, for any positive sequence $\varepsilon_n\to 0,$ \begin{align} \label{tildes} \max_{1\le i\le n} \bigg|\sum_{k=1}^i \widetilde \eta_{n,k}\bigg| = o_P \big(\varepsilon_n^{-1} n^{-1/2 }(\ln\ln n)^{1/2 }\big) \quad\mbox{as \ } n\to\infty. \end{align} Further, it is easy to see that the random variables $ \widetilde \eta_{n,1}, \ldots, \widetilde \eta_{n,n}$ are exchangeable and $\sum_{k=1}^n \widetilde \eta_{n,k} =0$. Therefore, setting $\gamma_{n,k}:=\varepsilon_n n^{ 1/2 }(\ln\ln n)^{-1/2 }\widetilde \eta_{n,k}, $ $k=1,\ldots, n,$ the desired relation~\eqref{tildes} (and hence~\eqref{tilde}) immediately follows from Lemmata~\ref{L00} and~\ref{L0} below, the latter implying that $\sum_{k=1}^n\gamma_{n,k}^2 =O_P(\varepsilon_n^2)=o_P(1).$ Now the assertion of Theorem~\ref{T2} follows from representation~\eqref{WW} and relations~\eqref{grid}, \eqref{hat} and~\eqref{tilde}.\end{proof} \begin{lemo} \label{L00} Let $\gamma_{n,1}, \gamma_{n,2},\ldots, \gamma_{n,n}$, $n\ge 1,$ be a triangular array of random variables that are exchangeable in each row and such that $\sum_{k=1}^n \gamma_{n,k} =0$ a.s.\ and, as $n\to \infty,$ \begin{align} \label{Squares} \sum_{k=1}^n \gamma_{n,k}^2 = o_P (1). \end{align} Then $\max_{1\le i \le n} \big|\sum_{k=1}^i \gamma_{n,k}\big| = o_P (1).$ \end{lemo} The assertion of Lemma~\ref{L00} is an immediate consequence of Theorem~3.13 in~\cite{Ka05} on convergence of partial sums processes. In our case, the limiting process in that theorem is identically equal to zero, the convergence of characteristic triples following from the assumptions and the obvious observation that $ \max_{1\le k\le n} | \gamma_{n,k} |= o_P (1)$ from~\eqref{Squares}. \begin{lemo} \label{L0} As $n\to \infty,$ \[ \sum_{k=1}^n\widetilde \eta_{n,k}^2 = O_P \big(n^{-1 } \ln \ln n \big). \] \end{lemo} \begin{proof}[Proof of Lemma~\ref{L0}] Note that the components of the vector \[ \mathcal Z_n = (Z_1, \ldots, Z_n):= n^{1/2} \bigl((\Delta_n W)_1, \ldots, (\Delta_n W)_n \bigr) \] are independent standard normal random variables, and let $\overline{Z}_n:=n^{-1}\sum_{k=1}^n Z_k .$ Setting $\zeta_{n,k}:= [\mathcal Z_n]_{k} - Q(u_{n,k}),$ $k=1,\ldots, n,$ we observe that $\overline{\zeta}_n:=n^{-1}\sum_{k=1}^n \zeta_{n,k} =\overline{Z}_n$ and hence \[ \widetilde \eta_{n,k} \equiv n^{-1/2} ( [\mathcal Z_n]_{R_k (\Delta_n X)} - Q(u_{n,R_k (\Delta_n X)}) - \overline{Z}_n) = n^{-1/2} (\zeta_{n,R_k (\Delta_n X)} - \overline{\zeta}_n). \] Therefore, \[ \sum_{k=1}^n \widetilde \eta_{n,k}^2 = n^{-1} \sum_{k=1}^n (\zeta_{n.k}- \overline{\zeta}_n)^2 = n^{-1} \sum_{k=1}^n \zeta_{n.k}^2 - \overline{\zeta}_n^2. 
\] Here $\overline{\zeta}_n^2=\overline{Z}_n^2 \deq n^{-1} Z_1^2 = O_P (n^{-1}). $ Further, denoting by \[ Q^*_n(u):=\sum_{k=1}^n [\mathcal Z_n]_k {\bf 1} (u\in \mbox{$[\frac{k-1 }n, \frac{k}n)$}), \quad u\in (0,1), \] the empirical quantile function for $\mathcal Z_n $ and letting \[ Q_n(u):=\sum_{k=1}^n Q(u_{n,k}) {\bf 1} (u\in \mbox{$[\frac{k-1 }n, \frac{k}n)$}), \quad u\in (0,1), \] one has \begin{align*} n^{-1} \sum_{k=1}^n \zeta_{n.k}^2 &= n^{-1} \sum_{k=1}^n ([\mathcal Z_n]_k- Q(u_{n,k}))^2 =\int_0^1 (Q^*_n(u) - Q_n (u))^2 du \\ & \le 2 \int_0^1 (Q^*_n(u) - Q (u))^2 du + 2 \int_0^1 (Q (u) - Q_n (u))^2 du. \end{align*} It follows from Theorem~1 in~\cite{BeFo20} that the first term in the second line is $O_P (n^{-1}\ln \ln n),$ whereas the second term in that line is $O(n^{-1})$ by Lemma~\ref{L1} below. This completes the proof of Lemma~\ref{L0}. \end{proof} \begin{lemo} \label{L1} For any $n\ge 1 ,$ one has \[ \int_0^1 (Q(u) - Q_n (u))^2 du \le 3.73 n^{-1} . \] \end{lemo} The proof of this bound uses the following three elementary auxiliary results. \begin{lemo} \label{L2} Let $f\in C^1 (a_0, b_0)$ be convex and non-decreasing on $[a,b]\subset (a_0, b_0)$. Then, for any $v_0\in [a,b],$ \[ \int_a^b (f(v)-f(v_0))^2 dv \le \mbox{$\frac13$} (f'(b))^2 (b-a)^3. \] \end{lemo} \begin{proof}[Proof of Lemma~\ref{L2}] For $v,v_0\in [a,b],$ one has \[ | f(v)-f(v_0)| = \biggl|\int_{v_0}^v f'(s)\, ds \biggr| \le f'(b) |v-v_0| \] as $f'(s) \ge 0$ is non-decreasing by assumption. Hence \begin{align*} \int_a^b (f(v)-f(v_0))^2 dv \le (f'(b))^2\int_a^b (v-v_0)^2 dv \le \mbox{$\frac13$} (f'(b))^2 \big((b-v_0)^3- (a-v_0)^3), \end{align*} completing the proof since clearly $y^3 - x^3 \le (y-x)^3$ for $x\le 0\le y.$\end{proof} \begin{lemo} \label{L3} For any $u\in (0.5, 1),$ one has $1-u \le \sqrt{\pi/2} \varphi (Q(u)). $ \end{lemo} \begin{proof}[Proof of Lemma~\ref{L3}] The desired inequality follows from the observation that it turns into equality at the endpoints $u=0.5$ and $u= 1$ and that its RHS is a concave function as $[\varphi (Q(u))]'' = -\sqrt{2\pi}e^{Q^2(u)/2}<0.$ \end{proof} \begin{lemo} \label{L4} Assume that $f(v),$ $ v\in [a,b],$ is a non-decreasing function, $v_0\in [a,\frac12 (a+b)].$ Then \[ I_1: = \int_a^b (f(v)-f(v_0))^2 dv \le \int_a^b (f(v)-f(a))^2 dv=:I_2. \] \end{lemo} \begin{proof}[Proof of Lemma~\ref{L4}] Expanding the squares, one has \begin{align*} I_2- I_1 &= 2 (f(v_0)- f(a)) \int_a^b f(v) \, dv + (f^2 (a) - f^2 (v_0))(b-a) \\ & = (f(v_0)- f(a))\biggl[ 2 \int_a^b f(v) \, dv -(f(v_0)+ f(a))(b-a) \biggr]\ge 0 \end{align*} since, due to the monotonicity of~$f$, \begin{align*} \int_a^b f(v) \, dv &=\int_a^{v_0 }\cdots + \int_{v_0 }^b\cdots \ge f(a)(v_0- a) + f (v_0) (b-v_0) \\ & =\mbox{$ \frac12 $} f(a)(b- a) +\mbox{$ \frac12 $} f(v_0)(b- a) + (f(v_0)- f(a)) (\mbox{$ \frac12 $} (a+b) - v_0), \end{align*} where the last term is non-negative by the assumptions. \end{proof} \begin{proof}[Proof of Lemma~\ref{L1}] Putting $q_{n,k}:=Q(u_{n,k}),$ $k=1, \ldots, n,$ one has \begin{align} \label{SumQ} \int_0^1 (Q(u) - Q_n (u))^2 du =\sum_{k=1}^n \int_{ (k-1)/n }^{k/n} (Q(u) -q_{n,k})^2 du . \end{align} By symmetry, it is enough to bound the terms with $k\ge n/2$ only, assuming for simplicity that $n$ is even. 
As $Q$ is clearly convex and increasing on $(\frac12,1),$ for $n/2<k<n$ we get by Lemmata~\ref{L2} and~\ref{L3} that \begin{align*} \int_{ (k-1)/n }^{k/n} (Q(u) -q_{n,k})^2 du &\le \mbox{$\frac13$} (Q'(k/n)) ^2 n^{-3} =\frac{n^{-3}}{3\varphi^2 (Q(k/n))} \le \frac{ \pi n^{-1}}{6(n-k)^2}. \end{align*} Therefore \begin{align} \label{InnerSum} \sum_{k=n/2+1} ^{n-1} \int_{ (k-1)/n }^{k/n} (Q(u) -q_{n,k})^2 du \le \frac{ \pi}{ 6 n }\sum_{k=n/2+1} ^{n-1} \frac{1}{ (n-k)^2} \le \frac{ \pi}{ 6 n }\sum_{m=1} ^{\infty} \frac{1}{ m^2}=\frac{\pi^3}{36 n}. \end{align} For the last term in the sum on the RHS of~\eqref{SumQ}, setting $q:= Q(1-1/n),$ from Lemma~\ref{L4} we obtain that \begin{align*} J_n:&= \int_{ (n-1)/n }^{1} (Q(u) -Q(u_{n,n}) )^2 du \le \int_{ (n-1)/n }^{1} (Q(u) -q )^2 du \\ & = \int_{ (n-1)/n }^{1} Q(u) ^2 du -2 q \int_{ (n-1)/n }^{1} Q(u) du +q^2 n^{-1}. \end{align*} Integrating by parts, we get \begin{align*} \int_{ (n-1)/n }^{1} Q(u) ^2 du & ={\bf E} (Z_1^2; Z_1>q) = \int_q^\infty z^2\varphi (z) dz \\ & = [-z\varphi(z)]_q^\infty + \int_q^\infty \varphi (z) dz = q \varphi (q) +\overline{\Phi}(q), \end{align*} whereas \begin{align*} \int_{ (n-1)/n }^{1} Q(u) du & ={\bf E} (Z_1 ; Z_1>q) = \int_q^\infty z \varphi (z) dz = - \int_q^\infty d\varphi (z) = \varphi (q) . \end{align*} Since $\overline{\Phi}(q)=n^{-1} $ and $ q^2 n^{-1}-q \varphi (q) = q^2 (\overline{\Phi}(q) - \varphi(q)/q)<0 $ by the well-known inequality for the normal Mills' ratio (see e.g.\ Ch.~VII.1 in~\cite{Fe68}), we conclude that \begin{align*} J_n\le q \varphi (q) + n^{-1} - 2 q \varphi (q)+q^2 n^{-1} \le n^{-1}. \end{align*} Together with~\eqref{InnerSum} and an elementary bound for the constant $\frac{\pi^3}{18}+2 <3.73$ this completes the proof of Lemma~\ref{L1}. \end{proof} \end{document}
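The reconstruction described in the note above is easy to try numerically. The following short sketch (not part of the paper; the compound-Poisson choice of the jump part, the parameters and all names are illustrative assumptions) implements the quantile-based process $\widetilde W^{(n)}$ with numpy/scipy and compares it on the grid with the Brownian bridge $W_t-tW_1$, which is what Theorem 2 of the note asserts it recovers.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma = 100_000, 1.0
t = np.arange(1, n + 1) / n

# increments of X = Y + sigma*W on the grid k/n
dW = rng.normal(0.0, np.sqrt(1.0 / n), n)        # Brownian increments
counts = rng.poisson(5.0 / n, n)                 # crude compound-Poisson jump part Y
dY = counts * rng.normal(0.0, 2.0, n)            # (for large n at most one jump per step matters)
dX = dY + sigma * dW

# ranks R_i(Delta_n X), 1-based; ties occur with probability zero
ranks = dX.argsort().argsort() + 1

# tilde W^{(n)}_{i/n} = n^{-1/2} * sum_{k<=i} Q(R_k/(n+1)), Q the standard normal quantile function
tildeW = np.cumsum(norm.ppf(ranks / (n + 1))) / np.sqrt(n)

# compare with the Brownian bridge W_t - t*W_1 (only the bridge can be recovered)
W = np.cumsum(dW)
err = np.max(np.abs((W - t * W[-1]) - (tildeW - t * tildeW[-1])))
print(err)   # small; for this jump part the theorem gives the rate o_P(n^{-1/2+p/4}) for every p > 0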
Development of equations, based on milk intake, to predict starter feed intake of preweaned dairy calves A. L. Silva, T. J. DeVries, L. O. Tedeschi, M. I. Marcondes Journal: animal / Volume 13 / Issue 1 / January 2019 Published online by Cambridge University Press: 16 April 2018, pp. 83-89 There is a lack of studies that provide models or equations capable of predicting starter feed intake (SFI) for milk-fed dairy calves. Therefore, a multi-study analysis was conducted to identify variables that influence SFI, and to develop equations to predict SFI in milk-fed dairy calves up to 64 days of age. The database was composed of individual data of 176 calves from eight experiments, totaling 6426 daily observations of intake. The information collected from the studies was: birth BW (kg), SFI (kg/day), fluid milk or milk replacer intake (MI; l/day), sex (male or female), breed (Holstein or Holstein×Gyr crossbred) and age (days). Correlations between SFI and the quantitative variables MI, birth BW, metabolic birth BW, fat intake, CP intake, metabolizable energy intake, and age were calculated. Subsequently, data were graphed, and based on a visual appraisal of the pattern of the data, an exponential function was chosen. Data were evaluated using a meta-analysis approach to estimate fixed and random effects of the experiments using nonlinear mixed coefficient statistical models. A negative correlation between SFI and MI was observed (r=−0.39), but age was positively correlated with SFI (r=0.66). No effect of liquid feed source (milk or milk replacer) was observed in developing the equation. Two equations, significantly different for all parameters, were fit to predict SFI for calves that consume less than 5 (SFI<5) or more than 5 (SFI>5) l/day of milk or milk replacer: ${\rm SFI}_{<5} = 0.1839_{\pm 0.0581}\times {\rm MI}\times \exp\big(\big(0.0333_{\pm 0.0021} - 0.0040_{\pm 0.0011}\times {\rm MI}\big)\times\big({\rm A} - \big(0.8302_{\pm 0.5092} + 6.0332_{\pm 0.3583}\times {\rm MI}\big)\big)\big) - \big(0.12\times {\rm MI}\big)$ ; ${\rm SFI}_{>5} = 0.1225_{\pm 0.0005}\times {\rm MI}\times \exp\big(\big(0.0217_{\pm 0.0006} - 0.0015_{\pm 0.0001}\times {\rm MI}\big)\times\big({\rm A} - \big(3.5382_{\pm 1.3140} + 1.9508_{\pm 0.1710}\times {\rm MI}\big)\big)\big) - \big(0.12\times {\rm MI}\big)$ where MI is the milk or milk replacer intake (l/day) and A the age (days). Cross-validation and bootstrap analyses demonstrated that these equations had high accuracy and moderate precision. In conclusion, the use of milk or milk replacer as liquid feed did not affect SFI, or development of SFI over time, which increased exponentially with calf age. Because SFI of calves receiving more than 5 l/day of milk/milk replacer had a different pattern over time than those receiving <5 l/day, separate prediction equations are recommended. Impact of automatic milking systems on dairy cattle producers' reports of milking labour management, milk production and milk quality C. Tse, H. W. Barkema, T. J. DeVries, J. Rushen, E. A. Pajor Journal: animal / Volume 12 / Issue 12 / December 2018 Published online by Cambridge University Press: 04 April 2018, pp.
2649-2656 Automatic milking systems (AMS), or milking robots, are becoming widely accepted as a milking technology that reduces labour and increases milk yield. However, reported amount of labour saved, changes in milk yield, and milk quality when transitioning to AMS vary widely. The purpose of this study was to document the impact of adopting AMS on farms with regards to reported changes in milking labour management, milk production, milk quality, and participation in dairy herd improvement (DHI) programmes. A survey was conducted across Canada over the phone, online, and in-person. In total, 530 AMS farms were contacted between May 2014 and the end of June 2015. A total of 217 AMS producers participated in the General Survey (Part 1), resulting in a 41% response rate, and 69 of the respondents completed the more detailed follow-up questions (Part 2). On average, after adopting AMS, the number of employees (full- and part-time non-family labour combined) decreased from 2.5 to 2.0, whereas time devoted to milking-related activities decreased by 62% (from 5.2 to 2.0 h/day). Median milking frequency was 3.0 milkings/day and robots were occupied on average 77% of the day. Producers went to fetch cows a median of 2 times/day, with a median of 3 fetch cows or 4% of the herd per robot/day. Farms had a median of 2.5 failed or incomplete milkings/robot per day. Producers reported an increase in milk yield, but little effect on milk quality. Mean milk yield on AMS farms was 32.6 kg/cow day. Median bulk tank somatic cell count was 180 000 cells/ml. Median milk fat on AMS farms was 4.0% and median milk protein was 3.3%. At the time of the survey, 67% of producers were current participants of a DHI programme. Half of the producers who were not DHI participants had stopped participation after adopting AMS. Overall, this study characterized impacts of adopting AMS and may be a useful guide for making this transition. Edited by Clifford J. Rogers, Kelly DeVries Book: Journal of Medieval Military History Published by: Boydell & Brewer Print publication: 19 September 2013, pp v-v Journal of Medieval Military History 1477–545X 10 - Defense Schemes of Southampton in the Late Medieval Period, 1300–1500 The purpose of this study is to examine the military schemes of defense that were employed in the town of Southampton during the late medieval period, 1300–1500. In many ways this is the most basic and vital of the varied roles Southampton had as a military entity. If the town was incapable of protecting its own possessions and dependencies, it would be unable to fulfill its other military roles. Moreover, if the town was unable to defend the area it was expected to, that would represent a major vulnerability in the region and ultimately the kingdom. If, on the other hand, the townsmen of Southampton were able to fulfill this responsibility of self-protection, that would not only maintain their day-to-day activities but would also give the kingdom an important defense. The protection of Southampton was actually a multi-tiered system of various parties and individuals. The townspeople of Southampton managed their own defense in two primary methods. First they created various plans to organize themselves for conflict, second they participated in the gathering and control of information. The town also fitted into a wider defense structure in the kingdom with various groups and individuals supporting it in this endeavor. 
This organization was at one point the basis of English defense though many questions remain unanswered. How did these defensive schemes develop in Southampton? Were these schemes created in response to or in preparation for dangers posed to the town? 6 - The Military Effectiveness of Alan Mercenaries in Byzantium, 1301–1306 Byzantine military events in the early fourteenth century have captured the imagination of Western scholars for well over a century, thanks to the participation of Spanish mercenaries in the Catalan Company led by the flamboyant Roger de Flor. The important contribution to this military adventure made by another company of mercenaries composed entirely of Alans has been grossly underestimated when not totally ignored by both Western and Russian historians. Since the Alans were implicated in the disastrous defeat of the Byzantines in Anatolia as well as the first historical success of the Turkish leader Osman, eponymous founder of the Ottoman state, examining the specific role of these warriors may shed light on the military debacle that the Byzantines suffered at the hands of the Turks and Catalans in the first decade of the fourteenth century. Despite its importance, the military effectiveness of this company while contributing to the defense of the empire has never been examined in detail, so that the question remains: why did a Mongol-trained division of considerable military potential fail so spectacularly? The appearance of the Alans in Byzantine service first attracted attention because scholars in Western Europe, especially Spain, have long been fascinated by the adventures of the Catalan Company, comparing its exploits in the East with those of Cortez and Pizarro in the Americas. Although composed of Aragonese, Calabrians, and Sicilians, it is usually referred to as the Catalan Company since most of the officers and men were from Catalonia. 2 - The Battle of Civitate: A Plausible Account Print publication: 19 September 2013, pp 25-56 On 18 June 1053 in the undulating open country of the Capitanata of northern Apulia, near an ancient Roman city which no longer exists, one of the most crucial battles of the Middle Ages was fought. At Civitate, a modest force of Norman adventurers faced a papal army of Germans, Italians and Lombards, perhaps twice its size. The outcome of that clash in what is today a sparsely populated region would profoundly influence the course of Mediterranean history for centuries to come. It is recorded in more than a score of contemporary sources. The most comprehensive accounts are provided by the three primary chroniclers of the Normans in the South: Amatus of Montecassino, Geoffrey Malaterra and, especially, William of Apulia. Their narratives are largely corroborated by the Swabian historian Herman of Reichenau, and the various biographers of Pope Leo IX. Yet the details of the event remain mired in myth. Although much has been written about the engagement, what actually happened has been obfuscated by legend, poetic license and outright partisan myth-making. Even Dante Alighieri reserved a few effusive lines of verse in his renowned La divina commedia to extol the exploits of one the battle's epic heroes: "With those who felt the agony of blows by making counterstand to Robert Guiscard." In the modern era, Sir Edward Gibbon's account of the encounter in The Decline and Fall of the Roman Empire is rather short and somewhat impaired by the pretentious prose of the era. 
11 - French and English Acceptance of Medieval Gunpowder Weaponry In 1966, John R. Hale, following up an earlier essay, which had addressed in general the subject of warfare and public opinion during the fifteenth and sixteenth century, published an article focusing on the question of that same society's acceptance of early gunpowder weaponry. In this article, "Gunpowder and the Renaissance: An Essay in the History of Ideas," Professor Hale concluded that although there were historical examples of societal acceptance of early guns, on the whole, the people of the fourteenth through the sixteenth centuries had rejected this new military technology. In a time of scholarly protest against the war in Vietnam and a growing concern over the arms race of the Cold War, Hale found some solace in the fact that substantial literary evidence showed that the period of his research also had a large number of intelligentsia who criticized their kingdoms' weapons policies. And while, as he concludes, "by the early seventeenth century, ideals had given ground to the arguments of fact," yielding to an increased use of and, equally, a growing complacency towards, gunpowder weapons, that period of intellectual fervor known as the Renaissance had at least held out a disdain for the killing power of the new weapons. List of Illustrations and Tables Print publication: 19 September 2013, pp vi-vi Print publication: 19 September 2013, pp i-iv 5 - Saint Catherine's Day Miracle – the Battle of Montgisard Print publication: 19 September 2013, pp 95-106 "We learn from history that we do not learn from history": this was the title of an address by Captain B. H. Liddell Hart on 3 May 1938 to the Manchester Luncheon Club at the Midland hotel. A major point he made, based on twenty years' of study of the records of the First World War, was: pure documentary history seems to me akin to mythology. Many were the gaps to be found in official archives – tokens of documents destroyed later to conceal what might impair a commander's reputation … a general could safeguard the lives of his men as well as his own reputation by writing orders based on a situation that did not exist, for an attack that nobody carried out … I have wondered how the war went on at all when I have found how much of their time the commanders spent in preparing its history. Liddell Hart's conclusions were based on abundant data which contradicted official documentation. He interviewed people who participated in battles, read the memoirs of politicians and army officers, compared archives, etc. All these are utterly irrelevant while dealing with medieval battles. In these cases, the only relevant part of Liddell Hart's assertion is that documentation is not reliable. In other words, fishing out facts from medieval records is almost impossible. Therefore, having independent data which may shed some light on events that took place during a battle might be an important contribution to its reconstruction and understanding. 1 - Military Games and the Training of the Infantry Print publication: 19 September 2013, pp 1-24 ["I giochi militari e l'addestramento delle fanterie," in Aldo A. Settia, Comuni in guerra. Armi ed eserciti nell'Italia delle città (Bologna, 1994), pp. 29–52] Translated by Valerie Eads The "Little Battles" Italian communal armies were, as is well known, largely made up of infantry. 
Admittedly, the strength of these latter would have resulted more from numbers and determination than from combat experience; still, the term "infantry" properly means "a group of soldiers with a certain level of training and discipline," two qualities that result only from some form of instruction. And yet, if we wish to develop at least a rough understanding of the military obligations of the mass of the population, of the armament the people had to provide for themselves, and of how it (the population) was mobilized to train for war – with the exception of some late provisions concerning marksmen – the sources as a rule are simply silent concerning training. One can certainly maintain that economic reasons would have prevented regular exercises for the communal infantry in times of peace, but it is difficult to believe that all training would have taken place on the battlefield or in the course of the socio-political conflicts between milites and pedites. There is, however, a third possibility offered by certain war games – a sort of plebeian rival to the aristocratic tournaments – practiced specifically by those classes that furnished the infantry for the city militia. 9 - Sir John Radcliffe, K.G. (d. 1441): Miles Famossissimus Sir John Radcliffe of Attleborough, Norfolk, had an admirable career as a soldier and administrator in the service of the Lancastrian kings of England. His assignments took him into all the dominions of the crown: Ireland, Wales, Normandy, and Gascony. While Sir John's life is of considerable interest in its own right, his career both exemplifies as well as personalizes the English military experience of his lifetime. Sir John came from a landed family with Lancashire roots. He was the second son of James Radcliffe (d. 1410) and his wife Joan, daughter of Sir John Tempest of Bracewell, Yorkshire. Nothing has come to light on the early years of Sir John, but his career suggests that he was given a firm grounding in military, financial, and administrative matters. The first sure record of John Radcliffe is in a military context helping his king consolidate his hold upon the throne. Many years after the event, in 1429, it was noted that while an esquire (probably in the service of his father, James), John had participated in the battle of Shrewsbury on 21 July 1403. In that battle the first king of the Lancastrian dynasty, Henry IV, defeated a rebel force led by Henry Percy, known as Hotspur, the son of Henry Percy, earl of Northumberland. Six days after the battle, James Radcliffe was one of the men commissioned to gather forces from Lancashire and bring them to the king at Pontefract in Yorkshire. 7 - Winning and Recalling Honor in Spain: Pro-English Poetry in Celebration of the Battle of Nájera (1367) Over the centuries, a great deal of poetry has been devoted to warfare and its practitioners. While the majority has tended to celebrate the heroism of men involved in conflict, a not insignificant part has condemned the carnage and futility, a condemnation that reached its height during the First World War in the works of such writers as Wilfred Owens, Siegfried Sassoon, and Robert Service. During the Middle Ages, martial poetry followed both strains. Much of it emphasized the glory of combat, serving as the supreme tool for recalling honor and assigning shame earned on the battlefield. 
This was true of the most widely-recited works of the period, the great epics and chanson de geste, including Beowulf, the Song of Roland, the Tales of King Arthur, and the Poema del mio Cid, to name only the most prominent. All centered on human conflict and extolled the heroism of their protagonists. Poets, like the great troubadour, Bertran de Born (c. 1140–c. 1215), could look on war as a spectacle complete with "proud pavilions high … squadrons of armored chivalry … trumpets and tabors, ensigns and pennants." To Born's mind, participants were expected to spill blood and engage in butchery in their pursuit of "death or victory." Journal of Medieval Military History Print publication: 19 September 2013 The comprehensive breadth and scope of the Journal are to the fore in this issue, which ranges widely both geographically and chronologically. The subjects of analysis are equally diverse, with three contributions dealing with the Crusades, four with matters related to the Hundred Years War, two with high-medieval Italy, one with the Alans in the Byzantine-Catalan conflict of the early fourteenth century, and one with the wars of the Duke of Cephalonia in Western Greece and Albania at the turn of the fifteenth century. Topics include military careers, tactics and strategy, the organization of urban defenses, close analysis of chronicle sources, and cultural approaches to the acceptance of gunpowder artillery and the prevalence of military "games" in Italian cities. Contributors: T.S. Asbridge, A. Compton Reeves, Kelly DeVries, Michael Ehrlich, Scott Jessee, Donald Kagay, Savvas Kyriakidis, Randall Moffett, Aldo A. Settia, Charles D. Stanton, Georgios Theotokis, L.J. Andrew Villalon, Anatoly Isaenko. 8 - The Wars and the Army of the Duke of Cephalonia Carlo I Tocco (c. 1375–1429) This article will examine the nature of the military operations conducted by the armies of the duke of Cephalonia, Carlo I Tocco (c. 1375–1429). The Toccos were originally from Benevento and served the Angevin rulers of Sicily for many years. In 1330/31 Carlo's grandfather, Guglielmo, was appointed captain of Corfu. In 1357, Carlo's father, also Guglielmo, was appointed by Robert of Taranto count of Cephalonia and Zakynthos and soon added Vonitsa (Bonditsa) and Leukas to his domains. Guglielmo Tocco died in 1375/76 while his sons, Carlo and Leonardo were still infants. Their mother, Maddalena Buondelmonti, who was acting as their regent, had their titles confirmed by Queen Joanna of Naples. When he took over the reins of his principality, Carlo I Tocco exploited the extreme political fragmentation, which ensued after the collapse of the Serbian empire of Stefan Dušan (1331–55) and the dramatic territorial reduction of the Byzantine empire, to expand his principality in Western Greece, in Albania and in the Peloponnese. His military exploits are the subject of an anonymous chronicle known as the Chronicle of the Toccos which covers the period 1375–1422 and its compilation was completed in 1429. It is likely that this anonymous work was commissioned by Carlo I Tocco himself. Nevertheless, in spite of being a work of propaganda, the chronicle of the family of the Toccos is an excellent source of material on the warfare between the small political entities that were established in Western Greece and Albania in the late fourteenth and early fifteenth centuries. 
4 - How the Crusades Could Have Been Won: King Baldwin II of Jerusalem's Campaigns against Aleppo (1124–5) and Damascus (1129) In the wake of the First Crusaders' conquest of Jerusalem on 15 July 1099, four Latin Christian (or "Frankish") settlements were established in the Near East – the so-called "crusader states" of the kingdom of Jerusalem, the principality of Antioch and the counties of Edessa and Tripoli. For the next two centuries, western European settlers and crusaders sought to defend these isolated polities – "the lands beyond the sea", or "Outremer", as they were known collectively in the Middle Ages – struggling to preserve Latin Christendom's fragile foot-hold in the Holy Land. Ultimately they failed. Frankish fortunes waned after the first flush of success and, as the Muslim powers of the Near East began to claw back territory, the power of the crusader states diminished. Outremer's dismemberment was gradual, but seemingly inexorable. The loss of Edessa to the Turkish warlord Zangi in 1144 led to the eradication of the first crusader state. The Ayyubid Sultan Saladin captured Jerusalem in 1187 and, barring a brief period of recovery, the Holy City remained in the hands of Islam until the twentieth century. With the advent of the more bellicose Mamluk sultanate in the mid-thirteenth century, the pace of Muslim reconquest accelerated: Antioch fell in 1268; Tripoli in 1289; and finally Acre in 1291. With that, the last vestiges of Latin rule on the Levantine mainland disappeared. 3 - The Square "Fighting March" of the Crusaders at the battle of Ascalon (1099) On 12 August 1099 the Latin knights and footsoldiers of the First Crusade left Jerusalem to meet the Fatimid army of the grand vizier Al-Afdal which, at that time, had invaded Judaea and had encamped close to the coastal city of Ascalon. The army was estimated to be around twenty thousand strong, including both infantry and cavalry. This would be the first of several major expeditions by the Egyptians launched against the Crusader states in Palestine, all entering through Ascalon and its coastal plain. The Latin leaders were first alerted about a possible large enemy force approaching from the south on 9 August and on the next day the Crusader armies began their forty-kilometer march south to the city of Ascalon where the enemy was last reported to have camped. According to Raymond of Aguilers, one of the main chroniclers of the First Crusade and an eye witness of the events, the Latins numbered 1,200 knights and no more than 9,000 footsoldiers and "they marched in nine ranks, three to the rear, three to the front, and three in the middle so that attack would be met in three ranks with the middle one always available to bolster the others." In this paper I will examine a number of theories about the origin of this particular marching formation, based on the manuals attributed to the Byzantine Emperors Maurice (582–602), Leo VI (886–912) and Nicephoros Phocas (963–69) and several anonymous Byzantine military treatises of the sixth and tenth centuries. Silk Polymer Coating with Low Dielectric Constant and High Thermal Stability for Ulsi Interlayer Dielectric P. H. Townsend, S. J. Martin, J. Godschalx, D. R. Romer, D. W. Smith, D. Castillo, R. DeVries, G. Buske, N. Rondan, S. Froelicher, J. Marshall, E. O. Shaffer, J-H. Im A novel polymer has been developed for use as a thin film dielectric in the interconnect structure of high density integrated circuits. 
The coating is applied to the substrate as an oligomeric solution, SiLK*, using conventional spin coating equipment and produces highly uniform films after curing at 400 °C to 450 °C. The oligomeric solution, with a viscosity of ca. 30 cPs, is readily handled on standard thin film coating equipment. Polymerization does not require a catalyst. There is no water evolved during the polymerization. The resulting polymer network is an aromatic hydrocarbon with an isotropie structure and contains no fluorine. The properties of the cured films are designed to permit integration with current ILD processes. In particular, the rate of weight-loss during isothermal exposures at 450 °C is ca. 0.7 wt.%/hour. The dielectric constant of cured SiLK has been measured at 2.65. The refractive index in both the in-plane and out-of-plane directions is 1.63. The flow characteristics of SiLK lead to broad topographic planarization and permit the filling of gaps at least as narrow as 0.1 μm. The glass transition temperature for the fully cured film is greater than 490 °C. The coefficient of thermal expansivity is 66 ppm/°C below the glass transition temperature. The stress in fully cured films on Si wafers is ca. 60 MPa at room temperature. The fracture toughness measured on thin films is 0.62 MPa m ½. Thin coatings absorb less than 0.25 wt.% water when exposed to 80% relative humidity at room temperature. In Situ and Ex Situ Ellipsometric Characterization of Oxygen Plasma and UV Radiation Effects on Spacecraft Materials C. L. Bungay, T. E. Tiwald, M. J. DeVries, B. J. Dworak, John A. Woollam Atomic Oxygen (AO) and ultraviolet (UV) radiation contribute (including synergistically) to degradation of spacecraft materials in Low Earth Orbit (LEO). NASA is, therefore, interested in determining what effects the harsh LEO environment has on materials exposed to it, as well as develop materials that are more AO and UV resistant. The present work involves the study of AO and UV effects on polyarylene ether benzimidazole (PAEBI) with in situ and ex situ spectroscopic ellipsometry. PAEBI is a polymer proposed for space applications due to its reported ability to form a protective phosphorous oxide on the surface when exposed to AO. In our experiments PAEBI was exposed to UV radiation from a xenon lamp while in situ ellipsometry data were acquired. The effects of UV radiation were modeled as an exponentially graded layer on the surface of bulk PAEBI. The change in UV absorption spectra, depth profile of the index of refraction, and growth trends of the UV irradiated PAEBI were all studied in these experiments. In addition, PAEBI was exposed to an oxygen plasma to simulate the synergistic effects of AO and UV. Ellipsometry data were acquired in-line with both a UV-Visible ellipsometer and an infrared ellipsometer. The change in UV absorption bands and index of refraction due to synergistic AO/UV, as well as the growth trends of the oxide layer were studied.
CommonCrawl
\begin{document} \title{Connected Components of Underlying Graphs of Halving Lines} \author{Tanya Khovanova\\MIT \and Dai Yang\\MIT} \maketitle \begin{abstract} In this paper we discuss the connected components of underlying graphs of halving lines' configurations. We show how to create a configuration whose underlying graph is the union of two given underlying graphs. We also prove that every connected component of the underlying graph is itself an underlying graph. \end{abstract} \section{Introduction} Halving lines have been an interesting object of study for a long time. Given $n$ points in general position on a plane, the minimum number of halving lines is $n/2$. The maximum number of halving lines is unknown. The current lower bound of $O(ne^{\sqrt{\log n}})$ was found by Toth \cite{Toth}. The current asymptotic upper bound of $O(n^{4/3})$ was proven by Dey \cite{Dey98}. In 2006 a tighter bound for the crossing number was found \cite{PRTT}, which also improved the upper bound for the number of halving lines. In our paper \cite{KY} we further tightened Dey's bound. This was done by studying the properties of the underlying graph. In this paper we concentrate on the underlying graph and the properties of its connected components. In Section~\ref{sec:union} we use the cross construction to show how to sum two underlying graphs. In Section~\ref{sec:subtraction} we show that any connected component of the underlying graph is realizable as the underlying graph of the halving lines of its vertices. \section{Definitions} Let $n$ points be in general position in $\mathbb{R}^2$, where $n$ is even. A \textit{halving line} is a line through 2 of the points that splits the remaining $n-2$ points into two sets of equal size. From our set of $n$ points, we can determine an \textit{underlying graph} of $n$ vertices, where each pair of vertices is connected by an edge if and only if there is a halving line through the corresponding 2 points. In dealing with halving lines, we consider notions from both Euclidean geometry and graph theory. We define a \textit{geometric graph}, or \textit{geograph} for short, to be a pair of sets $(V,E)$, where $V$ is a set of points on the coordinate plane, and $E$ consists of pairs of elements from $V$. In essence, a geograph is a graph with each of its vertices assigned to a distinct point on the plane. \subsection{Examples} \subsubsection{Four points} Suppose we have four points in general position. If their convex hull is a quadrilateral, then there are two halving lines. If their convex hull is a triangle, then there are three halving lines. Both cases are shown in Figure~\ref{fig:4points}. \begin{figure} \caption{Underlying graphs for four points.} \label{fig:4points} \end{figure} \subsubsection{Polygon}\label{sec:polygon} If all points belong to the convex hull of the point configuration, then each point lies on exactly one halving line. The number of halving lines is $n/2$, and the underlying graph is a matching graph --- a union of $n/2$ disjoint edges. The left side of Figure~\ref{fig:4points} shows an example of this configuration. For any point configuration there is at least one halving line passing through each vertex. Hence, the polygon provides an example of the minimum number of halving lines, and an example with the maximum number of disconnected components. 
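The definitions above are easy to check experimentally. The following brute-force sketch (Python; points in general position are assumed, and the helper name \texttt{halving\_edges} is ad hoc) lists the edges of the underlying graph of a small configuration; applied to a regular hexagon, a set in convex position, it returns the $n/2$ disjoint edges of the matching graph from the Polygon example.
\begin{verbatim}
import itertools, math

def halving_edges(points):
    # O(n^3) brute force: a pair {i, j} is an edge of the underlying graph
    # iff the line through points i and j leaves equally many of the
    # remaining points on each side (general position: no point on the line).
    n = len(points)
    edges = []
    for i, j in itertools.combinations(range(n), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        left = right = 0
        for k in range(n):
            if k in (i, j):
                continue
            x, y = points[k]
            s = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
            if s > 0:
                left += 1
            else:
                right += 1
        if left == right:
            edges.append((i, j))
    return edges

# Regular hexagon: the underlying graph is a perfect matching,
# here the three long diagonals.
hexagon = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
           for k in range(6)]
print(halving_edges(hexagon))   # [(0, 3), (1, 4), (2, 5)]
\end{verbatim}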
\section{Union of Connected Components}\label{sec:union} Given two underlying graphs of two halving lines configurations, the following construction allows us to create a new halving line configuration whose underlying graph consists of the two given graphs as connected components. \subsection{Cross} We call the following construction a \textit{cross}. Given two sets of points, with $n_1$ and $n_2$ points respectively, whose underlying graphs are $G_1$ and $G_2$, the cross is the construction of $n_1+n_2$ points on the plane whose underlying graph has the two isolated components $G_1$ and $G_2$. We squeeze the initial sets of points in $G_1$ and $G_2$ into long narrow segments, a process called segmentarizing (see \cite{KY}). Note that segmentarizing is an affine transform, and does not change which pairs of points form halving lines. Then we intersect these segments in such a way that the halving lines of $G_1$ split the vertices of $G_2$ into two equal halves, and vice versa (see Figure~\ref{fig:cross}). \begin{figure} \caption{The Cross construction.} \label{fig:cross} \end{figure} With respect to geographs, the image of the cross construction depends on the precise manner in which $G_1$ and $G_2$ are segmentarized and juxtaposed. However, with respect to underlying graphs, the cross construction defines an associative and commutative binary operation. Our Polygon example in subsection~\ref{sec:polygon} can be viewed as the cross construction of a 2-path graph with itself many times. It is interesting to note that in the cross construction the halving lines of one component divide the points of the other component into the same halves; this need not be the case in general. Two connected components can interact in a way different from a cross, as seen in Figure~\ref{fig:noncross}. \begin{figure} \caption{Two connected components that are not formed through the cross construction.} \label{fig:noncross} \label{fig:NonCrossSum} \end{figure} \section{Decomposition of Connected Components}\label{sec:subtraction} We will now prove that graph composition has an inverse of sorts, namely that we can subtract disconnected components of an underlying graph. We will show that a connected component of the underlying graph is itself an underlying graph. But before doing so, we will introduce some definitions. Given a set of points $G$ and a directed line, we can orient $G$ and pick a direction to be North; thus we can define the East and the West half of the plane. We define the \textit{$G$-balance} of the line to be the difference between the number of West points and East points in $G$. Similarly, we can define the \textit{$G$-balance} of two points as the $G$-balance of the line through them. It often does not matter which direction is chosen as North, but it is important that when we move a variable line, the two sides of the line move accordingly. Let $A$ be a union of connected subcomponents in $G$. We will prove that the halving lines of $G$ that are formed by points in $A$ are also halving lines in $A$. \begin{theorem}\label{thm:preserved} If $A$ is a union of some connected subcomponents in $G$, then for every pair of points in $A$ forming a halving line in $G$, their $A$-balance is zero. \end{theorem} \begin{proof} Let $A$ contain $k$ of the halving lines of $G$. Label these lines $l_1,l_2,...,l_k$ by order of counter-clockwise orientation. 
For any two such lines $l_i,l_{i+1}$, there is a unique rotation of at most $180$ degrees about their point of intersection that maps $l_i$ to $l_{i+1}$, where the indices are taken mod $k$. Define $R_i$ to be the open region swept by $l_i$ as it moves into $l_{i+1}$ under this rotation. Note that each $R_i$ consists of two symmetric unbounded sectors. We claim that there are no vertices of $A$ lying in any of the $R_i$. Assume that $R_i$ contains a vertex $P$ of $A$. Draw the lines through $P$ parallel to $l_i$ and $l_{i+1}$, and call them $m_i$ and $m_{i+1}$ respectively, see Figure~\ref{fig:balance}. Note that neither $m_i$ nor $m_{i+1}$ are halving lines of $G$. Take a variable line $m$ through $P$ to initially coincide with $m_i$, and rotate it counter-clockwise until it coincides with $m_{i+1}$. The side of $m_i$ that contains $l_i$ has more points of $G$ than the other side. Similarly, the side of $m_{i+1}$ that contains $l_{i+1}$ has more points than the other side. As $m$ rotates its $G$-balance will change sign, so by continuity, $m$ coincides with a halving line during this rotation, a halving line which should occur between $l_i$ and $l_{i+1}$ in the counter-clockwise ordering of halving lines. As one point on the line, namely $P$, belongs to $A$, the other point of this halving line also belongs to $A$. Hence, $R_i$ can not contain a vertex of $A$. \begin{figure} \caption{No points of $A$ exist in the regions $R_i$.} \label{fig:balance} \end{figure} As $l_i$ rotates into $l_{i+1}$, it does not sweep across any points of $A$ along the way, and furthermore the points of $A$ on $l_i$ or $l_{i+1}$ do not affect the net $A$-balance of these lines. When the line $l_1$ completes its $180^\circ$ rotation, its $A$-balance does not change due to the above argument, but by definition it should be negated, so it must be zero. \end{proof} \begin{corollary}\label{thm:componentmix} If an underlying geograph $G$ is composed of disconnected components $A$ and $B$, then every halving line in $A$ divides points in $B$ in half, and vice versa. \end{corollary} Given a geograph and a fixed orientation such that no edges are vertical, we can denote the \textit{left-degree} and \textit{right-degree} of a given vertex as the number of edges emanating from the left and the right of that vertex respectively. Then the following result follows from the existence of structures called chains found on any oriented underlying geograph \cite{Dey98}, \cite{KY}. We will not discuss the definition of chains here. We will only mention that each chain is a subpath in the underlying graph that travels from left-to-right. We note that chains have the following properties: \begin{itemize} \item A vertex on the left half of the underlying graph is a left endpoint of a chain. \item A vertex on the right half of the underlying graph is a right endpoint of a chain. \item Every vertex is the endpoint of exactly one chain. \item Every halving line is part of exactly one chain. \end{itemize} Now we are ready to prove the following lemma. \begin{lemma}\label{thm:leftright} Let $G$ be an underlying geograph with a fixed orientation. If $v$ is a vertex appearing among the left half (right half) of $G$, then the right-degree (left-degree) of $v$ is one more than the left-degree (right-degree) of $v$. \end{lemma} \begin{proof} If $v$ appears among the left half of the vertices of $G$, then every chain passing through $v$ contributes one left-degree and one right-degree to $v$. 
There is one chain with $v$ as an endpoint, and it must emanate on the right since chains cannot end among the $\frac{n}{2}$ leftmost vertices of $G$. Therefore, the right-degree of $v$ exceeds the left-degree of $v$ by one. The proof when $v$ appears among the $\frac{n}{2}$ rightmost vertices of $G$ is analogous. \end{proof} The previous theorems and lemmas allow us to prove our main result of this section that the subtraction works: \begin{theorem}\label{thm:subtraction} Suppose that an underlying geograph $G$ contains a union of connected components $A$. Then if all vertices of $G$ that do not belong to $A$ are removed, the halving lines of $A$ in $G$ are precisely the halving lines of $A$ by itself. \end{theorem} \begin{proof} Fix an orientation of $G$, and consider $A$ by itself under the same orientation. Since Lemma~\ref{thm:preserved} asserts that deleting the extra vertices preserves the existing halving lines of $A$, it suffices to show that no new halving lines are added. Assume the contrary, and call $E_A$ the set of new edges in $A$ which were not in $G$. Let $v$ and $w$ be the leftmost and rightmost vertices of $A$ with edges in $E_A$, respectively. Clearly $v$ lies to the left of $w$ in $A$, and hence in $G$ as well. Note that $v$ has a greater right-degree in $A$ than in $G$, but the same left-degree in both geographs. Therefore, by Lemma ~\ref{thm:leftright}, $v$ was not among the leftmost half of the vertices of $G$, so it must have been among the rightmost half. Similarly, $w$ has a greater left-degree in $A$ than in $G$, but the same right-degree in both geographs. Thus, $w$ must have been among the leftmost half of the vertices in $G$. But this contradicts the fact that $v$ must lie to the left of $w$ in $G$, so our statement holds. \end{proof} \begin{corollary} Each connected component of an underlying geograph $G$ is itself an underlying geograph. \end{corollary} \section{Properties of Connected Components} Connected components of the underlying graph are themselves underlying graphs. Hence, the properties of the underlying graphs are shared by each component. For example, every connected component has at least three leaves. In addition, the properties of chains with respect to any geograph are the same as the properties of these chains with respect to the connected component they belong to. Consequently, we present a stronger version of Corollary~\ref{thm:componentmix}. \begin{lemma} In any orientation of a geograph $G$, if $C$ is a connected component of $G$, then the left half of the vertices of $C$ belong to the left half of the vertices of $G$, and the right half of the vertices of $C$ belong to the right half of the vertices of $G$. \end{lemma} \begin{proof} A vertex $v\in C$ is on the left half of $C$ or $G$ iff its right-degree is one more than its left-degree in $C$ or $G$. But the left-degree and right-degree of $v$ is the same whether we consider the entirety of $G$ or only its connected component. Hence, $v$ is on the left half of $C$ iff it is on the left half of $G$.\end{proof} \end{document}
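Lemma~\ref{thm:leftright} is also easy to test numerically. The sketch below (Python, brute force; general position and distinct $x$-coordinates are assumed, and the helper names are ad hoc) generates random points, orients them by increasing $x$-coordinate, and checks that every vertex in the left half has right-degree equal to its left-degree plus one, and symmetrically on the right.
\begin{verbatim}
import itertools, random

def halving_edges(points):
    n = len(points)
    edges = []
    for i, j in itertools.combinations(range(n), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        sides = [(x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
                 for k, (x, y) in enumerate(points) if k not in (i, j)]
        if sum(s > 0 for s in sides) == sum(s < 0 for s in sides):
            edges.append((i, j))
    return edges

random.seed(0)
n = 12
pts = [(random.random(), random.random()) for _ in range(n)]
rank = {v: r for r, v in enumerate(sorted(range(n), key=lambda u: pts[u][0]))}
edges = halving_edges(pts)
for v in range(n):
    other = [a + b - v for a, b in edges if v in (a, b)]   # opposite endpoints
    rdeg = sum(rank[u] > rank[v] for u in other)
    ldeg = sum(rank[u] < rank[v] for u in other)
    if rank[v] < n // 2:
        assert rdeg == ldeg + 1     # left half: one extra edge to the right
    else:
        assert ldeg == rdeg + 1     # right half: one extra edge to the left
print(len(edges), "halving edges; degree parity holds for all", n, "vertices")
\end{verbatim}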
arXiv
Force calculations for a wheeled robot I'm trying to work out the wheel force and torque required for a TWIP robot, so that I can size a motor. I've calculated a maximum traction force of $\small6.51\mathrm N$. My understanding is that a torque force at the wheels of up to and including $\small6.51\mathrm N$ can be applied, to drive the robot without the wheels slipping. This would give the robot a maximum acceleration of $\small3.92\mathrm{ms}^{-2}$ So, assuming I wanted to achieve the maximum force to drive the robot, and hence the maximum acceleration (assuming pendulum is balanced), I would need a wheel force of $\small6.51\mathrm N$. There is also resistance against the direction of motion/driving force of the robot, in the form of rolling resistance and aerodynamic drag. From what I've read rolling resistance (a type of static friction) is a resistive moment to wheel rotation, which needs to be overcome by the wheel torque force in order to produce acceleration. I've calculated a rolling resistance value of $\small0.16\mathrm N$. The robot is intended for indoor use but in case I take it outside I calculated an aerodynamic drag value of $\small0.14\mathrm N$, using an average wind flow velocity of $\small3\frac{\mathrm m}{\mathrm s}$ for my location. Taking these resistive forces into account I calculated a wheel force of $\small6.81\mathrm N$ and axle torque of $\small0.20\mathrm{Nm}$, for maximum acceleration of the robot. I've considered the maximum torque exerted by the pendulum i.e. when it's pitch angle/angle of inclination is at +- 90° from the stable vertical position at 0°. This torque needs to be matched (or exceeded) by the torque/moment exerted about the pivot by the wheel force, accelerating the robot horizontally. The wheel force and axle torque required to stabilise the pendulum I've calculated as $\small13.7340\mathrm N$ and $\small0.4120\mathrm{Nm}$ respectively, and an axle torque of $\small\approx0.2\mathrm{Nm}$ for one motor. I ignored rolling resistance and aerodynamic drag for these calculations. The motor will be a brushed DC motor, so I think $\small0.2\mathrm{Nm}$ should be 25% or less of the motor's stall torque. Can you please tell me if this is correct? 
Here are my calculations and FBD:$$$$ Maximum tractive force $\begin{align} F_{t(max)}&=μN\qquad\qquad\qquad\qquad\qquad\qquad Mass\,of\,robot: 1.66\mathrm{kg}\\ &=(0.4)*(16.28\mathrm N)\qquad\qquad\qquad\,\,\,\,Weight\,of\,robot:16.28\mathrm N\\ &=6.51\mathrm N\qquad\qquad\qquad\qquad\qquad\,\,\,\,\,\,Number\,of\,wheels:2\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\,\,\,\,Wheel\,radius: 0.03\mathrm m\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\,\,\,\,Mass\,of\,pendulum:1.4\mathrm{kg}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\,\,\,\,\,Distance\,from\,axle\,to\,pendulum\,COM:0.2575\mathrm m \end{align}$ $F_{t(max)}$: Maximum tractive force $μ$: Coefficient of friction $N$: Normal force at wheel$$$$ Maximum acceleration of robot $\begin{align} a_{r(max)}&=\frac{F_{t(max)}}{m}\\ &=\frac{6.51\mathrm N}{1.66\mathrm{kg}}\\ &=3.92\mathrm{ms}^{-2}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $a_{r(max)}$: Maximum acceleration of robot $m$: Mass of robot$$$$ Rolling resistance force $\begin{align} F_{rr} &= C_{rr}N\\ &=(0.01)*(16.28\mathrm N)\\ &=0.16\mathrm N\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $F_{rr}$: Rolling resistance force $C_{rr}$: Rolling resistance coefficient Drag resistance force $\begin{align} F_{d} &= C_{d}\left(\frac{ρ*v^2}{2}\right)A\\ &=1.28\left(\frac{1.2\frac{\mathrm {kg}}{\mathrm m^3}*(3\frac{\mathrm m}{\mathrm s})^2}{2}\right)0.06\mathrm m^2\\ &=0.14\frac{\mathrm{kg}\cdot\mathrm m}{\mathrm s^2}=0.14\mathrm N\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $F_{d}$: Drag resistance force $C_{d}$: Drag coefficient $ρ$: Mass density of fluid $v$: Flow velocity of fluid relative to object $A$: Reference area/projected frontal area of object$$$$ Wheel force/tractive force for maximum acceleration of robot $\begin{align} F_t-F_{rr}-F_d&=ma_{r(max)}\\ F_t-0.16\mathrm N -0.14\mathrm N &=(1.66\mathrm{kg})*(3.92\mathrm{ms}^{-2})\\ F_t&=(1.66\mathrm{kg})*(3.92\mathrm{ms}^{-2})+0.16\mathrm N +0.14\mathrm N\\ &=6.81\mathrm N \end{align}$ $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$ OR $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$ $\begin{align} F_w&=F_{t(max)}+F_{rr}+F_d\\ &=6.51\mathrm N +0.16\mathrm N +0.14\mathrm N\\ &=6.81\mathrm N\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $F_t$: Tractive force $m$: Mass of robot $F_w$: Wheel force $F_{t(max)}$: Maximum tractive force$$$$ Axle/wheel torque for maximum acceleration of robot $\begin{align} T_a&=F_w r\\ &=(6.81\mathrm N)*(0.03\mathrm m)\\ &=0.20\mathrm{Nm}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $T_a$: Axle/wheel torque $r$: Wheel radius (lever arm length)$$$$ Maximum torque exerted by pendulum $\begin{align} T_{p(max)}&=F_p r\\ &=(1.4\mathrm{kg}*9.81)*(0.2575\mathrm m)\\ &=3.5365\mathrm{kg}\cdot \mathrm m\\ &=3.5365\mathrm{Nm}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $T_{p(max)}$: Maximum torque exerted by pendulum $F_p$: Force applied to pendulum $r$: Distance from axle to 
pendulum COM (lever arm length at +/- 90° )$$$$ Wheel force to stabilise pendulum $\begin{align} T_{p(max)}&=F_w r\\ 3.5365\mathrm{Nm}&=F_w*(0.2575\mathrm m)\\ F_w&=\frac{3.5365\mathrm{Nm}}{0.2575\mathrm m}\\ &=13.7340\mathrm N\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $r$: Distance from axle to pendulum COM$$$$ Axle/wheel torque to stabilise pendulum $\begin{align} T_a&=F_w r\\ &=13.7340\mathrm N*(0.03\mathrm m)\\ &=0.4120\mathrm{Nm}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $\therefore$ $\begin{align} T_{a(one\,motor)}&=\frac{0.4120\mathrm{Nm}}{2}\\ &=0.2060\mathrm{Nm}\\ &\approx0.2\mathrm{Nm}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}$ $r$: Wheel radius (lever arm length) $T_{a(one\,motor)}$: Axle/wheel torque for one motor$$$$ FBD two-wheeled somerssomers $\begingroup$ Could you please provide diagrams and equations you used to get to those values? It's difficult/impossible to help you understand how to apply your terms if you don't tell us what they are or how you calculated them. Static friction is.. static, so it shouldn't have anything to do with moving your robot, unless you're referring to the static friction between the wheel and road/surface, where it provides traction. You want the wheel location to be static with respect to the road or you're slipping. It's not really clear what you're talking about though. Again, please post your methods. $\endgroup$ – Chuck♦ Jul 17 '17 at 14:31 $\begingroup$ @Chuck I was referring to traction. Though I thought tractive force and friction were used interchangeably. I understand that the wheel location must be static relative to the road or else there will be slippage. I'll post my calculations as soon as I can, thanks. $\endgroup$ – somers Jul 17 '17 at 18:02 $\begingroup$ There's a bunch of similar sounding terms that are related but distinctly different. Rolling resistance is, as you said, the force it takes to roll one kind of material over another, and has to do with the energy dissipated by deforming the wheel and surface at the point of contact. Static friction is required to get the wheel to rotate, so as long as the rolling resistance is less than the static friction between the wheel and surface, the wheel rotates instead of sliding, but that's it - the static friction doesn't play any part in the math except to guarantee the wheel rolls. $\endgroup$ – Chuck♦ Jul 17 '17 at 18:22 $\begingroup$ That is, traction is generally an all-or-nothing event (unless you get into stick-slip). If your analysis says you have traction (static friction is greater than rolling resistance), you go to the rolling resistance analysis and completely disregard the static friction. It's important to realize that every joint that experiences relative motion between the members also experiences friction, like motor bearings and gear trains, so it's important to characterize that in addition to your wheel's static friction and rolling resistance terms. $\endgroup$ – Chuck♦ Jul 17 '17 at 18:26 $\begingroup$ Note: If you are asking this because you are trying to size the motor, the diagram you drew seems to indicate the motor will be experiencing both static torque and possible dynamic torque due to the weight and Center of Gravity of the robot and also the lifting arm. You will also need to consider that. 
$\endgroup$ – markshancock Jul 19 '17 at 20:47 I've calculated a max. traction force of 6.51N. Does this mean a torque force at the wheels of up to and including 6.51N can be applied, to drive the robot without the wheels slipping? $max(F_{t}) = μN$ Yes, the traction force equation means that the wheels can push on that surface with up to and including that force (and the surface pushes back with an equal and opposite force, accelerating the robot) without the wheels slipping. Also, I've calculated a rolling resistance value of 0.16N. Am I correct in thinking this value needs to be added to the wheel force, so as to provide enough force to overcome it? $F_{rr} = C_{rr}N$ Yes, Wikipedia defines "specific rolling resistance" $C_{rr}$ as "the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied." Newton's second law $\vec{F} = m\vec{A}$ applies to this free body diagram -- the acceleration of your vehicle, times its mass, equals the net total force. The net total force in the horizontal direction on your vehicle appears to be the traction force $F_t$ of the ground on the contact patch of the wheel (this actual traction force is almost always far less than the maximum traction force $max(F_{t})$), the (effective) rolling resistance $F_{rr}$, and the force due to aerodynamic drag -- forces that in some cases may align in the same direction and add up, but in more common cases are opposing and partially cancel out. David Cary $\begingroup$ So the driving force in the horizontal direction is $F_{Net}=F_{t}−F_{rr}−F_{d}$ and $F_{Net}=ma$? $\endgroup$ – somers Jul 21 '17 at 20:09 $\begingroup$ Yes, $F_t - F_{rr} - F_d = F_{Net} = ma$ is correct for the most common cases -- accelerating, holding speed, and coasting to a halt. However, TWIP robots often rapidly decelerate (regenerative braking or normal braking), and during those times $F_t + F_{rr} + F_d = F_{Net} = ma$. $\endgroup$ – David Cary Jul 24 '17 at 14:30 $\begingroup$ Dai, Gao, Jiang, Guo, Liu. "A two-wheeled inverted pendulum robot with friction compensation". doi.org/10.1016/j.mechatronics.2015.06.011 -- is that article helpful? $\endgroup$ – David Cary Jul 24 '17 at 14:34 $\begingroup$ Thanks David, I realise I'll need a Newtonian/Lagrangian mathematical model - just trying to work out the wheel force/torque required to size the motor. I've updated my post. $\endgroup$ – somers Jul 28 '17 at 11:03
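For anyone re-tracing the arithmetic in this thread, here is a minimal Python sketch that recomputes the quoted quantities from the parameters stated in the question (1.66 kg robot, μ = 0.4, C_rr = 0.01, 0.03 m wheel radius, 1.4 kg pendulum at 0.2575 m, and drag inputs C_d = 1.28, ρ = 1.2 kg/m³, v = 3 m/s, A = 0.06 m²). It is only a cross-check of the numbers, not a motor-sizing recipe; with v squared the drag term comes out near 0.41 N, while the 0.14 N quoted above matches the same formula with v unsquared.

```
g = 9.81                                   # m/s^2
m_robot, mu, c_rr, r_wheel = 1.66, 0.4, 0.01, 0.03
m_pend, l_pend = 1.4, 0.2575               # pendulum mass (kg) and arm (m)
c_d, rho, v, area = 1.28, 1.2, 3.0, 0.06   # drag inputs from the question

N = m_robot * g                            # normal force, ~16.28 N
F_t_max = mu * N                           # max traction, ~6.51 N
a_max = F_t_max / m_robot                  # ~3.92 m/s^2
F_rr = c_rr * N                            # rolling resistance, ~0.16 N
F_d = 0.5 * c_d * rho * v**2 * area        # ~0.41 N (0.14 N if v is not squared)
F_wheel = F_t_max + F_rr + F_d             # wheel force for maximum acceleration
T_axle = F_wheel * r_wheel                 # total axle torque, N*m

T_pend_max = m_pend * g * l_pend           # worst-case pendulum torque, ~3.54 N*m
F_balance = T_pend_max / l_pend            # = m_pend * g, ~13.73 N (as in the post)
T_per_motor = F_balance * r_wheel / 2      # ~0.21 N*m per motor

print(N, F_t_max, a_max, F_rr, F_d, F_wheel, T_axle, T_pend_max, F_balance, T_per_motor)
```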
CommonCrawl
\begin{document} \title{A Numerical Algorithm for \\ $L_2$ Semi-Discrete Optimal Transport in 3D} \author{Bruno L\'evy}\thanks{Inria Nancy Grand-Est and LORIA, rue du Jardin Botanique, 54500 Vandoeuvre, France} \date{04/01/2014} \begin{abstract} This paper introduces a numerical algorithm to compute the $L_2$ optimal transport map between two measures $\mu$ and $\nu$, where $\mu$ derives from a density $\rho$ defined as a piecewise linear function (supported by a tetrahedral mesh), and where $\nu$ is a sum of Dirac masses. I first give an elementary presentation of some known results on optimal transport and then observe a relation with another problem (optimal sampling). This relation gives simple arguments to study the objective functions that characterize both problems. I then propose a practical algorithm to compute the optimal transport map between a piecewise linear density and a sum of Dirac masses in 3D. In this semi-discrete setting, Aurenhammer et.al [\emph{8th Symposium on Computational Geometry conf. proc.}, ACM (1992)] showed that the optimal transport map is determined by the weights of a power diagram. The optimal weights are computed by minimizing a convex objective function with a quasi-Newton method. To evaluate the value and gradient of this objective function, I propose an efficient and robust algorithm, that computes at each iteration the intersection between a power diagram and the tetrahedral mesh that defines the measure $\mu$. The numerical algorithm is experimented and evaluated on several datasets, with up to hundred thousands tetrahedra and one million Dirac masses. \end{abstract} \subjclass{49M15, 35J96, 65D18} \keywords{optimal transport, power diagrams, quantization noise power, Lloyd relaxation} \maketitle \section*{Introduction} Optimal Transportation, initially studied by Monge \cite{Monge1784}, is a very general problem formulation that can be used as a model for a wide range of applications domains. In particular, it is a natural formulation for several fundamental questions in Computer Graphics \cite{DBLP:journals/focm/Memoli11,DBLP:journals/cgf/Merigot11,DBLP:journals/tog/BonneelPPH11} This article proposes a practical algorithm to compute the optimal transport map between two measures $\mu$ and $\nu$, where $\mu$ derives from a density $\rho$ defined as a piecewise linear function (supported by a tetrahedral mesh), and where $\nu$ is a sum of Dirac masses. Possible applications comprise measuring the (approximated) Wasserstein distance between two shapes and deforming a 3D shape onto another one (3D morphing). \\ I first review some known results about optimal transport in Section \ref{sect:OT}, its relation with power diagrams \cite{DBLP:conf/compgeom/AurenhammerHA92,DBLP:journals/cgf/Merigot11} in Section \ref{sect:semidiscreteOT} and observe some connections with another problem (optimal sampling \cite{Lloyd82leastsquares,Du:1999:CVT:340312.340319}). The structure of the objective function minimized by both problems is very similar, this allows reusing known results for both functions. This gives a simple argument to easily compute the gradient of the quantization noise power minimized by optimal sampling, and this gives the second order continuity of the objective function minimized in semi-discrete optimal transport (see Section \ref{sect:CVTandOT}). \\ I then propose a practical algorithm to compute the optimal transport map between a piecewise linear density and a sum of Dirac masses in 3D (Section \ref{sect:numerics}). 
This means determining the weights of a power diagram, obtained as the unique minimizer of a convex function \cite{DBLP:conf/compgeom/AurenhammerHA92}. Following the approach in \cite{DBLP:journals/cgf/Merigot11}, to optimize this function, I use a quasi-Newton solver combined with a multilevel algorithm. Adapting the approach to the 3D setting requires an efficient method to compute the intersection between a power diagram and the tetrahedral mesh that defines the density $\mu$. \\ To compute these intersections, the algorithm presented here simultaneously traverses the tetrahedral mesh and the power diagram (Section \ref{sect:RVD}). The required geometric predicates are implemented in both standard floating point precision and arbitrary precision, using arithmetic filtering \cite{meyer:inria-00344297}, expansion arithmetics \cite{DBLP:conf/compgeom/Shewchuk96} and symbolic perturbation \cite{Edelsbrunner90simulationof}. Both predicates and power diagram construction algorithm are available in PCK (Predicate Construction Kit) part of my publically available ``geogram'' programming library\footnote{\url{http://gforge.inria.fr/projects/geogram/}}. \\ The algorithm was experimented and evaluated on several datasets (Section \ref{sect:results}). \section{Optimal Transport: an Elementary Introduction} \label{sect:OT} This section, inspired by \cite{OTON}, \cite{OTintro}, \cite{MAEintro} and \cite{OTuserguide}, presents an introduction to optimal transport. It stays at an elementary level that corresponds to what I have understood and that keeps computer implementation in mind. \subsection{The initial formulation by Monge} The problem of Optimal Transport was first introduced and studied by Monge \cite{Monge1784}. With modern notations, it can be stated as follows~: \begin{equation*} (M) \ \begin{array}{l} \mbox{given $\Omega$ a Borel set and two measures $\mu$ and $\nu$ on $\Omega$ such that $\mu(\Omega) = \nu(\Omega)$,} \\ \mbox{find } T: \Omega \rightarrow \Omega \mbox{ such that} \left\{ \begin{array}{cl} (C1) & \nu = T\sharp\mu \\ (C2) & \int_\Omega c(x,T(x)) d\mu \mbox{ is minimal} \end{array} \right. \end{array} \end{equation*} where $c$ denotes a convex distance function. In the first constraint $(C1)$, $T\sharp\mu$ denotes the pushforward of $\mu$ by $T$, defined by $T\sharp\mu(X) = \mu(T^{-1}(X))$ for any Borel (i.e. measurable) subset $X$ of $\Omega$. In other words, the constraint $(C1)$ means that $T$ should preserve the mass of any measurable subset of $\Omega$. The functional in $(C2)$ has a non-symmetric structure, that makes it difficult to study the existence for problem $(M)$. \begin{figure} \caption{A classical illustration of the existence problem with Monge's formulation: there is no optimal transport map from a segment $L_1$ to two parallel segments $L_2$ and $L_3$ (it is always possible to find a better one by replacing $h$ with $h/2$)} \label{fig:monge_pb} \end{figure} The non-symmetry comes from the constraint that $T$ should be a map. It makes it possible to merge mass but not to split mass. This difficulty is illustrated in Figure \ref{fig:monge_pb}. Suppose you want to find the optimal transport from one vertical segment $L_1$ to two parallel segments $L_2$ and $L_3$. It is possible to split $L_1$ into segments of length $h$ mapped to $L_2$ and $L_3$ in alternance (Figure \ref{fig:monge_pb} left). For any length $h$, it is always possible to find a better map, i.e. 
with a lower value of the functional in $(C2)$, by splitting $L_1$ into smaller segments (Figure \ref{fig:monge_pb} right), therefore problem (M) does not have a solution within the set of admissible maps. This problem occurs whenever the source measure $\mu$ has mass concentrated on sets with zero geometric measure (like $L_1$). \subsection{The relaxation of Kantorovich for Monge's problem} To overcome this difficulty, Kantorovich proposed a relaxation of problem (M) where mass can be both splitted and merged. The idea consists of manipulating measures on $\Omega \times \Omega$ as follows~: \begin{equation*} (K) \quad \quad \begin{array}{l} \mbox{min} \left\{ \int\limits_{\Omega \times \Omega} c(x,y) d \gamma \ | \ \gamma \in \Pi(\mu,\nu) \right\} \\[5mm] \mbox{where } \Pi(\mu,\nu) = \{ \gamma \in {\mathcal{P}}(\Omega \times \Omega) \ | \ (P_1)\sharp\gamma = \mu \ ; \ (P_2)\sharp\gamma = \nu \} \end{array} \end{equation*} where $(P_1)$ and $(P_2)$ denote the two projections $(x,y) \in \Omega \times \Omega \mapsto x$ and $(x,y) \in \Omega \times \Omega \mapsto y$ respectively. The pushforwards of the two projections $(P_1)\sharp\gamma$ and $(P_2)\sharp\gamma$ are called the marginals of $\gamma$. The probability measures $\gamma$ in $\Pi(\mu,\nu)$, i.e. that have $\mu$ and $\nu$ as marginals, are called \emph{transport plans}. Among the transport plans, those that are in the form $(Id \times T)\sharp \mu$ correspond to a transport map $T$~: \begin{observation} If $(Id \times T)\sharp\mu \in \pi(\mu,\nu)$, then $T$ pushes $\mu$ to $\nu$. \end{observation} \begin{proof} $(Id \times T)\sharp\mu$ belongs to $\pi(\mu,\nu)$, therefore $(P_2)\sharp(Id \times T)\sharp \mu = \nu$, \\ or $\left((P_2)\circ(Id \times T)\right)\sharp \mu = \nu$, thus $T\sharp\mu = \nu$ \end{proof} With this observation, for transport plans of the form $\gamma = (Id \times T)\sharp \mu$, (K) becomes $$ \mbox{min} \left\{ \int\limits_{\Omega \times \Omega} c(x,y)d\left( (Id \times T)\sharp \mu \right) \right\} \quad = \quad \mbox{min} \left\{ \int\limits_\Omega c(x,T(x)) d \mu \right) $$ To help intuition, four examples of transport plans in 1D are depicted in Figure \ref{fig:OTplans}. The measure $\gamma$ on $\Omega \times \Omega$ is non-zero on subsets that contain points $(x,y)$ such that mass is transported from $x$ to $y$. The transport plans in the first two examples are in the form $(Id \times T)\sharp \mu$, i.e. they are derived from a transport map\footnote{For the second one (B), the transport map is not defined in the center of the segment, but it is not a problem since there is no mass concentrated there.}. The third and fourth ones do not admit a transport map, because they split a Dirac mass. The optimal transport plan for the case shown in Figure \ref{fig:monge_pb} is of the same nature. It is not in the form $(Id \times T)\sharp \mu$ because it splits the mass concentrated in $L_1$ into $L_2$ and $L_3$. \begin{figure} \caption{Four examples of transport plans in 1D. A: a segment is translated. B: a segment is splitted into two segments. C: a Dirac mass is splitted into two Dirac masses. D: a Dirac mass is splitted into two segments. 
The first two ones (A and B) are of the form $(Id \times T)\sharp \mu$ where $T$ is a transport map, whereas the third and fourth ones (C and D) are not, because they both split a Dirac mass.}
\label{fig:OTplans}
\end{figure}
At this point, a standard approach to tackle the existence problem is to find some regularity in both the functional and the space of admissible transport plans, i.e. proving that the functional is smooth enough and finding a compact set of admissible transport plans. Since the set of admissible transport plans contains at least the product measure $\mu \otimes \nu$, it is non-empty, and existence can be proved using a topological argument that exploits the smoothness of the functional and the compactness of the set. Once the existence of a transport plan is proved, an interesting question is whether there exists a transport map that corresponds to this transport plan. Unfortunately, problem (K) does not directly exhibit the properties required by this path of reasoning. However, one can observe that (K) is a linearly constrained optimization problem. This calls for studying the dual formulation, as done by Kantorovich. This dual formulation has a nice structure, which allows answering the questions above (existence of a transport plan, and whether there is a transport map that corresponds to this transport plan when it exists).
\subsection{The dual formulation of Kantorovich}
The dual formulation can be stated as follows\footnote{ Showing the equivalence with problem (K) requires some care; the reader is referred to \cite{OTON}, chapter 5. Note that \cite{OTON} uses a slightly different definition (with $\phi - \psi$ instead of $\phi + \psi$), which makes the detailed argument simpler but breaks the symmetry between $\phi$ and $\psi$. Since I stay at an elementary level, I prefer to keep the symmetry. }~:
\begin{equation*}
(D): \quad \quad \mbox{max} \left\{ \int\limits_\Omega \phi d\mu + \int\limits_\Omega \psi d\nu \ | \ \begin{array}{ll} (C1) & \phi \in L^1(\mu); \psi \in L^1(\nu); \\ (C2) & \phi(x) + \psi(y) \le c(x,y) \quad \forall (x,y) \in \Omega \times \Omega \end{array} \right\}
\end{equation*}
Following the classical image that gives some intuition about this formula, imagine now that you are hiring a transport company to do the job for you. The company has a special way of calculating the price: the function $\phi(x)$ corresponds to what they charge you for loading at $x$, and $\psi(y)$ to what they charge for unloading at $y$. The company tries to maximize its profit (hence the max instead of a min), but it cannot charge you more than what it would cost you to do the job yourself $(C2)$. \\
Existence for $(D)$ is difficult to study, since the class of admissible functions that satisfy $(C1)$ and $(C2)$ is non-compact. However, more structure in the problem can be revealed by referring to the notion of \emph{c-transform}, which exhibits a class of admissible functions with more regularity~:
\begin{definition}
Given a function ${\mathcal{X}}: \Omega \rightarrow \bar{\mathbb{R}}$, the c-transform ${\mathcal{X}}^c$ is defined by~:
$$ {\mathcal{X}}^c(y) := \inf\limits_{x \in \Omega}\ c(x,y) - {\mathcal{X}}(x) $$
\begin{itemize}
\item If for a function $\phi$ there exists a function ${\mathcal{X}}$ such that $\phi = {\mathcal{X}}^c$, then $\phi$ is said to be \emph{c-concave};
\item ${\bf \Psi}_c(\Omega)$ denotes the set of c-concave functions on $\Omega$.
\end{itemize}
\end{definition}
It is now possible to make two observations, which allow us to restrict the possible choices for $\phi$ and $\psi$ to the class of c-concave functions~:
\begin{observation}
If $(\phi,\psi)$ is admissible for $(D)$, then $(\phi,\phi^c)$ is also admissible.
\end{observation}
\begin{proof}
$$ \begin{array}{l} \left\{ \begin{array}{l} \forall(x,y) \in \Omega \times \Omega, \phi(x) + \psi(y) \le c(x,y) \\ \phi^c(y) = \inf\limits_{x \in \Omega} c(x,y) - \phi(x) \end{array} \right. \\ \begin{array}{lcl} \phi(x) + \phi^c(y) & = & \phi(x) + \inf_{x^\prime \in \Omega}\left( c(x^\prime,y) - \phi(x^\prime) \right) \\ & \le & \phi(x) + c(x,y) - \phi(x) \\ & = & c(x,y) \end{array} \end{array} $$
\end{proof}
\begin{observation}
If $(\phi,\psi)$ is admissible for $(D)$, then a \emph{better candidate} can be found by replacing $\psi$ with $\phi^c$.
\end{observation}
\begin{proof}
$$ \left\{ \begin{array}{lcl} \phi^c(y) & = & \inf\limits_{x \in \Omega} c(x,y) - \phi(x) \\ \forall x \in \Omega, \psi(y) & \le & c(x,y) - \phi(x) \end{array} \right. \quad \quad \Rightarrow \quad \psi(y) \le \phi^c(y) $$
\end{proof}
Therefore, we have $\min(K) = \max\limits_{\psi \in {\bf \Psi}_c(\Omega)} \int\limits_\Omega \psi \, d\mu + \int\limits_\Omega \psi^c \, d\nu$ \\
I will not detail the existence proof here; the reader is referred to \cite{OTON}, Chapter 4. The idea is that we are now in a much better situation, since the class of admissible functions ${\bf \Psi}_c(\Omega)$ is compact (provided that we fix the value of $\psi$ at one point of $\Omega$ to remove the translational invariance degree of freedom of the problem). \\
Since we have computer implementation in mind, our goal is to find a numerical algorithm to compute an optimal transport map $T$. At first sight, though the values of the functionals match at a solution of $(K)$ and $(D)$, it seems difficult to deduce $T$ from a solution to the dual problem $(D)$. However, there is a nice relation between the dual problem $(D)$ and the initial problem of Monge $(M)$, detailed in \cite{OTON}, chapters 9 and 10. The main result characterizes the pairs of points $(x,y)$ that are connected by the transport plan~:
\begin{theorem}
$$ \forall (x,y) \in \partial_c \psi, \nabla \psi(x) - \nabla_x c(x,y) = 0 $$
where $\partial_c \psi = \{ (x,y) \ | \ \psi(x) + \psi^c(y) = c(x,y) \}$ denotes the so-called \emph{c-subdifferential} of $\psi$.
\label{thm:MongeSol}
\end{theorem}
\begin{proof}
See \cite{OTON} chapter 10. \\
I summarize the heuristic argument given at the beginning of the same chapter, which gives some intuition~:\\
Consider a point $(x,y)$ on the c-subdifferential $\partial_c \psi$; writing $\phi = \psi^c$, it satisfies $\phi(y) + \psi(x) = c(x,y)\ (1)$. \\
By definition, $\phi(y) = \psi^c(y) = \inf\limits_{\tilde{x}} c(\tilde{x},y) - \psi(\tilde{x})$, thus $\forall \tilde{x}, \phi(y) \le c(\tilde{x},y) - \psi(\tilde{x})$, or $\phi(y) + \psi(\tilde{x}) \le c(\tilde{x},y)\ (2)$. \\
By substituting (1) into (2), one gets $\psi(\tilde{x}) - \psi(x) \le c(\tilde{x},y) - c(x,y)$ for all $\tilde{x}$.\\
Imagine now that $\tilde{x} = x + \epsilon w$ follows a trajectory parameterized by $\epsilon$, starting at $x$ and moving along an arbitrary direction $w$. One can compute the derivative along $w$ by taking the limit when $\epsilon$ tends to zero in the relation $\frac{\psi(\tilde{x}) - \psi(x)}{\epsilon} \le \frac{c(\tilde{x},y) - c(x,y)}{\epsilon}$. Thus we have $\nabla \psi(x) \cdot w \le \nabla_x c(x,y) \cdot w$.
The same derivation can be done with $-w$ instead of $w$, and one gets: $\forall w, \nabla \psi(x) \cdot w = \nabla_x c(x,y) \cdot w$, thus $\forall (x,y) \in \partial_c \psi, \nabla \psi(x) - \nabla_x c(x,y) = 0$. \\
\emph{Note: the derivations above are only formal ones and do not constitute a proof. The proof requires a much more careful analysis, using generalized definitions of differentiability and tools from convex analysis.}
\end{proof}
In the $L_2$ case, i.e. $c(x,y) = 1/2 \| x - y \|^2$, we have $\forall (x,y) \in \partial_c \psi, \nabla \psi(x) + y - x = 0$, thus, whenever the optimal transport map $T$ exists, we have $T(x) = x - \nabla \psi(x) = \nabla (\|x\|^2/2 - \psi(x))$. Not only does this give an expression of $T$, but it also characterizes $T$ as the gradient of a \emph{convex} function, which is an interesting property since it implies that two ``transported particles'' $x_1 \mapsto T(x_1)$ and $x_2 \mapsto T(x_2)$ cannot collide, as shown below~:
\begin{observation}
If $c(x,y)$ = $1/2 \| x - y \|^2$ and $\psi \in {\bf \Psi}_c(\Omega)$, then $\bar{\psi}: x \mapsto \bar{\psi}(x) = \| x \|^2/2 - \psi(x)$ is convex (it is an equivalence if $\Omega = \mathbb{R}^d$).
\label{obs:PhiConvexity}
\end{observation}
\begin{proof}
$$ \begin{array}{lcl} \psi(x) & = & \inf\limits_y \frac{\|x -y\|^2}{2} - \phi(y) \\ & = & \inf\limits_y \frac{\|x\|^2}{2} - x \cdot y + \frac{\|y\|^2}{2} - \phi(y) \\ -\bar{\psi}(x) & = & \psi(x) - \frac{\|x\|^2}{2} = \inf\limits_y -x \cdot y + \left( \frac{\|y\|^2}{2} - \phi(y) \right) \\ \bar{\psi}(x) & = & \sup\limits_y x \cdot y - \left( \frac{\|y\|^2}{2} - \phi(y) \right) \end{array} $$
The function $ x \mapsto x \cdot y - \left( \frac{\|y\|^2}{2} - \phi(y) \right)$ is affine in $x$, therefore the graph of $\bar{\psi}$ is the upper envelope of a family of hyperplanes, thus $\bar{\psi}$ is convex.
\end{proof}
\begin{observation}
Consider the trajectories of two particles parameterized by $t \in [0,1]$, $t \mapsto (1-t)x_1 + t T(x_1)$ and $t \mapsto (1-t)x_2 + t T(x_2)$. If $x_1 \neq x_2$, then for $0 < t < 1$ the particles cannot collide.
\end{observation}
\begin{proof}
By contradiction, suppose that you have $t \in (0,1)$ and $x_1 \neq x_2$ such that:
$$ \begin{array}{lcl} (1-t)x_1 + tT(x_1) & = & (1-t)x_2 + tT(x_2) \\ (1-t)x_1 + t \nabla \bar{\psi}(x_1) & = & (1-t)x_2 + t \nabla \bar{\psi}(x_2) \\ (1-t)(x_1 - x_2) + t (\nabla \bar{\psi}(x_1) - \nabla \bar{\psi}(x_2)) & = & 0 \\ \forall v, (1-t) v \cdot(x_1 - x_2) + t v \cdot (\nabla \bar{\psi}(x_1) - \nabla \bar{\psi}(x_2)) & = & 0 \\ \mbox{ take } v = (x_1 - x_2) \\ (1-t)\| x_1 - x_2 \|^2 + t (x_1 - x_2)\cdot(\nabla \bar{\psi}(x_1) - \nabla \bar{\psi}(x_2)) & = & 0 \end{array} $$
which is a contradiction, since this quantity is the sum of a strictly positive term and a non-negative one (recalling that the convexity of $\bar{\psi}$ implies $\forall x_1 \neq x_2, (x_1-x_2) \cdot (\nabla \bar{\psi}(x_1) - \nabla \bar{\psi}(x_2)) \ge 0$).
\end{proof}
At this point, we know that when the optimal transport map exists, it can be deduced from the function $\psi$ using the relation $T(x) = \nabla \bar{\psi}(x) = x - \nabla \psi(x)$. We now consider some ways of finding the function $\psi$. Suppose that $\mu$ and $\nu$ both have a density, $u$ and $v$ respectively (i.e.
$\forall B, \mu(B) = \int_B u(x)dx$ and $\nu(B) = \int_B v(x)dx$). The classical change of variable formula then gives:
\begin{equation}
\forall B, \quad \mu(B) \ = \ \int_B u(x)\, dx \ = \ \nu(T(B)) \ = \ \int_B v(T(x))\, \left|\det JT(x)\right|\, dx
\label{eqn:changevar}
\end{equation}
where $JT$ denotes the Jacobian matrix of $T$. One can then (formally) consider (\ref{eqn:changevar}) in a pointwise manner~:
\begin{equation}
\forall x \in \Omega, \quad u(x) = v(T(x))\, \left|\det JT(x)\right| \ ;
\label{eqn:pointwise}
\end{equation}
injecting $T=\nabla\bar{\psi}$ and $JT = H \bar{\psi}$ in (\ref{eqn:pointwise}) gives:
\begin{equation}
\forall x \in \Omega, \quad u(x) = v (\nabla \bar{\psi}(x))\, \det H \bar{\psi}(x)
\label{eqn:MAE}
\end{equation}
where $H\bar{\psi}$ denotes the Hessian of $\bar{\psi}$ (the absolute value of the determinant can be dropped since $\bar{\psi}$ is convex). Equation \ref{eqn:MAE} is known as the \emph{Monge-Amp\`ere} equation. It is a highly non-linear equation, and its solution, when it exists, often has singularities. In this respect it is similar to the eikonal equation, which characterizes the distance function and has a singularity on the medial axis. Note that the derivations above are only formal; studying the solutions of the Monge-Amp\`ere equation requires more elaborate tools, and several types of weak solutions can be defined (viscosity solutions, solutions in the sense of Brenier, \ldots). \\
Still keeping computer implementation in mind, one may consider three different problem settings~:
\begin{itemize}
\item {\bf continuous:} if $\mu$ and $\nu$ have a density $u$ and $v$, it is possible to numerically solve the Monge-Amp\`ere equation, as done in \cite{ACFM:BB:2000} and \cite{papadakis:hal-00816211};
\item {\bf discrete:} if both $\mu$ and $\nu$ are discrete (sums of Dirac masses), then finding the optimal transport plan becomes an assignment problem, which can be solved with variants of linear programming techniques (see the survey in \cite{AP:BDM:2009});
\item {\bf semi-discrete:} if $\mu$ has a density and $\nu$ is discrete (sum of Dirac masses), then an optimal transport map exists. It has interesting connections with notions of computational geometry. The remainder of this paper considers this problem setting.
\end{itemize}
\subsection{The semi-discrete case}
\label{sect:semidiscreteOT}
I now consider that $\mu$ has a density $u$, and that $\nu = \sum_{i=1}^k \nu_i \delta_{p_i}$ is a sum of $k$ Dirac masses, which satisfies $\nu(\Omega) = \sum_{i=1}^k \nu_i = \mu(\Omega)$. Whenever $T$ exists, the pre-images of the Dirac masses $T^{-1}(p_i)$ partition $\Omega$ almost everywhere\footnote{except on a subset of measure 0, namely the common boundaries of the parts.}. This subsection reviews the main results in \cite{DBLP:conf/compgeom/AurenhammerHA92}, showing that this partition corresponds to a geometrical structure called a power diagram. Interestingly, from the point of view of computer implementation, the proof directly leads to a numerical algorithm, as demonstrated in 2D in \cite{DBLP:journals/cgf/Merigot11} and in 3D further in this paper.
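As a simple illustration of this setting (a toy example, in 1D and with the quadratic cost $c(x,y) = 1/2|x-y|^2$), take $\Omega = [0,1]$, $\mu$ the Lebesgue measure on $\Omega$, and $\nu = \frac{1}{2}\delta_{1/4} + \frac{1}{2}\delta_{3/4}$. The map $T$ that sends $[0,1/2)$ to $p_1 = 1/4$ and $[1/2,1]$ to $p_2 = 3/4$ pushes $\mu$ onto $\nu$ (each pre-image has mass $1/2$), and it is optimal: for any other partition of $\Omega$ into two parts of mass $1/2$, the exchange argument used in the proof below shows that swapping misassigned pieces strictly decreases the transport cost. The two pre-images are exactly the cells of the power diagram of $\{p_1, p_2\}$ with equal weights (here, a Voronoi diagram), which is precisely the structure introduced next.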
\begin{definition} Given a set $P$ of $k$ points $p_i$ in $\mathR^d$ and a set $W$ of $k$ real numbers $w_i$, the \emph{Voronoi diagram} $\Vor(P)$ and the \emph{power diagram} $\Pow_W(P)$ are defined as follows~: \begin{itemize} \item The Voronoi diagram $\Vor(P)$ is the partition of $\mathR^d$ into the subsets $\Vor(p_i)$ defined by~: \\ $\Vor(p_i) := \{ x | \|x-p_i\|^2 < \|x - p_j\|^2 \quad \forall j \neq i\}$; \item the power diagram $\Pow_W(P)$ is the partition of $\mathR^d$ into the subsets $\Pow_W(p_i)$ defined by~: \\ $\Pow_W(p_i) := \{ x | \|x-p_i\|^2 - w_i < \|x - p_j\|^2 - w_j \quad \forall j \neq i\}$; \item the map $T_W$ defined by $\forall i, \forall p \in \Pow_W(p_i), T_W(p) = p_i$ is called the \emph{assignment defined by the power diagram} $\Pow_W(P)$. \end{itemize} \end{definition} It can be shown that the assignment defined by a power diagram is an optimal transport map (the main argument of the proof is sketched further). Then one needs to determine - when it is possible\footnote{We will see further that it is always possible in this setting.} - the parameters of this power diagram (i.e. the weights) that realize the optimal transport towards a \emph{given} discrete target measure $\nu$. Intuitively, a power diagram may be thought-of as a generalization of the Voronoi diagram, with additional ``tuning buttons'' represented by the weights $w_i$. Changing the weight $w_i$ associated with a point $p_i$ influences the area and the measure $\mu(\Pow_W(p_i))$ of its power cell (the higher the weight, the larger the power cell). Though the relation between the weights and the measures of the power cells is non-trivial\footnote{Misleadingly, the term 'weight' seems similar to 'mass', but both notions are not directly related.}, it is well behaved, and as shown below, one can prove the existence and uniqueness of a set of weights such that the measure of each power cell $\mu(\Pow_W(p_i))$ matches a prescribed value $\nu_i$. In this case, the prescribed measures $\nu_i$ are referred to as \emph{capacity constraints}, and the power diagram is said to be \emph{adapted} to the capacity constraints. At this point, since we already know that the assignment defined by a power diagram is an optimal transport map, then we are done (i.e. the assignment defined by the power diagram is the optimal transport map that we are looking for). I shall now give more details about the proofs of the two parts of the reasoning. \begin{figure} \caption{ Illustration of the (by contradiction) argument that the common boundary between the pre-images of $p_i$ and $p_j$ is contained by a straight line orthogonal to $[p_i, p_j]$. } \label{fig:StraightBoundary} \caption{The weight vector that defines an optimal transport map can be found as the maximizer of a convex function, defined as the lower envelope of a family of linear functions.} \label{fig:LowerEnveloppe} \end{figure} \begin{theorem} Given a set of points $P$ and a set of weights $W$, the assignment $T_{P,W}$ defined by the power diagram is an optimal transport map. \end{theorem} \begin{proof} I give here the main idea of the proof (see \cite{DBLP:conf/compgeom/AurenhammerHA92} for the complete one). The main argument is that if $T$ is an optimal transport map, then the common boundary of the pre-images $T^{-1}(p_i)$ and $T^{-1}(p_j)$ of two Dirac masses is a straight line orthogonal to the segment $[p_i, p_j]$. The argument, obtained by contradiction, is illustrated in Figure \ref{fig:StraightBoundary}. 
Suppose that the common boundary between the pre-images $T^{-1}(p_i)$ and $T^{-1}(p_j)$ is not a straight line (thick curve in the figure), then one can find a straight line orthogonal to the segment $[p_i, p_j]$ that has an intersection with the common boundary (dashed line in the figure), and two points $q_i$ and $q_j$ located as shown in the figure. Then, it is clear (by the Pythagorean theorem) that re-assigning $q_j$ to $T^{-1}(p_i)$ and $q_i$ to $T^{-1}(p_j)$ lowers the transport cost, which contradicts the initial assumption. It is then possible to establish that the pre-images correspond to power cells, by invoking some properties of power diagrams \cite{DBLP:journals/siamcomp/Aurenhammer87}. \end{proof} \begin{theorem} Given a measure $\mu$ with density, a set of points $(p_i)$ and prescribed masses $\nu_i$ such that $\sum \nu_i = \mu(\Omega)$, there exists a weights vector $W$ such that $\mu(\Pow_W(p_i)) = \nu_i$. \label{thm:SemiDiscrete} \end{theorem} \begin{proof} Consider the function $f_T(W) = \int_\Omega \| x - T(x) \|^2 - w_{T(x)} d\mu$, where $T: \Omega \rightarrow P$ is an \emph{arbitrary} assignment. One can observe that: \begin{itemize} \item If the assignment $T$ is fixed, $f_T(W) = \int_\Omega \| x - T(x) \|^2 d\mu - \sum_{i=1}^{k} w_i \mu(T^{-1}(p_i))$ is affine in $W$. In Figure \ref{fig:LowerEnveloppe}, the graph of $f_T(W)$ for a fixed assignment $T$ corresponds to one of the straight lines (note that in the figure, the ``W axis'' symbolizes $k$ coordinates); \item we now consider a fixed value of $W$ and different assignments $T$. Among all the possible $T$'s, it is clear that $f_T(W)$ is minimized by $T_W$, the assignment defined by the power diagram with weights $W$ (the definition of the power cell minimizes at each point of $\Omega$ the integrand in the equation of $f_T(W)$). \end{itemize} Now take $T = T_W$ in $f_T(W)$, in other words, consider the function $f_{T_W}(W)$. Its graph, depicted as a dashed curve in Figure \ref{fig:LowerEnveloppe}, is the lower envelope of a family of hyperplanes, thus it is a concave function, with a single maximum. For the next steps of the proof, we now need to compute the gradient $\nabla_W f_{T_W}(W)$. Note that when computing the variations of $f_{T_W}(W)$, both the argument $W$ of $f$ and the parameter $T_W$ change, making the computations quite involved. When $T_W$ changes, the power cells change, and one needs to compute integrals over varying domains. However, it is possible to drastically simplify computations by using the \emph{envelope theorem}. Given a parameterized family of functions $f_T(W)$ (in our case, the parameter is $T$), whenever the gradient of $\nabla_W f_{T_W}(W)$ exists, it is equal to the gradient $\nabla_W f_{T^*}(W)$ computed at the minimizer $T^*$ ($f_{T_W}$ in our case). In other words, when computing the gradients, one can directly use the expression of $f_T(W)$ and ignore the variations of $T$ in function of $W$. In Figure \ref{fig:LowerEnveloppe}, it means that the tangent to $f_{T_W}$ at $W$ corresponds to the (linear) graph of $f_T(W)$ with a fixed $T = T_W$. Note that in our case, the so-called \emph{choice set}, i.e. where $T$ is chosen, is the set of all the assignments between $\Omega$ and $P$. This requires a special version of the envelope theorem that works for such a general choice sets \cite{RePEc:ecm:emetrp:v:70:y:2002:i:2:p:583-601}. 
One can see that the components of the gradient correspond to the (negated) measures of the power cells~: $$ \begin{array}{lcl} \frac{\partial f_{T_W}(W)}{\partial w_i} & = & \nabla_W \left( \underbrace{\int_\Omega \| x - T(x) \|^2 d\mu}_{\mbox{constant}(W)} - \sum\limits_{i=1}^{k} w_i \mu(T^{-1}_W(p_i)) \right) \\[12mm] & = & - \mu(T_W^{-1}(p_i)) = - \mu( \Pow_W(p_i) ) \end{array} $$ We are now in a very good situation to establish the existence and uniqueness of the weight vector $W$ that realizes the optimal transport map. The idea is to use $f_{T_W}$ to construct a function $g$ that has a global maximum realized at a weight vector such that the measures of the power cells match the prescribed measures. Consider the function $g$ defined by $g(W) = f_{T_W}(W) + \sum_i \nu_i w_i$. The components of the gradient of $g$ are given by $\partial g / \partial w_i = -\mu(\Pow_W(p_i)) + \nu_i$. This function is also concave (it is the sum of a concave function plus a linear one), therefore it has a unique global maximum where the gradient is zero. Therefore, at the maximum of $g$, for each power cell, the measure $\mu(\Pow_W(p_i))$ matches the prescribed measure $\nu_i$. \end{proof} Besides showing the existence of a semi-discrete transport map and characterizing it as the assignment defined by a power diagram, the proof in Theorem \ref{thm:SemiDiscrete} directly leads to a numerical algorithm, as shown in \cite{DBLP:journals/cgf/Merigot11}, described in Section \ref{sect:numerics} further. A similar algorithm can be obtained by starting from a discrete version of the Monge-Ampere equation and the characterization of $T$ as the gradient of a piecewise linear convex function\cite{DiscreteMA}. \subsection{Relation with Kantorovich's dual formulation} \label{sect:SemiDiscKanto} It is interesting to see the relation between the proof of Aurenhammer et.al that does not use the formalism of optimal transport, and the dual formulation of optimal transport. Interestingly, one can remark that the same argument (lower envelope of hyperplanes) is used to establish the concavity of $f_{T_W}$ in Theorem \ref{thm:SemiDiscrete} and the convexity of $\bar{\psi}$ in Observation \ref{obs:PhiConvexity}. The relation between both formulations can be further explained if we link the Kantorovich potential $\phi$ and the weights $w_i$ with the relation $\phi(y_i) = 1/2 w_i$. For instance, injecting $\phi(y_i) = 1/2 w_i$ and $c(x,y) = 1/2 \| x-y \|^2$ into $\psi(x) = \phi^c(x) = \inf_y c(x,y) - \phi(y)$ gives $\psi(x) = 1/2 \inf_i \| x-y_i \|^2 - w_i$. This corresponds to the definition of the power cells (intuitively, the $\inf$ in the definition of $\phi^c$ is the same as the $\inf$ in the definition of the power cell). Now consider $T(x) = x - \nabla \psi(x)$. Still using the expression of $\psi(x)$ above, we get $T(x) = x - 1/2 \nabla_x(\| x - y_i \|^2 - w_i) = y_i$. This connects the characterization of $T$ as the solution of $\nabla \phi(x) - \nabla_x c(x,y) = 0$ (Theorem \ref{thm:MongeSol}) with the characterization of $T$ as the assignment defined by the power diagram (Theorem \ref{thm:SemiDiscrete}). This corresponds to the point of view developed in \cite{DiscreteMA}. \begin{figure} \caption{ Left: random points (black dots), Voronoi diagram and cell centroids (gray dots); Right: a barycentric Voronoi diagram is a local minimizer of $Q$. } \label{fig:CVT} \caption{The quantization noise power $Q$ minimized in vector quantization is a lower envelope. 
} \label{fig:LowerEnveloppeCVT}
\end{figure}
\subsection{Relation with optimal sampling}
\label{sect:CVTandOT}
In this section, I exhibit some relations between semi-discrete optimal transport and another problem referred to as \emph{optimal sampling} (or \emph{vector quantization}). Given a compact $\Omega \subset \mathR^d$, a measure $\mu$, and a set of $k$ points $Y$ in $\mathR^d$, the \emph{quantization noise power} of $Y$ is defined as~:
\begin{equation}
Q(Y) := \int_\Omega \min_i \| x - y_i \|^2 d\mu = \sum\limits_{i=1}^k \int_{\Vor(y_i)} \| x - y_i \|^2 d\mu
\label{eqn:CVT}
\end{equation}
The quantization noise power measures how good $Y$ is at ``sampling'' $\Omega$ (the smaller, the better); see the survey in \cite{Du:1999:CVT:340312.340319}. The \emph{vector quantization problem} consists in minimizing $Q(Y)$ (i.e. finding the pointset $Y$ that best samples $\Omega$). This notion comes from signal processing theory, and was used to find the optimal assignment of frequency bands for multiplexing communications in a single channel \cite{Lloyd82leastsquares}. Designing a numerical algorithm that optimizes $Q$ requires evaluating the gradient of $Q$. This involves computing integrals over varying domains (since the Voronoi cells of the $y_i$'s depend on the $y_i$'s), which requires several pages of careful derivations, as done in \cite{Iri1984,Du:1999:CVT:340312.340319}. At the end, most of the terms cancel out, leaving a simple formula (see below). One can note the similarity between the quantization noise power (Equation \ref{eqn:CVT}) and the objective function maximized by the weight vector in semi-discrete optimal transport (proof of Theorem \ref{thm:SemiDiscrete}). This suggests using the same type of argument (envelope theorem) to directly obtain the gradient of $Q$~:
\begin{observation}
The function $Q$ is of class $C^1$ (at least\footnote{it is in fact of class $C^2$ almost everywhere \cite{liu:onCVT:09}}) and its gradient relative to one of the points $y_i$ is given by:
$$ \nabla_{y_i} Q(Y) = 2 m_i (y_i - g_i) $$
where $m_i = \mu(\Vor(y_i)) = \int_{\Vor(y_i)}d\mu$ denotes the mass of the Voronoi cell $\Vor(y_i)$ and $g_i = 1/m_i \int_{\Vor(y_i)} x d\mu$ denotes the centroid of the Voronoi cell $\Vor(y_i)$.
\end{observation}
\begin{proof}
Consider the function $Q_T(Y) := \int_\Omega \| x - T(x) \|^2 d\mu$, parameterized by an assignment $T: \Omega \rightarrow Y$. We are in a setting similar to semi-discrete optimal transport (Section \ref{sect:semidiscreteOT}), except that the function $Q_T(Y)$ is quadratic (see Figure \ref{fig:LowerEnveloppeCVT}), whereas $f_T(W)$ is linear (Figure \ref{fig:LowerEnveloppe}). We have~:
\begin{itemize}
\item $Q(Y) = Q_{T_{\Vor}}(Y)$;
\item for a given $Y$, $T_{\Vor}$ is the assignment that minimizes $Q_T(Y)$.
\end{itemize}
By the envelope theorem, we have:
$$ \begin{array}{lcl} \nabla Q(Y) & = & \nabla Q_{T_{\Vor}}(Y) = \nabla \sum_i \int\limits_{\Vor(y_i)} (\|x\|^2 - 2x \cdot y_i + \|y_i\|^2) d\mu \\[4mm] & = & \sum_i \left( \nabla \int\limits_{\Vor(y_i)}\|x\|^2 d\mu - 2 \nabla \int\limits_{\Vor(y_i)} x \cdot y_i d\mu + \nabla \int\limits_{\Vor(y_i)} \|y_i\|^2 d\mu \right) \\[6mm] \nabla_{y_i}Q(Y) & = & -2 \int\limits_{\Vor(y_i)} x d\mu + 2 y_i \int\limits_{\Vor(y_i)} d\mu \\[4mm] & = & -2 m_i g_i + 2 m_i y_i = 2 m_i (y_i - g_i) \end{array} $$
\end{proof}
This directly gives the expression of the gradient of $Q$ and explains why most of the terms cancel out in the derivations conducted in \cite{Iri1984}.
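To make this gradient concrete, here is a minimal sketch (in Python/NumPy; an illustration, not the implementation used in this paper) that estimates the masses $m_i$ and centroids $g_i$ of the Voronoi cells by Monte Carlo sampling of $\mu$, evaluates $\nabla_{y_i} Q = 2 m_i (y_i - g_i)$, and performs one Lloyd step $y_i \leftarrow g_i$:
\begin{verbatim}
import numpy as np

def lloyd_step(Y, samples, densities=None):
    # Y: (k, d) current points; samples: (n, d) points covering the support of mu;
    # densities: optional (n,) weights (e.g. the density of mu at each sample).
    if densities is None:
        densities = np.ones(len(samples))
    # Assign each sample to its nearest point: Voronoi cell membership.
    d2 = ((samples[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    owner = d2.argmin(axis=1)
    m = np.zeros(len(Y))                       # cell masses m_i
    g = np.zeros_like(Y)                       # cell centroids g_i (unnormalized)
    np.add.at(m, owner, densities)
    np.add.at(g, owner, densities[:, None] * samples)
    nonempty = m > 0
    g[nonempty] /= m[nonempty, None]
    grad = 2.0 * m[:, None] * (Y - g)          # gradient of the quantization noise power
    Y_new = np.where(nonempty[:, None], g, Y)  # Lloyd relaxation: move points to centroids
    return Y_new, grad
\end{verbatim}
In an exact implementation (such as the one used further in this paper for power cells), the masses and centroids are obtained by integrating over the cells clipped against the tetrahedral mesh that supports $\mu$, rather than by sampling.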
I mention that the same result can be obtained in a more general setting with Reynold's transport theorem \cite{nivoliers:AFM:2013} (that deals with functions integrated over varying domains). However, the envelope argument cannot be used to compute the Hessian of $Q$ (second order derivatives), and the structure of the formulas \cite{Iri1984,Du1999,liu:onCVT:09} do not suggest that direct computation can be avoided for them. Note also that $Q$ is the lower envelope of a family of parabola (instead of a family of hyperplanes), therefore the concavity argument does not hold, and the graph of $Q$ has many local minima (as depicted in Figure \ref{fig:LowerEnveloppeCVT}). The local minima of $Q$, i.e. the point sets $Y$ such that $\nabla Q = 0$, satisfy $\forall i, y_i = g_i$, in other words, the position at each point $y_i$ corresponds to the centroids of the Voronoi cell associated with $y_i$. For this reason, a stationary point of $Q$ is called a \emph{centroidal Voronoi tessellations}. To compute a centroidal Voronoi tessellation, it is possible to iteratively move each point towards the centroid of its Voronoi cell (Lloyd relaxation \cite{Lloyd82leastsquares}), which is equivalent to minimizing $Q$ with a gradient descent method \cite{Du:1999:CVT:340312.340319}. It is also possible to minimize $Q$ with Newton-type methods \cite{liu:onCVT:09} that show faster convergence. \\ More relations between semi-discrete optimal transport and vector quantization can be exhibited by considering a power diagram as the intersection between a $d+1$ Voronoi diagram and $\mathR^d$~: \begin{observation} The $d$-dimensional power diagram $\Pow_W(Y)$ corresponds to the intersection between the $d+1$ dimensional Voronoi diagram $\Vor(\hat{Y})$ and $\mathR^d$, where the $\mathR^{d+1}$ lifting $\hat{y_i}$ of $y_i$ is defined by~: \begin{small} $$ \hat{y_i} = \left( \begin{array}{c} y_{i,1} \\ y_{i,2} \\ \vdots \\ y_{i,d} \\[2mm] h_i = \sqrt{w_M - w_i} \end{array} \right) $$ \end{small} where $y_{i,j}$ denotes the $j$-th coordinate of point $y_i$, and where $w_M$ denotes the maximum of all weights $\mbox{Max}(w_i)$. \end{observation} \begin{proof} $$ \begin{array}{l} \Vor(\hat{y}_i) \cap \mathR^d = \{ x \quad | \quad \| \hat{x} - \hat{y}_i \|^2 < \| \hat{x} - \hat{y}_j \|^2 \ \forall j \neq i \} \\[2mm] \begin{array}{lcl} & = & {\small \left\{ x \quad | \quad \left\| \left[\begin{array}{c} x \\[1mm] 0 \end{array}\right] - \left[\begin{array}{c} y_i \\[1mm] \sqrt{w_M - w_i} \end{array}\right] \right\|^2 < \left\| \left[\begin{array}{c} x \\[1mm] 0 \end{array}\right] - \left[\begin{array}{c} y_j \\[1mm] \sqrt{w_M - w_j} \end{array}\right] \right\|^2 \ \forall j \neq i \right\} } \\[6mm] & = & \{ x \quad | \quad \| x - y_i \|^2 - w_i + w_M < \| x - y_j \|^2 - w_j + w_M \ \forall j \neq i \} \\[2mm] & = & \{ x \quad | \quad \| x - y_i \|^2 - w_i < \| x - y_j \|^2 - w_j \ \forall j \neq i \} \\[2mm] & = & \Pow_W(y_i) \end{array} \end{array} $$ \end{proof} We can now see a relation between vector quantization and semi-discrete optimal transport~: \begin{observation} The quantization noise power $\hat{Q}(\hat{Y})$ computed in $\mathR^{d+1}$ corresponds to the term $f_{T_W}(W)$ of the function maximized by the weight vector that defines a semi-discrete optimal transport map plus the constant $w_M \mu(\Omega)$. 
\end{observation} \begin{proof} $$ \begin{array}{lcl} \hat{Q}(\hat{Y}) & = & \sum\limits_i \int\limits_{\Vor(\hat{y}_i) \cap {\mathR^d}} \| \hat{x} - \hat{y}_i \|^2 d\mu \\[8mm] & = & \sum\limits_i \int\limits_{\Pow_W(y_i)} \| x - y_i \|^2 - w_i + w_Md\mu \\[4mm] & = & f_{T_W}(W) + w_M \mu(\Omega) \end{array} $$ \end{proof} The quantization noise power $Q$ is already known to be of class $C^2$ almost everywhere\footnote{ ``by almost everywhere'', we mean that the function is no longer $C^2$ whenever two points become co-located, or whenever a Voronoi bisector matches a discontinuity of $\mu$ located on a straight line.} \cite{liu:onCVT:09}. As a consequence of this observation, since the function $f_{T_W}(W)$ can be obtained through the change of variable $h_i = \sqrt{w_M-w_i}$, it is also of class $C^2$ almost everywhere. This gives more justification for using a quasi-Newton method to find the maximum of $g$ as done in \cite{DBLP:journals/cgf/Merigot11} and in this paper (but note that a complete justification would require to find some bounds on the eigenvalue of the Hessian). Another consequence of this observation is that given $\Omega \subset \mathR^d$, a measure $\mu$ and a pointset $Y$, optimizing $\hat{Q}$ for the first $d$ coordinates moves the points in a way that minimizes the quantization noise power, and optimizing for the $d+1$ coordinate computes the weights of a power diagram that defines an assignment that transports $\mu$ to the points. Interestingly, the first problem has multiple local minima, whereas the second one admits a global maximum. \section{Numerical Algorithm} \label{sect:numerics} I shall now explain how to use the results in Section \ref{sect:semidiscreteOT} and turn them into an efficient numerical algorithm. The algorithm is a variation of the one in \cite{DBLP:journals/cgf/Merigot11}. Besides generalizing it to the 3d case, I make some observations that improve the efficiency of the multilevel optimization method. \\ The input of the algorithm is a measure $\mu$, represented by a simplicial complex $M$ (i.e. an interconnected set of tetrahedra in 3D), a set $Y$ of $k$ points $y_i$ and $k$ masses $\nu_i$ such that $\sum \nu_i = \mu(M)$ where $\mu(.)$ is defined as follows~: For a set $B \subset \mathR^3$, the measure $\mu(B)$ corresponds to the volume of the intersection between the tetrahedra of $M$ and $B$. Optionally, $M$ can have a density linearly interpolated from its vertices. In this setting, the measure of $B$ corresponds to the integral of the linearly interpolated density on the intersection between $B$ and the tetrahedra of $M$. The weight vector that realizes the optimal transport can be obtained by maximizing the function $g(W)$ using different numerical methods. 
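Before detailing the algorithm, here is a minimal sketch of this maximization (in Python/SciPy; an illustration, not the implementation used here). It assumes a hypothetical routine \texttt{power\_cell\_integrals(Y, W, M)} that returns, for the current weights, the measures $\mu(\Pow_W(y_i) \cap M)$ and the cost integrals $\int_{\Pow_W(y_i) \cap M} \| x - y_i \|^2 d\mu$; both $g(W)$ and its gradient follow directly from these per-cell quantities, and the maximization of $g$ is delegated to L-BFGS by minimizing $-g$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_weights(Y, nu, M, power_cell_integrals):
    # Y: (k, d) Dirac positions, nu: (k,) prescribed masses, M: tetrahedral mesh.
    # power_cell_integrals(Y, W, M) is assumed to return (masses, costs) with
    #   masses[i] = mu(Pow_W(y_i) inter M)
    #   costs[i]  = integral of |x - y_i|^2 dmu over Pow_W(y_i) inter M.
    k = len(Y)

    def neg_g(W):
        masses, costs = power_cell_integrals(Y, W, M)
        g = np.sum(costs - W * masses) + np.dot(nu, W)
        grad_g = nu - masses          # dg/dw_i = nu_i - mu(Pow_W(y_i) inter M)
        return -g, -grad_g            # minimize -g  <=>  maximize the concave g

    res = minimize(neg_g, np.zeros(k), jac=True, method="L-BFGS-B",
                   options={"maxiter": 1000})
    return res.x                      # weight vector realizing the prescribed masses
\end{verbatim}
This is essentially what the single-level algorithm below does, with the per-cell measures and integrals computed exactly from the intersection $\Pow_W(Y) \cap M$, and with a stopping criterion expressed on the norm of $\nabla g$.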
The single-level version of the algorithm in \cite{DBLP:journals/cgf/Merigot11} is outlined in Algorithm \ref{alg:SingleLevel}~:
\begin{algorithm}
\SetAlgoLined
\KwData{A tetrahedral mesh $M$, a set of points $Y$ and masses $\nu_i$ such that $\mu(M) = \sum\nu_i$}
\KwResult{The weight vector $W$ that determines the optimal transport map $T$ from $M$ to $\sum \nu_i \delta_{y_i}$}
\BlankLine
$W \leftarrow 0$\\
(1) \While{ $\| \nabla g(W) \|^2 > \epsilon\ $}{
(2) Compute $\Pow_W(Y) \cap M$ \\[2mm]
(3) Compute $g(W) = \sum_i \int\limits_{\Pow_W(y_i) \cap M} \left( \| x - y_i \|^2 - w_i \right) d\mu + \sum_i \nu_i w_i$ \\[2mm]
(4) Compute $\nabla g(W)$, with $\partial g / \partial w_i = \nu_i - \mu(\Pow_W(y_i))$ \\
(5) Update $W$ with L-BFGS
}
\BlankLine
\caption{Semi-discrete optimal transport (single-level algorithm)}
\label{alg:SingleLevel}
\end{algorithm}
To facilitate reproducing the results, I give more details about each step of the algorithm:
(1): note that the components of the gradient of $g$ correspond to the difference between the prescribed measures $\nu$ and the measures of the power cells. This gives an interpretation of the norm of the gradient of $g$, and helps in choosing a reasonable $\epsilon$ threshold. In the experiments below, I used $\epsilon = 0.01 \times \mu(M) / \sqrt{k}$.
(2): the algorithm that computes the intersection between a power diagram and a tetrahedral mesh is detailed further (Algorithm \ref{alg:VoroClip}).
(3),(4): once the intersection $\Pow_W(Y) \cap M$ is computed, the terms $g(W)$ and $\nabla g(W)$ are obtained by summing the contributions of each intersection (grayed area in Figure \ref{fig:VoroClip}).
(5): To maximize $g$, as in \cite{DBLP:journals/cgf/Merigot11}, I use the L-BFGS numerical optimization method \cite{Liu:1989:LMB:81100.83726}. An implementation is available in \cite{LBFGSImpl}.
\subsection{Computing the intersection between a tetrahedral mesh and a power diagram}
\label{sect:RVD}
To adapt the 2d algorithm in \cite{DBLP:journals/cgf/Merigot11} to the 3d case, the only required component is a method that computes the intersection between a tetrahedral mesh and a power diagram (step (2) in Algorithm \ref{alg:SingleLevel})~:
\begin{algorithm}
\SetAlgoLined
\KwData{A tetrahedral mesh $M$, a set of points $Y$ and a weight vector $W$}
\KwResult{The intersection $\Pow_W(Y) \cap M$}
\BlankLine
S: Stack(couple(tet index, point index)) \\
\ForEach{ tetrahedron $t \in M$}{
\If{ $t$ \emph{is not marked} }{
(1) $i \leftarrow$ an index $i$ such that $\Pow_W(y_i) \cap t \neq \emptyset$ \\
Mark(t,i) \\
Push(S, (t,i)) \\
\While{\emph{S is not empty}}{
(t,i) $\leftarrow$ Pop(S) \\
(2) P: Convex $\leftarrow \Pow_W(y_i) \cap t$ \\
(3) Accumulate(P) \\
(4) \ForEach{j \emph{neighbor of} i \emph{in P}}{
\If{\emph{$(t,j)$ is not marked}}{
Mark($t,j$); \quad Push(S, ($t,j$))
}
}
(5) \ForEach{$t^\prime$ \emph{neighbor of} $t$ \emph{in P}} {
\If{\emph{$(t^\prime,i)$ is not marked}}{
Mark($t^\prime,i$); \quad Push(S, ($t^\prime,i$))
}
}
}
}
}
\BlankLine
\caption{Computing $\Pow_W(Y) \cap M$ by propagation}
\label{alg:VoroClip}
\end{algorithm}
The algorithm works by propagating simultaneously over the tetrahedra and the power cells. It traverses all the couples $(t,i)$ such that the tetrahedron $t$ has a non-empty intersection with the power cell of $y_i$.
(1): Propagation is initialized by starting from an arbitrary tetrahedron $t$ and a point $y_i$ whose power cell has a non-empty intersection with $t$. I use the point $y_i$ that minimizes the power distance $\| y_i - x \|^2 - w_i$ to one of the vertices $x$ of $t$.
(2): a tetrahedron $t$ and a power cell $\Pow_W(y_i)$ can both be described as intersections of half-spaces, and so can the intersection $t \cap \Pow_W(y_i)$, which is computed using re-entrant clipping (the convex is clipped by each half-space in turn). I use two versions of the algorithm: a non-robust one that uses floating point arithmetic, and a robust one \cite{PCK} that uses arithmetic filters \cite{meyer:inria-00344297}, expansion arithmetic \cite{DBLP:conf/compgeom/Shewchuk96} and symbolic perturbation \cite{Edelsbrunner90simulationof}. Both the predicates and the power diagram construction algorithm are available in PCK (Predicate Construction Kit), part of my publicly available ``geogram'' programming library\footnote{\url{http://gforge.inria.fr/projects/geogram/}}.
(3): the contribution of each intersection $P = t \cap \Pow_W(y_i)$ is added to $g$ and $\nabla g$. The convex $P$ is illustrated in the (2d) Figure \ref{fig:VoroClip} as the grayed area (in 3d, $P$ is a convex polyhedron). The algorithm then propagates to both neighboring tetrahedra and points.
(4): each portion of a facet of $t$ that remains in $P$ triggers a propagation to a neighboring tetrahedron $t^\prime$. In the 2d example of Figure \ref{fig:VoroClip}, this corresponds to edges $e_1$ and $e_4$, which trigger a propagation to triangles $t_2$ and $t_1$ respectively.
(5): each facet of $P$ generated by a power cell facet triggers a propagation to a neighboring point. In the 2d example of the figure, this corresponds to edges $e_2$ and $e_3$, which trigger a propagation to points $y_{j_1}$ and $y_{j_2}$ respectively.
This algorithm is parallelized by partitioning the mesh $M$ into $M_1, M_2, \ldots, M_{nb\_cores}$ and by computing $M_{thrd} \cap \Pow_W(Y)$ in each thread.
\begin{figure}
\caption{Computing the intersection between a power diagram and a tetrahedral mesh by propagation.}
\label{fig:VoroClip}
\end{figure}
\begin{table}
\begin{tabular}{l|lllllll}
\hline \hline
nb masses $k$ & 1000 & 2000 & 5000 & 10000 & 30000 & 50000 & 100000 \\
nb iter & 146 & 200 & 328 & 529 & 1240 & 1103 & 1102 \\
time (s) & 2.8 & 6.4 & 21 & 65 & 232 & 568 & 847 \\
\hline
\end{tabular}
\caption{Statistics for a simple translation scenario with the single-level algorithm. The threshold for $\| \nabla g \|^2$ is set to $\epsilon = 0.01 \times \mu(M) / \sqrt{k}$. }
\label{tab:SingleLevel}
\end{table}
I conducted a simple experiment, where $M$ is a tessellated sphere with 2026 tetrahedra, and $Y$ a sampling of the same sphere shifted by a translation vector of three times the radius of the sphere. The statistics in Table \ref{tab:SingleLevel}, obtained with a standard PC\footnote{Experiments done with a 2.8 GHz Intel Core i7-4900MQ CPU and an implementation of Algorithm \ref{alg:VoroClip} that uses 8 threads.}, show that the single-level algorithm does not scale up well with the number of points, and starts taking a significant time for processing 10K masses and above. This confirms the observation in \cite{DBLP:journals/cgf/Merigot11}. This is because at the initial iteration, all the weights are zero, and the power diagram corresponds to the Voronoi diagram of the points $y_i$. At this step, only some points $y_i$ on the border of the pointset have a Voronoi cell that ``sees'' the mesh $M$ (i.e. that has a non-empty intersection with it). It takes many iterations to compute the weights that ``shift'' the concerned power cells onto $M$ and allow inner points to see $M$.
It is only once all the points of $Y$ ``see'' $M$ that the numerical method can capture the trend of $g$ around the maximum (and then it takes a small number of iterations to the algorithm to balance the weights). Intuitively, $Y$ is ``peeled'' only one layer of points at a time. The bad effect on performances is even more important than in \cite{DBLP:journals/cgf/Merigot11}, because in the 3d setting, the proportion of ``inner'' points relative to the number of points on the border of the pointset is larger than in 2d. \subsection{Multi-level algorithm} To improve performances, I follow the approach in \cite{DBLP:journals/cgf/Merigot11}, that uses a multilevel algorithm. The idea consists in ``bootstrapping'' the algorithm on a coarse sub-sampling of the pointset. The ``peeling'' effect mentioned in the previous paragraph is limited since we have a small number of points. Then the algorithm is run with a larger number of points, using the previously computed weights as an initialization. The set of points can be decomposed into multiple level of increasing resolution. The complete algorithm will be detailed below (Algorithm \ref{alg:MultiLevel}). \begin{table} \begin{tabular}{l|lllllll} \hline \hline nb masses & 1000 & 2000 & 5000 & 10000 & 30000 & 50000 & 100000 \\ deg. 0 time (s) & 2.5 & 6 & 19 & 38 & 184 & 356 & 959 \\ deg. 1 time (s) & 1 & 2 & 6 & 14 & 54 & 103 & 172 \\ deg. 2 time (s) & 1.4 & 2.2 & 6 & 16 & 58 & 138 & 172 \\ BRIO/deg. 2 time (s) & 1 & 1.65 & 3.4 & 9 & 26 & 62 & 106 \\ \hline single level time (s) & 2.8 & 6.4 & 21 & 65 & 232 & 568 & 847 \\ \hline \end{tabular} \caption{Statistics for a simple translation scenario with the multi-level algorithm. The mesh $M$ has 61233 tetrahedra. Timings are in seconds. Each level is initialized from the previous one with regressions of different degrees.} \label{tab:MultiLevel} \end{table} To further improve the speed of convergence, I use the remark in Section \ref{sect:SemiDiscKanto} that the weights $w_i$ corresponds to the potential $\phi$ evaluated at $y_i$ (with a 1/2 factor). For a translation, we know that $T^{-1}(y) = y - V = y - \nabla \phi$, therefore $\phi(y) = V \cdot y$ where $V$ denotes the translation vector. In more general settings, $\phi$ is still likely to be quite regular (except on its singularities where $T$ is discontinuous). When initializing a level from the previous one, this suggests initializing the new $w_i$'s from a regression of their nearest neighbors computed at the previous level. Table \ref{tab:MultiLevel} shows the statistics for initialization with the nearest neighbor (deg. 0), linear regression with 10 nearest neighbors (deg. 1) and quadratic regression with 20 nearest neighbors (deg. 2). As can be seen, initializing with linear regression results in a significant speedup. In this specific case though, quadratic regression does not gain anything. It is not a big surprise since we know already that $\phi(y) = V \cdot y$ is linear in this specific case, but it can slightly improve performances in more general settings, as shown further. Finally, it is possible to gain another x2 speedup factor~: the algorithm that we use to compute the power diagrams \cite{Amenta:2003:ICC:777792.777824} sorts the points with a multilevel spatial reordering method, that makes it very efficient. It is possible to use the same multilevel spatial ordering for both the numerical optimization and for computing the power diagrams (BRIO/deg. 2 row in the table). 
Since only the weights change during the iterations, this order needs to be computed once only, at the beginning of the algorithm. Note the overall 8x acceleration factor as compared to the single-level algorithm in Table \ref{tab:SingleLevel} (repeated in the last row of Table \ref{tab:MultiLevel} to ease comparison). The complete multi-level algorithm is summarized below~: \begin{algorithm} \SetAlgoLined \KwData{A tetrahedral mesh $M$, a set of points $Y$ and masses $\nu_i$ such that $\mu(M) = \sum\nu_i$} \KwResult{The weight vector $W$ that determines the optimal transport map $T$ from $M$ to $\sum \nu_i \delta_{y_i}$} \BlankLine Apply a random permutation to the points $Y$ \\ (1) Partition the interval of indices $[1,k]$ of $Y$ into $n_l$ intervals $[b_l, e_l]$ of increasing size \\ \ForEach{\emph{level} $l$}{ (2) Sort the points $y_{b_l} \ldots Y_{e_l}$ spatially \\ (3) {\bf For each } $i$, $\nu_i \leftarrow |M|/e_l$ \\ (4) Interpolate the weights $w_{b_l} \ldots w_{e_l}$ from the already computed weights $w_1 \dots w_{b_l - 1}$\\ Optimize the weights using Algorithm \ref{alg:SingleLevel} } \BlankLine \caption{Semi-discrete optimal transport (multi-level algorithm)} \label{alg:MultiLevel} \end{algorithm} In my implementation, for step (1), the ratio between the number of points in a level and in the rest of the points is set to 0.125. For the spatial sort in step (2), the algorithm, available in ``geogram'', was inspired by the variant of the Hilbert sort implemented in \cite{SpatialSort}. (3): Before computing the optimal transport maps, since the number of points changes at each level, the masses of the points need to be updated. At step (4), to determine the weight of a new point $w_i$, I use linear least squares with 10 nearest neighbors for degree 1 and quadratic least squares with 20 nearest neighbors for degree 2. The influence of the degree of the regression is evaluated in Table \ref{tab:MultiLevel2} for a configuration where a sphere is splitted into two spheres (first row in Figure \ref{fig:result1}). Unlike in the previous translation case, in this configuration the potential $\phi$ is non-linear (see the deformations of the spheres), and a higher degree regression slightly improves the speed of convergence for a large number of points, since it captures more variations of $\phi$ and better initializes $W$. \begin{table} \begin{tabular}{l|lllllll} \hline \hline nb masses & 1000 & 2000 & 5000 & 10000 & 30000 & 50000 & 100000 \\ BRIO/deg. 1 time (s) & 1 & 1.7 & 3.5 & 9.8 & 25 & 61.7 & 122 \\ BRIO/deg. 2 time (s) & 0.9 & 1.6 & 3.5 & 8.4 & 28.3 & 61.4 & 112 \\ \hline \end{tabular} \caption{Statistics for splitting a sphere into two spheres with the multi-level algorithm. Timings are in seconds. Each level is initialized from the previous one with regressions of different degrees.} \label{tab:MultiLevel2} \end{table} \subsection{Using semi-discrete transport to approximate the transport between two tetrahedral meshes} I now consider the case where the input is a pair of tetrahedral meshes $M$ and $M^\prime$. The goal is now to generate a sequence of tetrahedral meshes that realize an approximation of the optimal transport between $M$ and $M^\prime$. The algorithm is outlined below~: \begin{algorithm} \SetAlgoLined \KwData{Two tetrahedral meshes $M$ and $M^\prime$, and $k$ the desired number of vertices in the result} \KwResult{A tetrahedral mesh $G$ with $k$ vertices and a pair of points $p_i^0$ and $p_i^1$ attached to each vertex. 
Transport is parameterized by time $t \in [0,1]$ with $p_i(t) = (1-t)p_i^0 + t p_i^1$.} \BlankLine (1) Sample $M^\prime$ with a set $Y$ of $k$ points \\ (2) Compute the weight vector $W$ that realizes the optimal transport between $M$ and $Y$ (Algorithm \ref{alg:MultiLevel})\\ (3) Compute $E = \Del(Y)|M^\prime$ and $F = \Pow_W(Y)|M$ \quad ; \quad Tets(G) $\leftarrow E \cap F$ \\ (4) \textbf{Foreach} $i \in [1\ldots k]$, $(p_i)^0 \leftarrow \mbox{centroid}(\Pow_W(y_i) \cap M) \quad ; \quad (p_i)^1 \leftarrow y_i$ \\ \BlankLine \caption{Approximated optimal transport between two tetrahedral meshes} \label{alg:ApproxTransport} \end{algorithm} The different steps of this algorithm are implemented as follows: (1): to compute a homogeneous sampling, I initialize $Y$ with a centroidal Voronoi tessellation (see Section \ref{sect:CVTandOT}). (3): the main difficulty consists in finding the discontinuities in $T$ and avoid generating tetrahedra that cross them. To detect the discontinuities in $T$, I consider that the Voronoi diagram $\Vor(Y)$ that samples $M^\prime$ evolves towards the power diagram $\Pow_W(Y)$ that samples $M$ (note that this evolution goes backwards, from $M^\prime$ to $M$). Thus, the tetrahedra that are kept are those that are present both in the dual $\Del(Y)$ of $\Vor(Y)$ (Delaunay triangulation) and the dual $\Reg_W(Y)$ of $\Pow_W(Y)$ (regular weighted triangulation). (4) Finally, the geometry $p_i^0$ of each vertex of $G$ at initial time $t=0$ is determined as the centroid of the power cell $\Pow_W(y_i) \cap M$. The geometry $p_i^1$ at final time $t=1$ is simply $y_i$. \section{Results and conclusions} \label{sect:results} \begin{figure} \caption{Some examples of semi-discrete optimal transport with topology changes.} \label{fig:result1} \end{figure} \begin{figure} \caption{More examples of semi-discrete optimal transport. Note how the solids deform and merge to form the sphere on the first row, and how the branches of the star split and merge on the second row. } \label{fig:result2} \end{figure} \begin{table} \begin{tabular}{l|llllllllll} \hline \hline nb masses & 1000 & 2000 & 5000 & 10000 & 30000 & 50000 & $10^5$ & $3\times 10^5$ & $5\times 10^5$ & $10^6$ \\ time (s) & 1.45 & 3.2 & 7.3 & 17.3 & 55 & 154 & 187 & 671 & 1262 & 2649 \\ \hline \end{tabular} \caption{Statistics for the Armadillo $\rightarrow$ sphere optimal transport with varying number of masses (see third row of Figure \ref{fig:result2}). Timings are given in seconds. The multi-level algorithm with BRIO pre-ordering and degree 2 regressions is used.} \label{tab:Armadillo} \end{table} Several results are shown in Figures \ref{fig:result1} and \ref{fig:result2}. Note that when the volume of $M$ and $M^\prime$ differ, using $\nu_i = |M|/k$ changes the ``density'' of $M^\prime$ and preserves the total mass. The intermediary steps are generated by using $p_i = (1-t) p_i^0 + t p_i^1$ for the locations at the vertices of $G$. As can be seen, the combinatorial criterion that selects the stable tetrahedra successfully finds the discontinuities. The third row of Figure \ref{fig:result2} demonstrates some potential applications in computer graphics. In the bottom row, the obtained deformation looks ``natural'' and ``visually pleasing'' (as far as I can judge, but my own judgment may be biased \ldots). 
However, a ``user'' would probably prefer to rotate the star in the center column of Figure \ref{fig:result2} rather than splitting and merging the branches, but optimal transport ``does not care'' about preserving topology. Timings for the Armadillo $\rightarrow$ sphere optimal transport are given in Table \ref{tab:Armadillo}. The algorithm scales up reasonably well, and computes the optimal transport from a tetrahedral mesh to 300K Dirac masses in 10 minutes. It scales up to 1 million Dirac masses (but then takes nearly 45 minutes).\\
To conclude, I mention that the main limitation of Algorithm \ref{alg:ApproxTransport} is that the discontinuities are sampled at the precision of the initial sampling, which does not take them into account. As a consequence, this leaves a gap with a width of one tetrahedron in the result, clearly visible in the figures. Moreover, when the shape undergoes strong deformations, flipping may occur, making the concerned pairs of tetrahedra disappear in the result (for instance, one can observe some holes in the legs of the armadillo in Figure \ref{fig:result2}). With a better representation of the discontinuities, one may obtain a more precise representation of the transport. This leads to the following open questions, which concern the continuous setting for some particular representations of $\mu$ and $\nu$~:
\begin{enumerate}
\item Given two tetrahedral meshes $M$ and $M^\prime$, is it possible to characterize the locus of the points where $T$ is discontinuous (the \emph{discontinuity locus}), and to design an algorithm that generates a faithful representation of it?
\item What does the discontinuity locus look like if $M$ and $M^\prime$ both have a density linearly interpolated over the tetrahedra?
\item What does the discontinuity locus look like if $\mu$ and $\nu$ are supported by two different sets of spheres?
\end{enumerate}
\end{document}
Environmental Science and Pollution Research, May 2018, Volume 25, Issue 14, pp 13254–13269

Environmental risk assessment of pesticides in the River Madre de Dios, Costa Rica using PERPEST, SSD, and msPAF models

Robert A. Rämö, Paul J. van den Brink, Clemens Ruepert, Luisa E. Castillo, Jonas S. Gunnarsson

Ecotoxicology in Tropical Regions

This study assesses the ecological risks (ERA) of pesticides to aquatic organisms in the River Madre de Dios (RMD), which receives surface runoff water from banana, pineapple, and rice plantations on the Caribbean coast of Costa Rica. Water samples collected over 2 years at five sites in the RMD revealed a total of 26 pesticides. Their toxicity risk to aquatic organisms was assessed using three recent ERA models. (1) The PERPEST model showed a high probability (>50 %) of clear toxic effects of pesticide mixtures on algae, macrophytes, zooplankton, macroinvertebrates, and community metabolism and a low probability (<50 %) of clear effects on fish. (2) Species sensitivity distributions (SSD) showed a moderate to high risk of three herbicides: ametryn, bromacil, diuron and four insecticides: carbaryl, diazinon, ethoprophos, terbufos. (3) The multi-substance potentially affected fraction (msPAF) model showed results consistent with PERPEST: high risk to algae (maximum msPAF: 73 %), aquatic plants (61 %), and arthropods (25 %) and low risk to fish (0.2 %) from pesticide mixtures. The pesticides posing the highest risks according to msPAF and that should be substituted with less toxic substances were the herbicides ametryn, diuron, the insecticides carbaryl, chlorpyrifos, diazinon, ethoprophos, and the fungicide difenoconazole. Ecological risks were highest near the plantations and decreased progressively further downstream. The risk to fish was found to be relatively low in these models, but water samples were not collected during fish kill events and some highly toxic pesticides known to be used were not analyzed for in this study. Further sampling and analysis of water samples is needed to determine toxicity risks to fish during peaks of pesticide mixture concentrations. The msPAF model, which estimates the ecological risks of mixtures based on their toxic modes of action, was found to be the most suitable model to assess toxicity risks to aquatic organisms in the RMD. The PERPEST model was found to be a strong tool for screening risk assessments. The SSD approach is useful in deriving water quality criteria for specific pesticides. This study, through the application of three ERA models, clearly shows that pesticides used in plantations within the RMD watershed are expected to have severe adverse effects on most groups of aquatic organisms and that actions are urgently needed to reduce pesticide pollution in this high biodiversity ecosystem.

Keywords: Aquatic pollution, Agricultural runoff, Mixture toxicity, ERA, Central America, Tropical ecotoxicity

Responsible editor: Marcus Schulz

Costa Rica has among the highest biodiversity on earth and is known for its nature conservation efforts and eco-tourism. It is also a major agro-economy and among the world's largest producers of banana and pineapple (FAO 2014; FAOSTAT 2016). The banana production is achieved through large monocultures located in the tropical Caribbean lowlands (CORBANA 2013). Pesticide use in these plantations is intensive, with 49 kg active ingredients (a.i.) per hectare and year applied in banana plantations and 25 kg a.i. per hectare and year applied in pineapple plantations (Bravo et al. 2013).
These plantations are rain-fed, but heavy rainfall in the Caribbean region (3.2 m average precipitation per year) requires that plantations drain excess rainwater through drainage canals (Grant et al. 2013), leading to discharges of untreated surface runoff water into rivers downstream these plantations. Pesticide contamination of Costa Rican wildlife has previously been reported (de la Cruz et al. 2014; Klemens et al. 2003) and both acute and chronic effects have been observed in aquatic ecosystems downstream plantations (Castillo et al. 2006; Castillo et al. 1997; Echeverria-Saenz et al. 2012). The River Madre de Dios (RMD) watershed (10.1921°N 83.2953°V) consists of a river and coastal lagoon in the province of Limón on the Caribbean coast of Costa Rica. This watershed has a high biodiversity and provides local residents with income from fishery and ecotourism, but it also hosts large monocultures of banana, pineapple, and rice (Fig. 1). Frequent fish kills have been reported in the RMD since 2004 (18 observed events between 2007 and 2009), and these have been suggested to be caused by pesticide runoff (CGR 2013; Diepens et al. 2014). Land use map of the River Madre de Dios watershed. Banana accounts for the largest portion of agricultural land use, followed by rice and pineapple. Center bottom: schematic diagram of the five sampling sites with water flow direction from left to right. The sites CA-S and CPama-J are located in tributaries. Map by Geannina Moraga, Centre de GIS, IRET, UNA, Heredia, Costa Rica Studies are needed to find the causes of these fish kills and to characterize the toxicity risks of pesticide pollution in the RMD. However, there is a lack of knowledge on how to assess and mitigate risks of chemicals in tropical countries: also, many Central American countries do not have or do not enforce environmental regulations. The pesticide registration process in Costa Rica consists of a simple risk quotient approach based on US EPA guidelines, where the aquatic toxicity evaluation consists of acute and chronic tests on three standard species (MINAE 2011). The relevance of using standard test species from temperate systems to predict toxicity risks in tropical systems may be questioned as tropical and temperate systems differ in several ways that may affect the risks of pesticides, e.g., soil and sediment types (affecting sorption and degradation rates of chemicals), temperature, sunlight, and pH (Sanchez-Bayo and Hyne 2011). Tropical ecosystems are often thought to contain more sensitive species and would therefore be more vulnerable to pesticides than temperate ones. Some recent studies have explored these differences, e.g., Maltby et al. (2005) found no influence of geographical distribution on species sensitivity distributions (SSDs) of insecticides, and Daam and Van den Brink (2010) found no systematic difference in chlorpyrifos degradation and toxicity between temperate and tropical systems. Rico et al. (2011) found no statistical difference in toxicity of the insecticide malathion on tropical freshwater (Amazonian) fish and invertebrates compared to temperate fish, but did find that tropical species were more robust to the fungicide carbendazim. They concluded that tropical species are protected when using threshold values (HC5) derived from SSD on temperate species, provided that sufficient representative species are used in the SSD (Rico et al. 2011). On the other hand, Kwok et al. 
(2007) found that tropical species may be more sensitive to some pesticides, e.g., chlorpyrifos, based on toxicity studies on a range of substances. In a recent toxicity study, Diepens et al. (2014) compared the temperate cladoceran Daphnia magna to its tropical counterpart Daphnia ambigua and found that D. ambigua is more sensitive than D. magna, the standard species used for aquatic toxicity assessment in Costa Rica. This implies that the current pesticide registration process in Costa Rica may be underprotective. These studies highlight that further ecotoxicological research is needed in tropical ecosystems, including studies on tropical endemic species, but also suggest that species in tropical and temperate regions do not appear to have fundamentally different responses to toxic substances. Many different methods have been proposed to derive toxicity risk values for pesticides. The species sensitivity distribution (SSD) describes the variation in species' sensitivity to a particular toxic substance by fitting pre-existing toxicity data for relevant species to an assumed (often log-normal) distribution (Aldenberg and Jaworska 2000). The SSD concept can be used in risk assessment to calculate the potentially affected fraction (PAF) of species from exposure to an environmental contaminant and is also used to derive environmental quality standards (EQS): concentration thresholds under which a fraction of species is protected from toxic effects, e.g., a 95 % protection level from HC5, the hazard concentration for 5 % of species (Kooijman 1987). SSD is a widely recognized method for toxicity assessments of single substances and for the development of water quality standards for environmental pollutants. It is a standard concept used in the EU, Canada, and the USA (CCME 2007; EFSA 2013; USEPA 2000) but has not yet been implemented in Costa Rican guidelines. SSD is often recommended as a tool for assessing the toxicity risks of individual substances, but substances more often occur as mixtures in the environment, and risk assessments therefore need to account for the joint toxicity of mixtures (Suter et al. 2002). The multi-substance PAF (msPAF) model is designed to assess the toxicity risk of mixtures using the SSD principles. The msPAF model applies concentration addition (CA) to calculate a single risk value for substances that have a shared toxic mode of action (TMoA) and then applies response addition (RA) to sum the toxicity risks of each TMoA. The result is an msPAF value that describes the potentially affected fraction of species from exposure to a complex mixture (de Zwart and Posthuma 2005; Traas et al. 2002). The CA and RA models underpinning the msPAF model have been separately experimentally validated, where observed effects of known mixtures match the predicted effects from the respective models (Altenburger et al. 2000; Faust et al. 2003). The SPEARpesticides bioindicator, a trait-based ecological index for stream invertebrates, also correlates well with predictions made by the msPAF model (Smetanova et al. 2014). The msPAF approach is applicable provided that each component of a mixture has a known TMoA. There is, however, no consensus on what constitutes a distinct mode or mechanism of action (Lambert and Lipscomb 2007), and it has been advised that experimental validation of TMoA is necessary for the regulatory use of multiple TMoA (and consequently, of RA) in mixture assessments (Backhaus et al. 2013).
The CA model is therefore often used as the default mixture model (and has been called a "General Solution") but overestimates the risk of mixtures with multiple TMoA when compared to the mixed-model approach, which applies both CA and RA. The present study applies CA and RA models in msPAF for the purpose of ERA without experimental validation of the TMoA. The PERPEST (Predict the Ecological Risks of PESTicides) model has been developed for risk assessment of both single pesticides and pesticide mixtures. It applies a case-based reasoning process to compare a current case of pesticide pollution to a database ("the case base") containing toxicity data from pesticide mesocosm experiments with known outcomes (Van den Brink et al. 2002). The model compares environmental concentrations of pesticides to previous results from the case base to estimate probabilities of toxic effects on several species groups (e.g., algae, macrophytes, zooplankton, macroinvertebrates, fish, and tadpoles) and on community metabolism (i.e., respiration, primary production). Thus, the PERPEST model accounts not only for direct effects on species but also for indirect effects and interactions among species (i.e., prey-predator effects) that are observed in mesocosms but not in the single-species tests used in SSD and msPAF models. In the present study, we applied the three models for environmental risk assessment (ERA) presented above to assess the toxicity risks of pesticides in the RMD: (1) the PERPEST model, (2) the SSD method, and (3) the msPAF method. Results from the three models are compared and the advantages and drawbacks of each model are discussed. Recommendations are made for further ERA in tropical aquatic ecosystems.
Sampling sites
The sampling sites chosen for this study are part of a larger sampling effort in the RMD comprising 12 sites in total. Five sites were assessed in this study and are labeled 1, 2, 4, 5, and 6 on the map (Fig. 1). Three of the study sites are located in the river (RMD) and two sites are located in tributaries that receive untreated surface runoff water from agricultural lands. These sites were chosen to represent an exposure gradient from the plantations towards the recipient coastal lagoon. These five study sites are as follows: (1) RMD-S, located upstream of most plantation discharges, (2) CA-S, located in the Caño Azul tributary that receives surface runoff from pineapple and banana plantations, (3) RMD-F, located further downstream of RMD-S and CA-S, (4) CPama-J, located in the Canal Pama tributary that receives surface runoff from mainly banana plantations, and (5) URMD-CPama, located further downstream of RMD-F and CPama-J.
Water sampling and pesticide analysis
We collected 68 surface water samples on 15 sampling occasions at the study sites over a 2-year period (2011–2012). Water samples were collected via boat by inserting pre-washed 2-L brown glass bottles into the water. The bottles were transported in cooled ice boxes to the laboratory LAREP, UNA, Heredia, Costa Rica and stored at 4–6 °C until analysis. The water samples were extracted on solid phase extraction columns, and the extracts were analyzed by GC-MS (Agilent 7890A GC and 5975C MS, Palo Alto, USA) for non-polar pesticides and by HPLC with diode array detection for polar pesticides. LC-PDA analyses were performed using a Shimadzu HPLC LC-10AD with an SPD-M10A diode array detector (Shimadzu, Kyoto, Japan). The chromatographic column was a LiChroCART HPLC RP-18e column (125 mm × 3 mm, 5 μm particle size, Merck, Germany).
Fifty microliters of extracts was analyzed. The mobile phase consisted of 20-mM sodium acetate in ultra-pure water/methanol 56:44 (solvent A) and methanol (solvent B). Identification was performed using retention time and the UV spectra of the pesticides included in the analysis. Pesticide residues analyzed by GC-MS were identified using the Chemstation software and the NIST05 Mass Spectral Database, and concentrations were determined using external standards. A selection of 32 pesticides and pesticide metabolites were included in the analysis based on available external standards from pesticides reported in the pesticide registration process and in interviews with farmers and crop owners. Several groups of pesticides that may cause high risks were not analyzed, including pyrethroids, neonicotinoids, and some fungicides with high volumes of use in Costa Rica. Physico-chemical and toxicological properties of detected pesticides Information on the chemical characteristics of pesticides detected in the field was collected from the literature, including chemical abstract service (CAS) registry number, common name, molecular weight, vapor pressure, Henry's law constant, half degradation time in water (DT50), and octanol-water partition coefficient (K ow) from the Pesticide Properties Database (Lewis et al. [2016], available at sitem.herts.ac.uk/aeru/ppdb, last accessed on 2015-11-01). Literature toxicity data was obtained from the U.S. EPA Ecotox database (USEPA 2016) and the E-toxBase (De Zwart 2002). Selected test organisms were algae (microalgae, cyanobacteria), aquatic plants, arthropods (aquatic insects, crustaceans), and fish. Only freshwater laboratory tests with suitable test conditions were used to reduce inconsistencies from different experimental systems. As lethal effects in fish are observed in the field, we collected median effect concentrations (EC50) on mortality for all species, on immobility in mobile species, on inhibition of cell division in algae, and on growth inhibition in aquatic plants. Exposure times were 1–7 days for algae, aquatic insects, and crustaceans; 2–21 days for fish; 2–28 days for aquatic plants (Maltby et al. 2009). It should, however, be noted that 98.3 % of toxicity tests used for aquatic insects and crustaceans had exposure times between 1 and 4 days. Mean effect concentrations were calculated for each species-pesticide combination and were used to model species sensitivity with equal weight of each included species, i.e., any bias towards often tested species were removed by using one toxicity value per species. Toxic modes of action The present study uses classifications provided in the Compendium of Pesticide Common Names (Alan Wood, available at www.alanwood.net/pesticides; last accessed on 2015-08-25) to categorize pesticides by their TMoA, following recommendations of De Zwart and Posthuma (2005). This database identifies molecular classes of pesticides and is similar to an approach using molecular classes used by Gregorio et al. (2012). Similarly, De Zwart (2002) reported 68 TMoA identified either by molecular classes or QSAR, an approach that has since been expanded and applied to the management of European river basins (de Zwart et al. 2009). However, other sources of TMoA information are available: Jesenska et al. (2013) used classifications based on specific binding sites of herbicides, e.g., mechanisms, rather than modes, of action and Altenburger et al. 
(2013) took a similar approach using the classifications of the insecticide, fungicide, and herbicide resistance action committees. These committees catalogue modes of action to develop pesticide resistance management strategies, and it could be assumed that pesticides for which cross-resistance is developed share a common mode or mechanism of action. There are thus several sources available that identify TMoA for pesticides, and there is a need for further studies to identify the most suitable source of information for use in mixture toxicity modeling. PERPEST The PERPEST software (Van den Brink et al. [2006b]; Van den Brink et al. [2002]; version 4.0.0) was used to predict the probability of effects from pesticide mixtures in the RMD (www.perpest.wur.nl). Probabilities of no effects, slight effects, and clear effects were calculated for (1) algae and macrophytes, (2) zooplankton, (3) macroinvertebrates, (4) fish and tadpoles, and (5) community metabolism. Pesticides not currently available in the PERPEST case base were added using physico-chemical properties obtained from the literature and median hazard concentrations (HC50) calculated in the present study. The PERPEST program was used with default settings, except that cases from the case base were weighted using TMoA and toxic units (TU) and selected using TMoA and nearby TU. The results are presented as low or high probability of clear effects, where a low probability is defined as below 50 % and signifies a low risk, and a high probability is 50 % or higher and signifies a high risk. Species sensitivity distributions SSDs were generated using collected toxicity data with equal weight of species. Fish and arthropod species were used to generate insecticide SSDs, algae and aquatic plant species for herbicide SSDs, and all taxonomic groups mentioned above were used for fungicide SSDs. The ETX software (Van Vlaardingen et al. (2004); version 2.1) was used to generate SSDs with a confidence limit-based estimator, the best performing method to fit a SSD (Hickey and Craig 2012). Log-normality of species toxicity data was assessed using the Anderson-Darling test in ETX. Failure to meet the 5 % critical level resulted in rejecting the SSD and generating SSDs for minor taxonomic groups, until normality was met. The most sensitive group (based on HC5) to meet normality criteria was used. Pesticides with no log-normal distribution or insufficient sample size (<6 species) were not assessed with SSD. The hazard concentrations for 5 % of species (HC5) and 50 % of species (HC50) were extracted from the SSD of each pesticide, following the calculation of a potentially affected fraction of species (PAF) using the maximum measured environmental concentration of the pesticide in each site. The results are interpreted as low risk under 1 % PAF, as moderate risk above 1 % PAF, and as high risk above 5 % PAF, corresponding to the commonly used HC5 benchmark. Multi-substance PAF Six species groups were assessed using msPAF. First, (1) primary producers and (2) fish and arthropods were used to maximize sample sizes and for comparison to the SSD results, then distinct taxonomic groups were selected to study specific effects on (3) algae, (4) aquatic plants, (5) fish, and (6) arthropods. The method used to calculate msPAF follows De Zwart and Posthuma (2005) with modifications. A hazard unit (HU) was calculated for each species group-pesticide combination as the geometric mean of literature toxicity data (similar to the HC50). 
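To make the SSD and PAF steps above concrete, the following short Python sketch (ours, not the ETX implementation used in the study) fits a log-normal SSD to a set of hypothetical species-mean EC50 values, checks approximate log-normality with the Anderson-Darling test, and derives a median HC5 together with the PAF at a measured environmental concentration; the same log-normal machinery also underlies the msPAF calculation described next. The EC50 values and the MEC are placeholders, and the confidence-limit-based HC5 estimator used in ETX is not reproduced here.

import numpy as np
from scipy import stats

# Hypothetical species-mean EC50 values (ug/L) for one pesticide, one value per species
ec50 = np.array([1.2, 3.5, 7.9, 15.0, 22.0, 41.0, 88.0, 130.0])
log_ec50 = np.log10(ec50)

# Approximate log-normality check (ETX applies the Anderson-Darling test)
ad = stats.anderson(log_ec50, dist='norm')

# The SSD: a normal distribution fitted to the log10-transformed toxicity data
mu, sigma = log_ec50.mean(), log_ec50.std(ddof=1)

# Median HC5 (concentration hazardous to 5 % of species) and PAF at a measured concentration
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
mec = 24.0  # hypothetical maximum measured environmental concentration, ug/L
paf = stats.norm.cdf(np.log10(mec), loc=mu, scale=sigma)
print(hc5, ad.statistic, 100 * paf)

With the thresholds used in this study, the resulting PAF would then be read as low risk below 1 %, moderate risk above 1 %, and high risk above 5 %.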
The hazard units were then used to scale toxicity data and measured environmental concentrations (MEC) of pesticides to dimensionless HU values to adjust for differences in the potency of pesticides. The mean (α) and standard deviation (σ) of the log toxicity data (expressed in HU units) were calculated for each pesticide using equal weight of species for α but taking intra-species variance into account for σ. Each pesticide was assigned a TMoA based on molecular activity. These TMoA were also considered for non-target species, following de Zwart et al. (2009). The TMoA groups were evaluated using calculated SSD slopes, where pesticides with slopes (σ) deviating more than ±10 % from the others were assigned to a separate TMoA. The CA model was used to calculate a PAF value for each TMoA (msPAFCA) in a sample using the Microsoft Excel function (1):
NORM.DIST(MECTMoA, α, σ, 1)   (1)
where MECTMoA is the total MEC of the pesticides in the TMoA, α is the average αi for pesticides i = 1 to n in the TMoA, and σ is the average σi for pesticides i = 1 to n in the TMoA. After obtaining msPAFCA for each TMoA, the total toxicity of a sample (msPAFRA) was calculated using the following formula for the RA model (2):
msPAFRA = 1 − ∏i=1..n (1 − msPAFCA,i)   (2)
Pesticides with insufficient sample size (<4 species) were not assessed with msPAF. This minimum sample size was set following a calculation of the effects of minimum sample size on pesticide coverage (and the resulting toxicity risks), where msPAF was modeled with a minimum sample size of either 2, 4, 6, or 10 species. This evaluation indicated that the number of assessable pesticides and the resulting toxicity risks decrease as the minimum sample size increases, particularly above a minimum of 4 species (see discussion). The results are interpreted analogously to SSD: low risks occur below 1 % PAF, moderate risks between 1 and 5 % PAF, and high risks above 5 % PAF.
Measured environmental concentrations
There were 26 pesticides detected at the sampling sites: 13 fungicides, 7 herbicides, and 6 insecticides (Table 1). Detection frequencies varied from 1 to 48 occurrences per pesticide in a total of 68 samples. The herbicide diuron was the most commonly detected pesticide and was found in 62 % of samples. Water samples contained a median of 4 pesticides and a maximum of 16 pesticides. The median concentration of a pesticide was 0.13 μg/L (excluding non-detects), and the maximum was 24.0 μg/L for diuron (Table 1).
Table 1 Pesticide occurrences and measured environmental concentrations (MEC) in 68 samples at five sites in the RMD watershed (2011–2012). Columns: analyzed samples (n), detections (n), avg. MEC (μg/L), and max. MEC (μg/L). Rows include the fungicides azoxystrobin, bitertanol, chlorothalonil, difenoconazole, epoxiconazole, imazalil, metalaxyl, myclobutanil, propiconazole, pyrimethanil, tebuconazole, and triadimenol; the herbicides bromacil, butachlor, diuron, hexazinone, oxyfluorfen, and terbutryn; and the insecticides chlorpyrifos, ethoprophos, fenamiphos, and terbufos. The pesticide metabolites carbofuran phenol and terbufos sulfone and the chemicals deet and dichloroaniline were not included in the toxicity risk assessment. F fungicide, H herbicide, I insecticide.
The pre-compiled PERPEST case base contained mesocosm effect data for 3 of the herbicides and 4 of the insecticides detected in the RMD: the herbicides diuron, hexazinone, and terbutryn and the insecticides carbaryl, carbofuran, chlorpyrifos, and diazinon.
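Equations (1) and (2) above translate directly into a few lines of code. The sketch below is our own illustration with hypothetical hazard-unit-scaled concentrations and SSD parameters, not the spreadsheet used in the study; it replaces the Excel NORM.DIST call with the equivalent normal cumulative distribution function and then combines the per-TMoA values by response addition. Following the method description, the summed concentrations are log10-transformed before evaluating the CDF, since α and σ are derived from log toxicity data.

import numpy as np
from scipy.stats import norm

# Hypothetical per-TMoA inputs for one water sample: MECs already rescaled to
# dimensionless hazard units (HU), plus the average alpha and sigma of the TMoA.
sample = {
    "photosynthesis inhibitors":       {"mec_hu": [0.08, 0.03], "alpha": 0.0, "sigma": 0.7},
    "acetylcholinesterase inhibitors": {"mec_hu": [0.01, 0.02], "alpha": 0.0, "sigma": 0.6},
}

mspaf_ca = {}
for tmoa, d in sample.items():
    total = sum(d["mec_hu"])  # concentration addition within a shared TMoA
    # Equivalent of Excel NORM.DIST(x, alpha, sigma, 1): the normal CDF
    mspaf_ca[tmoa] = norm.cdf(np.log10(total), loc=d["alpha"], scale=d["sigma"])

# Response addition across TMoA, equation (2)
mspaf_ra = 1.0 - np.prod([1.0 - p for p in mspaf_ca.values()])
print(mspaf_ca, round(mspaf_ra, 3))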
The 19 remaining pesticides were added to the PERPEST model using their physico-chemical properties and toxicity data from the literature (Table 2). The results show that clear effects are likely (>50 % maximum probability) to occur on community metabolism and on the species communities of algae and macrophytes, zooplankton, and macroinvertebrates (Table 3). Clear effects were, however, not likely to occur on fish and tadpoles at any sampling site (the maximum probability of a clear effect on fish and tadpoles was 43 %).
Table 2 Pesticide properties entered into the PERPEST program for assessment of pesticide mixtures: pesticide name, molecule group, aquatic-phase DT50 (d), HC50 (μg/L), Henry's law constant at 25 °C (Pa m3 mol−1), and Kow at 20 °C. Physico-chemical properties retrieved from the PPDB (Lewis et al. 2016); HC50 values derived in this paper. Aquatic-phase DT50: n/a, not available; stable, stable compound in water (entered as 999 into the PERPEST program). DT50 half-life degradation time; HC50 hazard concentration for 50 % of species; Kow octanol/water partition coefficient.
Table 3 Average, standard deviation (in parentheses), and maximum (bold text) probability of clear effect (%) derived from PERPEST for pesticide mixtures at each of the five study sites (RMD-S, CA-S, RMD-F, CPama-J, URMD-CPama), and the average number of analogous cases in the PERPEST case base for predictions of each endpoint (algae and macrophytes, zooplankton, macroinvertebrates, fish and tadpoles, community metabolism). A blank (–) indicates that no result was obtained (no analogous cases or estimation out of bounds, i.e., near-zero risk).
The highest probabilities of clear effects for all endpoints were observed at the site CA-S, with a 95 % probability of clear effects on community metabolism, an 84 % probability on algae and macrophytes, a 79 % probability on zooplankton, a 73 % probability on macroinvertebrates, and a 43 % probability of clear effects on fish and tadpoles. The results also show high variance in the probability of clear effect within sites (Table 3), with coefficients of variation ranging from 0.13 to 2.23 (median CV of 0.70), which indicates that there are both temporal and spatial variations in toxicity risks and suggests that peak concentrations may influence the apparent toxicity of pesticide mixtures. SSDs could be generated for 19 pesticides (Table 4). The 7 pesticides that were excluded had too few toxicity data points in the literature and current databases for the selected species group (bitertanol, butachlor, epoxiconazole, imazalil, myclobutanil, thiabendazole, triadimenol).
Table 4 Results of SSD: median HC5 (μg/L), species count (n), and maximum PAF (%) of pesticides at the study sites. Moderate risk (PAF > 1 %) in bold text. "Full" indicates that all species were modeled for the pesticide: primary producers, fish, and arthropods for fungicides; primary producers for herbicides; fish and arthropods for insecticides. nd, no detection; n/a, unquantifiable (near-zero); a, removed outlier(s).
The pesticides that had the highest toxicity risks were the insecticides carbaryl, diazinon, ethoprophos, and terbufos, which were found to pose moderate (>1 % PAF) or high (>5 % PAF) risks to fish and arthropod species, and the herbicides ametryn, bromacil, and diuron, which were found to pose moderate or high risks to primary producers.
Each of the assessed fungicides posed only low risks (<1 % PAF) to primary producers, fish, and arthropods. The highest risks from single substances were observed at CA-S, where peak concentrations of the herbicides ametryn and diuron were predicted to affect 67 and 46.5 % of primary producers, respectively, and the insecticide diazinon was predicted to affect 11.5 % of crustaceans at levels higher than 50 % effect (Table 4). The 26 pesticides detected in the field were divided into 19 principal TMoA by chemical groups. Pesticides were further separated into distinct TMoA when SSD slopes differed between pesticides in a TMoA (Table 5). Toxicity data was generally sufficient to include at least 10 pesticides for msPAF assessment on each species group, except for aquatic plants, for which only 2 pesticides could be included (Table 6). We found a moderate to high risk of toxic effects on primary producers (mean 4.0 % msPAF, 9.6 % s.d.) with a peak effect on 75 % of the primary producers at CA-S (Fig. 2). Effects were similar on algae (mean 3.6 %, 9.2 % s.d., max 73 %), whereas there was a higher average effect (mean 12.8 %, 13 % s.d.) but a lower peak effect (61 % msPAF) on aquatic plants. The results showed a moderate to low risk (mean 1.6 % msPAF, 2.6 % s.d.) to fish and arthropods, with a high risk at peak effect (12 % max msPAF). The risks to arthropods were moderate to high (mean 3.1 %, 4.8 % s.d., maximum 25 %), but the risks to fish were consistently low (maximum 0.2 % msPAF [Fig. 2]).
Table 5 The type, chemical group, and TMoA assigned to pesticides assessed in msPAF, for each species group (fish and arthropods, fish (N/LOEC), and primary producers). Letters (A–F) indicate pesticides placed in distinct TMoA; pesticides without an assigned TMoA for a species group were not assessed. Chemical groups include acylamino acid/anilide, anilinopyrimidine, benzimidazole/thiazole, conazole (imidazoles), conazole (triazoles), methoxyacrylate strobilurin, triazole, chloroacetanilide, methylthiotriazine, nitrophenyl ether, phenylurea, triazinone, uracil, aliphatic organothiophosphate, benzofuranyl methylcarbamate, phosphoramidate, pyridine organothiophosphate, and pyrimidine organothiophosphate. The fungicide epoxiconazole was excluded from the assessment as there was insufficient toxicity data to assess the substance with any species group.
Table 6 Data available for msPAF calculations: species count (n), with data point count in parentheses, and standard deviation (σ) for each detected pesticide.
Fig. 2 Result of msPAF for six species groups: maximum msPAF (gray bars) and average msPAF (black bars) at each site. Note the differences in the scale of the y-axis between the graphs.
Given the low toxicity risk to fish, we produced an additional msPAF using available data on no and lowest observed effect concentrations (NOEC, LOEC) to assess "onset effects" on fish (effects on a small percentage of fish populations). Sufficient literature NOEC or LOEC data was available for only 3 of the 26 pesticides, divided into separate TMoA: chlorpyrifos, diazinon, and hexazinone. This msPAF on NOEC and LOEC data showed a low average risk to fish (mean 0.1 %, 0.3 % s.d.), with a peak effect causing moderate risk (1.7 % msPAF) at CA-S.
Determining which pesticides cause the highest risks in msPAF
An assessment of the pesticides causing the highest risks in the RMD was conducted by first calculating the cumulative risk posed by each TMoA over the study period as the sum(msPAFCA) for each TMoA and species group.
The contribution from each pesticide to the cumulative risk of its TMoA was then calculated as the cumulative MEC of the pesticide over the study period, expressed as the sum(MEC) for the pesticide in non-dimensional HU units. This was followed by assigning a fraction of the cumulative risk of a TMoA to each pesticide corresponding to its fraction of cumulative MEC for its TMoA. The result is the fraction of risk contributed by each pesticide to each species group over the study period. The use of cumulative risk is an attempt to describe the relative risks of pesticides without selecting a parameter such as mean or maximum concentrations which could introduce biases stemming from exposure patterns. A pesticide in this system may be ranked among the top risk contributors by posing a frequent but low risk to the environment or by posing an infrequent but high risk. The top ranked pesticides are likely to be those that could be placed into both categories. This revealed that the herbicides ametryn and diuron and the fungicide difenoconazole are responsible for more than 90 % of the cumulative toxicity risk in primary producers, algae, and aquatic plants (Table 7). Diuron poses a much higher cumulative risk to aquatic plants than to algae: the sum(msPAFCA) of diuron is 961 % for aquatic plants and 57 % for algae. This suggested that aquatic plants are much more sensitive to diuron than algae, given that the exposure to diuron is the same for both groups. Similarly, more than 90 % of cumulative risk to fish, arthropods, and the fish and arthropod group is caused by the insecticides: chlorpyrifos, diazinon, ethoprophos, and the fungicide difenoconazole, as well as the insecticide carbaryl for arthropods. The msPAF on fish NOEC and LOEC data suggested that the herbicide hexazinone may cause onset lethal effects in fish, but this herbicide does not contribute to toxicity risks to fish at median effect levels (Table 7). Contribution of each pesticide to total cumulative risk in msPAF Primary producers (%) Algae (%) Aquatic plants (%) Fish and arthropods (%) Arthropods (%) Fish (%) Fish (N(L)OEC) (%) Bold values are the pesticides associated with at least 90 % of cumulative risk to the species group Ranking of study sites based on risks A ranking of the relative risks at the five study sites in the RMD was created using toxicity risk values from the three risk assessment models. Each site was assigned a rank based on relative risk and given a score of 1 for the lowest risk, up to a score of 5 for the highest risk. Sites were ranked based on the average probability of clear effects in PERPEST, the maximum PAF in SSD, and the average msPAF. As several endpoints or species groups were assessed in each model, a total of 15 results were ranked (Table 8). The lowest toxicity risk (score of 22) was found at RMD-S, which is located upstream of most plantation effluents (Fig. 1). The CA-S site located in the Caño Azul tributary had the highest risk (score of 72). Located downstream of RMD-S and CA-S, RMD-F had the second highest risk (score of 49). The second tributary site, CPama-J, ranked third (score of 44), and the URMD-CPama site downstream of CPama-J and RMD-F ranked fourth (score of 38). These results suggest that the largest source of pesticide pollution is the Caño Azul tributary, which initially affects CA-S followed by a gradually declining toxicity risk at the sites further downstream. 
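The rank-sum scoring described above is straightforward to reproduce. The sketch below uses made-up risk values rather than the study's results; it ranks the five sites within each assessed endpoint (score 1 for the lowest risk, 5 for the highest) and sums the ranks into a total score per site, as was done across the 15 ranked results behind Table 8.

import numpy as np
from scipy.stats import rankdata

sites = ["RMD-S", "CA-S", "RMD-F", "CPama-J", "URMD-CPama"]
# Hypothetical risk values: one row per ranked result (15 in the study),
# one column per site; higher values indicate higher toxicity risk.
risk = np.array([
    [0.5, 9.0, 4.0, 3.0, 2.0],
    [0.1, 7.5, 5.0, 4.5, 1.0],
    [1.0, 8.0, 3.5, 2.5, 2.0],
])

scores = np.vstack([rankdata(row) for row in risk])  # rank 1 = lowest risk per endpoint
totals = scores.sum(axis=0)
for site, total in sorted(zip(sites, totals), key=lambda x: -x[1]):
    print(site, total)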
Overall, the relative risks at the five sites in the RMD watershed are ordered as follows (Table 8): CA-S > RMD-F > CPama-J > URMD-CPama > RMD-S.
Table 8 Ranking of sites by the relative risks in the three applied models, by assessed group, with the total score (and average). Higher values indicate higher toxicity risks.
The three risk assessment models showed evident toxicity risks to aquatic organisms due to pesticide pollution in the RMD. The Caño Azul tributary (CA-S), which receives agricultural surface water runoff from pineapple and banana plantations, poses particularly high risks to aquatic organisms, and the CA-S site is associated with the highest risks in each of the three models: a 95 % probability of clear effects on community metabolism in PERPEST, a 67 % PAF for primary producers from ametryn in SSD, and a 75 % msPAF on primary producers. Similarities between the pesticide residue profiles of samples taken at CA-S and the downstream sites RMD-F and URMD-CPama within specific sampling dates show that pesticides travel downstream from the Caño Azul tributary to pollute large areas of the RMD main stem. We identified 3 pesticides in msPAF associated with 90 % of median toxicity risks to primary producers and 5 pesticides with the same magnitude of effects on fish and arthropods: the fungicide difenoconazole, the herbicides ametryn and diuron, and the insecticides carbaryl, chlorpyrifos, diazinon, and ethoprophos (Table 7). A previous toxicity assessment of pesticide usage in Costa Rica found that 75 % of aquatic ecotoxicity was likely to be caused by diazinon, mancozeb, chlorothalonil, terbuthylazine, and ethoprophos (Humbert et al. 2007). The insecticides chlorpyrifos, diazinon, and ethoprophos were thus some of the most toxic pesticides for aquatic organisms in both the previous and the present study. The present study also found high risks of the fungicide difenoconazole and the herbicides ametryn and diuron, which have not previously been reported. Humbert et al. (2007) found a high toxicity risk of the fungicide mancozeb, one of the most commonly used fungicides in this area. Mancozeb was, however, not analyzed in the present study. Many current-use pesticides are still a challenge to detect in environmental samples or are not yet analyzed in the pesticide analysis laboratory at IRET. The list of omitted pesticides includes 9 of the 16 most imported pesticides, the majority of which are classified as highly toxic to aquatic biota (De la Cruz et al. 2014). Thus, the actual toxicity risks to aquatic organisms may be underestimated in this study.
Pesticide pollution and fish kills
Fish kills have been frequently observed in the RMD, and pesticide pollution has been suggested as a probable cause (CGR 2013; Diepens et al. 2014). The present study found that toxicity risks to fish were low in all three risk assessment models. There was a low probability of clear effects on fish and tadpoles in PERPEST; however, the risks to this endpoint are difficult to assess with PERPEST, as the underlying case base still contains relatively few fish and tadpole data from mesocosm experiments (1.3 analogous cases, on average) compared to other taxonomic groups (4.2 to 11.4 analogous cases, see Table 3). The SSD model found high risks to fish and arthropods. However, the fish and arthropod group was found in msPAF to underestimate risk to arthropods and to overestimate risks to fish (compare in Fig. 2), suggesting that results from the fish and arthropod SSDs cannot be used to predict effects on either fish or arthropods separately.
This is also seen in the SSD results, where fish were more robust than arthropods in three cases where SSDs were derived for fish and arthropods as separate groups: a fish SSD was reported for chlorpyrifos only because the arthropod toxicity data was non-log-normal distributed (Table 4). Additionally, there was a low risk to fish in msPAF (<1 % msPAF), which suggests that large mortality events (>50 % mortality in a fraction of species) are not likely to occur from exposure to the pesticide mixtures measured in the RMD during the 2-year sampling period. There was a moderate risk to fish (>1 % msPAF) in the msPAF using NOEC and LOEC data, but these effect levels are widely acknowledged to be of poor quality (Jager 2012; Landis and Chapman 2011; Laskowski 1995). Although these effect levels aim to describe the highest concentration not causing (or the lowest concentration causing) toxic effects in organisms, they are associated with other toxic effect levels in practice, see Crane and Newman (2000). Nevertheless, the three pesticides assessed were found to cause a low to moderate risks of such "onset effects." Overall, the three risk assessment models suggest that fish are less affected than other taxonomic groups and the detected concentrations of pesticides cannot explain the observed mass mortality of fish in the RMD. The apparent risk of pesticides may, however, be underestimated as highly toxic insecticides such as pyrethroids were not analyzed in this study, and because pesticide concentrations are expected to peak following pesticide application or heavy rainfall, while our sampling efforts did not aim to measure such peak concentrations. However, if we define peak concentrations as statistical outliers (>2 standard deviations [σ] above the mean), we did observe peak concentrations for 12 out of 26 pesticides. Two herbicides had particularly high peaks at 5.9 σ (ametryn) and 6.2 σ (diuron) above their means. Some insecticides reported in the present study are applied at high doses a few times per year to control nematode pests, but the measured peaks of insecticides were lower than those of herbicides (chlorpyrifos [3.5 σ], diazinon [3.6 σ], ethoprophos [4.2 σ]). We also acknowledge that it is possible that pesticide concentrations and associated risks to aquatic organisms are occasionally higher than those reported in the present study as the number of samples (15 per site) may be considered small in relation to the high variability observed in pesticide occurrence. Given that we did not observe fish kills during the sampling effort, unobserved peaks of pesticides (particularly insecticides used as nematicides) remain a potential cause that merits further investigation. We recommend further investigation into the toxicity risks of short pesticide pulses (e.g., peak concentrations) associated with pesticide application and rainfall. An investigation in the Sixaola River on the Caribbean coast of Costa Rica assessed the toxicity of three pesticides (chlorpyrifos, difenoconazole, and terbufos) in water samples and passive samplers already deployed during fish kill events using hazard quotients and found that measured pesticide concentrations did not pose a high risk of mortality in fish (Polidoro and Morra 2016). They concluded that a combination of multiple stressors, e.g., mixtures of pesticides, low oxygen content, high temperatures, high nutrients, and ecological effects including species interactions, may contribute to lower toxicity risk thresholds in the Sixaola watershed. 
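The outlier-based definition of peak concentrations used earlier in this section (a detected concentration more than 2 standard deviations above that pesticide's mean) is easy to screen for programmatically. The sketch below uses placeholder concentrations rather than the measured data and simply reports which values qualify as peaks and how high the largest peak sits in units of σ.

import numpy as np

def peak_concentrations(concs, k=2.0):
    """Return a boolean mask of values more than k standard deviations above the mean."""
    c = np.asarray(concs, dtype=float)
    mu, sd = c.mean(), c.std(ddof=1)
    z = (c - mu) / sd
    return z > k, z.max()

# Hypothetical detected concentrations (ug/L) of one pesticide across samples
concs = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.1, 0.2, 0.35, 0.3, 0.2, 24.0]
flags, peak_sigma = peak_concentrations(concs)
print(flags.nonzero()[0], round(peak_sigma, 1))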
The present study investigated the effects of pesticide mixtures, as well as ecological effects of pesticides, using the PERPEST model and found that clear effects on fish and tadpoles were still unlikely. Similarities and differences between the Sixaola study and the present study in the RMD show that toxicity studies need to assess a wider range of pesticides and include other stressors to gain a better understanding of the most probable causes of mortality in fish populations in tropical aquatic ecosystems. Stressors are often coupled, for example, heavy rainfall leads to runoff of fertilizers and soil in addition to pesticides, which may cause eutrophication and oxygen depletion. Eutrophication may be further associated with harmful algae blooms that cause fish kills (Paerl and Otten 2013) and have been documented on the Pacific coast of Costa Rica (Vargas-Montero et al. 2006). Algae blooms have not yet been reported or studied in the RMD, but the presence of herbicides may provide a competitive advantage for blooming cyanobacteria (Lurling and Roessink 2006) and decomposing blooms may contribute to oxygen depletion and fish kills in the RMD (Diepens et al. 2014). Comparison and recommended use of the models The three risk assessment models used in this study are all based on comparing environmental concentrations to toxicity benchmark values obtained from the literature. The models apply different methods to assess risks, and the similarities and dissimilarities, and strengths and weaknesses, of each model are discussed below for the purpose of future ERA use in similar aquatic ecosystems. The PERPEST model uses data gathered from mesocosm experiments (where pesticides are applied in ponds or tanks containing organisms from several trophic levels, e.g., microalgae, macrophytes, zooplankton, benthic invertebrates, fish). Apart from measuring direct effects (e.g., mortality) on multiple species, mesocosms allow assessment of indirect ecological effects (e.g., prey-predator interactions) of single pesticides or pesticide mixtures. This means that PERPEST has the highest ecological relevance out of the three models, but the absence of established threshold risk values (such as HC5 in SSD) makes the PERPEST model challenging to use in risk management. However, threshold values of acceptable probabilities could be set easily (e.g., 10 % probability of clear effect). Additionally, few mesocosm toxicity data available for fish and tadpoles may pose a problem in risk assessment, as fewer analogous cases lead to higher uncertainty in fish and tadpoles than other endpoints. On the other hand, the relative ease of meeting data requirements, the wide range of assessed endpoints, and high ecological relevance of the PERPEST model make it ideal for screening risk assessments of pesticide mixtures occurring in the field. It is also a more comprehensive approach than the risk (or hazard) quotient approaches currently used for screening risk assessments in Costa Rica, see, e.g., Polidoro and Morra (2016). The SSD model uses data from single species toxicity tests and is dependent on data availability for its accuracy. There are many views on data requirements for SSDs, but the confidence interval (CI) is sometimes used to specify the uncertainty of SSD predictions. 
This has been shown to be a fixed number depending on the sample size of data in a normal distribution: at a 5 % PAF, n = 3 results in a 46 % upper confidence limit, decreasing to 20 % at n = 10 and 12 % at n = 30 (Aldenberg and Jaworska 2000). Wheeler et al. (2002) similarly observed that their SSD outputs stabilized at 10–15 data points and consequently recommended that regulatory decisions should be based on SSDs of at least 10 species. The present study used a pragmatic approach to allow assessment of 19 of 26 pesticides with a minimum of 6 species, but 12 pesticides could be modeled using at least 10 species (Table 4). The present study used a moderately wide selection of species for SSDs and assessed primary producers separately from fish and arthropods. However, these SSDs describe the sensitivity of the collective species group and may leave smaller, important groups of species at risk despite their being included in the assessment. We have shown that assessing toxicity data on fish and arthropods together is overprotective for fish and underprotective for arthropods based on msPAF (see Fig. 2). Species selection consequently has a large impact on the interpretation of results when extrapolating or interpolating effects to species other than the precise group that is assessed. Similar effects have been observed for herbicides, and only the most sensitive primary producers were recommended to be used in SSDs (Van den Brink et al. 2006a). Our results also show that three or four pesticides are required to explain 90 % of cumulative mixture toxicity in well-studied species groups in the RMD (Table 7). This suggests that risks derived for individual substances are poor estimates for mixtures and consequently that SSDs are not suitable for assessing mixtures. The SSD concept has an established role in setting environmental threshold concentrations (i.e., EQS) for individual toxic substances, but the present study shows that mixture scenarios should be considered when deriving EQS for highly polluted ecosystems. Mixture assessment factors have been proposed as a method to account for mixture effects when developing EQS, see Backhaus et al. (2010). The msPAF model uses the principles of the SSD concept to assess the toxicity of complex mixtures. We found that msPAF resulted in higher risk values than those predicted by the single-substance approach of SSDs, and that the toxicity of mixtures should be considered over single substances when pesticide mixtures occur in the field. However, further research is needed to determine which available TMoA classifications are the most suitable for use in msPAF and other toxicity models (such as PERPEST) that aim to apply both CA and RA models to assess the toxicity of mixtures. The present study has found that setting a minimum sample size for msPAF may have negative effects on the results of a mixture risk assessment. We compared the msPAF results in the present study over a range of hypothetical minimum sample sizes (2, 4, 6, and 10 species) and found that the apparent toxicity (as an estimate of actual toxicity) to species groups with scarce data (aquatic plants) diminished above a 4-species minimum, and that the apparent maximum toxicity to a well-studied species group (algae) was strongly reduced above a 6-species minimum (Fig. 3). Furthermore, the number of assessed pesticides decreased continuously with an increase in minimum sample size (Fig. 4).
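The effect of the minimum-sample-size filter can be illustrated with a short sketch. The species counts and per-pesticide PAF contributions below are invented, and each pesticide is treated as its own TMoA for simplicity; the point is only the mechanism: raising the minimum excludes more mixture components, which lowers the apparent msPAF and the number of assessable pesticides.

# Hypothetical species counts and per-pesticide PAF contributions for one species group
species_count = {"ametryn": 29, "diuron": 31, "difenoconazole": 5, "bromacil": 3}
paf_ca = {"ametryn": 0.30, "diuron": 0.20, "difenoconazole": 0.04, "bromacil": 0.02}

for min_n in (2, 4, 6, 10):
    kept = [p for p, n in species_count.items() if n >= min_n]
    mspaf = 1.0
    for p in kept:
        mspaf *= (1.0 - paf_ca[p])  # response addition over the retained components
    mspaf = 1.0 - mspaf
    print(min_n, len(kept), round(mspaf, 3))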
These trends suggest that implementing a minimum number of species (or toxicity tests) may lead to a less protective risk assessment, as the excluded fraction of mixture components may strongly contribute to the apparent toxicity of the mixture. The msPAF model may, as part of a first tier assessment, be used to support a decision between further quantification of risks, remediation actions, or approval of the ecological status of an ecosystem, and it is therefore paramount that risks are not underestimated due to an assessment being carried out on partial mixtures. The maximum and average msPAF toxicity values for four species groups in the RMD at four select minimum sample sizes Number of pesticides (n) available for msPAF modeling of four species groups in the RMD at four select minimum sample sizes The present study has found that pesticides detected downstream banana, pineapple, and rice plantations in the River Madre de Dios (RMD) pose high risks to zooplankton, macroinvertebrates, algae, macrophytes, and overall community metabolism. Measures are urgently needed in order to reduce these toxicity risks and the release of highly toxic pesticides into the RMD. Seven pesticides were identified to cause 90 % of apparent toxicity risks in the msPAF model: the fungicide difenoconazole, the herbicides ametryn and diuron, and the insecticides carbaryl, chlorpyrifos, diazinon, and ethoprophos. This study included 26 pesticides that could be analyzed and were detected in the RMD, but several other pesticides are applied in these plantations, such as mancozeb and pyrethroids, that were not included as they are more difficult to analyze in the local laboratories. The apparent toxicity risks may therefore be underestimated, further stressing the need for mitigation actions in the RMD. We suggest that further studies should be carried out to determine the causes of reported fish kills, focusing on peak concentrations following pesticide application and rainfalls and multiple stressors other than pesticides (e.g., nutrients, oxygen content, temperature, algal blooms). The PERPEST model was found to be well-suited for screening risk assessments of pesticide mixtures. The SSD concept can be used to set protective environmental quality standards for single substances within mixtures provided appropriate safety factors are used. The msPAF model was here found to be the most comprehensive tool for environmental risk assessment of mixtures and offers the advantage of assessing pesticides with very limited toxicity data provided that their toxic modes of action are known. The authors would like to acknowledge Julio Knight and his family for assistance in the field, Dick de Zwart (Wageningen Univ., the Netherlands) for his assistance with the initial msPAF methodology and Geannina Moraga (UNA, Costa Rica) for her assistance with maps and charting land use in the watershed. Funding was provided through the Swedish Research Council FORMAS (grant no. 2007-282), through a Minor Field Study grant by the Swedish International Development Cooperation Agency (SIDA), by funding from Universidad Nacional, and by faculty funding from Stockholm University. Aldenberg T, Jaworska JS (2000) Uncertainty of the hazardous concentration and fraction affected for normal species sensitivity distributions. Ecotoxicol Environ Saf 46:1–18. doi: 10.1006/eesa.1999.1869 CrossRefGoogle Scholar Altenburger R, Arrhenius Å, Backhaus T, Coors A, Faust M, Zitzkat D (2013) Ecotoxicological combined effects from chemical mixtures. 
Part 1: relevance and adequate consideration in environmental risk assessment of plant protection products and biocides. Federal Environment Agency (UBA), Dessau-RoßlauGoogle Scholar Altenburger R, Backhaus T, Boedeker W, Faust M, Scholze M, Grimme LH (2000) Predictability of the toxicity of multiple chemical mixtures to Vibrio fischeri: mixtures composed of similarly acting chemicals. Environ Toxicol Chem 19:2341–2347. doi: 10.1897/1551-5028(2000)019<2341:Pottom>2.3.Co;2 CrossRefGoogle Scholar Backhaus T, Altenburger R, Faust M, Frein D, Frische T, Johansson P, Kehrer A, Porsbring T (2013) Proposal for environmental mixture risk assessment in the context of the biocidal product authorization in the EU. Environ Sci Eur 25:9. doi: 10.1186/2190-4715-25-4 CrossRefGoogle Scholar Backhaus T, Blanck H, Faust M (2010) Hazard and risk assessment of chemical mixtures under REACH—state of the art, gaps and options for improvement. Swedish Chemicals Agency (KemI), SundbybergGoogle Scholar Bravo V, de la Cruz E, Herrera Ledezma G, Ramírez F (2013) Agricultural pesticide use as tool for monitoring health hazards. Uniciencia 27:351–376 in SpanishGoogle Scholar Castillo LE, de la Cruz E, Ruepert C (1997) Ecotoxicology and pesticides in tropical aquatic ecosystems of Central America. Environ Toxicol Chem 16:41–51. doi: 10.1897/1551-5028(1997)016<0041:Eapita>2.3.Co;2 CrossRefGoogle Scholar Castillo LE, Martinez E, Ruepert C, Savage C, Gilek M, Pinnock M, Solis E (2006) Water quality and macroinvertebrate community response following pesticide applications in a banana plantation, Limon, Costa Rica. Sci Total Environ 367:418–432. doi: 10.1016/j.scitotenv.2006.02.052 CrossRefGoogle Scholar CCME (2007) A protocol for the derivation of water quality guidelines for the protection of aquatic life 2007. Canadian Council of Ministers of the Environment, WinnipegGoogle Scholar CGR (2013) Report on the effectiveness of the State to ensure the quality of water in its different uses. Controloria General de la Republica, San José, Costa Rica. https://cgrfiles.cgr.go.cr/publico/jaguar/sad_docs/2013/DFOE-AE-IF-01-2013.pdf (in Spanish) CORBANA (2013) Banana production areas. Corporación Bananera Nacional. www.corbana.co.cr/categories/categoria_1348198131. Accessed 2014–11-03 (in Spanish) Crane M, Newman MC (2000) What level of effect is a no observed effect? Environ Toxicol Chem 19:516–519CrossRefGoogle Scholar Daam MA, Van den Brink PJ (2010) Implications of differences between temperate and tropical freshwater ecosystems for the ecological risk assessment of pesticides. Ecotoxicology 19:24–37. doi: 10.1007/s10646-009-0402-6 CrossRefGoogle Scholar De la Cruz E, Bravo-Duran V, Ramirez F, Castillo LE (2014) Environmental hazards associated with pesticide import into Costa Rica, 1977-2009. J Environ Biol 35:43–55Google Scholar De Zwart D (2002) Observed regularities in species sensitivity distributions for aquatic species. In: Posthuma L, Suter II GW, Traas TP (eds) Species sensitivity distributions in ecotoxicology. Lewis Publishers, Boca Raton, pp. 133–154Google Scholar De Zwart D, Posthuma L (2005) Complex mixture toxicity for single and multiple species: proposed methodologies. Environ Toxicol Chem 24:2665–2676. doi: 10.1897/04-639r.1 CrossRefGoogle Scholar De Zwart D, Posthuma L, Gevrey M, von der Ohe PC, de Deckere E (2009) Diagnosis of ecosystem impairment in a multiple-stress context—how to formulate effective river basin management plans. Integr Environ Assess Manag 5:38. 
doi: 10.1897/ieam_2008-030.1 CrossRefGoogle Scholar Diepens NJ, Pfennig S, Van den Brink PJ, Gunnarsson JS, Ruepert C, Castillo LE (2014) Effect of pesticides used in banana and pineapple plantations on aquatic ecosystems in Costa Rica. J Environ Biol 35:73–84Google Scholar Echeverria-Saenz S, Mena F, Pinnock M, Ruepert C, Solano K, de la Cruz E, Campos B, Sanchez-Avila J, Lacorte S, Barata C (2012) Environmental hazards of pesticides from pineapple crop production in the Rio Jimenez watershed (Caribbean Coast, Costa Rica). Sci Total Environ 440:106–114. doi: 10.1016/j.scitotenv.2012.07.092 CrossRefGoogle Scholar EFSA (2013) Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters. Vol 11. European Food Safety Authority, ParmaGoogle Scholar FAO (2014) Banana market review and banana statistics 2012–2013. Intergovernmental group on bananas and tropical fruits. Food and Agriculture Organization of the United Nations, RomeGoogle Scholar FAOSTAT Database (2016) Statistics Division, Food and Agriculture Organization of the United Nations. http://faostat3.fao.org/. Accessed 2016–04-01 Faust M, Altenburger R, Backhaus T, Blanck H, Boedeker W, Gramatica P, Hamer V, Scholze M, Vighi M, Grimme LH (2003) Joint algal toxicity of 16 dissimilarly acting chemicals is predictable by the concept of independent action. Aquat Toxicol 63:43–63. doi: 10.1016/S0166-445x(02)00133-9 CrossRefGoogle Scholar Grant PB, Woudneh MB, Ross PS (2013) Pesticides in blood from spectacled caiman (Caiman crocodilus) downstream of banana plantations in Costa Rica. Environ Toxicol Chem 32:2576–2583. doi: 10.1002/etc.2358 CrossRefGoogle Scholar Gregorio V, Buchi L, Anneville O, Rimet F, Bouchez A, Chevre N (2012) Risk of herbicide mixtures as a key parameter to explain phytoplankton fluctuation in a great lake: the case of Lake Geneva, Switzerland. Ecotoxicology 21:2306–2318. doi: 10.1007/s10646-012-0987-z CrossRefGoogle Scholar Hickey GL, Craig PS (2012) Competing statistical methods for the fitting of normal species sensitivity distributions: recommendations for practitioners. Risk Anal 32:1232–1243. doi: 10.1111/j.1539-6924.2011.01728.x CrossRefGoogle Scholar Humbert S, Margni M, Charles R, Salazar OMT, Quirós AL, Jolliet O (2007) Toxicity assessment of the main pesticides used in Costa Rica. Agric Ecosyst Environ 118:183–190. doi: 10.1016/j.agee.2006.05.010 CrossRefGoogle Scholar Jager T (2012) Bad habits die hard: the NOEC's persistence reflects poorly on ecotoxicology. Environ Toxicol Chem 31:228–229. doi: 10.1002/etc.746 CrossRefGoogle Scholar Jesenska S, Nemethova S, Blaha L (2013) Validation of the species sensitivity distribution in retrospective risk assessment of herbicides at the river basin scale—the Scheldt river basin case study. Environ Sci Pollut Res 20:6070–6084. doi: 10.1007/s11356-013-1644-7 CrossRefGoogle Scholar Klemens JA, Wieland ML, Flanagin VJ, Frick JA, Harper RG (2003) A cross-taxa survey of organochlorine pesticide contamination in a Costa Rican wildland. Environ Pollut 122:245–251CrossRefGoogle Scholar Kooijman SALM (1987) A safety factor for LC50 values allowing for differences in sensitivity among species. Water Res 21:269–276CrossRefGoogle Scholar Kwok KWH et al. (2007) Comparison of tropical and temperate freshwater animal species' acute sensitivities to chemicals: implications for deriving safe extrapolation factors. 
Integr Environ Assess Manag 3:49–67CrossRefGoogle Scholar Lambert JC, Lipscomb JC (2007) Mode of action as a determining factor in additivity models for chemical mixture risk assessment. Regul Toxicol Pharmacol 49:183–194. doi: 10.1016/j.yrtph.2007.07.002 CrossRefGoogle Scholar Landis WG, Chapman PM (2011) Well past time to stop using NOELs and LOELs. Integr Environ Assess Manag 7:vi–viii. doi: 10.1002/ieam.249 CrossRefGoogle Scholar Laskowski R (1995) Some good reasons to ban the use of NOEC, LOEC and related concepts in ecotoxicology. Oikos 73:140–144CrossRefGoogle Scholar Lewis KA, Green A, Tzilivakis J, Warner D (2016) An international database for pesticide risk assessments and management. Hum Ecol Risk Assess. doi: 10.1080/10807039.2015.1133242 CrossRefGoogle Scholar Lurling M, Roessink I (2006) On the way to cyanobacterial blooms: impact of the herbicide metribuzin on the competition between a green alga (Scenedesmus) and a cyanobacterium (Microcystis). Chemosphere 65:618–626. doi: 10.1016/j.chemosphere.2006.01.073 CrossRefGoogle Scholar Maltby L, Blake N, Brock TC, van den Brink PJ (2005) Insecticide species sensitivity distributions: importance of test species selection and relevance to aquatic ecosystems. Environ Toxicol Chem 24:379–388CrossRefGoogle Scholar Maltby L, Brock TC, Van den Brink PJ (2009) Fungicide risk assessment for aquatic ecosystems: importance of interspecific variation, toxic mode of action, and exposure regime. Environ Sci Technol 43:7556–7563CrossRefGoogle Scholar MINAE (2011) General procedure and guidelines for ERA for registration of synthetic pesticide formulas. Ministry of Environment and Energy, San José in SpanishGoogle Scholar Paerl HW, Otten TG (2013) Harmful cyanobacterial blooms: causes, consequences, and controls. Microb Ecol 65:995–1010. doi: 10.1007/s00248-012-0159-y CrossRefGoogle Scholar Polidoro BA, Morra MJ (2016) An ecological risk assessment of pesticides and fish kills in the Sixaola watershed, Costa Rica. Environ Sci Pollut Res. doi: 10.1007/s11356-016-6144-0 CrossRefGoogle Scholar Rico A, Waichman AV, Geber-Corrêa R, van den Brink PJ (2011) Effects of malathion and carbendazim on Amazonian freshwater organisms: comparison of tropical and temperate species sensitivity distributions. Ecotoxicology 20(4):625–634Google Scholar Smetanova S, Blaha L, Liess M, Schafer RB, Beketov MA (2014) Do predictions from species sensitivity distributions match with field data? Environ Pollut 189:126–133. doi: 10.1016/j.envpol.2014.03.002 CrossRefGoogle Scholar Suter GW II, Traas TP, Posthuma L (2002) Issues and practices in the derivation and use of species sensitivity distributions. In: Posthuma L, Suter II GW, Traas TP (eds) Species sensitivity distributions in ecotoxicology. Lewis Publishers, Boca Raton, pp. 437–474Google Scholar Sanchez-Bayo F, Hyne RV (2011) Comparison of environmental risks of pesticides between tropical and nontropical regions. Integr Environ Assess Manag 7(4):577–586Google Scholar Traas TP, Van de Meent D, Posthuma L, Hamers T, Kater BJ, De Zwart D, Aldenberg T (2002) The potentially affected fraction as a measure of ecological risk. In: Posthuma L, Suter II GW, Traas TP (eds) Species sensitivity distributions in ecotoxicology. Lewis Publishers, Boca Raton, pp. 315–344Google Scholar USEPA (2000) Methodology for deriving ambient water quality criteria for the protection of human health (2000). U.S. Environmental Protection Agency, Washington, D.C.Google Scholar USEPA (2016) ECOTOX User Guide: ECOTOXicology Database System. 
Version 4.0. Available: epa.goc/ecotox/. U.S. Environmental Protection Agency, Washington, D.C., USAGoogle Scholar Van den Brink PJ, Blake N, Brock TC, Maltby L (2006a) Predictive value of species sensitivity distributions for effects of herbicides in freshwater ecosystems. Hum Ecol Risk Assess 12:645–674. doi: 10.1080/10807030500430559 CrossRefGoogle Scholar Van den Brink PJ, Brown CD, Dubus IG (2006b) Using the expert model PERPEST to translate measured and predicted pesticide exposure data into ecological risks. Ecol Model 191:106–117. doi: 10.1016/j.ecolmodel.2005.08.015 CrossRefGoogle Scholar Van den Brink PJ, Roelsma J, Van Nes EH, Scheffer M, Brock TC (2002) Perpest model, a case-based reasoning approach to predict ecological risks of pesticides. Environ Toxicol Chem 21:2500–2506CrossRefGoogle Scholar Van Vlaardingen P, Traas T, Wintersen A, Aldenberg T (2004) A program to calculate hazardous concentrations and fraction affected, based on normally distributed toxicity data. National Institute for Public Health and the Environment (RIVM), Bilthoven, the NetherlandsGoogle Scholar Vargas-Montero M, Freer E, Jiménez-Montealegre R, Guzmán JC (2006) Occurrence and predominance of the fish killer Cochlodinium polykrikoides on the Pacific coast of Costa Rica. S Afr J Mar Sci 28:215–217. doi: 10.2989/18142320609504150 CrossRefGoogle Scholar Wheeler JR, Grist EP, Leung KM, Morritt D, Crane M (2002) Species sensitivity distributions: data and model choice. Mar Pollut Bull 45:192–202CrossRefGoogle Scholar 1.Department of Ecology, Environment and Plant Sciences (DEEP)Stockholm UniversityStockholmSweden 2.Alterra, Wageningen University and Research CentreWageningenThe Netherlands 3.Department of Aquatic Ecology and Water Quality ManagementWageningen UniversityWageningenThe Netherlands 4.Central American Institute for Studies on Toxic Substances (IRET)Universidad NacionalHerediaCosta Rica Rämö, R.A., van den Brink, P.J., Ruepert, C. et al. Environ Sci Pollut Res (2018) 25: 13254. https://doi.org/10.1007/s11356-016-7375-9 Accepted 01 August 2016
\begin{document} \begin{abstract} We introduce a class of $f(t)$-factorials, or $f(t)$-Pochhammer symbols, that includes many, if not most, well-known factorial and multiple factorial function variants as special cases. We consider the combinatorial properties of the corresponding generalized classes of Stirling numbers of the first kind which arise as the coefficients of the symbolic polynomial expansions of these $f$-factorial functions. The combinatorial properties of these more general parameterized Stirling number triangles we prove within the article include analogs to known expansions of the ordinary Stirling numbers by $p$-order harmonic number sequences through the definition of a corresponding class of $p$-order $f$-harmonic numbers. \end{abstract} \maketitle \section{Introduction} \subsection{Generalized $f$-factorial functions} \subsubsection*{Definitions} For any function, $f: \mathbb{N} \rightarrow \mathbb{C}$, and fixed non-zero indeterminates $x, t \in \mathbb{C}$, we introduce and define the \emph{generalized $f(t)$-factorial function}, or alternately the \emph{$f(t)$-Pochhammer symbol}, denoted by $(x)_{f(t),n}$, as the following products: \begin{align} \label{eqn_xft_gen_PHSymbol_def} (x)_{f(t),n} & = \prod_{k=1}^{n-1} \left(x + \frac{f(k)}{t^k}\right). \end{align} Within this article, we are interested in the combinatorial properties of the coefficients of the powers of $x$ in the last product expansions which we consider to be generalized forms of the \emph{Stirling numbers of the first kind} in this setting. Section \ref{subSection_Intro_GenSNumsDefs} defines generalized Stirling numbers of both the first and second kinds and motivates the definitions of auxiliary triangles by special classes of formal power series generating function transformations and their corresponding negative-order variants considered in the references \cite{GFTRANS2016,GFTRANSHZETA2016}. \subsubsection*{Special cases} Key to the formulation of applications and interpreting the generalized results in this article is the observation that the definition of \eqref{eqn_xft_gen_PHSymbol_def} provides an effective generalization of almost all other related factorial function variants considered in the references when $t \equiv 1$. The special cases of $f(n) := \alpha n+\beta$ for some integer-valued $\alpha \geq 1$ and $0 \leq \beta < \alpha$ lead to the motivations for studying these more general factorial functions in \cite{GFTRANSHZETA2016}, and form the expansions of multiple $\alpha$-factorial functions, $n!_{(\alpha)}$, studied in the triangular coefficient expansions defined by \cite{MULTIFACT-CFRACS,MULTIFACTJIS}. The \emph{factorial powers}, or \emph{generalized factorials of $t$ of order $n$ and increment $h$}, denoted by $t^{(n, h)}$, or the \emph{Pochhammer k-symbol} denoted by $(x)_{n,h} \equiv p_n(h, t) = t(t+h)(t+2h)\cdots(t+(n-1)h)$, studied in \cite{q-DIFFSq-FACTS,MULTIFACT-CFRACS,CK} form particular special cases, as do the the forms of the generalized \emph{Roman factorials} and \emph{Knuth factorials} for $n \geq 1$ defined in \cite{LOEBBINOM}, and the \emph{$q$-shifted factorial functions} considered in \cite{q-SHIFTEDFACTS,q-DIFFSq-FACTS}. 
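To make the definition in \eqref{eqn_xft_gen_PHSymbol_def} concrete, consider the classical special case where $f(n) := n$ and $t := 1$. For $n = 4$ we obtain \begin{align*} (x)_{f(1),4} & = (x+1)(x+2)(x+3) = x^3 + 6 x^2 + 11 x + 6, \end{align*} whose coefficients, read with respect to the powers $x^{k-1}$, recover the unsigned Stirling numbers of the first kind, $\gkpSI{4}{1} = 6$, $\gkpSI{4}{2} = 11$, $\gkpSI{4}{3} = 6$, and $\gkpSI{4}{4} = 1$, so that the ordinary factorial products correspond to the case $(f(n), t) \equiv (n, 1)$ of \eqref{eqn_xft_gen_PHSymbol_def}.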
When $(f(n), t) \equiv (q^{n+1}, 1)$ these products are related to the expansions of the finite cases of the \emph{$q$-Pochhammer symbol} products, $(a; q)_n = (1-a)(1-aq)\cdots(1-aq^{n-1})$, and the corresponding definitions of the generalized Stirling number triangles defined in \eqref{eqn_genS1ft_rec_def} of the next subsection are precisely the \emph{Gaussian polynomials}, or \emph{$q$-binomial coefficients}, studied in relation to the $q$-series expansions and $q$-hypergeometric functions in \cite[\S 17]{NISTHB}. \subsubsection*{New results proved in the article} The results proved within this article, for example, provide new expansions of these special factorial functions in terms of their corresponding \emph{$p$-order $f$-harmonic number sequences}, \[ F_n^{(p)}(t) := \sum_{k \leq n} \frac{t^k}{f(k)^p}, \] which generalize known expansions of Stirling numbers by the ordinary \emph{$p$-order harmonic numbers}, $H_n^{(p)} \equiv \sum_{1 \leq k \leq n} k^{-r}$, in \cite{STIRESUMS,MULTIFACTJIS,GFTRANS2016,GFTRANSHZETA2016}. Still other combinatorial sums and properties satisfied by the symbolic polynomial expansions of these special case factorial functions follow as corollaries of the new results we prove in the next sections. The next subsection precisely expands the generalized factorial expansions of \eqref{eqn_xft_gen_PHSymbol_def} through the generalized class of Stirling numbers of the first kind defined recursively by \eqref{eqn_genS1ft_rec_def} below. \subsection{Definitions of generalized $f$-factorial Stirling numbers} \label{subSection_Intro_GenSNumsDefs} We first employ the next recurrence relation to define the generalized triangle of Stirling numbers of the first kind, which we denote by $\gkpSI{n}{k}_{f(t)} := [x^{k-1}] (x)_{f(t),n}$, or just by $\gkpSI{n}{k}_f$ when the context is clear, for natural numbers $n, k \geq 0$ \cite[\cf \S 3.1]{MULTIFACTJIS} \footnote{ The bracket symbol $\Iverson{\mathtt{cond}}$ denotes \emph{Iverson's convention} which evaluates to exactly one of the values in $\{0, 1\}$ and where $\Iverson{\mathtt{cond}} = 1$ if and only if the condition $\mathtt{cond}$ is true. }. \begin{align} \label{eqn_genS1ft_rec_def} \gkpSI{n}{k}_{f(t)} & = f(n-1) \cdot t^{1-n} \gkpSI{n-1}{k}_{f(t)} + \gkpSI{n-1}{k-1}_{f(t)} + \Iverson{n = k = 0} \end{align} We also define the corresponding generalized forms of the \emph{Stirling numbers of the second kind}, denoted by $\gkpSII{n}{k}_{f(t)}$, so that we can consider inversion relations and combinatorial analogs to known identities for the ordinary triangles by the sum \begin{align*} \gkpSII{n}{k}_{f(t)} & = \sum_{j=0}^{k} \binom{k}{j} \frac{(-1)^{k-j} f(j)^n}{t^{jn} \cdot j!}, \end{align*} from which we can prove the following form of a particularly useful generating function transformation motivated in the references when $f(n)$ has a Taylor series expansion in integral powers of $n$ about zero \cite[\cf \S 3.3]{MULTIFACTJIS} \cite[\cf \S 7.4]{GKP} \cite{SQSERIESMDS,GFTRANSHZETA2016}: \begin{align} \label{eqn_S2ft_GFTrans_geom_series_exp} \sum_{0 \leq j \leq n} \frac{f(j)^k}{t^{jk}} z^j & = \sum_{0 \leq j \leq k} \gkpSII{k}{j}_{f(t)} z^j \times D_z^{(j)}\left[\frac{1-z^{n+1}}{1-z}\right]. 
\end{align} The negative-order cases of the infinite series transformation in \eqref{eqn_S2ft_GFTrans_geom_series_exp} are motivated in \cite{GFTRANSHZETA2016} where we define modified forms of the Stirling numbers of the second kind by \begin{align*} \gkpSII{k}{j}_{f^{\ast}} & = \sum_{1 \leq m \leq j} \binom{j}{m} \frac{(-1)^{j-m}}{j! \cdot f(m)^k}, \end{align*} which then implies that the transformed ordinary and exponential zeta-like power series enumerating generalized polylogarithm functions and the $f$-harmonic numbers, $F_n^{(p)}(t)$, are expanded by the following two series variants \cite{GFTRANSHZETA2016}: \begin{align*} \sum_{n \geq 1} \frac{z^n}{f(n)^k} & = \sum_{j \geq 0} \gkpSII{k}{j}_{f^{\ast}} \frac{z^j \cdot j!}{(1-z)^{j+1}} \\ \sum_{n \geq 1} \frac{F_n^{(r)}(1) z^n}{n!} & = \sum_{j \geq 0} \gkpSII{k}{j}_{f^{\ast}} \frac{z^j \cdot e^{z} (j+1+z)}{(j+1)}. \end{align*} We focus on the combinatorial relations and sums involving the generalized positive-order Stirling numbers in the next few sections. \section{Generating functions and expansions by $f$-harmonic numbers} \subsection{Motivation from a technique of Euler} We are motivated by Euler's original technique for solving the \emph{Basel problem} of summing the series, $\zeta(2) = \sum_n n^{-2}$, and later more generally all even-indexed integer zeta constants, $\zeta(2k)$, in closed-form by considering partial products of the sine function \cite[pp. 38-42]{GAMMA}. In particular, we observe that we have both an infinite product and a corresponding Taylor series expansion in $z$ for $\sin(z)$ given by \begin{align*} \sin(z) & = \sum_{n \geq 0} \frac{(-1)^n z^{2n+1}}{(2n+1)!} = z \prod_{j \geq 1} \left(1 - \frac{z^2}{j^2 \pi^2}\right). \end{align*} Then if we combine the form of the coefficients of $z^3$ in the partial product expansions at each finite $n \in \mathbb{Z}^{+}$ with the known trigonometric series terms defined such that $[z^3] \sin(z) = -\frac{1}{3!}$ given on each respective side of the last equation, we see inductively that \begin{align*} H_n^{(2)} = -\pi^2 \cdot [z^2] \prod_{1 \leq j \leq n} \left(1 - \frac{z^2}{j^2 \pi^2}\right) \qquad\longrightarrow\qquad \zeta(2) = \frac{\pi^2}{6}. \end{align*} In our case, we wish to similarly enumerate the $p$-order $f$-harmonic numbers, $F_n^{(p)}(t)$, through the generalized product expansions defined in \eqref{eqn_xft_gen_PHSymbol_def}. \subsection{Generating the integer order $f$-harmonic numbers} We first define a shorthand notation for another form of generalized ``\emph{$f$-factorials}'' that we will need in expanding the next products as follows: \begin{equation*} n!_f := \prod_{j=1}^n f(j) \qquad \text{ and } \qquad n!_{f(t)} := \prod_{j=1}^{n} \frac{f(j)}{t^j} = \frac{n!_f}{t^{n(n+1)/2}}. 
\end{equation*} If we let $\zeta_p \equiv \exp(2\pi\imath / p)$ denote the \emph{primitive $p^{th}$ root of unity} for integers $p \geq 1$, and define the coefficient generating function, $\widetilde{f}_n(w) \equiv \widetilde{f}_n(t; w)$, by \begin{align*} \widetilde{f}_n(w) & := \sum_{k \geq 2} \gkpSI{n+1}{k}_{f(t)} w^k = \left(\prod_{j=1}^{n} \left(w+f(j) t^{-j}\right) - \gkpSI{n+1}{1}_{f(t)}\right) w, \end{align*} we can factor the partial products in \eqref{eqn_xft_gen_PHSymbol_def} to generate the $p$-order $f$-harmonic numbers in the following forms: \begin{align} \label{eqn_fkp_partialsum_fCf2_exp_forms} \sum_{k=1}^{n} \frac{t^{kp}}{f(k)^p} & = \frac{t^{pn(n+1) / 2}}{\left(n!_{f}\right)^p} [w^{2p}]\left((-1)^{p+1} \prod_{m=0}^{p-1} \sum_{k=0}^{n+1} \FcfII{f(t)}{n+1}{k} \zeta_p^{m(k-1)} w^k\right) \\ \notag & = \frac{t^{pn(n+1) / 2}}{\left(n!_{f}\right)^p} [w^{2p}]\left(\sum_{j=0}^{p-1} \frac{(-1)^{j} w^{j}\ p}{p-j} \FcfII{f(t)}{n+1}{1}^j \widetilde{f}_n(w)^{p-j}\right) \\ \label{eqn_fkp_partialsum_fCf2_exp_forms_v2} \sum_{k=1}^{n} \frac{t^{k}}{f(k)^p} & = \frac{t^{n(n+1) / 2}}{\left(n!_{f}\right)^p} [w^{2p}]\left((-1)^{p+1} \prod_{m=0}^{p-1} \sum_{k=0}^{n+1} \FcfII{f\left(t^{1 / p}\right)}{n+1}{k} \zeta_p^{m(k-1)} w^k\right). \end{align} \begin{example}[Special Cases] For a fixed $f$ and any indeterminate $t \neq 0$, let the shorthand notation $\bar{F}_n(k) := \FcfII{f(t)}{n+1}{k}$. Then the following expansions illustrate several characteristic forms of these prescribed partial sums for the first several special cases of \eqref{eqn_fkp_partialsum_fCf2_exp_forms} when $2 \leq p \leq 5$: \begin{align} \label{eqn_pth_partial_coeff_sums_p234} \sum_{k=1}^{n} \frac{t^{2k}}{f(k)^2} & = \frac{t^{n(n+1)}}{(n!_{f})^2}\left(\bar{F}_n(2)^2 - 2 \bar{F}_n(1) \bar{F}_n(3) \right) \\ \notag \sum_{k=1}^{n} \frac{t^{3k}}{f(k)^3} & = \frac{t^{3n(n+1) / 2}}{(n!_{f})^3}\left(\bar{F}_n(2)^3 - 3 \bar{F}_n(1) \bar{F}_n(2) \bar{F}_n(3) + 3 \bar{F}_n(1)^2 \bar{F}_n(4)\right) \\ \notag \sum_{k=1}^{n} \frac{t^{4k}}{f(k)^4} & = \frac{t^{4n(n+1)}}{(n!_{f})^4}\bigl(\bar{F}_n(2)^4 - 4 \bar{F}_n(1) \bar{F}_n(2)^2 \bar{F}_n(3) + 2 \bar{F}_n(1)^2 \bar{F}_n(3)^2 + 4 \bar{F}_n(1)^2 \bar{F}_n(2) \bar{F}_n(4) \\ \notag & \phantom{= \frac{t^{4n(n+1)}}{(n!_{f})^4}\bigl( \quad \ } - 4 \bar{F}_n(1)^3 \bar{F}_n(5)\bigr) \\ \notag \sum_{k=1}^{n} \frac{t^{5k}}{f(k)^5} & = \frac{t^{5n(n+1) / 2}}{(n!_{f})^5}\bigl(\bar{F}_n(2)^5 - 5 \bar{F}_n(1) \bar{F}_n(2)^3 \bar{F}_n(3) + 5 \bar{F}_n(1)^2 \bar{F}_n(2) \bar{F}_n(3)^2 + 5 \bar{F}_n(1)^2 \bar{F}_n(2)^2 \bar{F}_n(4) \\ \notag & \phantom{\frac{t^{5n(n+1) / 2}}{(n!_{f})^5}\bigl( \ \quad} - 5 \bar{F}_n(1)^3 \bar{F}_n(3) \bar{F}_n(4) - 5 \bar{F}_n(1)^3 \bar{F}_n(2) \bar{F}_n(5) + 5 \bar{F}_n(1)^4 \bar{F}_n(6)\bigr). \end{align} For each fixed integer $p > 1$, the particular partial sums defined by the ordinary generating function, $\widetilde{f}_n(w)$, correspond to a function in $n$ that is fixed with respect to the lower indices for the triangular coefficients defined by \eqref{eqn_genS1ft_rec_def}. Moreover, the resulting coefficient expansions enumerating the $f$-harmonic numbers at each $p \geq 2$ are isobaric in the sense that the sum of the indices over the lower index $k$ is $2p$ in each individual term in these finite sums. 
\end{example} \subsection{Expansions of the generalized coefficients by $f$-harmonic numbers} The \emph{elementary symmetric polynomials} depending on the function $f$ implicit to the product-based definitions of the generalized Stirling numbers of the first kind expanded through \eqref{eqn_xft_gen_PHSymbol_def} provide new forms of the known $p$-order harmonic number, or \emph{exponential Bell polynomial}, expansions of the ordinary Stirling numbers of the first kind enumerated in the references \cite{STIRESUMS,COMBIDENTS,ADVCOMB,UC}. Thus, if we first define the weighted sums of the $f$-harmonic numbers, denoted $w_f(n, m)$, recursively according to an identity for the Bell polynomials, $\ell \cdot Y_{n,\ell}(x_1, x_2, \ldots)$, for $x_k \equiv (-1)^k F_n^{(k)}(t^k) (k-1)!$ as \cite[\S 4.1.8]{UC} \begin{align*} w_f(n+1, m) & := \sum_{0 \leq k < m} (-1)^{k} F_n^{(k+1)}(t^{k+1}) (1-m)_k w_f(n+1,m-1-k) + \Iverson{m = 1}, \end{align*} we can expand the generalized coefficient triangles through these weighted sums as \begin{align} \label{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} \FcfII{f(t)}{n+1}{k} & = \frac{n!_{f}}{(k-1)!}\ w_f(n+1, k) \\ \notag & = \sum_{j=0}^{k-2} \FcfII{f(t)}{n+1}{k-1-j} \frac{(-1)^{j} F_n^{(j+1)}(t^{j+1})}{(k-1)} + n!_{f(t)} \cdot \Iverson{k = 1}. \end{align} This definition of the weighted $f$-harmonic sums for the generalized triangles in \eqref{eqn_genS1ft_rec_def} implies the special case expansions given in the next corollary. \begin{cor}[Weighted $f$-Harmonic Sums for the Generalized Stirling Numbers] The first few special case expansions of the coefficient identities in \eqref{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} are stated for fixed $f$, $t \neq 0$, and integers $n \geq 0$ in the following forms: \begin{align} \label{eqn_fCf_GenFHHarmonic_exps} \FcfII{f(t)}{n+1}{2} & = \frac{n!_{f}}{t^{n(n+1) / 2}}\ F_n^{(1)}(t) \\ \notag \FcfII{f(t)}{n+1}{3} & = \frac{n!_{f}}{2\ t^{n(n+1) / 2}}\left( F_n^{(1)}(t)^2 - F_n^{(2)}(t^2)\right) \\ \notag \FcfII{f(t)}{n+1}{4} & = \frac{n!_{f}}{6\ t^{n(n+1) / 2}}\left( F_n^{(1)}(t)^3 - 3 F_n^{(1)}(t) F_n^{(2)}(t^2) + 2 F_n^{(3)}(t^3) \right) \\ \notag \FcfII{f(t)}{n+1}{5} & = \frac{n!_{f}}{24\ t^{n(n+1) / 2}}\left( F_n^{(1)}(t)^4 - 6 F_n^{(1)}(t)^2 F_n^{(2)}(t^2) + 3 F_n^{(2)}(t^2)^2 + 8 F_n^{(1)}(t) F_n^{(3)}(t^3) - 6 F_n^{(4)}(t^4)\right). \end{align} \end{cor} \begin{proof} These expansions are computed explicitly using the recursive formula in \eqref{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} for the first few cases of the lower triangle index $2 \leq k \leq 5$. \end{proof} We will return to the expansions of these coefficients in \eqref{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} to formulate new finite sum identities providing functional relations between the $p$-order $f$-harmonic number sequences in the next section. \subsection{Combinatorial sums and functional equations for the $f$-harmonic numbers} The next several properties give interesting expansions of the $p$-order $f$-harmonic numbers recursively over the parameter $p$ that can then be employed to remove, or at least significantly obfuscate, the current direct cancellation problem with these forms phrased by the examples in \eqref{eqn_pth_partial_coeff_sums_p234} and in \eqref{eqn_fCf_GenFHHarmonic_exps}. 
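Before stating these properties, it is worth recording a direct check of the first identity in \eqref{eqn_fCf_GenFHHarmonic_exps} in the smallest non-trivial case $n = 2$: \begin{align*} \FcfII{f(t)}{3}{2} & = [w^{1}]\left(w + \frac{f(1)}{t}\right)\left(w + \frac{f(2)}{t^2}\right) = \frac{f(1)}{t} + \frac{f(2)}{t^2} = \frac{f(1) f(2)}{t^{3}}\left(\frac{t}{f(1)} + \frac{t^2}{f(2)}\right) = \frac{2!_{f}}{t^{3}}\ F_2^{(1)}(t), \end{align*} which agrees with the weighted sum expansion in \eqref{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} specialized to $k = 2$.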
\begin{prop} For any fixed $p \geq 1$ and $n \geq 0$, we have the following coefficient product identities generating the $p$-order $f$-harmonic numbers, $F_n^{(p)}(t)$: \begin{align} \label{eqn_Fnpt_pvar_rform_exps} F_n^{(p+1)}(t) & = F_n^{(p)}(t) + \frac{(-1)^{p} t^{n(n+1) / 2}}{t^{\frac{pn(n+1)}{2(p+1)}} n!_{f}} \FcfII{f(t^{1 / (p+1)})}{n+1}{p+2} \\ \notag & \phantom{= F_n^{(p)}(t)\ } + \sum_{j=0}^{p-1} \frac{p\ (-1)^{j+1} t^{n(n+1) / 2}}{t^{\frac{jn(n+1)}{2p}} (n!_{f})^{p-j} (p-j)} \left( \sum_{\substack{0 \leq i_1, \ldots, i_{p-j} \leq j \\ i_1 + \cdots + i_{p-j} = j}} \FcfII{f(t^{1 / p})}{n+1}{i_1 + 2} \cdots \FcfII{f(t^{1 / p})}{n+1}{i_{p-j} + 2}\right) \\ \notag & \phantom{= F_n^{(p)}(t)\ } + \sum_{j=0}^{p-1} \sum_{i=0}^{j} \frac{(p+1) t^{n(n+1) / 2} (-1)^{j}}{ t^{\frac{jn(n+1)}{2(p+1)}} (n!_{f})^{p+1-j} (p+1-j)} \FcfII{f(t^{1 / (p+1)})}{n+1}{i+2} \times \\ \notag & \phantom{= F_n^{(p)}(t) + \sum\sum\ } \times \left( \sum_{\substack{0 \leq i_1, \ldots, i_{p-j} \leq j-i \\ i_1 + \cdots + i_{p-j} = j-i}} \prod_{m=1}^{p-j} \FcfII{f(t^{1 / (p+1)})}{n+1}{i_m + 2}\right). \end{align} \end{prop} \begin{proof} To begin with, observe the following rephrasing of the partial sums expansions from equations \eqref{eqn_fkp_partialsum_fCf2_exp_forms} and \eqref{eqn_fkp_partialsum_fCf2_exp_forms_v2} as \begin{align*} F_n^{(p+1)}(t) & = \frac{t^{n(n+1) / 2}}{(n!_{f})^{p+1}} \sum_{j=0}^{p} \frac{(p+1)\ (-1)^j}{(p+1-j)} \FcfII{f\left(t^{1 / (p+1)}\right)}{n+1}{1}^j [w^{2p+2-j}] \widetilde{f}_n(w)^{p+1-j} \\ & = \frac{(p+1) (-1)^{p} t^{n(n+1) / 2}}{t^{\frac{pn(n+1)}{2(p+1)}} n!_{f}} \FcfII{f(t^{1 / (p+1)})}{n+1}{p+2} \\ & \phantom{= \quad \ } + \sum_{j=0}^{p-1} \frac{(p+1) (-1)^{j} t^{n(n+1) / 2}}{t^{\frac{jn(n+1)}{2(p+1)}} (n!_{f})^{p+1-j} (p+1-j)} [w^{j}] \left( \frac{\widetilde{f}_n(w)}{w^2}\right)^{p+1-j}. \end{align*} The coefficients involved in the partial sum forms for each sequence of $F_n^{(p)}(t)$ are implicitly tied to the form of $t \mapsto t^{1 / p}$ in the triangle definition of \eqref{eqn_genS1ft_rec_def}. Given this distinction, let the generating function $\widetilde{f}$ be defined equivalently in the more careful definition as $\widetilde{f}_n(w) :\equiv \widetilde{f}_n(t;\ w)$. The powers of the generating function $\widetilde{f}_n(w)$ from the previous equations satisfy the coefficient term expansions according to the next equation \cite[\cf \Section 7.5]{GKP}. \begin{align*} [w^{2p-j}] \widetilde{f}_n(w)^{p-j} & := [w^{2p-j}] \widetilde{f}_n(t;\ w)^{p-j} = [w^{j}] \left(\frac{\widetilde{f}_n(t;\ w)}{w^2}\right)^{p-j} \\ & \phantom{:} = \sum_{\substack{0 \leq i_1, \ldots, i_{p-j} \leq j \\ i_1 + \cdots i_{p-j} = j}} \FcfII{f(t)}{n+1}{i_1 + 2} \cdots \FcfII{f(t)}{n+1}{i_{p-j} + 2} \end{align*} Then by taking the difference of the harmonic sequence terms over successive indices $p \geq 1$ and at a fixed index of $n \geq 1$, the stated recurrences for these $p$-order sequences result. \end{proof} The generating function series over $n$ in the next proposition is related to the forms of the \emph{Euler sums} considered in \cite{STIRESUMS} and to the context of the generalized zeta function transformations considered in \cite{GFTRANSHZETA2016} briefly noted in the introduction. We suggest the infinite sums over these generalized identities for $n \geq 1$ as a topic for future research exploration in the concluding remarks of Section \ref{Section_Concl}. 
\begin{prop}[Functional Equations for the $f$-Harmonic Numbers] \label{prop_FHNum_fnal_eqn_and_coeff_exps} For any integers $n \geq 0$ and $p \geq 2$, we have the following functional relations between the $p$-order and $(p-1)$-order $f$-harmonic numbers over $n$ and $p$: \begin{align*} F_{n+1}^{(p)}(t^p) & = F_n^{(p)}(t^p) + \sum_{1 \leq j < p} \gkpSI{n+2}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} t^{j(n+1)}}{f(n+1)^{j} (n+1)!_{f(t)}} + \gkpSI{n+1}{p}_{f(t)} \frac{(-1)^{p+1}}{(n+1)!_{f(t)}} \\ & = F_n^{(p)}(t^p) + \frac{t^{(p-1)(n+1)}}{f(n+1)^{p-1}} + \frac{(-1)^{p-1}}{(n+1)!_{f(t)}} \left( \gkpSI{n+1}{p}_{f(t)} + \gkpSI{n+1}{p-1}_{f(t)} \right) \\ & \phantom{= F_n^{(p)}(t^p) \ } + \gkpSI{n+2}{p}_{f(t)} \frac{(-1)^{p} t^{n+1}}{f(n+1) (n+1)!_{f(t)}} \\ & \phantom{= F_n^{(p)}(t^p) \ } + \sum_{j=0}^{p-3} \gkpSI{n+2}{j+2}_{f(t)} \frac{(-1)^{j+1} \left(f(n+1)t^{-(n+1)} - 1\right) t^{(p-1-j)(n+1)}}{ f(n+1)^{p-1-j} (n+1)!_{f(t)}}. \end{align*} \end{prop} \begin{proof} First, notice that \eqref{eqn_FcfII_wfnm_genHNum_weighted_sum_exps_rdef} implies that we have the following weighted harmonic number sums for the $p$-order $f$-harmonic numbers: \begin{align*} F_n^{(p)}(t^p) & = \sum_{1 \leq j < p} \gkpSI{n+1}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} F_n^{(j)}(t^j)}{n!_{f(t)}} + \gkpSI{n+1}{p+1}_{f(t)} \frac{p (-1)^{p+1}}{n!_{f(t)}}. \end{align*} Next, we use \eqref{eqn_genS1ft_rec_def} twice to expand the differences of the left-hand-side of the previous equation as \begin{align*} \frac{t^{p(n+1)}}{f(n+1)^p} & = F_{n+1}^{(p)}(t^p) - F_n^{(p)}(t^p) \\ & = \sum_{1 \leq j < p} \gkpSI{n+2}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} F_{n+1}^{(j)}(t^j)}{(n+1)!_{f(t)}} - \sum_{1 \leq j < p} \gkpSI{n+1}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} F_{n}^{(j)}(t^j)}{n!_{f(t)}} \\ & \phantom{= \sum \quad \ } + \gkpSI{n+2}{p+1}_{f(t)} \frac{p (-1)^{p+1}}{(n+1)!_{f(t)}} - \frac{f(n+1)}{t^{n+1}} \gkpSI{n+1}{p+1}_{f(t)} \frac{p (-1)^{p+1}}{(n+1)!_{f(t)}} \\ & = \sum_{1 \leq j < p} \gkpSI{n+2}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} t^{j(n+1)}}{f(n+1)^{j} (n+1)!_{f(t)}} - \sum_{1 \leq j < p} \gkpSI{n+1}{p-j}_{f(t)} \frac{(-1)^{p-j} F_n^{(j)}(t^j)}{(n+1)!_{f(t)}} \\ & \phantom{= \sum \quad \ } + \gkpSI{n+1}{p}_{f(t)} \frac{p (-1)^{p+1}}{(n+1)!_{f(t)}} \\ & = \sum_{1 \leq j < p} \gkpSI{n+2}{p+1-j}_{f(t)} \frac{(-1)^{p+1-j} t^{j(n+1)}}{f(n+1)^{j} (n+1)!_{f(t)}} - \gkpSI{n+1}{p}_{f(t)} \frac{(p-1) (-1)^{p+1}}{(n+1)!_{f(t)}} \\ & \phantom{= \sum \quad \ } + \gkpSI{n+1}{p}_{f(t)} \frac{p (-1)^{p+1}}{(n+1)!_{f(t)}}. \end{align*} The second identity is verified similarly by combining the coefficient terms as in the last equations and adding the right-hand-side differences of the $(p-1)$-order $f$-harmonic numbers to the first identity. \end{proof} One immediate corollary that must by its importance be expanded in turn explicitly in the next example provides new expansions of the $p$-order harmonic numbers in terms of the ordinary triangle of Stirling numbers of the first kind corresponding to the case where $(f(n), t) \equiv (n, 1)$ in the previous proposition. Similar expansions of identities related to the generalized generating function transformations in \cite{GFTRANSHZETA2016} result for the special cases of the proposition where $(f(n), t) \equiv (\alpha n+\beta, t)$ for some application-dependent prescribed $\alpha, \beta \in \mathbb{C}$ defined such that $-\frac{\beta}{\alpha} \notin \mathbb{Z}$. 
Another special case worth noting and independently expanding provides analogous relations between the $q$-binomial coefficients implicit to the forms of the \emph{$q$-binomial theorem} expanding the $q$-Pochhammer symbols, $(a; q)_n$, for each $n \geq 0$ \cite[\cf \S 17.2]{NISTHB}. \begin{example}[Stirling Numbers and Euler Sums] \label{example_SpCase_S1HNum_FnalEqn_Ident} For all integers $p \geq 3$ and fixed $n \in \mathbb{Z}^{+}$, we have the following identity relating the successive differences of the $p$-order harmonic numbers and the Stirling numbers of the first kind: \begin{align} \label{eqn_S1HNum_fnaleqn_exp_v1} \frac{1}{n^p} & = \frac{1}{n^{p-1}} + \frac{(-1)^{p-1}}{n!}\left( \gkpSI{n}{p} + \gkpSI{n}{p-1}\right) + \gkpSI{n+1}{p} \frac{(-1)^p}{n \cdot n!} \\ \notag & \phantom{=\frac{1}{n^{p-1}}\ } + \sum_{j=0}^{p-3} \gkpSI{n+1}{j+2} \frac{(-1)^{j+1} (n-1)}{ n^{p-1-j} \cdot n!}. \end{align} The relation in \eqref{eqn_S1HNum_fnaleqn_exp_v1} certainly implies new finite sum identities between the $p$-order harmonic numbers and the Stirling numbers of the first kind, though the generating functions and limiting cases of these sums provide more information on infinite sums considered in several of the references. With this in mind, we define the \emph{Nielsen generalized polylogarithm}, $S_{t,k}(z)$, by the infinite generating series over the $t$-power-scaled Stirling numbers as \cite[\cf \S 5]{STIRESUMS} \begin{align*} S_{t,k}(z) & := \sum_{n \geq 1} \gkpSI{n}{k} \frac{z^n}{n^t \cdot n!}. \end{align*} We see immediately that \eqref{eqn_S1HNum_fnaleqn_exp_v1} provides strictly enumerative relations between the polylogarithm function generating functions, $\operatorname{Li}_p(z) / (1-z)$, for the $p$-order harmonic numbers and the Nielsen polylogarithms. Perhaps more interestingly, we also find new identities between the Riemann zeta functions, $\zeta(p)$ and $\zeta(p-1)$, and the special classes of \emph{Euler sums} given by $S_{t,k}(1)$ for $t \in [2, p-1]$ and $k \in [2, p]$ defined as in the reference \cite[\S 5]{STIRESUMS}. \end{example} \section{Coefficient identities and generalized forms of the Stirling convolution polynomials} \subsection{Generalized Coefficient Identities and Relations} There are several immediate identities for small-indexed columns of the triangle defined by \eqref{eqn_genS1ft_rec_def}, which can both be stated directly and proved by an inductive argument. The next identities in \eqref{eqn_gen_FcfII_ftnk_gen_k_idents} are given for general lower column index $k \geq 1$ by \begin{align} \label{eqn_gen_FcfII_ftnk_gen_k_idents} \FcfII{f(t)}{n}{k} & = [w^{k-1}]\left(\prod_{j=1}^{n-1} (w + f(j)\ t^{-j})\right) \Iverson{n \geq 1} + \Iverson{n = k = 0} \\ \notag & = \sum_{\substack{ 0 < i_1 < \cdots < i_{n-k} < n}} f(i_1) \cdots f(i_{n-k}) \cdot t^{-(i_1 + \cdots + i_{n-k})}, \end{align} which follows immediately by considering products of the form $\prod_i (z + x_i)$ in the context of elementary symmetric polynomials for these specific $x_i$. \begin{prop}[Horizontal and Vertical Column Recurrences] The generalized Stirling numbers of the first kind over the first several special case columns for the shifted upper index of $n+1$ in the expansions of \eqref{eqn_genS1ft_rec_def} are given by the next recurrence relations for all $n \geq 0$ and any $k \geq 2$.
\begin{align} \label{eqn_FcfII_ftnk_spcase_cols_and_rdefs} \FcfII{f(t)}{n+1}{1} & = \frac{n!_{f}}{t^{n(n+1) / 2}} \\ \notag \FcfII{f(t)}{n+1}{k} & = \frac{n!_{f}}{t^{n(n+1) / 2}} \sum_{j=1}^{n} \FcfII{f(t)}{j}{k-1} \frac{t^{j(j+1) / 2}}{j!_{f}},\ \text{ if $k \geq 2$} \end{align} \end{prop} \begin{proof} We begin by observing that by \eqref{eqn_genS1ft_rec_def} when $k \equiv 1$, we have that \begin{align*} \gkpSI{n+1}{1}_{f(t)} & = \frac{f(n)}{t^n} \gkpSI{n}{1}_{f(t)} + \gkpSI{n}{0}_{f(t)} \\ & = \frac{f(n)}{t^n} \gkpSI{n}{1}_{f(t)} + \Iverson{n = 0}, \end{align*} which implies the first claim by induction since $\gkpSI{1}{1}_{f(t)} = 1$ and $\gkpSI{0}{1}_{f(t)} = 1$. To prove the column-wise recurrence relation given in \eqref{eqn_FcfII_ftnk_spcase_cols_and_rdefs}, we notice again by induction that for any functions $g(n)$ and $b(n) \neq 0$, the sequence, $f_k(n)$, defined recursively by \begin{align*} f_k(n) & = \begin{cases} b(n) \cdot f_k(n-1) + g(n-1) & \text{ if $n \geq 1$ } \\ 1 & \text{ if $n = 0$, } \end{cases} \end{align*} has a closed-form solution given by \begin{align*} f_k(n) & = \left(\prod_{j=1}^{n-1} b(j)\right) \times \sum_{0 \leq j < n} \frac{g(j)}{\prod_{i=1}^{j} b(j)}. \end{align*} Thus by \eqref{eqn_genS1ft_rec_def} the second claim is true. \end{proof} \subsection{Generalized forms of the Stirling convolution polynomials} \begin{definition}[Stirling Polynomial Analogs] \label{def_CvlPolyAnalogs} For $x,n,x-n \geq 1$, we suggest the next two variants of the generalized \emph{Stirling convolution polynomials}, denoted by $\sigma_{f(t),n}(x)$ and $\widetilde{\sigma}_{f(t),n}(x)$, respectively, as the right-hand-side coefficient definitions in the following equations: \begin{align} \label{eqn_fnx_poly_coeff_def} \sigma_{f(t),n}(x) := \FcfII{f(t)}{x}{x-n} \frac{(x-n-1)!}{x!_{f}} & \quad \iff \quad \FcfII{f(t)}{n+1}{k} = \frac{(n+1)!_{f}}{(k-1)!}\ \sigma_{f(t),n+1-k}(n+1) \\ \notag \widetilde{\sigma}_{f(t),n}(x) := \FcfII{f(t)}{x}{x-n} \frac{(x-n-1)!}{x!} & \quad \iff \quad \FcfII{f(t)}{n+1}{k} = \frac{(n+1)!}{(k-1)!}\ \widetilde{\sigma}_{f(t),n+1-k}(n+1). \end{align} \end{definition} \begin{prop}[Recurrence Relations] For integers $x,n,x-n \geq 1$, the analogs to the Stirling convolution polynomial sequences defined by \eqref{eqn_fnx_poly_coeff_def} each satisfy a respective recurrence relation stated in the next equations. \begin{align} \notag f(x+1) \sigma_{f(t),n}(x+1) & = (x-n) \sigma_{f(t),n}(x) + f(x)\ t^{-x} \cdot \sigma_{f(t),n-1}(x) + \Iverson{n = 0} \\ \label{eqn_fnx_snx_genCvlPolySeqs_recs} (x+1) \widetilde{\sigma}_{f(t),n}(x+1) & = (x-n) \widetilde{\sigma}_{f(t),n}(x) + f(x)\ t^{-x} \cdot \widetilde{\sigma}_{f(t),n-1}(x) + \Iverson{n = 0} \end{align} \end{prop} \begin{proof} We give a proof of the second identity since the first recurrence follows almost immediately from this result. Let $x,n,x-n \geq 1$ and consider the expansion of the left-hand-side of \eqref{eqn_fnx_snx_genCvlPolySeqs_recs} according to Definition \ref{def_CvlPolyAnalogs} as follows: \begin{align*} (x + 1) \widetilde{\sigma}_{f(t),n}(x + 1) & = \gkpSI{x+1}{x+1-n}_{f(t)} \frac{(x-n)!}{x!} \\ & = \left(f(x) t^{-x} \gkpSI{x}{x+1-n}_{f(t)} + \gkpSI{x}{x-n}_{f(t)} \right) (x-n) \cdot \frac{(x-n-1)!}{x!} \\ & = (x-n) \widetilde{\sigma}_{f(t),n}(x) + f(x) t^{-x} \cdot \widetilde{\sigma}_{f(t),n-1}(x). \end{align*} For any non-negative integer $x$, when $n = 0$, we see that $\gkpSI{x+1}{x+1}_{f(t)} \equiv 1$, which implies the result. 
\end{proof} \begin{remark}[A Comparison of Polynomial Generating Functions] The generating functions for the Stirling convolution polynomials, $\sigma_n(x)$, and the $\alpha$-factorial polynomials, $\sigma_n^{(\alpha)}(x)$, from \cite{MULTIFACTJIS} each have the comparatively simple special case closed-form generating functions given by \begin{align} \label{eqn_SPoly_def_and_GF} x \sigma_n(x) & = \gkpSI{x}{x-n} \frac{(x-n-1)!}{(x-1)!} = [z^n] \left(\frac{z e^{z}}{e^{z}-1}\right)^{x} && \text{ for } (f(n), t) \equiv (n, 1) \\ \notag x \sigma_n^{(\alpha)}(x) & = \gkpSI{x}{x-n}_{\alpha} \frac{(x-n-1)!}{(x-1)!} = [z^n] e^{(1-\alpha)z} \left(\frac{\alpha z e^{\alpha z}}{e^{\alpha z}-1}\right)^{x} && \text{ for } (f(n), t) \equiv (\alpha n + 1 - \alpha, 1) \\ \notag x \sigma_n^{(\alpha; \beta)}(x) & = \gkpSI{x}{x-n}_{(\alpha; \beta)} \frac{(x-n-1)!}{(x-1)!} = [z^n] e^{\beta z} \left(\frac{\alpha z e^{\alpha z}}{e^{\alpha z}-1}\right)^{x} && \text{ for } (f(n), t) \equiv (\alpha n + \beta, 1). \end{align} The Stirling polynomial sequence in \eqref{eqn_SPoly_def_and_GF} is a special case of a more general class of \emph{convolution polynomial} sequences defined by Knuth in his article \cite{CVLPOLYS}. These polynomial sequences are defined by a general sequence of coefficients, $s_n^{\ast}$ with $s_0^{\ast} = 1$, such that the corresponding polynomials, $s_n(x)$, are enumerated by the power series over the original sequence as \begin{equation*} \sum_{n=0}^{\infty} s_n(x) z^n := S(z)^{x} \equiv \left(1 + \sum_{n=1}^{\infty} s_n^{\ast} z^n\right)^{x}. \end{equation*} Polynomial sequences of this form satisfy a number of interesting properties, and in particular, the next identity provides a generating function for a variant of the original convolution polynomial sequence over $n$ when $t \in \mathbb{C}$ is fixed. \begin{equation} \label{eqn_CvlPoly_Stz_GF_rdef} \mathcal{S}_t(z) := S\left(z \mathcal{S}_t(z)^t\right) \quad \implies \quad \frac{x s_n(x+tn)}{(x+tn)} = [z^n] \mathcal{S}_t(z)^{x} \end{equation} This result is also useful in expanding many identities for the $t := 1$ case as given for the Stirling polynomial case in \cite[\Section 6.2]{GKP} \cite{CVLPOLYS}. A related generalized class of polynomial sequences is considered in Roman's book defining the form of \emph{Sheffer polynomial} sequences. \nocite{UC} The polynomial sequences of this particular type, say with sequence terms given by $s_n(x)$, satisfy the form in the following generating function identity where $A(z)$ and $B(z)$ are prescribed power series satisfying the initial conditions from the reference \cite[\cf \Section 2.3]{UC}: \begin{equation*} \sum_{n=0}^{\infty} s_n(x) \frac{z^n}{n!} := A(z) e^{x B(z)}. \end{equation*} For example, the form of the generalized, or higher-order Bernoulli polynomials (numbers) is a parameterized sequence whose generating function yields the form of many other special case sequences, including the Stirling polynomial case defined in equation \eqref{eqn_SPoly_def_and_GF} \cite[\cf \Section 4.2.2]{UC} \cite[\cf \Section 5]{MULTIFACTJIS}. 
\end{remark} \subsubsection*{An experimental procedure towards evaluating the generalized polynomials} We expect that the generalized convolution polynomial analogs defined in \eqref{eqn_fnx_poly_coeff_def} above form a sequence of finite-degree polynomials in $x$, for example, as in the Stirling polynomial case when we have that \begin{align*} \gkpSI{x}{x-n} & = \sum_{k \geq 0} \gkpEII{n}{k} \binom{x+k}{2n}, \end{align*} where $\gkpEII{n}{k}$ denotes the special triangle of \emph{second-order Eulerian numbers} for $n, k \geq 0$ and where the binomial coefficient terms in the previous equations each have a finite-degree polynomial expansion in $x$ \cite[\S 6.2]{GKP}. The previous identity also allows us to extend the Stirling numbers of the first kind to \emph{arbitrary} real, or complex-valued inputs. Given the relatively simple and elegant forms of the generating functions that enumerate the polynomial sequences of the special case forms in \eqref{eqn_SPoly_def_and_GF}, it seems natural to attempt to extend these relations to the generalized polynomial sequence forms defined by \eqref{eqn_fnx_poly_coeff_def}. However, in this more general context we appear to have a stronger dependence of the form and ordinary generating functions of these polynomial sequences on the underlying function $f$. Specifically, for the form of the first sequence in \eqref{eqn_fnx_poly_coeff_def}, we suppose that the function $f(n)$ is arbitrary. Based on the first several cases of these polynomials, it appears that the generating function for the sequence can be expanded as \begin{align} \label{eqn_fnx_poly_GF_ident_v1} & f_n(x) := [z^n] F(z)^{x} \quad \text{ where } \quad F(z) := \sum_{n=0}^{\infty} g_n(x) z^n \\ \notag & \phantom{f_n(x) :} \implies g_n(x) = \frac{\sum_{j=0}^{n-1} f(x)^n \numpoly_n(j;\ x) x^{n-1-j} (1+x)^j f(x+1)^j}{n!\ t^{nx}\ \sum_{j=0}^{2n-1} \denompoly_n(j;\ x) x^{2n-1-j} (1+x)^j f(x+1)^j}\ \Iverson{n \geq 1} + \Iverson{n = 0} \end{align} where the forms $\numpoly_n(j;\ x)$ and $\denompoly_n(j;\ x)$ denote polynomial sequences of finite non--negative integral degree indexed over the natural numbers $n, j \geq 0$. Similarly it has been verified for the first $16$ of each $n$ and $k$ that the following equation holds where the terms $g_n(x)$ involved in the series for $F(z)$ are defined through the form of the last equation. \begin{equation*} s_n(k) := f_{n-k}(n) \implies s_n(k) = [z^n] z^k F(z)^n = \sum_{j=1}^{n-k} \binom{n}{j} [z^{n-k}] (F(z) - 1)^j + \Iverson{n = k} \end{equation*} Note that the coefficients defined through these implicit power series forms must also satisfy an implicit relation to the particular values of the polynomial parameter $x$ as formed through the last equations, which is much different in construction than in the cases of the special polynomial sequence generating functions remarked on above. Other different expansions may result for special cases of the function $f(n)$ and explicit values of the parameter $t$. \section{Conclusions and future research} \label{Section_Concl} \subsection{Summary} We have defined a generalized class of factorial product functions, $(x)_{f(t),n}$, that generalizes the forms of many special and symbolic factorial functions considered in the references. The coefficient-wise symbolic polynomial expansions of these $f$-factorial function variants define generalized triangles of Stirling numbers of the first kind which share many analogs to the combinatorial properties satisfied by the ordinary combinatorial triangle cases. 
Surprisingly, many inversion relations and other finite sum properties relating the ordinary Stirling number triangles are not apparent by inspection of these corresponding sums in the most general cases. A study of ordinary Stirling-number-like sums, inversion relations, and generating function transformations is not contained in the article. We pose formulating these analogs in the most general coefficient cases as a topic for future combinatorial work with the generalized Stirling number triangles defined in Section \ref{subSection_Intro_GenSNumsDefs}. \subsection{Topics suggested for future research} Another new avenue to explore with these sums and the generalized $f$-zeta series transformations motivated in \cite{GFTRANS2016,GFTRANSHZETA2016} is to consider finding new identities and expressions for the Euler-like sums suggested by the generalized identity in Proposition \ref{prop_FHNum_fnal_eqn_and_coeff_exps} and by the special case expansions for the Stirling numbers of the first kind given in Example \ref{example_SpCase_S1HNum_FnalEqn_Ident}. In particular, if we define a class of so-termed ``\emph{$f$-zeta}'' functions, $\zeta_f(s) := \sum_{n \geq 1} f(n)^{-s}$, we seek analogs to these infinite Euler sum variants expanded through $\zeta_f(s)$ just as the Euler sums are expressed through sums and products of the \emph{Riemann zeta function}, $\zeta(s)$, in the ordinary cases from \cite{STIRESUMS}. For example, it is well known that for real-valued $r > 1$ \begin{align*} \sum_{n \geq 1} \frac{H_n^{(r)}}{n^r} & = \frac{1}{2}\left(\zeta(r)^2 + \zeta(2r)\right), \end{align*} and moreover, summation by parts shows us that for any real $r > 1$ and any $t \in \mathbb{C}^{\ast}$ such that we have a convergent limiting zeta function series we have that \begin{align*} \sum_{n \geq 1} \frac{F_n^{(r)}(t^r) t^{rn}}{f(n)^{r}} & = \lim_{n\longrightarrow\infty}\ \left\{ \left(F_n^{(r)}(t^r)\right)^2 - \sum_{0 \leq j < n} \frac{F_j^{(r)}(t^r) t^{r(j+1)}}{f(j+1)^r} \right\} \\ & = \lim_{n\longrightarrow\infty}\ \left\{ \left(F_n^{(r)}(t^r)\right)^2 - \sum_{0 \leq j < n} \frac{F_{j+1}^{(r)}(t^r) t^{r(j+1)}}{f(j+1)^r} + \sum_{0 \leq j < n} \frac{t^{2r(j+1)}}{f(j+1)^{2r}} \right\}, \end{align*} which similarly implies that \begin{align*} \sum_{n \geq 1} \frac{F_n^{(r)}(1)}{f(n)^r} & \quad \overset{: \rightsquigarrow}{\longrightarrow} \quad \frac{1}{2}\left(\zeta_f(r)^2 + \zeta_f(2r)\right). \end{align*} Additionally, we seek other analogs to known identities for the infinite Euler-like-sum variants over the weighted $f$-harmonic number sums of the form \begin{align*} H_f\left(\varpi_1, \ldots, \varpi_k; s, t, z\right) & := \sum_{n \geq 1} \frac{F_n^{\left(\varpi_1\right)}\left(t^{\varpi_1}\right) \cdots F_n^{\left(\varpi_k\right)}\left(t^{\varpi_k}\right) z^{sn}}{f(n)^{s}}, \end{align*} when $t = \pm 1$, or more generally for any fixed $t \in \mathbb{C}^{\ast}$, and where the right-hand-side series in the previous equation converges, say for $|z| \leq 1$. \renewcommand{References}{References} \hrule \end{document}
\begin{definition}[Definition:Bounded Above Set/Unbounded] Let $\struct {S, \preceq}$ be an ordered set. A subset $T \subseteq S$ is '''unbounded above (in $S$)''' {{iff}} it is not bounded above. Category:Definitions/Boundedness \end{definition}
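As a simple illustration: in the ordered set $\struct {\R, \le}$, the subset $\N \subseteq \R$ is unbounded above, since for every $x \in \R$ there exists $n \in \N$ such that $x < n$, and so no element of $\R$ is an upper bound for $\N$ in $\R$.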
Improving the operating efficiency of the more electric aircraft concept through optimised flight procedures Ravinka Seresinhe, Craig Lawson & Irfan Madani CEAS Aeronautical Journal, volume 10, pages 463–478 (2019)
The increasing awareness of the environmental risks and costs due to the growing demand in aviation has prompted both academic and industrial research into short-term and long-term technologies which could help address the challenges. Among these, the more electric aircraft has been identified as a key design concept which would make aircraft more environmentally friendly and cost effective in the long run. Moreover, the notion of free-flight and optimised trajectories has been identified as a key operational concept which would help curb the environmental effects of aircraft as well as reduce overall costs. The research in this paper presents a methodology in which these two concepts can be coupled to study the benefits of more electric aircraft (MEA) flying optimised trajectories. A wide range of issues from aircraft performance, engine performance, airframe systems operation, power off-take penalties, emission modelling, optimisation algorithms and optimisation frameworks has been addressed throughout the study. The case study is based on a popular short haul flight between London Heathrow and Amsterdam Schiphol. The culmination of the study establishes the advantage of the MEA over conventional aircraft and also addresses the enhanced approach to the classical aircraft trajectory optimisation problem. The study shows that the operation procedures to achieve a minimum fuel burn are significantly different for a conventional aircraft and MEA. Trajectory optimisation reduced the fuel burn by 17.4% for the conventional aircraft and 12.2% for the more electric compared to the respective baseline cases. Within the constraints of the study, the minimum fuel burn trajectory for the MEA consumed 9.9% less fuel than the minimum fuel burn trajectory for the conventional aircraft. An aircraft with all secondary power systems operating electrically can be thought of as an all-electric aircraft (AEA). The definition of a more electric aircraft (MEA) can be derived as an aircraft where the majority of the systems or a higher percentage of systems compared to conventional aircraft are powered electrically. For the purpose of this research, the "more electric aircraft" has been defined as an aircraft which uses proportionally more electrical secondary power than a legacy or conventional aircraft. Feiner [1] suggests that aircraft with all-electric secondary power systems are expected to "cost less, be more reliable and be less expensive to operate". He also goes on to say that benefits include reduced design complexity, reduced parts count, easier aircraft modification and less environmental impact. It is further endorsed by Arguelles et al. [2], where the MEA is highlighted as a pathway to achieving a lower environmental impact due to aviation. Moreover, it means that future aircraft will possibly have most equipment operating through electrical power [3]. ACARE lists the MEA as an enabler to reach the 2020 goals [4]. The scope for improvement of aircraft efficiency has also been extended to the aircraft operation domain. With global aviation growing at a fast rate, the traditional navigation and guidance measures need to be improved to achieve more robust and efficient aircraft operations to reduce the environmental impact.
Trajectory control and trajectory optimisation play a key role in aircraft operation. Typically, the classical approach to trajectory optimisation consists of aircraft dynamic representation, engine performance and environmental impact models such as emissions and noise assessment [5, 6]. The airframe system penalties are either not represented or represented as a constant. However, the system operation depends on the flight condition itself and as demonstrated in [7] the fuel penalty due to the system operation is a variable and cannot be set as a constant in the context of trajectory optimisation. Overall the MEA systems, power requirements are even more sensitive to the flight conditions than the conventional airframe systems. In this study, the classical approach has been enhanced by representing more realistic airframe system performance within the optimisation loop. The airframe system operation is vital in representing real aircraft behaviour as it consumes a sizeable proportion of the aircraft engine's power which has a knock-on effect on the fuel burn and consequently the optimisation of aircraft trajectory. Moreover, it is very important to establish from the outset that the concept of "more electric aircraft trajectory optimisation" cannot be discussed by ignoring the airframe systems, since an aircraft can only become more electric by substituting the conventional pneumatically and hydraulically powered systems with electrically powered systems. Hence, in the topic of trajectory optimisation for future aircraft the airframe systems need to be represented more accurately within the problem definition. The basis for representing the systems within the optimisation process is discussed in [7]. To address the challenges of aviation in Europe, the European Commission (EC) initiated the Clean Sky program. Clean Sky programme is organised into six Integrated Technology Demonstrators (ITDs) and Technology Evaluator (TE) to evaluate the outputs of the ITDs. Systems for Green Operations (SGO) ITD address the novel and more efficient ways of managing aircraft energy, as well as aircraft trajectory and mission. This work is carried out as part of System for Green Operations (SGO) ITD and involves the development of a multi-objective optimisation framework for planning environmentally efficient trajectories to provide quantitative estimates of the energy used by them with a view to improving their efficiencies. The electrical evolution in aircraft is discussed in detail in [8]. The focus in aircraft design has shifted towards the MEA to obtain greater overall efficiency. From the outset it should be realised that to fully utilise the advantages of the MEA, the operation as well as the design should be considered carefully. Two separate studies done by airframe manufacturers and research centres such as the National Aeronautics and Space Administration (NASA) give an indication of what loads would be present in a typical all-electric secondary power system for civil passenger aircraft. Figure 1 shows the estimated loads for a 300-passenger tri-engine aircraft [9]. These are the results of three studies conducted by the NASA Lewis Research Centre to assess the operational, weight and cost advantages for commercial transport aircraft with all-electric secondary power systems. Electric load demands—300-passenger, tri-engine aircraft [9] The following is an illustration on the load results found in the studies. 
A further separate study by NASA on a 600-passenger, four-engine aircraft produced the following preliminary estimates (Fig. 2). Electric load demands—600-passenger, four-engine aircraft [1] The two studies, though focusing on MEA, were done for different aircraft sizes. The loading details of the studies cannot be directly compared due to the differences in the breakdown of loads. However, there are certain observations which are common in both cases. The ECS is established as the largest power user by a considerable margin. Other major power users are the IPS and the flight control actuators; all three of these loads are not powered electrically in the conventional secondary power system configurations. The baseline for the airframe systems model used in this research was a 180-passenger twin-engine turbofan short-haul conventional aircraft which is similar to the Airbus A320. The objective of the airframe systems models is to provide the bleed air requirement and shaft power requirement to operate the secondary power system at any given operating condition. The ECS, IPS and the electrics were modelled in detail to represent the majority of the power requirement within the secondary power system. The power requirements for the actuation were not considered during this study. In the conventional aircraft the flight control surface actuators are powered hydraulically. The hydraulic system is constantly pressurised and thus is not a significant variable power off-take. The model was constructed in Matlab/Simulink and converted to a dynamic link library to improve execution times and integration capabilities. The baseline aircraft was converted to a MEA by replacing the non-electrically powered components with electrical components. The conventional electrical loads were derived using the methodology described in [10]. The requirements of the systems were not changed. The actuation for the MEA was not modelled. It is expected that the electrically powered actuation system will require a significant peak power. However, due to the nature of operation the actuator en route, the energy used is negligible [11]. Further reading on the baseline aircraft, the configuration of the systems and validation can be found in [12]. Detailed modelling techniques for the conventional IPS and electro-thermal de-icing systems can be found at [13, 14], respectively. Off-takes Typically, turbofan aircraft engines are capable of providing bleed air off-takes and shaft power off-takes. The bleed air provides cabin pressurisation and ventilation as well as airframe anti-icing [15]. Moreover, it is also used to pressurise the hydraulic reservoir and the water tanks. Rolls-Royce [15] states that ideally the bleed air should be extracted at an early stage of the engine cycle, preferably the early stages of the compressor. However, it also confirms that to maintain appropriate temperatures and pressures, the bleed air may need to be extracted at a later compressor stage. Tagge et al. [16] estimated the following bleed air requirements, shown in Fig. 3, for a 207-passenger, twin-engine aircraft which was sized by the Boeing Commercial Airplane Company [17]. Bleed airflow requirements—IDEA study by NASA [16] After conducting a critical review of the methods available, the kP method was chosen as the baseline for further development [18]. The kP method, presented in [19] represents the off-take penalties by a factor which relates the off-take power to net thrust ratio and the increase in the SFC due to off-takes. 
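To make the role of such a factor concrete, the short Python sketch below applies a kP-type correction to a specific fuel consumption value. The linear form, the variable names and the unit conventions are illustrative assumptions only and are not taken from [18] or [19]; those references define the exact formulation and calibration used in this work.

def sfc_with_offtakes(sfc_clean, k_p, offtake_power_kw, net_thrust_kn):
    # Illustrative kP-style correction (assumed linear form): the fractional
    # increase in SFC is k_p multiplied by the off-take power to net thrust
    # ratio. Expressing k_p per (kW/kN) is an assumption for this sketch.
    offtake_ratio = offtake_power_kw / net_thrust_kn
    return sfc_clean * (1.0 + k_p * offtake_ratio)

# Placeholder values only: a clean SFC of 16.0 g/(kN s), 100 kW of off-take
# against 60 kN of net thrust, and an assumed k_p of 0.002 per (kW/kN).
print(sfc_with_offtakes(16.0, 0.002, 100.0, 60.0))  # about 16.05 g/(kN s)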
The off-takes are the interface between the airframe system power requirements and the engine performance. It is the enabler to represent airframe systems loads within the aircraft and engine performance. Trajectory optimisation According to Perry [20], the need for better air traffic management (ATM) is a driver for aircraft trajectory planning and optimisation in commercial aircraft. And optimising the flight trajectory for environmental gains is an important goal and a significant extension of the traditional avionic flight management system (FMS) and ATM tasks. The area of aircraft trajectory optimisation has been and is a key research area in Aerospace Engineering and it is also one of the key research topics addressed by the Clean Sky SGO ITD [21]. Studies such as Refs. [22, 23] provide surveys of trajectory optimisation methods and the approach to apply the methods to commercial aircraft trajectory optimisation. There are many techniques and approaches that can be used to generate optimal commercial aircraft trajectories. In general, the trajectory of an aircraft can be optimised for many different objectives such as fuel, time, noise and emissions among others. Typically, the trajectory optimisation is heavily based on aircraft flight dynamics and performance, engine performance and optimisation techniques. This is the classical setup for aircraft optimisation. The approach has been enhanced in this study by including the airframe system penalties within the optimisation loop. Green aircraft trajectories under ATM constraints (GATAC) GATAC is the framework that has been developed to model, simulate, optimise and analyse aircraft trajectories within the SGO ITD Management of Trajectory and Mission (MTM) research framework. The GATAC tool has been discussed in depth in Chircop et al. [24]. The framework allows the user to set up a flight case by defining initial and final flight points as well as flight constraints. A typical setup is shown in Fig. 4. Typical setup in GATAC for trajectory optimisation [25] The approach for this study has been to use genetic algorithms as optimisers. It is accepted that optimal control theory is the common approach in solving trajectory optimisation problems. However, optimal control theory requires parameterisation for controls and states of the problem and typically uses gradient-based techniques to find the solution. Gradient-based optimisers are local minimum optimisation techniques. The vision within the Clean Sky SGO mission trajectory management is that many aspects such as real/forecast weather, airframe system penalties, operational business models and engine degradation could be included within the trajectory optimisation loop to closely represent real aircraft behaviour. Certain aspects of these representations, especially the weather which is not limited to the wind but also influences icing and contrails, cannot be easily parameterised without losing significant accuracy. To overcome these issues, genetic algorithms, which are able to search for a solution involving multiple imposed constraints and do not need heavy parameterisation techniques that are required by the optimal control approach, were preferred within the research which finds globally optimal solutions. 
However, this is a compromise, since the uses of GAs are expected to present challenges such as computational inefficiency in comparison to gradient-based optimisers; inefficiencies in handling a large number of input variables; producing results closer to the optimal solution rather than the local optima. Defining the baseline cases A real trajectory, which had a similar ground track to that shown in Fig. 7 was simulated. The vertical profile of the trajectory is shown in Figs. 5 and 6. London to Amsterdam typical flight; altitude profile London to Amsterdam typical flight: speed profile Table 1 shows the fuel burn and time results comparing zero system power, conventional system power and MEA system power. Table 1 Results summary of a typical flight—fuel burn and time The baseline case is the "zero power off-takes" in which, the airframe system power off-take penalties are not accounted for within the problem definition. For the comparison of the gains achieved by trajectory optimisation, the baseline cases are configuration dependent. This section presents the final results of the research. It presents the gains that are achieved by including airframe system penalties within the optimisation loop and compares the optimum flight operations for conventional and more electric aircraft in terms of fuel burn, flight time and emissions. Aircraft, engine and system setup The baseline aircraft and engine for the short-haul study was similar to the Airbus A320 and the CFM-56-5B. The airframe systems were set according to the baseline setups described in Seresinhe et al. [12]. As a baseline, it was assumed that there would be an icing cloud between 7000 ft and 10,000 ft at a uniform temperature of 253 K with a liquid water content of 0.23 g/m3. Framework and optimiser setup GATAC trajectory optimisation software framework was used to run the simulation. GATAC has a set of optimisers which include a genetic-based optimiser called NSGAMO, and a multi-objective tabu search (MOTS) and also a hybrid optimiser [26, 27]. For this study, the NSGAMO was used. The setup is as follows: The flight trajectory was divided into three phases. Each flight phase was optimised with and without considering airframe system power off-take penalties. Optimiser used: NSGAMO (genetic algorithm developed by Cranfield University and based on NSGA-2 algorithm). Optimisation objectives were fuel burn and flight time for all three flight phases. Mission route The mission case chosen for this study is from London Heathrow airport to Amsterdam Schiphol airport. The mission was divided into three flight phases (departure, en route and arrival). Departure phase begins at 83 ft AGL (Above Ground Level) with an airspeed of 140 kts and terminates at the end of the Standard Instrumental Departure (SID). The SID selected for the departure phase is BPK7F. The ground track is shown in Fig. 7. Short-haul ground track The en route phase starts after the aircraft has reached the BPK VOR waypoint and ends when the aircraft enters the Amsterdam Schiphol STAR procedure. During this phase, a minimum altitude of FL100 and a maximum of FL390 are used. These bounds give the optimiser the freedom to choose an optimum flight level within both lower and upper airspaces. The airspeed during the en route is limited by KCAS 310 for the lower boundary and by the maximum operation Mach number for the upper boundary. The arrival phase starts when the aircraft passes over SUGOL and terminates at 2000 ft AGL. 
The STAR used in this phase for Amsterdam Schiphol airport is RNAV-Night RWY06 and the entry altitude is set to FL100. Terminology used to discuss results The terminology used in the section needs to be clarified. Min. fuel Trajectory optimised for minimum fuel burn. Min. time Trajectory optimised for minimum flight time. Zero power off-take No account is made for system power off-takes. With system power Conventional system power off-takes are modelled in the optimisation. System power post-processed Conventional system power off-takes are not included in the optimisation, but are added on in post-processing. MEA More electric system power off-takes are modelled in the optimisation. $${\text{Penalty}}\,{\text{due}}\,{\text{to}}\,{\text{systems}}\,\% =\frac{{{\text{system}}\,{\text{power}}\,{\text{post}}\,{\text{processed}} - {\text{zero}}\,{\text{power}}\,{\text{offtakes}}}}{{{\text{zero}}\,{\text{power}}\,{\text{offtakes}}}},$$ $${\text{Fuel}}\,{\text{saving}}\,{\text{due}}\,{\text{to}}\,{\text{enhance}}\,{\text{approach}}\,\% =\frac{{{\text{with}}\,{\text{system}}\,{\text{power}} - {\text{systems}}\,{\text{power}}\,{\text{post-processed}}}}{{{\text{system}}\,{\text{power}}\,{\text{post-processed}}}}.$$ Results and analysis: conventional system configuration Departure results Figure 8 shows the Pareto fronts obtained at the end of the optimisations for the departure phase. It is possible to see that the setup with systems included is shifted to higher values of fuel consumption; this is obviously due to the consumption the systems introduce. However, the results regarding the minimum time are not so different between the two setups. It should be noted that better Pareto fronts could be obtained by increasing the number of evaluations which the optimiser performs. However, the objective of the research was to reach acceptable Pareto fronts with the ability to assess the impact of the airframe systems; therefore, the optimiser settings where set equal for both the setups. For clarity, only the "Min. fuel" results have been shown and discussed in detail. Pareto fronts—departure, short haul The departure trajectories in Fig. 9 show a saw-tooth pattern for the MEA and no-system cases. When the optimiser attempts to reduce the fuel consumed by levelling and then reducing the airspeed (see also Fig. 10), the aircraft then descends. This may be explained by the fact that the aircraft flight model uses a three degree-of-freedom dynamic model, hence there is no direct pitch attitude control. From an aircraft operational point of view, this could be prevented by the pitch control for improved passenger comfort, and a smoother trajectory could be achieved by the optimisation if a six degree-of-freedom model was implemented. Such further work would allow additional insights to be drawn, including regarding differences resulting from system off-takes. Altitude vs distance—departure, short haul True air speed vs distance—departure, short haul Figures 9 and 10 show the aircraft altitude and aircraft true airspeed for the "Min. fuel" results. The "Min. fuel—with system power" case climbs continuously to 10,000 ft and then flies level. It also flies faster at the beginning and it continues at a higher speed than the other two cases. The "Min. fuel—MEA" and "Min. fuel—zero power off-takes" are similar. The "Min. fuel—MEA" flies faster than the "Min. fuel—zero power off-takes" case but both fly at a lower speed than the "Min. 
The "Min. fuel—MEA" case flies faster than the "Min. fuel—zero power off-takes" case, but both fly at a lower speed than the "Min. fuel—with system power" case and accelerate towards the end to meet the final conditions as specified. There is a distinct difference in the "Min. fuel" trajectories. The reason for the difference is the effect of the systems, which is shown in Fig. 11.

Total power off-take per engine vs distance—departure, short haul

As can be seen in Fig. 11, the MEA power off-take is much less than that of the conventional aircraft. The "Min. fuel—system power post-processed" and "Min. fuel—with system power" cases have similar power off-take requirements during the first half of the phase. In the second half of the phase, the "Min. fuel—with system power" case has a larger power off-take. However, the fuel penalty due to power off-take is dependent on the throttle setting of the engine as well. Large off-takes at lower throttle settings cause larger fuel penalties than large off-takes at higher throttle settings. Figure 11 shows the combined power extractions of the ECS, IPS, actuators and conventional electrical loads. Each of the systems has been modelled and validated in [10,11,12, 14]. Individual system off-takes have not been analysed. However, the driver of the difference between the conventional aircraft and the MEA is the ECS, due to the difference in off-take nature and controllability of the bleed-driven and electrical ECS, respectively.

The fuel penalty due to systems is not significant enough to change the trajectory when the setup is optimised for time. However, when the objective is to fly with the minimum fuel burn, the effect of the systems is significant. By studying the trajectory using Figs. 11 and 12, it was observed that for the "system power post-processed" trajectories, there was a relatively high off-take at lower thrust conditions, which caused a significant fuel penalty. It should be noted that the total power off-take is the sum of the shaft power off-take and the bleed air off-take. The bleed air flow is converted to a power using

$$\dot{Q} = \dot{m}_{\text{a}} C_{\text{p}} (T_{\text{e}} - T_{\text{i}}).$$

Throttle vs distance—departure, short haul

The exit temperature of air for the secondary power system is arguable. For this study, the exit temperature of air has been established as the ambient temperature at the operating environment of the aircraft. Even though the exit temperature of the ECS is the cabin temperature and the exit temperature for the IPS is the temperature at the exit of the piccolo tubes, at the point of exit for both systems there is still energy stored within the air. Hence, only a proportion of the actual energy within the bleed flow is exhausted by the ECS and IPS. Since there is no energy recovery within the typical conventional secondary power system, using exit temperatures of the systems cannot be justified and cannot be used to calculate the energy extracted from the engine to operate the pneumatic-based systems.

A key difficulty in interpreting the results was that the behaviour of the optimised trajectory cannot be easily predicted, since there are numerous parameters significantly influencing the optimisation process. This is especially true for the effect of airframe systems, since the relationship between the airframe system operation and the optimum flight trajectory is twofold: the system off-takes influence the trajectory due to fuel burn, and the trajectory and the ambient conditions also influence the power requirements of the overall systems.
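As a rough worked example of the bleed-air power conversion described above, the snippet below evaluates the equation with assumed values; none of the numbers are taken from the study, and the mapping of the temperatures follows the stated assumption that the ambient temperature is the exit reference.

```python
# Illustrative calculation of the bleed-air power off-take using
# Q_dot = m_dot_a * C_p * (T_e - T_i). All numerical values are assumed.

m_dot_a = 0.4      # bleed mass flow per engine, kg/s (assumed)
c_p = 1005.0       # specific heat capacity of air, J/(kg*K)
t_bleed = 473.0    # bleed extraction temperature, K (assumed)
t_ambient = 253.0  # ambient reference (exit) temperature, K (assumed)

q_dot = m_dot_a * c_p * (t_bleed - t_ambient)  # equivalent power in W
print(f"Equivalent bleed power off-take: {q_dot / 1e3:.1f} kW")  # about 88 kW
```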
However, the summary of the results in Table 2 indicates the advantage in using the enhanced approach to aircraft trajectory optimisation; which is to include the airframe systems within the optimisation loop. The systems add a penalty of 5.15% on the fuel burn if the "Min. fuel—zero power off-takes" trajectory is applied in an aircraft with conventional systems. The fuel burn can be reduced by 2.78% using the enhanced optimisation approach. This is the gain of the "Min. fuel—with system power" over the "Min. fuel—system power post-processed". Table 2 Results' summary of the departure segment, short haul Figures 13 and 14 illustrate the advantage in terms of emissions. CO2 and NOX emissions are lower for the "Min. fuel—with system power" than the "Min. fuel—system power post-processed", which establishes the environmental gains that the enhanced approach offers. Total CO2 emissions vs distance—departure, short haul Total NOX emissions vs distance—departure phase The enhanced approach to optimisation provided the platform to define and study the problem of "more electric aircraft trajectory optimisation". The same city pair and constraints were applied to a more electric aircraft. The results showed that there was significant reduction in the fuel burn. The work presented here focuses on the minimum fuel burn trajectories, since one of the advantages of the MEA is the expected environmental gain in terms of fuel efficiency. The starting mass of the aircraft was the same as for the conventional aircraft. There were many reasons for this. First, the increase in mass using state-of-the-art electrical systems compared to the overall aircraft mass will likely be small. Furthermore, the system mass is a fixed mass and is not a variable mass such as the fuel. This limits the effect the MEA mass increase has on the overall trajectory optimisation procedure. Finally, with the current trends in technology development, it could be assumed that the power to weight ratio of more electric aircraft components will improve to a level where there is no mass penalty. It is inferred that the combined effect of the throttle setting and power off-take allows the more electric aircraft to fly lower and accelerate heavily at the end of the phase to reach the final condition without a significant fuel penalty in the last segments. The power off-takes for the MEA are comparatively lower and that enables the aircraft to fly at lower throttle conditions (in the descending sections) without a heavy fuel penalty, whereas the aircraft with conventional systems climbs constantly at a lower gradient until it reaches 10,000 ft and then levels off. This is further evidence on the importance of combining the system operation and aircraft operation in optimisation studies and indicates that more electric aircraft operations should be different to conventional aircraft within trajectory optimisation. The total fuel burn for the "Min. fuel—MEA" was 586 kg. This is 1.5% less than "Min fuel—with system power". This results in lower CO2 emissions but higher NOX emissions as shown in Figs. 13 and 14. The higher NOX is a result of the engine operating at a much higher temperature during the later stages of the departure to climb to 10,000 ft, whereas in the aircraft with conventional systems, the aircraft reaches 10,000 ft much quicker and flies level as shown in Fig. 9 (Table 3). 
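To make the reported percentages easier to interpret, the short sketch below applies the metric definitions from the terminology section to placeholder fuel-burn figures; the numbers are illustrative assumptions, not the values from Table 2.

```python
# Minimal sketch of the two reporting metrics defined earlier.
# The fuel-burn numbers below are placeholders, not the actual Table 2 results.

def penalty_due_to_systems(system_power_post_processed, zero_power_offtakes):
    """Extra fuel burned when system off-takes are added to a trajectory optimised without them."""
    return 100.0 * (system_power_post_processed - zero_power_offtakes) / zero_power_offtakes

def fuel_saving_due_to_enhanced_approach(with_system_power, system_power_post_processed):
    """Change in fuel burn from optimising with off-takes inside the loop (negative means a saving)."""
    return 100.0 * (with_system_power - system_power_post_processed) / system_power_post_processed

# Placeholder departure-phase fuel burns in kg (assumed values)
zero_power = 560.0
post_processed = 588.8
with_systems = 572.4

print(f"Penalty due to systems: {penalty_due_to_systems(post_processed, zero_power):.2f} %")
print(f"Fuel saving due to enhanced approach: {fuel_saving_due_to_enhanced_approach(with_systems, post_processed):.2f} %")
```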
Table 3 Comparison of MEA to conventional aircraft, short-haul departure En route results Figures 15 and 16 show the aircraft altitude and aircraft Mach number profiles for the minimum fuel burn trajectories for the two different setups. Altitude vs distance—en route, short haul Flight Mach number vs distance—en route, short haul The altitude profile of the simulation "Min. fuel—zero power off-takes" where the aircraft systems are not considered keeps climbing until the descent and then descends to the end point of the phase. In contrast, the simulation "Min. fuel—with system power" where the aircraft systems are considered generates a profile where it is possible to see several cruise levels before starting the descent to the end point of the phase. It is, therefore, noticeable that the setup without systems in the loop for minimum fuel climbs at lower Mach numbers until the top of the descent and then accelerates and descends at higher speed. The setup with systems in the loop for minimum fuel instead cruises at higher speed and then decelerates and descends at lower speed. This is quite an important characteristic since most theoretical studies show that for minimum fuel burn, an aircraft should have a continuous climb and then a continuous descent. Yet from an aircraft system operational point of view, a continuous climb and a continuous descend would cause a higher operational load. For example, a continuous climb would cause a heavy load on the ECS pressurisation and thermal regulation, whereas a continuous descent would cause a significantly higher power off-take to thrust ratio, which causes higher fuel penalties. Hence, it was interesting to observe the compromise reached when the systems were operational. Moreover, the MEA shows intermediate characteristics. In the "Min. fuel" trajectories the MEA shows the tendency to have a continuous climb but also shows signs of levelling off and starts to descend at a higher rate than the "Min. fuel—zero power off-takes" trajectory. The "Min. time" trajectories tend to fly faster at lower altitudes and this is observed in Figs. 15 and 16. The environmental gains are shown in Figs. 17 and 18. For both CO2 and NOX, the MEA has an advantage over the conventional aircraft. The characteristics are very similar to those observed during the departure phase. Total CO2 emissions vs distance—en route, short haul Total NOX emissions vs distance—en route, short haul During the en route, the enhanced approach of including airframe systems within the optimisation loop gave a 2.6% (from Table 4) fuel saving, when trajectories were optimised for fuel burn. When the trajectories were optimised for flight time, the fuel saving was 3.7%. The MEA showed a significantly lower fuel burn than the conventional aircraft offering an 11.4% (from Table 5) reduction in fuel burn for the "Min. fuel" trajectories. Table 4 Results' summary of the en route segment, short haul Table 5 Comparison of MEA to conventional aircraft, short-haul en route Effects due to systems studied in detail for the en route segment The en route segment, due to the higher impact on the total mission flight time, was analysed in detail to study the behaviour of the systems and the consequent effects. The "Min. fuel—system power post-processed" was compared to the "Min. fuel—with system power" trajectory. Figure 19 shows the difference in the fuel flow rates, while the CO2, NOX, throttle and the total power off-take are shown in Figs. 17, 18, 20, and 21, respectively. 
One characteristic of importance is that the system power off-take affects the fuel burn heavily during lower engine operating conditions. This can be clearly identified by studying the power off-take, throttle and fuel flow in conjunction with each other. At the later stages of the en route when the aircraft is in the initial descent stages, there is a distinct peak in the fuel flow rates. This is partly due to the fact that the "Min. fuel—system power post-processed" trajectory is at a higher altitude (refer Fig. 15) and higher speed (refer Fig. 16) during this stage. But it is also due to the impact that the power off-take has on low-throttle engine operating conditions. During this stage, it was observed that the throttle required for both flight procedures was low. The off-take influences the fuel flow more than it would do in higher throttle settings and is an expected characteristic in large commercial turbofan engines. Fuel flow vs distance—en route, short haul Throttle vs distance—en route, short haul Power off-take per engine vs distance—en route, short haul When comparing the two flight procedures, it was observed that the "Min. fuel—with system power" had a comparatively higher average throttle rating. This is also reflected in the NOX emissions in Fig. 18. The higher throttle settings indicate that the engine is operating at a higher temperature and it is expected that the NOX emissions would be comparatively higher. By studying Figs. 17 and 18 it is clear that there is a trade-off between NOX and CO2 as is expected in large commercial turbofan engines. With regard to the MEA, the power off-takes were lower than the conventional aircraft as expected. The en route was simulated in ISA atmospheric conditions; hence, the major consumer which is the ECS did not reach the design limits. The lower off-take combined with the throttle, altitude and speed profiles enabled the MEA to achieve a more efficient procedure for flight and presents significant environmental gains. The CO2 reduction was 11.4% and the NOX reduction was 3.8%. Arrival results Due to the nature of the arrival problem, it was observed that the number of feasible results were far less than the departure and en route phase. The cause of this issue was inherent in the definition of the problem. With regard to the specific setup used in this study it presented a significant challenge in defining the "arrival" phase optimisation. The optimisation setup tries to find the best route possible, in terms of the objective, between two given points in 3-D space. Due to this nature, it was observed that occasionally the setup was not able to converge on feasible arrival trajectories. It was observed that the aircraft descended rapidly and then flew level just above ground for a great distance. Even though in theory this consumes less fuel, it is not accepted in an aircraft operational environment. Hence, steps were taken to limit the final point of the aircraft arrival phase to an altitude of 2000 ft. This ensured that the setup always produced feasible flight procedures. The final descent (from 2000 ft to final altitude of the airport) was calculated manually, assuming a constant glide angle. Figures 22 and 23 show the aircraft altitude and aircraft true airspeed as function of arrival phase for the minimum fuel burn and minimum flight time trajectories. 
As expected, the minimum fuel burn trajectories tend to descend slower and reach the final point while descending, whereas the minimum time trajectories descend faster and then fly level to reach the final point. Altitude vs distance—arrival, short haul True air speed vs distance—arrival, short haul Unlike in the previous flight phases, the "Min. time" trajectories are quite different from each other. However, the "Min. time" trajectories all prefer to descend as soon as possible and fly level at the minimum altitude whereas the "Min. fuel" trajectories prefer to reach the final condition through a continuous descent. Figures 24 and 25 show the CO2 and NOX emissions for the arrival phase. The emissions for the MEA are significantly less than the conventional aircraft. The lower throttle settings and the comparatively lower off-takes result in a lower fuel burn for the MEA. Total CO2 emissions vs distance—arrival, short haul Total NOX emissions vs distance—arrival Tables 6 and 7 summarise the results for the "arrival" optimisation for minimum fuel burn and minimum flight time. Table 6 Results summary of the arrival segment Summary of the short-haul flight A summary of the results for the complete mission (departure, en route, arrival) is shown in Tables 8 and 9. Table 8 Results' summary of the short-haul flight, conventional aircraft Table 9 Results summary of the short-haul flight, MEA Table 10 summarizes the gains achieved using trajectory optimisation. Each aircraft configuration was compared to the respective baseline case. The "zero power off-takes" showed the biggest gain for fuel efficiency while the "MEA" had the lowest gain. The result here shows that the classical approach to trajectory optimisation may exaggerate the gains due to optimisation. Table 10 Gains by optimising for fuel burn and flight time From the overall results it was observed that had the "Min. fuel" results obtained through the classical approach been applied in a real aircraft, the conventional airframe systems would have caused the flight to consume 16.6% more fuel than was calculated. However, by considering the conventional system power requirements within the optimisation loop, this penalty was reduced by 2.5%. The optimal way (considering the optimisation constraints) to fly the aircraft with consideration of the conventional system off-takes was significantly different from an aircraft with zero power off-takes. The minimum fuel burn trajectory for the MEA consumed 9.9% less fuel than the minimum fuel burn trajectory for the conventional aircraft. However, it should be noted that phenomena which have not been considered here, such as induced drag due to more electric compressors and extra weight due to the heavier electrical components, may reduce this advantage based on current technology. The enhanced approach, as discussed above, provided a fuel reduction of 2.5% which directly results in a 2.5% reduction of CO2 emissions but the NOX emissions increased by 2.9%. However, the optimisation objectives were fuel and time. With the inversely proportional relationship between CO2 and NOX this is an expected phenomenon. The MEA proved to have 9.9% less CO2 emissions and 1.97% less NOX emissions. When the objective was the minimum flight time, the results in terms of the altitude and speed profiles did not vary as much as the "Min. fuel" trajectories. 
Nevertheless, applying the enhanced optimisation approach to conventional aircraft showed that the overall fuel burn can be reduced by 2.6%, the CO2 emissions can be reduced by 2.6% and the NOX emissions by 4.3%. This is significant enough to challenge the validity of the optimality of the classical approach even when the optimisation objective is different from fuel burn. When optimised for the minimum flight time, the MEA showed an 8.1% reduction in fuel burn, an 8.1% reduction in CO2 and a 1.77% increase in NOX emissions. The flight time of the MEA was less, which was due to the MEA flying faster at a higher throttle. The lower off-takes of the MEA configuration allowed the aircraft to operate at a higher throttle without causing a significant penalty on the fuel flow. However, the higher throttle meant that the engine operating temperatures, especially the combustor inlet temperature, were higher for longer periods, resulting in an increase in the NOX emissions. The increase in NOX is in contrast to the "Min. fuel" results and shows that the complex dependencies between the aircraft dynamics, airframe system performance and engine performance have to be accounted for to obtain valid trajectory optimisation results.

A robust methodology to model the airframe system penalties within the trajectory optimisation scope has been presented in this research. Moreover, the study clearly demonstrated the need for the representation of the airframe system penalties within the optimisation loop. It established and defined the problem of "more electric aircraft trajectory optimisation". The overall results show the importance of including the airframe system off-takes in the optimisation loop. More importantly, the results establish that the optimum methods to operate conventional aircraft and MEA are significantly different, and that these results can be obtained only if the airframe systems are considered in the problem definition. Moreover, the results also showed that gains due to optimisation could be exaggerated when the airframe system penalties were not represented in the problem setup. Trajectory optimisation reduced the fuel burn by 17.4% for the conventional aircraft and 12.2% for the more electric aircraft compared to the respective baseline cases, when the mission was optimised for fuel burn. Furthermore, the MEA proved to be more fuel efficient than the conventional aircraft. The minimum fuel burn trajectory for the MEA consumed 9.9% less fuel than the minimum fuel burn trajectory for the conventional aircraft. Moreover, the MEA proved to have 9.9% less CO2 emissions and 1.97% less NOX emissions compared to the conventional aircraft when both were optimised for minimum fuel burn. The MEA showed an 8.1% reduction in fuel burn, an 8.1% reduction in CO2 and a 1.77% increase in NOX emissions when the flight was optimised for minimum flight time. This study has focused on a single-aircraft and single-trajectory result. However, when applied to the vast number of flights flown every day across distances small and large, the methodology presented could lead to significant global gains.

Further work

Further work is planned to include more models within the optimisation scope to represent phenomena such as real-weather patterns and engine degradation, to enhance the optimisation approach further such that the theoretical studies will closely represent operational aircraft. Additionally, cost and operational business aspects will be included in the post-processing of results.
Large users of electrical power on MEA (such as the ECS and wing IPS) are expected to have a significant weight penalty compared to conventional pneumatic systems, based on current electrical machines and power electronic technologies. Future work should account for these differences to refine the mission fuel burn calculations. Moreover, this study has focused only on the vertical flight trajectory, but further studies will be done on optimising in 3-D space by including lateral trajectory optimisation to study the advantages of the concept of "free flight". Moreover, study of concepts such as "intelligent flying with intelligent systems", where the aircraft will change the flying trajectory due to weather conditions such as icing clouds with the minimum fuel penalty, is planned. In addition to the above-mentioned scenarios, more objectives such as minimum NOX emissions, cost and minimum persistent contrail formation will be studied. Future work is also planned to compare short-haul and long-haul optimisation results to assess the optimum strategy to replace conventional aircraft with more electric aircraft.

Abbreviations

3-D: Three-dimensional
ACARE: Advisory Council for Aeronautics Research in Europe
ADP: Air-driven pump
AEA: All-electric aircraft
AGL: Above ground level
ATM: Air traffic management
CO2: Carbon dioxide
Conv.: Conventional aircraft (with conventional airframe systems)
EC: European Commission
ECS: Environmental control system
EPNL: Effective perceived noise level
FL: Flight level
FMS: Flight management system
GA: Genetic algorithm
GATAC: Green aircraft trajectories under ATM constraints
HYD: Hydraulic
IPS: Ice protection system
ISA: International standard atmosphere
ITD: Integrated technology demonstrator
KCAS: Calibrated air speed in knots
LAMAX: A-weighted maximum sound level
MEA: More electric aircraft
MOTS: Multi-objective tabu search
MTM: Mission trajectory management
NOX: Nitrogen oxides
NSGA: Non-dominated sorting genetic algorithm
NSGAMO: Non-dominated sorting genetic algorithm multi-objective
PNLTM: Tone-corrected maximum perceived noise level
RNAV: Area navigation
RWY: Runway
SEL: Sound exposure level
SGO: Systems for greener operations
SID: Standard instrument departure
STAR: Standard instrument arrival
TAI: Thermal anti-ice
TAS: True airspeed
TE: Technology evaluator
VOR: VHF omnidirectional range
WP: Way point
ft: Feet
g/m3: Grams per cubic metre
kg/s: Kilograms per second
kN: Kilonewton
kts: Knots
kW: Kilowatts
lb: Pound

Feiner, L.J.: Power-by-Wire Aircraft Secondary Power Systems. In: AIAA/IEEE, 1993. ISBN 0-7803-1343-7 Arguelles, P., et al.: European Aeronautics: A Vision for 2020—Meeting Society's Needs and Winning Global Leadership. European Commission, Luxembourg (2001) Rosero, J.A., et al.: Moving towards a more electric aircraft. IEEE Aerosp. Electron. Syst. Mag. 22(3), 3–9 (2007) ACARE.: Strategic Research Agenda, vol. 2. ACARE (2002). https://www.acare4europe.org/contacts Norman, P.D., et al.: Development of the Technical Basis for a New Emissions Parameter Covering the Whole Aircraft Operation: NEPAIR. 2003. Final Technical Report; NEPAIR/WP4/WPR/01. EC Contract Number G4RD-CT-2000-00182 DuBois, D., Paynter, G.: "Fuel Flow Method2" for estimating aircraft emissions. In: SAE International, 2006. SAE Technical Paper 3006-01-1987. https://doi.org/10.4271/2006-01-1987 Seresinhe, R., Lawson, C., Sabatini, R.: Environmental impact assessment, on the operation of conventional and more electric large commercial aircraft. SAE Int. J. Aerosp. (2013). https://doi.org/10.4271/2013-01-2086 Seresinhe, R., Lawson, C.: The MEA evolution in commercial aircraft and the consequences for initial aircraft design. STM J. 3, 1–15 (2013) (ISSN 2231-038X) Renz, D.D.:
Comparison of All-Electric Secondary Power Systems for Civil Subsonic Transport. In: SAE-929493, 1992 Seresinhe, R., Lawson, C.: Electrical load sizing methodology to aid conceptual and preliminary design of large commercial aircraft. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. (2014). https://doi.org/10.1177/0954410014534638 (SAGE (On behalf of IMechE)) Cooper, M., Lawson, M., Zare Shahneh, A.: Simulating actuator energy consumption for trajectory optimisation. Proc. Inst. Mech. Eng. Part G. J. Aerosp. Eng. 232(11), 2178–2192 (2017) Seresinhe, R., Lawson, C., Shinkafi, A., Quaglia, D., Madani, I.: Airframe Systems Power Off-take Modelling in More-Electric Large Aircraft for Use in Trajectory Optimisation. ICAS, St. Petersburgh (2014) Seresinhe, R., Lawson, C., Shinkafi, A., Quaglia, D., Madani, I.: An Intelligent Ice Protection System for Next Generation Aircraft Trajectory Optimisation. ICAS, St Petersburgh (2014) Shinkafi, A., Lawson, C.: Enhanced method of conceptual sizing of aircraft electro-thermal de-icing system. Int. J. Mech. Aerosp. Manuf. Ind. Sci. Eng. 8(6), 1069–1076 (2014) Rolls-Royce. The jet engine. Rolls-Royce plc, Derby, UK (1986) (ISBN 0902121235) Tagge, G.E., Irish, L.A., Bailey, A.R.: Systems Study for an Integrated Digital/Electric Aircraft (IDEA). Scientific and Technical Information Branch, National Aeronautics and Space Administration (NASA). NASA, 1985. NASA Contractor Report 3840 Boeing Commercial Airplane Company.: Integrated Application of Active Controls (IAAC) Technology to an Advanced Subsonic Transport Project—Initial Act Configuration Design Study, Summary Report. Scientific and Technical Information Branch, National Aeronautics and Space Administration (NASA). NASA, 1980. NASA Contractor Report 3304 Giannakakis, P., Laskaridis, P., Pilidis, P.: Effects of offtakes for aircraft secondary-power systems on jet engine efficiency. J. Propuls. Power (2011). https://doi.org/10.2514/1.55872 Scholz, D., Seresinhe, R., Ingo, S., Lawson, C.: Fuel Consumption due to Shaft Power Off-takes from the Engine. In: Aircraft system technologies workshop, Hamburg, Germany, 14–15 April 2013 Perry, T.S.: In search of the future of air traffic control. IEEE Spectr. (1997). https://doi.org/10.1109/6.609472 Clean Sky.: Systems for Green Operation. Clean Sky. http://www.cleansky.eu/systems-for-green-operations-sgo. Cited 07 Jul 2017 Betts, J.T.: Survey of numerical methods for trajectory optimization. J. Guid. Control Dyn. 21(2), 193–207 (1998) Betts, J.T., Cramer, E.J.: Application of direct transcription to commercial aircraft trajectory optimization. J. Guid. Control Dyn. 18, 151–159 (1995) Chircop, K., et al.: A Generic Framework for Multi-parameter Optimization of Flight Trajectories. In: 27th ICAS Congress, Nice, France. 19–24 September 2010 Hartjes, S., et al.: Systems for Green Operations (SGO) ITD—Report on the Performance Analysis of the Trajectories—Cycle 1—WP3.2. Clean Sky—SGO ITD (Internal) (2012) Pervier, H., et al.: Application of genetic algorithm for preliminary trajectory optimization. SAE Int. J. Aerosp. 4(2), 973–987 (2011) Tsotskas, C., Kipouros, T., Savill, A.M.. The design and implementation of a GPU-enabled multi-objective tabu-search intended for real world and high-dimensional applications. Procedia Comput. Sci. 29, 2152–2161 (2014) This work has been carried out as part of collaboration between members and associate members involved in the SGO ITD of Clean Sky. 
The project is co-funded by the European Community's Seventh Framework Programmes (FP7/2007-2014) for the Clean Sky Joint Technology Initiative. School of Aerospace Transport and Manufacturing, Cranfield University, Cranfield, UK Ravinka Seresinhe, Craig Lawson & Irfan Madani Ravinka Seresinhe Craig Lawson Irfan Madani Correspondence to Craig Lawson. Seresinhe, R., Lawson, C. & Madani, I. Improving the operating efficiency of the more electric aircraft concept through optimised flight procedures. CEAS Aeronaut J 10, 463–478 (2019). https://doi.org/10.1007/s13272-018-0327-y Revised: 28 February 2018 Issue Date: 01 June 2019 Aircraft trajectory optimisation Aircraft emissions Aircraft secondary power
# Setting up the Python environment for calculus

To start applying calculus with Python, we first need to set up our Python environment. This includes installing the necessary libraries and tools that we will use throughout the textbook.

First, you'll need to install Python on your computer if you haven't already. You can download the latest version of Python from the official website: [https://www.python.org/downloads/](https://www.python.org/downloads/). Follow the installation instructions for your operating system.

Next, we'll install the necessary libraries. The most important library for calculus is NumPy, which provides powerful support for numerical operations. You can install NumPy using pip, the Python package manager. Open your command prompt or terminal and run the following command:

```
pip install numpy
```

We'll also use the SciPy library, which provides additional functionality for scientific and technical computing. Install it using the following command:

```
pip install scipy
```

Finally, we'll install the SymPy library, which allows us to perform symbolic computations. This is useful for solving equations and working with mathematical expressions. Install SymPy using the following command:

```
pip install sympy
```

Now you have everything you need to start applying calculus with Python. Let's move on to understanding the basics of differentiation.

# Understanding the basics of differentiation

Before we can apply differentiation to real-world scenarios, we need to understand the basics of differentiation. Differentiation is the process of finding the derivative of a function, which tells us how the function changes as its input changes.

There are several rules for differentiation, but we'll focus on the most important ones: the power rule, the chain rule, and the product rule.

The power rule states that the derivative of $x^n$ is $n \cdot x^{n-1}$: we multiply by the exponent and then reduce the exponent by one. For example, if we have the function $f(x) = x^2$, its derivative is $f'(x) = 2x$.

The chain rule states that if we have a function of the form $f(g(x))$, then its derivative is $f'(g(x)) \cdot g'(x)$. For example, if we have the function $f(x) = \sin(x^2)$, its derivative is $f'(x) = 2x \cdot \cos(x^2)$.

The product rule states that if we have a function of the form $f(x) = u(x) \cdot v(x)$, then its derivative is $f'(x) = u'(x) \cdot v(x) + u(x) \cdot v'(x)$. For example, if we have the function $f(x) = x^2 \cdot \sin(x)$, its derivative is $f'(x) = 2x \cdot \sin(x) + x^2 \cdot \cos(x)$.

Now that we understand the basics of differentiation, let's move on to applying differentiation in real-world scenarios.

# Applying differentiation in real-world scenarios

Now that we understand the basics of differentiation, we can apply it to real-world scenarios. Let's start with an example: the velocity of an object moving with constant acceleration.

The velocity of an object is given by the equation $v(t) = a \cdot t + v_0$, where $a$ is the acceleration, $t$ is the time, and $v_0$ is the initial velocity. We can differentiate this equation to find the acceleration of the object as a function of time.

Using the power rule, we can find that the derivative of $v(t)$ is $v'(t) = a$. This means that the acceleration of the object is constant over time.
Let's write a Python program to compute the derivative of the velocity function:

```python
import sympy as sp

t = sp.Symbol('t')
a = sp.Symbol('a')
v_0 = sp.Symbol('v_0')

# Velocity of an object under constant acceleration: v(t) = a*t + v_0
v = a * t + v_0
v_prime = sp.diff(v, t)

print(v_prime)  # prints: a
```

This program defines the variables $t$, $a$, and $v_0$, and creates a symbolic expression for the velocity function $v(t)$. It then computes the derivative of $v(t)$ with respect to $t$ and prints the result.

Now let's move on to numerical methods for solving calculus problems.

# Introduction to numerical methods for solving calculus problems

Numerical methods are algorithms that allow us to approximate the solutions to calculus problems. These methods are essential for solving problems that cannot be solved analytically.

One of the most common numerical methods for solving calculus problems is the trapezoidal rule. The trapezoidal rule approximates the definite integral of a function by dividing the region under the curve into trapezoids and summing their areas.

The trapezoidal rule formula is:

$$ \int_a^b f(x) dx \approx \frac{h}{2} \cdot \left(f(a) + f(b) + 2 \cdot \sum_{i=1}^{n-1} f(a + ih)\right) $$

where $h$ is the width of the trapezoids, $n$ is the number of trapezoids, and $a$ and $b$ are the limits of integration.

Let's write a Python program to compute the definite integral of a function using the trapezoidal rule:

```python
def trapezoidal_rule(f, a, b, n):
    h = (b - a) / n
    # Sum of the interior points f(a + h), ..., f(a + (n-1)h)
    interior = sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * (f(a) + f(b) + 2 * interior)

# Example function: f(x) = x^2
def f(x):
    return x**2

# Compute the definite integral of f(x) from 0 to 1 (exact value: 1/3)
result = trapezoidal_rule(f, 0, 1, 100)
print(result)
```

This program defines a function `trapezoidal_rule` that takes a function `f`, the limits of integration `a` and `b`, and the number of trapezoids `n`. It then computes the definite integral of `f` using the trapezoidal rule and prints the result.

Now let's move on to solving ordinary differential equations using Python.

# Solving ordinary differential equations using Python

Solving ordinary differential equations (ODEs) is a common application of calculus. ODEs describe the rate of change of a function with respect to its input.

One of the most common methods for solving ODEs is the Euler method. The Euler method approximates the solution to an ODE by taking small steps in the direction of the derivative.

The Euler method formula is:

$$ y_{n+1} = y_n + h \cdot f(t_n, y_n) $$

where $y_n$ is the approximate solution at time $t_n$, $h$ is the step size, and $f(t, y)$ is the derivative of the function.

Let's write a Python program to solve the ODE $dy/dt = -2t \cdot y$ using the Euler method:

```python
import numpy as np

def euler_method(f, y0, t0, tf, h):
    # Pre-allocate the solution array and set the initial condition
    y = np.zeros(int((tf - t0) / h) + 1)
    y[0] = y0
    for i in range(len(y) - 1):
        t = t0 + i * h
        y[i + 1] = y[i] + h * f(t, y[i])
    return y

# Example function: f(t, y) = -2t * y
def f(t, y):
    return -2 * t * y

# Solve the ODE with initial condition y(0) = 1
result = euler_method(f, 1, 0, 1, 0.1)
print(result)
```

This program defines a function `euler_method` that takes a function `f`, the initial condition `y0`, the start and end times `t0` and `tf`, and the step size `h`. It then solves the ODE using the Euler method and prints the approximate solution.

Now let's move on to applying integration in real-world scenarios.

# Applying integration in real-world scenarios

Now that we understand the basics of integration, we can apply it to real-world scenarios.
Let's start with an example: the area under a curve. The area under a curve is given by the definite integral of the curve's height function from the lower limit to the upper limit.

Let's write a Python program to compute the area under a curve using the trapezoidal rule:

```python
# Reuses the trapezoidal_rule function defined in the numerical methods section
def area_under_curve(f, a, b, n):
    return trapezoidal_rule(f, a, b, n)

# Example function: f(x) = x^2
def f(x):
    return x**2

# Compute the area under the curve from 0 to 1
result = area_under_curve(f, 0, 1, 100)
print(result)
```

This program defines a function `area_under_curve` that takes a function `f`, the limits of integration `a` and `b`, and the number of trapezoids `n`. It then computes the area under the curve using the trapezoidal rule and prints the result.

Now let's move on to an introduction to machine learning and its applications.

# Introduction to machine learning and its applications

Machine learning is a field of study that focuses on the development of algorithms and models that can learn from and make predictions or decisions based on data.

One of the most common types of machine learning is supervised learning, where the model is trained on a labeled dataset and learns to make predictions based on the input-output pairs.

One of the most popular supervised learning algorithms is linear regression, which models the relationship between a dependent variable and one or more independent variables.

Let's write a Python program to perform linear regression using the SciPy library:

```python
import numpy as np
from scipy.stats import linregress

# Example data: x = [1, 2, 3, 4, 5], y = [2, 4, 6, 8, 10]
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 6, 8, 10])

slope, intercept, r_value, p_value, std_err = linregress(x, y)

print("Slope:", slope)
print("Intercept:", intercept)
print("R-squared:", r_value**2)
```

This program defines the input data `x` and output data `y`. It then performs linear regression on the data using the `linregress` function from the SciPy library. It prints the slope, intercept, and R-squared value of the regression line.

Now let's move on to optimization problems and their solutions using Python.

# Optimization problems and their solutions using Python

Optimization problems are a common application of calculus. They involve finding the best solution to a problem, often by minimizing or maximizing a function.

One of the most common optimization algorithms is the gradient descent method. The gradient descent method iteratively updates the function's parameters to minimize the function's value.

Let's write a Python program to minimize a function using the gradient descent method:

```python
import numpy as np

def gradient_descent(grad_f, x0, learning_rate, max_iterations):
    # Start from the initial guess (as floats so the in-place update below works)
    x = np.array(x0, dtype=float)
    for _ in range(max_iterations):
        # Step in the direction of steepest descent
        x -= learning_rate * np.array(grad_f(x))
    return x

# Example function: f(x) = x^2, whose gradient is f'(x) = 2x
def grad_f(x):
    return 2 * x

# Minimize the function with initial guess x0 = [1]
result = gradient_descent(grad_f, [1], 0.1, 100)
print(result)
```

This program defines a function `gradient_descent` that takes the gradient `grad_f` of the function we want to minimize, the initial guess `x0`, the learning rate `learning_rate`, and the maximum number of iterations `max_iterations`. It then minimizes the function by repeatedly stepping against the gradient and prints the approximate minimum.

Now let's move on to advanced optimization techniques using Python.
# Advanced optimization techniques using Python

Advanced optimization techniques are used when the gradient descent method is not sufficient for finding the optimal solution.

One such technique is the Newton-Raphson method, which uses the first and second derivatives of the function to find the minimum.

Let's write a Python program to minimize a function using the Newton-Raphson method:

```python
import numpy as np

def newton_raphson(grad_f, hess_f, x0, max_iterations, tolerance):
    # Start from the initial guess (as floats so the in-place update below works)
    x = np.array(x0, dtype=float)
    for _ in range(max_iterations):
        gradient = np.array(grad_f(x))
        hessian = np.array(hess_f(x))
        # Newton step: solve H * step = gradient and move against it
        x -= np.linalg.solve(hessian, gradient)
        if np.linalg.norm(gradient) < tolerance:
            break
    return x

# Example function: f(x) = x^2, with gradient f'(x) = 2x and Hessian f''(x) = 2
def grad_f(x):
    return 2 * x

def hess_f(x):
    return np.array([[2.0]])

# Minimize the function with initial guess x0 = [1]
result = newton_raphson(grad_f, hess_f, [1], 100, 1e-6)
print(result)
```

This program defines a function `newton_raphson` that takes the gradient `grad_f` and Hessian `hess_f` of the function we want to minimize, the initial guess `x0`, the maximum number of iterations `max_iterations`, and the tolerance `tolerance`. It then minimizes the function using the Newton-Raphson method and prints the approximate minimum.

Now let's move on to data analysis with Python.

# Data analysis with Python

Data analysis is a crucial step in many real-world scenarios, where we need to extract insights from large datasets. Python provides powerful tools for data analysis, such as the Pandas library.

Let's write a Python program to load and analyze a dataset using the Pandas library:

```python
import pandas as pd

# Load the dataset from a CSV file
data = pd.read_csv('data.csv')

# Analyze the dataset
print(data.describe())
```

This program loads the dataset from a CSV file using the `read_csv` function from the Pandas library. It then analyzes the dataset by computing basic statistical measures using the `describe` function.

Now let's move on to creating a project using Python and calculus.

# Creating a project using Python and calculus

Now that we have learned the basics of calculus and how to apply it with Python, let's create a project that combines both.

For example, we can create a project that predicts the stock prices of a company based on historical data. We can use linear regression to model the relationship between the stock price and other factors, such as the company's revenue or market share.

To create this project, we would first collect historical data for the company, including stock prices, revenue, and market share. We would then preprocess the data, such as removing outliers or normalizing the data.

Next, we would use linear regression to model the relationship between the stock price and the other factors. We would use the gradient descent method or the Newton-Raphson method to find the optimal parameters for the regression line.

Finally, we would use the trained model to make predictions for future stock prices, based on the company's current financial performance.

This project demonstrates how to apply calculus with Python in a real-world scenario. By combining our knowledge of calculus and Python, we can solve complex problems and extract valuable insights from data.
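As a minimal, self-contained sketch of the project outlined above, the following example fits a linear model on made-up revenue and stock-price data and uses it for a prediction; the figures are invented for illustration only, not real market data.

```python
import numpy as np
from scipy.stats import linregress

# Made-up illustrative data: quarterly revenue (in millions) and average stock price.
revenue = np.array([10.0, 12.5, 13.0, 15.5, 16.0, 18.5])
stock_price = np.array([20.1, 24.3, 25.0, 30.2, 31.5, 36.0])

# Fit a simple linear model: stock_price ~ slope * revenue + intercept
slope, intercept, r_value, p_value, std_err = linregress(revenue, stock_price)

# Use the fitted model to predict the price for a hypothetical future revenue figure.
future_revenue = 20.0
predicted_price = slope * future_revenue + intercept

print(f"Model: price = {slope:.2f} * revenue + {intercept:.2f} (R^2 = {r_value**2:.3f})")
print(f"Predicted price at revenue {future_revenue}: {predicted_price:.2f}")
```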
THEMIS: Towards a Decentralized Ad Platform with Reporting Integrity (Part 1) June 25, 2020 | Announcements This post describes the work done by Gonçalo Pestana, Research Engineer, Iñigo Querejeta-Azurmendi, Cryptography Engineer, Dr. Panagiotis Papadopoulos, Security Researcher, and Dr. Ben Livshits, Chief Scientist; this post is also part of a series that focuses on further progressive decentralization for Brave ads. Note: THEMIS is primarily a research effort for now and does not constitute a commitment regarding product plans around Brave Rewards. The whitepaper introducing the Basic Attention Token (BAT) [1] was released mid 2017 and, since then, BAT has been used by millions of users, advertisers, and publishers, each using and earning BAT through the Brave Browser (Figure 1) [2]. It has been a long ride since 2017 and we're very proud that BAT is acknowledged as one of the most successful use cases for decentralized ledgers and utility tokens. The BAT token powers the BAT-based advertising ecosystem. The main goal of the BAT-based ad ecosystem is to provide the choice for users to value their attention, while keeping full control over their data and personal privacy. The main tenets of the BAT-based advertising ecosystem are to provide privacy by default, to restore control to users over their data, and to provide a decentralized marketplace where Brave Browser users are incentivized to watch ads and to contribute to creators. Through these principles, Brave's vision is to fix the current online advertising industry [1], and get rid of widespread fraud schemes [3] [3.1], [4], market fragmentation [5] [6] and privacy issues [7] [8]. In line with these goals, Brave's research team has been working on a decentralized and privacy-by-design protocol that further improves upon the current BAT-based ad ecosystem. In this first post in a series of blog posts, we present THEMIS: a novel privacy-by-design ad platform that requires zero trust from both users and advertisers alike. THEMIS provides auditability to all participants, rewards users for interacting with ads, and allows advertisers to verify the performance and billing reports of their ad campaigns. In this blog series, we describe the THEMIS protocol and its building blocks. In the next post, we will present a preliminary scalability evaluation of THEMIS in a deployment environment. Figure 1. Example of an ad notification delivered through the Browser for Brave Ads users. The current web advertising ecosystem Digital advertising is the most popular way of funding websites. However, web advertising has fundamental flaws such as market fragmentation, rampant fraud, and unprecedented invasion of privacy. Further, web users are increasingly opting out of web advertising, costing publishers millions of dollars in ad revenues every year. A growing number of users (47% of internet users globally, as of today [13]) use ad-blockers. Academia and industry have responded by designing new monetization systems. These systems generally emphasize properties such as user choice, privacy protection, fraud prevention, and performance improvements. Privad [11], and Adnostic [12] are examples of academic projects that focus on privacy-friendly advertising. Despite the contributions of these systems, they have significant shortcomings that have limited their adoption. 
These systems either (i) do not scale, (ii) require the user to trust central authorities within the system to process ad transactions, or (iii) do not allow advertisers to accurately gauge campaign performance. To make matters worse, current advertising systems lack proper auditability: The ad network exclusively determines how much advertisers will be charged, as well as the revenue share that the publishers may get. Malicious ad networks can overcharge advertisers or underpay publishers. Another issue is non-repudiation, as ad networks do not generally prove that the claimed ad views/clicks occurred in reality. Figure 1. A high-level visual overview of THEMIS. Ad distribution and ad interaction reporting activities. Users are rewarded for interacting with ads. In THEMIS, a campaign manager and advertisers agree on ad campaigns, which are encoded in a smart contract running on a side-chain. Using Brave Browser, users request rewards from a smart contract, which implements a cryptographic protocol that moves us towards decentralization, transparency, and privacy. Our Approach: THEMIS In this blog post series, the Brave Research team presents THEMIS (Figure 1), a private-by-design ad platform that makes a significant step towards decentralizing the ad ecosystem by leveraging a side-chain and smart contracts to eliminate centralized ad network management. We believe in progressive decentralization, which means that the system presented in the first blog post is not yet fully decentralized; subsequent blog posts will discuss further decentralization steps. The current implementation of Brave Ads protects user privacy and anonymity through the use of privacy-preserving cryptographic protocols, client-side ad matching, and other anonymization techniques. For example, Brave servers cannot determine which ads a user has interacted with, and they do not receive any data concerning a specific user's interests or browsing habits. The THEMIS protocol provides the same strong anonymity properties as Brave Ads, while making an important step toward progressive decentralization of the Brave Ads ecosystem. THEMIS is highly relevant to the BAT Apollo mission [14]. As discussed in a BAT Community-run AMA [15], the main goals of the BAT Apollo mission are to improve transparency, to decrease transaction costs, and to further decentralize Brave Ads. By combining the strong privacy properties with decentralization, THEMIS: Effectively addresses the auditability and non-repudiation issues of the current ecosystem by requiring all participants to generate cryptographic proofs of correct behaviour. Every participants can verify that everybody is following the protocol correctly; And provides the advertisers with the necessary feedback regarding the performance of their ad campaigns without compromising the end-user privacy. By guaranteeing the computational integrity of this reporting, advertisers can accurately learn how many users viewed and interacted with their ads without learning exactly which of them. In this section, we sketch a brief technical background regarding the mechanisms and building blocks used by THEMIS; we also describe why and how THEMIS leverages them. Permissioned Blockchains THEMIS relies on a blockchain with smart contract functionality to provide a decentralized ad platform. Smart contracts enable the business logic and payments to be performed without relying on a central authority. THEMIS could, for example, run on the Ethereum Mainnet. 
However, due to Ethereum's low transaction throughput, the high gas costs, and the current scalability issues, THEMIS relies on a Permissioned Blockchain instead, more concretely on a Proof-of-Authority (PoA) blockchain. A PoA blockchain consists of a distributed ledger that relies on consensus achieved by a permissioned pool of validator nodes. PoA validators can rely on fast consensus protocols such as IBFT/IBFT2.0 and Clique, which result in faster minted blocks and thus PoA can reach higher transaction throughput than traditional PoW based blockchains. As opposed to traditional, permissionless blockchains (such as Bitcoin and Ethereum), the number of nodes participating in the consensus is relatively small and all nodes are authenticated. In our case publishers, and other industry entities, are potential participants of the pool of validators. Cryptographic Tools THEMIS uses an additively homomorphic encryption scheme to calculate the ads payouts for each user, while keeping the user behavior (e.g. ad clicks) private. Given a public-private key-pair [[(\sk, \pk)]], the encryption scheme is defined by four functions: Encryption: first, the encryption function, where given a public key and a message, outputs a ciphertext, [[\ctxt = \enc(\pk, \message)]]; Decryption: secondly, the decryption function, that given a ciphertext and a private key, outputs a decrypted message, [[\message = \dec(\sk, \ctxt)]]; Sign: next, the signing function, where given a message and a secret key, outputs a signature on the message, [[\signature = \sign(\sk, \message)]]. Verify: finally, the signature verification function, where given a signature and a public key, outputs [[\bot, \top]] if the signature fails or validates respectively, [[\signverify(\signature, \pk)\in\{\bot, \top\}]]. The additive homomorphic property guarantees that the addition of two ciphertexts, $$ \ctxt_{1} = \enc(\pk, \message_{1}), \ctxt_{2} = \enc(\pk, \message_{2}) $$ encrypted under the same key, results in the addition of the encryption of its messages, more precisely: $$ \ctxt_{1} + \ctxt_{2} = \enc(\pk, \message_{1} + \message_{2}) $$ Some examples of such encryption algorithms are ElGamal [9] or Paillier [10] encryption schemes. To prove correct decryption, THEMIS leverages Zero Knowledge Proofs (ZKP) which allow an entity (i.e. the prover) to convince a different entity (i.e. the verifier) that a certain statement is true over a private input without disclosing any other information from that input other than whether statement is true or not. We denote proofs with \(\Pi\), and its verifications as \(\verify(\Pi)\in\{\bot, \top\}\). Distribution of trust THEMIS distributes trust to generate a public-private key-pair for each ad campaign, under which the sensitive information is encrypted. For this, it uses a distributed key generation (DKG) protocol to share the knowledge of the secret. This allows a group of players to distributively generate the key-pair, [[(\sk_T, \pk_T)]], where each player has a share of the private key, [[\sk_{T_{i}}]], and no player ever gains knowledge of the full private key, [[\sk_{T}]]. Moreover, the resulting key-pair is a threshold key-pair which requires at least a well-defined number of participants – out of the peers that distributively generated the key – to interact during the decryption or signing operations. We follow a similar DKG protocol as presented by Schindler et.al. [11]. 
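As a quick illustration of the additively homomorphic property described above, the sketch below uses Paillier encryption via the python-paillier (phe) library; Paillier is just one possible instantiation of such a scheme (alongside the ElGamal variant mentioned earlier), and this snippet is purely illustrative rather than part of THEMIS.

```python
# Minimal illustration of the additive homomorphic property: the sum of two ciphertexts
# decrypts to the sum of the plaintexts. Paillier (python-paillier, `phe`) is used here
# as one example of such a scheme; this is not THEMIS code.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

c1 = public_key.encrypt(3)   # Enc(pk, m1)
c2 = public_key.encrypt(4)   # Enc(pk, m2)

# Adding ciphertexts corresponds to adding the underlying messages.
assert private_key.decrypt(c1 + c2) == 3 + 4
print("Dec(c1 + c2) =", private_key.decrypt(c1 + c2))
```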
In order to choose this selected group of key generation players in a distributed way, THEMIS leverages Verifiable Random Functions (VRFs). In general, VRFs enable users to generate a random number and prove its randomness. In THEMIS, we use VRFs to select a random pool of users and generate the distributed keys. Given a public-private key-pair, [[(\VRFsk, \VRFpk)]], VRFs are defined by a function which outputs a random number and a zero knowledge proof of correct generation. System Properties and Guarantees The main properties we focused on while designing THEMIS included privacy, accountability, reporting integrity, and decentralization: In the context of a sustainable ad ecosystem, we define privacy as the ability for users and advertisers to use our system without disclosing any critical information about themselves and their business: For the user, privacy means being able to interact with ads without revealing their interests/preferences to advertisers, other protocol participants or eavesdroppers. In THEMIS, we preserve the privacy of the user not only when they are interacting with ads but also when they claim the corresponding rewards for these ads. Brave Ads currently protects advertiser privacy. For advertisers, privacy means that they are able to set up ad campaigns without revealing any policies (i.e. what is the reward of each of their ads) to the prying eyes of their competitors. THEMIS keeps these ad policies confidential throughout the whole process, while enabling users to claim rewards based on ad policies. Decentralization and auditability Existing works require a central authority to manage and orchestrate the proper execution of the protocol, either in terms of user privacy or billing. What if this (considered as trusted) entity censors users by denying or transferring an incorrect amount of rewards? What if it attempts to charge advertisers more than what they should pay based on users' ad interactions? What if the advertising policies are not applied as agreed with the advertisers when setting up ad campaigns? One of the primary goals of our system is to be decentralized and transparent. To achieve this, THEMIS leverages a permissioned blockchain with smart contract functionality. Ad platforms need to be able to scale seamlessly and serve millions of users. However, important proposed systems fail to achieve this. We consider scalability as an important aspect affecting the practicability of the system. THEMIS needs to not only serve ads in a privacy preserving way to millions of users but also finalize the payments related to their ad rewards as timely as possible. Contrary to existing works, THEMIS does not rely on a trusted central authority. Therefore, it needs to provide both the users and the advertisers with mechanisms to verify the authenticity of the statements and the performed operations. Achieving such integrity guarantees requires the use of zero-knowledge proofs to ensure every participant can prove and verify the correctness and validity of billing and reporting. System Overview – A Strawman Approach The remainder of this blog post will be dedicated to outline a straw-man approach to describe the basic principles and steps of THEMIS. In an upcoming blog post, we build on the straw-man approach and introduce the decentralization into the system. Our straw-man approach is the first step towards a privacy-preserving and decentralized online advertising system. 
Our goal at this stage is to provide a mechanism for advertisers to create ad campaigns and to be correctly charged, based on the user's interactions with their ads. In addition, the system aims at keeping track of the ads viewed by users, so that (i) advertisers can have feedback about their ad campaigns and (ii) users can be rewarded for interacting with ads. All these goals should be achieved while preserving the privacy of the ad policies and the user behaviour. We assume three different roles in this straw-man approach: (i) the users, (ii) the advertisers, and (iii) an ad Campaigns Manager (CM). The users are incentivized to view and interact with ads created by the advertisers. The CM is responsible (a) for orchestrating the protocol, (b) for handling the ad views reporting and finally (c) for calculating the rewards that need to be paid to users according to the policies defined by the advertisers. Note that the straw-man approach assumes a semi-trusted Campaign Manager. This role will be removed in the full THEMIS protocol, which is described in the next blogpost. For the sake of this initial introduction to THEMIS, relying on a CM entity allows us to simplify the explanation. Privacy-preserving Ad Matching In THEMIS – as in the current Brave Rewards architecture – the user downloads an updated version of the ad catalog, which includes ads and their metadata from all active ad-campaigns. The CM maintains and provides the ad catalog for users to download periodically. The ad-matching happens locally based on a pre-trained model and the user's interests extracted from their web browsing history in a similar way as in Brave Rewards. In order to serve and match ads to the user interests, no data leaves the user's device. This creates a walled garden of browsing data that is used for recommending the best matching ad while user privacy is guaranteed. Incentives for Ad-viewing User incentives to interact with ads are at the core of THEMIS. Each viewed/clicked ad yields an amount of BAT rewards. Different ads may provide different amounts of reward to the users. This amount is agreed by the corresponding ad creator (i.e. the advertiser) and the Campaign Manager. The user can claim rewards periodically (e.g. every week or every month). In Figure 4, we present an overview of the reward request generation and the steps to claim the ad rewards in the straw-man approach. The straw-man approach We now outline the different phases of the straw-man version of THEMIS. Phase 1: Defining Ad Rewards In order for an advertiser to have their ad campaign included in the next version of the ad catalog, they first need to agree with the CM on the policies of the given campaign (i.e. rewards per ad, ad impressions per user, etc.) (step 1 in Figure 4). Once the advertiser agrees off-band with the CM on the ads that will be part of the campaign and respective payouts, the CM encodes the agreed policy as a vector, [[\policyvector]], where each index corresponds to the amount of tokens that an ad yields when viewed/clicked (e.g. Ad1: 0.4 BAT, Ad2: 2 BAT, Ad3: 1.2 BAT). The CM stores this vector privately and the advertiser needs to trust that the policies are respected (this will be addressed in the full THEMIS protocol – see next blog post). The indices used in the policy vector maintain the same order as the corresponding indices of its ads in the ad catalog. Figure 4. High-level overview of the user rewards claiming procedure of our straw-man approach. 
Figure 4. High-level overview of the user rewards claiming procedure of our straw-man approach. Advertisers can set how much they reward each ad click without disclosing that to competitors. The user can claim rewards without exposing which ads they interacted with.

In addition to agreeing with the CM on the ad policies for the campaign, the advertiser also transfers to an escrow account the necessary funds to cover the campaign. At the end of the campaign, unused funds (i.e. when users have not clicked/interacted with enough ads to use up all the escrowed funds) are released back to the advertisers. For the sake of simplicity, throughout this section we consider one advertiser who participates in our ad platform and runs multiple ad campaigns. In a real-world scenario, many advertisers can participate, running many ad campaigns simultaneously. We also consider as agreed policies the amount of tokens an ad provides as reward to a clicking user.

Phase 2: Claiming Ad Rewards

The user locally generates an interaction vector, which keeps track of the number of times each ad of the catalog was viewed/clicked (e.g. Ad1 was viewed 3 times, Ad2 was viewed 0 times, Ad3 was viewed 2 times). In every payout period, the user encrypts the state of the interaction vector. More technically, let ac (ac in Figure 4) be the interaction vector containing the number of views/clicks of the user for each ad, where element i of the vector ac represents the number of times ad i was viewed/clicked. In every payout period, the user generates a new ephemeral key pair (sk, pk) to ensure the unlinkability of the payout requests. The user then encrypts each entry of ac under the newly generated public key:

$$ E = \left[ Enc(pk, ac_1), \ldots, Enc(pk, ac_N) \right] $$

where ac_i is the number of interactions for ad i, and N is the total number of ads. The user then sends E to the Campaign Manager (step 2a in Figure 4).

Note that the CM cannot decrypt the received vector and thus cannot learn the user's ad interactions (and consequently their interests). Instead, it leverages the additive homomorphic property of the underlying encryption scheme (as described in the Background Section) to calculate the sum of all payouts based on the interactions encoded in the encrypted vector E (step 2b in Figure 4). More formally, the CM computes the aggregate payout for the user as follows:

$$ agg = \sum_{i=1}^{N} P[i] \cdot E[i] $$

where P[i] is the ad policy associated with the ad in position i of the vector. The CM then signs the computed aggregate result:

$$ sig = Sign(agg, sk_{CM}) $$

and sends the 2-tuple (agg, sig) back to the user. Upon receiving this tuple (step 2c in Figure 4), the user verifies the signature on the result, Verify(pk_CM, agg, sig), and proceeds with decrypting the aggregate:

$$ payout = Dec(sk, agg) $$

As a final step, the user proves the correctness of the decryption by creating a zero-knowledge proof of correct decryption, proof (i.e. proving that the decrypted value is, in fact, associated with the encrypted aggregate).
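This excerpt does not pin down which additively homomorphic scheme THEMIS uses, so the following is only a rough, self-contained Python sketch of step 2b using textbook Paillier as a stand-in. The key sizes are toy-sized, rewards are expressed as integers (milli-BAT), the zero-knowledge proofs and signatures are omitted, and all function names and numbers are our own illustrative assumptions rather than the THEMIS implementation.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen():
    # Fixed small known primes so the sketch is fast and deterministic;
    # a real deployment would generate fresh primes of >= 1024 bits each.
    p, q = 104729, 104723
    n = p * q
    g = n + 1                          # standard simplified generator choice
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^(-1) mod n, with L(u) = (u - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)   # needs Python 3.8+ for pow(x, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def he_add(pk, c1, c2):
    # Enc(m1) * Enc(m2) mod n^2 is an encryption of m1 + m2.
    n, _ = pk
    return (c1 * c2) % (n * n)

def he_scalar_mul(pk, c, k):
    # Enc(m)^k mod n^2 is an encryption of k * m.
    n, _ = pk
    return pow(c, k, n * n)

# --- illustrative run of Phase 2 (hypothetical numbers) ----------------------
pk, sk = keygen()                      # user's ephemeral key pair (pk, sk)

ad_clicks = [3, 0, 2]                  # interaction vector ac, kept on-device
policy = [400, 2000, 1200]             # CM's policy vector P, in milli-BAT

enc_vector = [encrypt(pk, c) for c in ad_clicks]   # E, sent to the CM

# CM side: weighted sum over ciphertexts only -- it never sees ad_clicks.
enc_agg = encrypt(pk, 0)
for c, reward in zip(enc_vector, policy):
    enc_agg = he_add(pk, enc_agg, he_scalar_mul(pk, c, reward))

assert decrypt(pk, sk, enc_agg) == 3 * 400 + 0 * 2000 + 2 * 1200   # 3600 milli-BAT
```

The point of the sketch is only that the weighted sum agg can be computed entirely over ciphertexts; the signature on agg and the user's proof of correct decryption sit on top of this and are not shown.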
Phase 3: Payment Request

Finally, the user generates the payment request and sends the following 4-tuple to the CM (step 3a in Figure 4):

$$ (payout, agg, sig, proof) $$

As a next step (step 3b in Figure 4), the CM verifies that the payment request is valid. More specifically, the CM rejects the payment request of the user if

$$ Verify(pk_{CM}, agg, sig) = \bot $$

or

$$ VerifyProof(proof) = \bot $$

Otherwise, it proceeds with transferring the proper amount of rewards (equal to payout) to the user.

Reporting to Advertisers

THEMIS aims at providing feedback about the ad campaigns to the advertisers. During the billing procedure, the advertisers need to be able to verify the integrity of the statistics reported by the Campaign Manager regarding the number of times an ad was viewed/clicked by the users. To achieve this, whenever a new version of the ad catalog is published and retrieved by the users, a new threshold key pair is generated, with public key pk_T. This key is used to encrypt a copy of the interaction vector that the user sends to the CM (recall step 2a in Figure 4).

The key used in this step, pk_T, is a public threshold key generated in a distributed way. In order to generate such a key, a pool of multiple participating users, the consensus pool, is created. (Users are incentivized to participate in this pool; the details of how these incentives are orchestrated, and of how the consensus pool is created, are left to the next blog post.) The consensus pool runs a distributed key generation algorithm, which results in a shared public key pk_T and each consensus pool participant owning a private key share sk_{T,i}. The public key pk_T is sent to the CM, so that the key can be shared with all users. Hence, apart from E, each user also sends E' to the CM, where:

$$ E' = \left[ Enc(pk_T, ac_1), \ldots, Enc(pk_T, ac_N) \right] $$

When the ad campaign is over, all the E' vectors generated by the users are processed to calculate how many rewards were paid out per advertiser. By using the same additively homomorphic properties used to calculate the payouts for the users, the CM can calculate the payout per advertiser from all the E'. Thus, considering all the E' of the campaign, the encrypted payout total for the ad in position i can be calculated by the CM in the following way:

$$ R_i = \sum_{u=1}^{U} E'_u[i] $$

where U is the number of users and E'_u is the encrypted vector of user u. Each R_i can be decrypted under the threshold key pair, which requires a minimum number of pool participants to cooperate in the decryption. The decrypted values are shared with the advertisers, allowing them to verify whether the funds used by the CM to pay the users are the correct ones, based on the users' interactions with the ad campaign.
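Continuing the toy Paillier sketch from above (and glossing over the fact that a real deployment would use a threshold scheme so that no single party holds the decryption key), the per-ad reporting totals R_i are just component-wise homomorphic sums across the users' E' vectors. The user counts below are made-up numbers, and the snippet reuses keygen, encrypt, decrypt, and he_add from the earlier block.

```python
pk_T, sk_T = keygen()                  # stand-in for the threshold key pair

users_clicks = [[3, 0, 2],             # hypothetical per-user interaction vectors
                [1, 1, 0],
                [0, 4, 1]]
users_enc = [[encrypt(pk_T, c) for c in row] for row in users_clicks]   # the E' vectors

num_ads = 3
per_ad_totals = []
for i in range(num_ads):
    acc = encrypt(pk_T, 0)
    for row in users_enc:
        acc = he_add(pk_T, acc, row[i])        # R_i as a running homomorphic sum
    per_ad_totals.append(acc)

assert [decrypt(pk_T, sk_T, c) for c in per_ad_totals] == [4, 5, 3]
```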
In this first blog post, we presented the motivation and goals for THEMIS, a novel privacy-by-design ad platform designed and implemented by Brave's Research team. Similarly to Brave Ads, THEMIS provides strong anonymity to users. In addition, it is decentralized and requires zero trust from users and advertisers. The THEMIS core protocol (i) provides auditability to all participants, (ii) rewards users for interacting with ads, and (iii) allows advertisers to verify the performance and billing reports of their ad campaigns.

In addition to introducing and motivating THEMIS, we outlined a simplified straw-man design of the core protocol, which guarantees that:

- The user receives the rewards they earned by interacting with ads.
- As with Brave Ads, THEMIS does not disclose which ads users have interacted with to Brave or to advertisers.
- The Campaign Manager is able to correctly apply the pricing policy of each ad without disclosing any information to users or to potential competitors of the advertiser.

However, the straw-man approach does not cover all the properties we would like to achieve for THEMIS, particularly in terms of trust. In the straw-man approach, the Campaign Manager is responsible for orchestrating the protocol: it handles the users' requests for payouts and calculates the rewards. In addition, the CM stores the ad policies privately, and both users and advertisers need to trust that the policies are respected when the payouts are calculated. Finally, the straw-man system does not address a privacy-preserving payment mechanism for rewards.

In the upcoming blog post, we improve the simplified straw-man approach and present the end-to-end THEMIS protocol; we will also present a scalability evaluation, which shows how THEMIS operates at scale.
CommonCrawl
Outline of logic Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics and to a lesser extent computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify the correct (or valid) and incorrect (or fallacious) inferences. Logicians study the criteria for the evaluation of arguments. Foundations of logic Philosophy of logic • Analytic-synthetic distinction • Antinomy • A priori and a posteriori • Definition • Description • Entailment • Identity (philosophy) • Inference • Logical form • Logical implication • Logical truth • Logical consequence • Name • Necessity • Material conditional • Meaning (linguistic) • Meaning (non-linguistic) • Paradox  (list) • Possible world • Presupposition • Probability • Quantification • Reason • Reasoning • Reference • Semantics • Strict conditional • Syntax (logic) • Truth • Truth value • Validity Branches of logic • Affine logic • Alethic logic • Aristotelian logic • Boolean logic • Buddhist logic • Bunched logic • Categorical logic • Classical logic • Computability logic • Deontic logic • Dependence logic • Description logic • Deviant logic • Doxastic logic • Epistemic logic • First-order logic • Formal logic • Free logic • Fuzzy logic • Higher-order logic • Infinitary logic • Informal logic • Intensional logic • Intermediate logic • Interpretability logic • Intuitionistic logic • Linear logic • Many-valued logic • Mathematical logic • Metalogic • Minimal logic • Modal logic • Non-Aristotelian logic • Non-classical logic • Noncommutative logic • Non-monotonic logic • Ordered logic • Paraconsistent logic • Philosophical logic • Predicate logic • Propositional logic • Provability logic • Quantum logic • Relevance logic • Sequential logic • Strict logic • Substructural logic • Syllogistic logic • Symbolic logic • Temporal logic • Term logic • Topical logic • Traditional logic • Zeroth-order logic Philosophical logic Informal logic and critical thinking Informal logic Critical thinking Argumentation theory • Argument • Argument map • Accuracy and precision • Ad hoc hypothesis • Ambiguity • Analysis • Attacking Faulty Reasoning • Belief • Belief bias • Bias • Cognitive bias • Confirmation bias • Credibility • Critical pedagogy • Critical reading • Decidophobia • Decision making • Dispositional and occurrent belief • Emotional reasoning • Evidence • Expert • Explanation • Explanatory power • Fact • Fallacy • Higher-order thinking • Inquiry • Interpretive discussion • Occam's razor • Opinion • Practical syllogism • Precision questioning • Propaganda • Propaganda techniques • Prudence • Pseudophilosophy • Reasoning • Relevance • Rhetoric • Rigour • Socratic questioning • Source credibility • Source criticism • Theory of justification • Topical logic • Vagueness Theories of deduction • Anti-psychologism • Conceptualism • Constructivism • Conventionalism • Counterpart theory • Deflationary theory of truth • Dialetheism • Fictionalism • Formalism (philosophy) • Game theory • Illuminationist philosophy • Logical atomism • Logical holism • Logicism • Modal fictionalism • Nominalism • Polylogism • Pragmatism • 
Preintuitionism • Proof theory • Psychologism • Ramism • Semantic theory of truth • Sophism • Trivialism • Ultrafinitism Fallacies • Fallacy  (list) – incorrect argumentation in reasoning resulting in a misconception or presumption. By accident or design, fallacies may exploit emotional triggers in the listener or interlocutor (appeal to emotion), or take advantage of social relationships between people (e.g. argument from authority). Fallacious arguments are often structured using rhetorical patterns that obscure any logical argument. Fallacies can be used to win arguments regardless of the merits. There are dozens of types of fallacies. Formal logic • Formal logic – Mathematical logic, symbolic logic and formal logic are largely, if not completely synonymous. The essential feature of this field is the use of formal languages to express the ideas whose logical validity is being studied. • List of mathematical logic topics Logical symbols Main articles: Table of logic symbols and Symbol (formal) • Logical variables • Propositional variable • Predicate variable • Literal • Metavariable • Logical constants • Logical connective • Quantifier • Identity • Brackets Logical connectives Logical connective • Converse implication • Converse nonimplication • Exclusive or • Logical NOR • Logical biconditional • Logical conjunction • Logical disjunction • Material implication • Material nonimplication • Negation • Sheffer stroke Common logical connectives • Tautology/True $\top $ • Alternative denial (NAND gate) $\uparrow $ • Converse implication $\leftarrow $ • Implication (IMPLY gate) $\rightarrow $ • Disjunction (OR gate) $\lor $ • Negation (NOT gate) $\neg $ • Exclusive or (XOR gate) $\not \leftrightarrow $ • Biconditional (XNOR gate) $\leftrightarrow $ • Statement (Digital buffer) • Joint denial (NOR gate) $\downarrow $ • Nonimplication (NIMPLY gate) $\nrightarrow $ • Converse nonimplication $\nleftarrow $ • Conjunction (AND gate) $\land $ • Contradiction/False $\bot $  Philosophy portal Strings of symbols Main article: Well-formed formula • Atomic formula • Open sentence Types of propositions Proposition • Analytic proposition • Axiom • Atomic sentence • Clause (logic) • Contingent proposition • Contradiction • Logical truth • Propositional formula • Rule of inference • Sentence (mathematical logic) • Sequent • Statement (logic) • Subalternation • Tautology • Theorem Rules of inference Rule of inference  (list) • Biconditional elimination • Biconditional introduction • Case analysis • Commutativity of conjunction • Conjunction introduction • Constructive dilemma • Contraposition (traditional logic) • Conversion (logic) • De Morgan's laws • Destructive dilemma • Disjunction elimination • Disjunction introduction • Disjunctive syllogism • Double negation elimination • Generalization (logic) • Hypothetical syllogism • Law of excluded middle • Law of identity • Modus ponendo tollens • Modus ponens • Modus tollens • Obversion • Principle of contradiction • Resolution (logic) • Simplification • Transposition (logic) Formal theories Main article: Theory (mathematical logic) • Formal proof • List of first-order theories Expressions in a metalanguage Metalanguage • Metalinguistic variable • Deductive system • Metatheorem • Metatheory • Interpretation Propositional logic Propositional logic • Absorption law • Clause (logic) • Deductive closure • Distributive property • Entailment • Formation rule • Functional completeness • Intermediate logic • Literal (mathematical logic) • Logical connective • Logical 
consequence • Negation normal form • Open sentence • Propositional calculus • Propositional formula • Propositional variable • Rule of inference • Strict conditional • Substitution instance • Truth table • Zeroth-order logic Boolean logic • Boolean algebra   (list) • Boolean logic • Boolean algebra (structure) • Boolean algebras canonically defined • Introduction to Boolean algebra • Complete Boolean algebra • Free Boolean algebra • Monadic Boolean algebra • Residuated Boolean algebra • Two-element Boolean algebra • Modal algebra • Derivative algebra (abstract algebra) • Relation algebra • Absorption law • Laws of Form • De Morgan's laws • Algebraic normal form • Canonical form (Boolean algebra) • Boolean conjunctive query • Boolean-valued model • Boolean domain • Boolean expression • Boolean ring • Boolean function • Boolean-valued function • Parity function • Symmetric Boolean function • Conditioned disjunction • Field of sets • Functional completeness • Implicant • Logic alphabet • Logic redundancy • Logical connective • Logical matrix • Product term • True quantified Boolean formula • Truth table Predicate logic Predicate logic • Atomic formula • Atomic sentence • Domain of discourse • Empty domain • Extension (predicate logic) • First-order logic • First-order predicate • Formation rule • Free variables and bound variables • Generalization (logic) • Monadic predicate calculus • Predicate (mathematical logic) • Predicate logic • Predicate variable • Quantification • Second-order predicate • Sentence (mathematical logic) • Universal instantiation Relations Mathematical relation • Finitary relation • Antisymmetric relation • Asymmetric relation • Bijection • Bijection, injection and surjection • Binary relation • Composition of relations • Congruence relation • Connected relation • Converse relation • Coreflexive relation • Covering relation • Cyclic order • Dense relation • Dependence relation • Dependency relation • Directed set • Equivalence relation • Euclidean relation • Homogeneous relation • Idempotence • Intransitivity • Involutive relation • Partial equivalence relation • Partial function • Partially ordered set • Preorder • Prewellordering • Propositional function • Quasitransitive relation • Reflexive relation • Serial relation • Surjective function • Symmetric relation • Ternary relation • Transitive relation • Trichotomy (mathematics) • Well-founded relation Mathematical logic Mathematical logic Set theory Set theory  (list) • Aleph null • Bijection, injection and surjection • Binary set • Cantor's diagonal argument • Cantor's first uncountability proof • Cantor's theorem • Cardinality of the continuum • Cardinal number • Codomain • Complement (set theory) • Constructible universe • Continuum hypothesis • Countable set • Decidable set • Denumerable set • Disjoint sets • Disjoint union • Domain of a function • Effective enumeration • Element (mathematics) • Empty function • Empty set • Enumeration • Extensionality • Finite set • Forcing (mathematics) • Function (set theory) • Function composition • Generalized continuum hypothesis • Index set • Infinite set • Intension • Intersection (set theory) • Inverse function • Large cardinal • Löwenheim–Skolem theorem • Map (mathematics) • Multiset • Morse–Kelley set theory • Naïve set theory • One-to-one correspondence • Ordered pair • Partition of a set • Pointed set • Power set • Projection (set theory) • Proper subset • Proper superset • Range of a function • Russell's paradox • Sequence (mathematics) • Set (mathematics) • Set of 
all sets • Simple theorems in the algebra of sets • Singleton (mathematics) • Skolem paradox • Subset • Superset • Tuple • Uncountable set • Union (set theory) • Von Neumann–Bernays–Gödel set theory • Zermelo set theory • Zermelo–Fraenkel set theory Metalogic Metalogic – The study of the metatheory of logic. • Completeness (logic) • Syntax (logic) • Consistency • Decidability (logic) • Deductive system • Interpretation (logic) • Cantor's theorem • Church's theorem • Church's thesis • Effective method • Formal system • Gödel's completeness theorem • Gödel's first incompleteness theorem • Gödel's second incompleteness theorem • Independence (mathematical logic) • Logical consequence • Löwenheim–Skolem theorem • Metalanguage • Metasyntactic variable • Metatheorem • Object language – see metalanguage • Symbol (formal) • Type–token distinction • Use–mention distinction • Well-formed formula Proof theory Proof theory – The study of deductive apparatus. • Axiom • Deductive system • Formal proof • Formal system • Formal theorem • Syntactic consequence • Syntax (logic) • Transformation rules Model theory Model theory – The study of interpretation of formal systems. • Interpretation (logic) • Logical validity • Non-standard model • Normal model • Model • Semantic consequence • Truth value Computability theory Computability theory – branch of mathematical logic that originated in the 1930s with the study of computable functions and Turing degrees. The field has grown to include the study of generalized computability and definability. The basic questions addressed by recursion theory are "What does it mean for a function from the natural numbers to themselves to be computable?" and "How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched. 
• Alpha recursion theory • Arithmetical set • Church–Turing thesis • Computability logic • Computable function • Computation • Decision problem • Effective method • Entscheidungsproblem • Enumeration • Forcing (computability) • Halting problem • History of the Church–Turing thesis • Lambda calculus • List of undecidable problems • Post correspondence problem • Post's theorem • Primitive recursive function • Recursion (computer science) • Recursive language • Recursive set • Recursively enumerable language • Recursively enumerable set • Reduction (recursion theory) • Turing machine Semantics of natural language Formal semantics (natural language) • Formal systems • Alternative semantics • Categorial grammar • Combinatory categorial grammar • Discourse representation theory • Dynamic semantics • Inquisitive semantics • Montague grammar • Situation semantics • Concepts • Compositionality • Counterfactuals • Generalized quantifier • Logic translation • Mereology • Modality (natural language) • Opaque context • Presupposition • Propositional attitudes • Scope (formal semantics) • Type shifter • Vagueness Classical logic Classical logic • Properties of classical logics: • Law of the excluded middle • Double negation elimination • Law of noncontradiction • Principle of explosion • Monotonicity of entailment • Idempotency of entailment • Commutativity of conjunction • De Morgan duality – every logical operator is dual to another • Term logic • General concepts in classical logic • Baralipton • Baroco • Bivalence • Boolean logic • Boolean-valued function • Categorical proposition • Distribution of terms • End term • Enthymeme • Immediate inference • Law of contraries • Logical connective • Logical cube • Logical hexagon • Major term • Middle term • Minor term • Octagon of Prophecies • Organon • Polysyllogism • Port-Royal Logic • Premise • Prior Analytics • Relative term • Sorites paradox • Square of opposition • Sum of Logic • Syllogism • Tetralemma • Truth function Modal logic Modal logic • Alethic logic • Deontic logic • Doxastic logic • Epistemic logic • Temporal logic Non-classical logic Non-classical logic • Affine logic • Bunched logic • Computability logic • Decision theory • Description logic • Deviant logic • Free logic • Fuzzy logic • Game theory • Intensional logic • Intuitionistic logic • Linear logic • Many-valued logic • Minimal logic • Non-monotonic logic • Noncommutative logic • Paraconsistent logic • Probability theory • Quantum logic • Relevance logic • Strict logic • Substructural logic Concepts of logic • Deductive reasoning • Inductive reasoning • Abductive reasoning Mathematical logic • Proof theory • Set theory • Formal system • Predicate logic • Predicate • Higher-order logic • Propositional calculus • Proposition • Boolean algebra • Boolean logic • Truth value • Venn diagram • Peirce's law • Aristotelian logic • Non-Aristotelian logic • Informal logic • Fuzzy logic • Infinitary logic • Infinity • Categorical logic • Linear logic • Metalogic • order • Ordered logic • Temporal logic • Linear temporal logic • Linear temporal logic to Büchi automaton • Sequential logic • Provability logic • Interpretability logic • Interpretability • Quantum logic • Relevant logic • Consequent • Affirming the consequent • Antecedent • Denying the antecedent • Theorem • Axiom • Axiomatic system • Axiomatization • Conditional proof • Invalid proof • Degree of truth • Truth • Truth condition • Truth function • Double negation • Double negation elimination • Fallacy • Existential fallacy • Logical 
fallacy • Syllogistic fallacy • Type theory • Game theory • Game semantics • Rule of inference • Inference procedure • Inference rule • Introduction rule • Law of excluded middle • Law of non-contradiction • Logical constant • Logical connective • Quantifier • Logic gate • Boolean Function • Quantum logic gate • Tautology • Logical assertion • Logical conditional • Logical biconditional • Logical equivalence • Logical AND • Negation • Logical OR • Logical NAND • Logical NOR • Contradiction • Subalternation • Logicism • Polysyllogism • Syllogism • Hypothetical syllogism • Major premise • Minor premise • Term • Singular term • Major term • Middle term • Quantification • Plural quantification • Logical argument • Validity • Soundness • Inverse (logic) • Non sequitur • Tolerance • Satisfiability • Logical language • Paradox • Polish notation • Principia Mathematica • Quod erat demonstrandum • Reductio ad absurdum • Rhetoric • Self-reference • Necessary and sufficient • Sufficient condition • Nonfirstorderizability • Occam's Razor • Socratic dialogue • Socratic method • Argument form • Logic programming • Unification History of logic History of logic Literature about logic Journals • Journal of Logic, Language and Information • Journal of Philosophical Logic • Linguistics and Philosophy Books • A System of Logic • Attacking Faulty Reasoning • Begriffsschrift • Categories (Aristotle) • Charles Sanders Peirce bibliography • De Interpretatione • Gödel, Escher, Bach • Introduction to Mathematical Philosophy • Language, Truth, and Logic • Laws of Form • Novum Organum • On Formally Undecidable Propositions of Principia Mathematica and Related Systems • Organon • Philosophy of Arithmetic • Polish Logic • Port-Royal Logic • Posterior Analytics • Principia Mathematica • Principles of Mathematical Logic • Prior Analytics • Rhetoric (Aristotle) • Sophistical Refutations • Sum of Logic • The Art of Being Right • The Foundations of Arithmetic • Topics (Aristotle) • Tractatus Logico-Philosophicus Logic organizations • Association for Symbolic Logic Logicians • List of logicians • List of philosophers of language See also • Index of logic articles • Mathematics • List of basic mathematics topics • List of mathematics articles • Philosophy • List of basic philosophy topics • List of philosophy topics • Outline of discrete mathematics – for introductory set theory and other supporting material External links • Taxonomy of Logical Fallacies • forall x: an introduction to formal logic, by P.D. Magnus, covers sentential and quantified logic • Translation Tips, by Peter Suber, for translating from English into logical notation • Math & Logic: The history of formal mathematical, logical, linguistic and methodological ideas. In The Dictionary of the History of Ideas. 
• Logic test Test your logic skills • Logic Self-Taught: A Workbook (originally prepared for on-line logic instruction) Logic • Outline • History Major fields • Computer science • Formal semantics (natural language) • Inference • Philosophy of logic • Proof • Semantics of logic • Syntax Logics • Classical • Informal • Critical thinking • Reason • Mathematical • Non-classical • Philosophical Theories • Argumentation • Metalogic • Metamathematics • Set Foundations • Abduction • Analytic and synthetic propositions • Contradiction • Paradox • Antinomy • Deduction • Deductive closure • Definition • Description • Entailment • Linguistic • Form • Induction • Logical truth • Name • Necessity and sufficiency • Premise • Probability • Reference • Statement • Substitution • Truth • Validity Lists topics • Mathematical logic • Boolean algebra • Set theory other • Logicians • Rules of inference • Paradoxes • Fallacies • Logic symbols •  Philosophy portal • Category • WikiProject (talk) • changes
Wikipedia
\begin{document}
\title[Control Policies for HGI Performance]{Control Policies Approaching HGI Performance in Heavy Traffic for Resource Sharing Networks}
\date{}
\subjclass[2010]{Primary: 60K25, 68M20, 90B36. Secondary: 60J70.}
\keywords{Stochastic networks, dynamic control, heavy traffic, diffusion approximations, Brownian control problems, reflected Brownian motions, threshold policies, resource sharing networks, Internet flows.}
\author[Budhiraja]{Amarjit Budhiraja$^1$} \author[Johnson]{Dane Johnson$^1$}
\address{$^1$Department of Statistics and Operations Research, 304 Hanes Hall, University of North Carolina, Chapel Hill, NC 27599}
\email{[email protected], [email protected]}
\maketitle
\begin{abstract} We consider resource sharing networks of the form introduced in the work of Massouli\'{e} and Roberts (2000) as models for Internet flows. The goal is to study the open problem, formulated in Harrison et al. (2014), of constructing simple form rate allocation policies for broad families of resource sharing networks with associated costs converging to the Hierarchical Greedy Ideal performance in the heavy traffic limit. We consider two types of cost criteria, an infinite horizon discounted cost, and a long time average cost per unit time. We introduce a sequence of rate allocation control policies that are determined in terms of certain thresholds for the scaled queue length processes and prove that, under conditions, both types of costs associated with these policies converge in the heavy traffic limit to the corresponding HGI performance. The conditions needed for these results are satisfied by all the examples considered in Harrison et al. (2014). \end{abstract}
\section{Introduction} In \cite{harmandhayan} the authors have formulated an interesting and challenging open problem for resource sharing networks that were introduced in the work of Massouli\'{e} and Roberts \cite{masrob} as models for Internet flows. A typical network of interest consists of $I$ resources (labeled $1, \ldots , I$) with associated capacities $C_i$, $i = 1, \ldots , I$. Jobs of type $1, \ldots , J$ arrive according to independent Poisson processes with rates depending on the job-type, and the job-sizes of the different job-types are exponentially distributed with parameters once more depending on the type. Usual assumptions on mutual independence are made. The processing of a job is accomplished by allocating a {\em flow rate} to it over time and a job departs from the system when the integrated flow rate equals the size of the job. A typical job-type requires simultaneous processing by several resources in the network. This relationship between job-types and resources is described through an $I\times J$ incidence matrix $K$ for which $K_{ij}= 1$ if the $j$-th job-type requires processing by resource $i$ and $K_{ij}=0$ otherwise. Denoting by $x = (x_1, \ldots , x_J)'$ the vector of flow rates allocated to various job-types at any given time instant, $x$ must satisfy the capacity constraint $Kx \le C$, where $C= (C_1, \ldots , C_I)'$. One of the basic problems for such networks is to construct ``good'' dynamic control policies that allocate resource capacities to jobs in the system. A ``good'' performance is usually quantified in terms of an appropriate cost function. One can formulate an optimal stochastic control problem using such a cost function; however, in general such control problems are intractable and therefore one considers an asymptotic formulation under a suitable scaling.
The paper \cite{harmandhayan} formulates a Brownian control problem (BCP) that formally approximates the system manager's control under heavy traffic conditions. Since finding optimal solutions of such general Brownian control problems and constructing asymptotically optimal control policies for the network based on such solutions is a notoriously hard problem, the paper \cite{harmandhayan} proposes a different approach in which the goal is not to seek an asymptotically optimal solution for the network but rather control policies that achieve the so called {\em Hierarchical Greedy Ideal} (HGI) performance in the heavy traffic limit. Formally speaking, HGI performance is the cost associated with a control in the BCP (which is in general sub-optimal), under which (I) no resource's capacity is underutilized when there is work for that resource in the system, and (II) the total number of jobs of each type at any given instant is the minimum consistent with the vector of workloads for the various resources. Desirability of such control policies has been argued in great detail in \cite{harmandhayan} through simulation and numerical examples and will not be revisited here. The main open problem formulated in \cite{harmandhayan} is to construct simple form rate allocation policies for broad families of resource sharing networks with associated costs converging to the HGI performance determined from the corresponding BCP. The goal of this work is to make progress on this open problem. We consider two types of cost criteria, the first is an infinite horizon discounted cost (see \eqref{eq: costdisc}) and the second is a long time average cost per unit time (see \eqref{eq: costerg}). In particular the second cost criterion is analogous to the cost function considered in \cite{harmandhayan}. We introduce a sequence of rate allocation control policies that are determined in terms of certain thresholds for the scaled queue length processes and prove in Theorems \ref{thm:thm6.5} and \ref{thm:thm6.5disc} that, under conditions, the costs \eqref{eq: costerg} and \eqref{eq: costdisc} associated with these policies converge in the heavy traffic limit to the corresponding HGI performance. We now comment on the conditions that are used in establishing the above results. The first main condition (Condition \ref{cond:loctrafcond}) we need is the existence of {\em local traffic} on each resource, namely for each resource $i$ there is a unique job type that only uses resource $i$. This basic condition, first introduced in \cite{kankelleewil}, is also a key assumption in \cite{harmandhayan} and is needed in order to ensure that the state space of the {\em workload process} is all of the positive orthant (see Section \ref{sec:hgi} for a discussion of this point). Our second condition (Condition \ref{cond:HT1}) is a standard heavy traffic condition and a stability condition for diffusion scaled workload processes. The stability condition will be key in Section \ref{sec:unifmom} when establishing moment bounds that are uniform in time and scaling parameter. We now describe the final main condition used in this work. In Section \ref{sec:polandmaires} we will see that the collection of all job-types can be decomposed into the so called {\em primary} jobs and {\em secondary } jobs. Primary jobs are those with `high' holding cost and intuitively are the ones we want to process first. 
It will also be seen in Section \ref{sec:polandmaires} that the collection $\mathcal{S}^1$ of all job-types that only require processing from a single resource is contained in the collection $\mathcal{S}^s$ of all secondary jobs. Our third main condition, formulated as Condition \ref{cond:viableRankExists}, says that there is a {\em ranking} of all job-types in $\mathcal{S}^m \doteq \mathcal{S}^s\setminus \mathcal{S}^1$. A precise notion of a ranking is given in Definition \ref{def:viableRank}, but roughly speaking, the job-types with larger rank value will get higher `attention' in a certain sense under our proposed policy. We note that the ranking is given through a deterministic map that only depends on system parameters and not on the state of the system. The condition is somewhat nontransparent and notationally cumbersome and so we provide two sufficient conditions in Theorems \ref{thm:SuffSubsetJobs} and \ref{thm:SuffRemoveJobCond} for Condition \ref{cond:viableRankExists} to hold. We also discuss in Remarks \ref{rem:firsthm} and \ref{rem:secthm} some examples where one of these sufficient conditions holds. In particular, all the examples in \cite{harmandhayan} (2LLN, 3LLN, C3LN, and the negative example of Section 13 therein) satisfy Condition \ref{cond:viableRankExists}. Furthermore, there are many other networks not covered by Theorems \ref{thm:SuffSubsetJobs} and \ref{thm:SuffRemoveJobCond} where Condition \ref{cond:viableRankExists} is satisfied, and in Example \ref{exam:out} we provide one such example. Finally, it is not hard to construct examples where Condition \ref{cond:viableRankExists} fails, and in Example \ref{exam:outout} we give such an example. Construction of simple form rate allocation policies that achieve HGI performance in the heavy traffic limit for general families of models as in Example \ref{exam:outout} remains a challenging open problem. We expect that suitable notions of state dependent ranking maps will be needed in order to use the ideas developed in the current work for treating such models; however, the proofs and constructions are expected to be substantially more involved. Our rate allocation policy is introduced in Definition \ref{def:workAllocScheme}. Implementation of this policy requires first determining the collection of secondary jobs. This step, using the definition in \eqref{eq:primjobdef}, can be completed easily by solving a finite collection of linear programming problems. The next step is to determine a {\em viable ranking} (if it exists) of all jobs in $\mathcal{S}^m$. In general, when $\mathcal{S}^m$ is very large, determining this ranking may be a numerically hard problem; however, as discussed in Section \ref{sec:simsuffcond}, for many examples this ranking can be given explicitly in a simple manner. Once a ranking is determined, the policy in Definition \ref{def:workAllocScheme} is given explicitly in terms of arbitrary positive constants $c_1, c_2$ with $c_1<c_2$ and $\alpha \in (0, 1/2)$. Roughly speaking, our approach is applicable to systems where job-types have a certain ordering of ``urgency'' in the sense that, regardless of the particular workload, we want as much of it as possible to come from the least urgent job types. A second concern that needs to be addressed is that a resource should work at `near' full capacity when there is a `non-negligible' amount of work for it.
A detailed discussion of how the proposed policy achieves these goals is given in Remark \ref{rem:poldisc}, where we also comment on the connection between this policy and the UFO policies proposed in \cite{harmandhayan}. We now comment on the proofs of our main results, Theorems \ref{thm:thm6.5} and \ref{thm:thm6.5disc}. Both results rely on large deviation probability estimates and stopping time constructions of the form introduced first in the works of Bell and Williams \cite{belwil1, belwil2} (see also \cite{budgho1} and \cite{atakum}). A key result is Theorem \ref{thm:discCostInefBnd} which relates the cost under our policy with the workload cost function $\mathcal{C}$ in \eqref{eq:eq942}. This estimate is crucial in achieving property (II) of the HGI asymptotically. Asymptotic achievement of property (I) of HGI is a consequence of Theorem \ref{thm:finTimeConvToRBM}, the estimate in \eqref{eq:eqworkldfi} and continuity properties of the Skorohod map. The proof of Theorem \ref{thm:thm6.5} requires additional moment estimates that are uniform in time and the scaling parameter (see Section \ref{sec:unifmom}). A key such estimate is given in Theorem \ref{thm:WmomBnd}, the proof of which relies on the construction of a suitable Lyapunov function (see Proposition \ref{thm:timeDecayV}). Once uniform moment bounds are available, one can argue tightness of certain path occupation measures (see Theorem \ref{thm:occMeasTight}) and characterize their limit points in a suitable manner (see Theorem \ref{thm:limitMeasProp}). The desired cost convergence then follows readily by appealing to the continuous mapping theorem and uniform integrability estimates. The paper is organized as follows. In Section \ref{sec:backg} we introduce the state dynamics, cost functions of interest, and two of our main conditions. Section \ref{sec:hgi} gives the precise definition of Hierarchical Greedy Ideal performance in terms of certain costs associated with $I$-dimensional reflected Brownian motions. In Section \ref{sec:polandmaires} we introduce our final key condition (Condition \ref{cond:viableRankExists}), present our dynamic rate allocation policy, and give our two main convergence results: Theorems \ref{thm:thm6.5} and \ref{thm:thm6.5disc}. Section \ref{sec:example} discusses Condition \ref{cond:viableRankExists} and presents some sufficient conditions for it to be satisfied. This section also gives an example where the condition fails to hold. Sections \ref{sec:secworkcost} - \ref{sec:pathoccmzr} form the technical heart of this work. Section \ref{sec:secworkcost} proves some useful properties of the workload cost function $\mathcal{C}(\cdot)$ introduced in \eqref{eq:eq942} and Section \ref{sec:ratallpol} studies some important structural properties of our proposed rate allocation policy. Section \ref{sec:ldest} is technically the most demanding part of this work. It provides some key estimates on costs under our scheme in terms of the workload cost function and establishes certain moment estimates that are uniform in time and the scaling parameter. In Section \ref{sec:pathoccmzr} we introduce certain path occupation measures, prove their tightness, and characterize the limit points. Finally, Section \ref{sec:pfsmainthms} completes the proofs of our two main results. An appendix contains some standard large deviation estimates for Poisson processes. The following notation will be used. For a Polish space $\mathbb{S}$, denote the corresponding Borel $\sigma$-field by $\mathcal{B}(\mathbb{S})$.
Denote by $\mathcal{P}(\mathbb{S})$ (resp.\ $\mathcal{M}(\mathbb{S})$) the space of probability measures (resp. finite measures) on $\mathbb{S}$, equipped with the topology of weak convergence. For $f \colon \mathbb{S} \to \mathbb{R}$, let $\|f\|_\infty \doteq \sup_{x \in \mathbb{S}} |f(x)|$. For a Polish space $\mathbb{S}$ and $T>0$, denote by $C([0,T]:\mathbb{S})$ (resp.\ $D([0,T]:\mathbb{S})$) the space of continuous functions (resp.\ right continuous functions with left limits) from $[0,T]$ to $ \mathbb{S}$, endowed with the uniform topology (resp.\ Skorokhod topology). We say a collection $\{ X^n \}$ of $\mathbb{S}$-valued random variables is tight if the distributions of $X^n$ are tight in $\mathcal{P}(\mathbb{S})$. Equalities and inequalities involving vectors are interpreted component-wise. \section{General Background} \label{sec:backg} Assume there are $J$ types of jobs and $I$ resources for processing them. \ The network is described through the $I\times J$ matrix $K$ that has entries $K_{ij}=1$ if resource $i$ works on job type $j$, and $K_{i,j}=0$ otherwise. We will assume (for simplicity) that no two columns of $K$ are identical, namely, given a subset of resources, there is at most one job-type that has this subset as the associated set of resources. Given $m \in \mathbb{N}$, we let $\mathbb{N}_m \doteq \{1, 2, \ldots m\}$. In particular, $\mathbb{N}_I=\{1, \ldots I\}$ and $\mathbb{N}_J=\{1, \ldots J\}$. Denote by $N_j$ the set of resources that work on type $j$ jobs, i.e. \begin{equation*} N_j \doteq \{i \in \mathbb{N}_I: K_{i,j}=1\}. \end{equation*} Let $\mathcal{S}^1$ be the collection of all job types that use only one resource. I.e. \begin{equation*} \mathcal{S}^1 \doteq \{j \in \mathbb{N}_J: {\mathbf{1}}^T Ke_j = \sum_{i=1}^I K_{ij} = 1\}, \end{equation*} where $e_j$ is the unit vector in $\mathbb{R}^J$ with $1$ in the $j$-th coordinate and $\mathbf{1}$ is the $I$-dimensional vector of ones. Throughout we assume that for every resource there is a unique job type that only uses that resource, namely the following condition is satisfied. \begin{condition} \label{cond:loctrafcond} $\bigcup_{j \in \mathcal{S}^1} N_j = \mathbb{N}_I $ \end{condition} We denote the unique job-type that uses only resource $i$ as $\check j(i)$. Similarly for $j \in \mathcal{S}^1$, we denote by $\hat i(j)$ the unique resource that processes this job-type. The capacity for resource $i$ is given by $C_{i}$. Let $\{\eta _{j}^{r}(k)\}_{k=1}^{\infty }$ be the i.i.d. inter-arrival times for the $j$ -th job type and let $\{\Delta _{j}^{r}(k)\}_{k=1}^{\infty }$ be the associated i.i.d. amounts of work for the $j$-th job type. If at a given instant work of type $j$ is processed at rate $x_{j}$ then the capacity constraint requires that $C\geq Kx$. We assume the $\{\eta _{j}^{r}(k)\}_{k=1}^{\infty }$ are exponentially distributed with rates $\lambda _{j}^{r}$ and the $\{\Delta _{j}^{r}(k)\}_{k=1}^{\infty }$ are exponentially distributed with rates $\mu _{j}^{r}$. \ Define Poisson processes \begin{equation*} A_{j}^{r}(t)=\max \left\{ k:\sum_{i=1}^{k}\eta _{j}^{r}(i)\leq t\right\},\; S_{j}^{r}(t)=\max \left\{ k:\sum_{i=1}^{k}\Delta _{j}^{r}(i)\leq t\right\} \text{.} \end{equation*} \ Let $\varrho_{j}^{r}=\frac{\lambda _{j}^{r}}{\mu _{j}^{r}}$ and $\varrho^r \doteq (\varrho_j^r)_{j=1}^J$. The following will be our main heavy traffic condition. 
The requirement $v^*>0$ will ensure the stability of the reflected Brownian motion in \eqref{eq:eqrbm} and will be a key ingredient for uniform moment estimates in Section \ref{sec:unifmom}. \begin{condition} \label{cond:HT1} $C > K\varrho^{r}$ for all $r$. For some $\lambda_j, \mu_j \in (0,\infty)$, $\lim_{r\rightarrow \infty } \lambda_{j}^{r} = \lambda_j$, $\lim_{r\rightarrow \infty } \mu_{j}^{r} = \mu_j$, for all $j \in \mathbb{N}_J$. With $\varrho_j = \frac{\lambda_j}{\mu_j}$ and $\varrho = (\varrho_j)_{j\in \mathbb{N}_J}$, $ C= K\varrho$, $ \lim_{r\rightarrow \infty}r(\varrho - \varrho^{r})= \beta^*$, $v^* \doteq K\beta^*>0$. \end{condition} Consider a $J$-dimensional absolutely continuous, nonnegative, non-decreasing stochastic process $ \{B^r(t)\}$ where $B^r_j(t)$ represents the amount of type $j$ work processed by time $t$ under a given policy. Note that such a process must satisfy the resource constraint: \begin{equation} K\dot{B}^r(t) \le C, \mbox{ for all } t \ge 0.\label{eq:resconst} \end{equation} Define the $I$-dimensional capacity-utilization process $T^r = KB^r$. Then $ T^r_i(t)$ represents the amount of work processed by the $i$-th resource by time $t$. Letting $I^r(t)= tC -T^r(t)$, $I^r_i(t)$ represents the unused capacity of resource $i$ by time $t$. Let $\{Q^{r}(t)\}$ be the $J$-dimensional process, where $Q^{r}_j(t)$ represents the number of jobs in the queue for type $j$ jobs. Then \begin{equation} Q^{r}(t)=q^r+A^{r}(t)-S^{r}\left(B^r(t)\right),\label{eq:queleneqn} \end{equation} where $q^r$ denotes the initial queue-length vector. For $B^r$ to be a valid rate allocation policy, $Q^r$ defined by \eqref{eq:queleneqn} must satisfy \begin{equation} \label{eq:qrnonneg} Q^r(t) \ge 0 \mbox{ for all } t\ge 0. \end{equation} Any absolutely continuous, nonnegative, non-decreasing stochastic process $ \{B^r(t)\}$ satisfying \eqref{eq:resconst}, \eqref{eq:qrnonneg} and appropriate non-anticipativity conditions will be referred to as a {\bf resource allocation policy} or simply a {\bf control policy}. Non-anticipativity conditions on $\{B^r\}$ are formulated using multi-parameter filtrations as in \cite{budgho2} (see Definition 2.6 (iv) therein). We omit the details here; however, we note that from Theorem 5.4 of \cite{budgho2} it follows that the control policy constructed in Section \ref{sec:secallostra} is non-anticipative in the sense of \cite{budgho2}. Let $W^{r}(t)$ be the $I$-dimensional workload process given by $W^{r}(t)=KM^{r}Q^{r}(t)$ where $M^{r}$ is the diagonal matrix with entries $1/\mu _{j}^{r}$.
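As a simple illustration of the above objects (this small example is ours, included only to fix ideas; it is not one of the networks from \cite{harmandhayan} analyzed later), consider $I=2$ resources and $J=3$ job-types, where job-types $1$ and $2$ are the local-traffic types of resources $1$ and $2$ respectively and job-type $3$ requires simultaneous processing by both resources. Then
\begin{equation*}
K=\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1\end{pmatrix}, \qquad N_1=\{1\},\; N_2=\{2\},\; N_3=\{1,2\}, \qquad \mathcal{S}^1=\{1,2\},
\end{equation*}
so that Condition \ref{cond:loctrafcond} holds with $\check j(1)=1$ and $\check j(2)=2$. The resource constraint \eqref{eq:resconst} reads $\dot B^r_1(t)+\dot B^r_3(t)\le C_1$ and $\dot B^r_2(t)+\dot B^r_3(t)\le C_2$, and the workload of resource $1$ is $W^r_1(t) = Q^r_1(t)/\mu^r_1 + Q^r_3(t)/\mu^r_3$.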
Define the fluid-scaled quantities by \begin{align}\label{eq:eq935} \begin{split} &\bar{T}^{r}(t)=T^{r}(r^{2}t)/r^{2},\;\; \bar{B}^{r}(t)=B^{r}(r^{2}t)/r^{2}, \;\; \bar{I}^{r}(t)=I^{r}(r^{2}t)/r^{2} \\ &\bar{A}^{r}(t)=A^{r}(r^{2}t)/r^{2},\;\; \bar{S}^{r}(t)=S^{r}(r^{2}t)/r^{2}, \\ & \bar{Q}^{r}(t)=Q^{r}(r^{2}t)/r^{2}, \;\; \bar{W}^{r}(t)=W^{r}(r^{2}t)/r^{2} \end{split} \end{align} and the diffusion-scaled quantities \begin{align}\label{eq:eq938} \begin{split} &\hat{T}^{r}(t)=T^{r}(r^{2}t)/r,\;\;\hat{B}^{r}(t)=B^{r}(r^{2}t)/r,\;\;\hat{I}^{r}(t)=I^{r}(r^{2}t)/r, \\ &\hat{A}^{r}(t)=(A^{r}(r^{2}t)-\lambda ^{r}r^{2}t)/r,\;\; \hat{S}^{r}(t)=(S^{r}(r^{2}t)-\mu ^{r}r^{2}t)/r, \\ &\hat{Q}^{r}(t)=Q^{r}(r^{2}t)/r,\;\; \hat{W}^{r}(t)=W^{r}(r^{2}t)/r\text{.} \end{split} \end{align} Note that, with $G^r \doteq KM^r$, $\hat q^r \doteq q^r/r$ and $\hat w^r \doteq G^r\hat q^r$, \begin{align} \hat{W}^{r}(t) = G^r\hat{Q}^{r}(t) = \hat w^r +G^{r}(\hat{A}^{r}(t)-\hat{S}^{r}(\bar{B}^{r}(t)))+t\,r(K\varrho ^{r}-C)+r\bar{I}^{r}(t)\text{.}\label{eq:eq939} \end{align} Let $h$ be a given $J$-dimensional strictly positive vector. Associated with a control policy $B^r$, we will be interested in two types of cost structures: \begin{itemize} \item \textbf{Infinite horizon discounted cost:} Fix $\theta \in (0,\infty)$. \begin{equation} \label{eq: costdisc} J_D^r(B^r, q^r) \doteq \int_0^{\infty} e^{-\theta t} E \left(h \cdot \hat Q^r(t)\right) dt. \end{equation} \item \textbf{Long-term cost per unit time:} \begin{equation} \label{eq: costerg} J_E^r(B^r, q^r) \doteq \limsup_{T\to \infty} \frac{1}{T} \int_0^T E \left(h \cdot \hat Q^r(t)\right) dt. \end{equation} \end{itemize} The goal of this work is to construct dynamic rate allocation policies that asymptotically achieve the Hierarchical Greedy Ideal (HGI) performance as $r\to \infty$. The next section gives the precise definition of HGI performance. \section{Hierarchical Greedy Ideal} \label{sec:hgi} Similar to $M^r$ and $G^r$ in Section \ref{sec:backg}, let $M$ be the $J\times J$ diagonal matrix with entries $\{1/\mu_j\}_{j=1}^J$ and let $G \doteq KM$. Define, for $w \in \mathbb{R}_+^I$ (regarded as a workload vector), the set of possible associated queue lengths $\mathcal{Q}(w)$ by the relation \begin{equation*} \mathcal{Q}(w)\doteq \{ q\in \mathbb{R}_+^J: Gq=w\}. \end{equation*} Note that by our assumption on $K$, $\mathcal{Q}(w)$ is compact for every $w \in \mathbb{R}_+^I$. Also the local traffic condition (Condition \ref{cond:loctrafcond}) ensures that $\mathcal{Q}(w)$ is nonempty for every $w \in \mathbb{R}_+^I$. HGI performance introduced in \cite{harmandhayan} is motivated by the Brownian control problem (BCP), as introduced in \cite{har1}, associated with the network in Section \ref{sec:backg} and the holding cost vector $h$. This BCP has an equivalent workload formulation (EWF) from the results of \cite{harvan} (see Section 10 of \cite{harmandhayan}). The EWF in the current setting is a singular control problem with state space that is all of the positive orthant $\mathbb{R}_+^I$ (due to the local traffic condition). In the EWF the cost is given by a nonlinear function $\mathcal{C}$ defined as \begin{equation} \label{eq:eq942} \mathcal{C}(w) \doteq \inf_{q\in \mathcal{Q}(w)}\{ h\cdot q\}, \; w \in \mathbb{R}_+^I . \end{equation} One particular control in the EWF is the one corresponding to no-action in the interior and normal reflection on the boundary of the orthant.
This control yields the (coordinate-wise) minimal controlled state process in the EWF given as the $I$-dimensional reflected Brownian motion in $\mathbb{R}_+^I$ with normal reflection. The HGI performance is the cost, in terms of the workload cost function $\mathcal{C}$, associated with this minimal state process. We now give precise definitions. We first recall the definition of the Skorohod problem and Skorohod map with normal reflection on the $d$-dimensional positive orthant. \begin{definition} \label{def:smsp} Let $\psi \in D([0,T]: \mathbb{R}^d)$ such that $\psi(0)\in \mathbb{R}^d_+$. The pair $(\varphi,\eta) \in D([0,T]: \mathbb{R}^d\times \mathbb{R}^d)$ is said to solve the {\em Skorohod problem} for $\psi$ (in $\mathbb{R}^d_+$, with normal reflection) if $\varphi = \psi+\eta$; $\varphi(t)\in \mathbb{R}^d_+$ for all $t \ge 0$; $\eta(0)=0$; $\eta$ is nondecreasing and $\int_{[0,T]} 1_{\{\varphi_i(t)>0\}} d\eta_i(t) =0$. We write $\varphi = \Gamma_d(\psi)$ and refer to $\Gamma_d$ as the $d$-dimensional {\em Skorohod map}. \end{definition} It is known that there is a unique solution to the above Skorohod problem for every $\psi \in D([0,T]: \mathbb{R}^d)$ and that the Skorohod map has the following Lipschitz property: There exists $K_{\Gamma_d} \in (0,\infty)$ such that for all $T>0$ and $\psi_i \in D([0,T]: \mathbb{R}^d)$ such that $\psi_i(0) \in \mathbb{R}^d_+$, $i=1,2$, $$\sup_{0\le t \le T}|\Gamma_d(\psi_1)(t) - \Gamma_d(\psi_2)(t)| \le K_{\Gamma_d} \sup_{0\le t \le T}|\psi_1(t)- \psi_2(t)|.$$ Also note that for $\psi \in D([0,T]: \mathbb{R}^d)$, $\Gamma_d(\psi)_i = \Gamma_1(\psi_i)$ for all $i = 1, \ldots, d$. When $d=I$ we will write $\Gamma_d=\Gamma_I$ as simply $\Gamma$. Let $(\check \Omega, \check \mathcal{F}, \{\check \mathcal{F}_t\}, \check P)$ be a filtered probability space on which is given a $J$-dimensional standard $\{\check \mathcal{F}_t\}$-Brownian motion $\{\check B(t)\}$. Let $\zeta_j \doteq 2\varrho_j/\mu_j$ for $j \in \mathbb{N}_J$ and let $\mbox{Diag}(\mathbf{\zeta})$ be the $J\times J$ diagonal matrix with $j$-th diagonal entry $\zeta_j$. Let $\Lambda \doteq K(\mbox{Diag}(\mathbf{\zeta}))^{1/2}$. For $w_0 \in \mathbb{R}_+^I$, let $\check W^{w_0}$ be an $\mathbb{R}_+^I$-valued continuous stochastic process defined as \begin{equation} \label{eq:eqrbm} \check W^{w_0}(t) = \Gamma (w_0 - v^*\iota + \Lambda \check B(\cdot))(t), \; t \ge 0 \end{equation} where $\iota: [0,\infty) \to [0,\infty)$ is the identity map. Then $\check W^{w_0}$ is an $I$-dimensional reflected Brownian motion with initial value $w_0$, drift $-v^*$ and covariance matrix $\Lambda \Lambda'$. It is well known \cite{harwil1} that $\{\check W^{w_0}\}_{w_0 \in \mathbb{R}_+^I}$ defines a Markov process that has a unique invariant probability distribution which we denote as $\pi$. Suppose $\hat q^r \to q_0$ as $r\to \infty$ and let $w_0\doteq Gq_0$. Then the HGI costs associated with the costs $J_D^r(B^r, q^r)$ and $J_E^r(B^r, q^r)$ are given, respectively, as \begin{align} \mbox{HGI}_D(w_0) &\doteq \int_0^{\infty} e^{-\theta t} E \left(\mathcal{C}(\check W^{w_0}(t))\right) dt\nonumber\\ \mbox{HGI}_E &\doteq \int_{\mathbb{R}_+^I} \mathcal{C}(w) \pi(dw). \label{eq:hgicosts} \end{align} \section{Control Policy and Convergence to HGI} \label{sec:polandmaires} This section will introduce our final key condition on the model and present our main results. Denote by $g_1, \ldots g_J$ the columns of the matrix $G$, i.e. $G=[g_1, \ldots, g_J]$.
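To illustrate $\mathcal{Q}(\cdot)$ and $\mathcal{C}(\cdot)$, consider again the illustrative two-resource network from Section \ref{sec:backg} (recall that this example is ours and is not analyzed in the sequel). There $G = \begin{pmatrix} 1/\mu_1 & 0 & 1/\mu_3\\ 0 & 1/\mu_2 & 1/\mu_3\end{pmatrix}$, and for $w \in \mathbb{R}_+^2$ the elements of $\mathcal{Q}(w)$ are parametrized by $q_3 = z \in [0, \mu_3\min\{w_1,w_2\}]$, $q_1 = \mu_1(w_1 - z/\mu_3)$, $q_2 = \mu_2(w_2 - z/\mu_3)$. Minimizing the resulting linear function of $z$ gives
\begin{equation*}
\mathcal{C}(w) = h_1\mu_1 w_1 + h_2\mu_2 w_2 + \min\{h_3\mu_3 - h_1\mu_1 - h_2\mu_2,\, 0\}\,\min\{w_1,w_2\},
\end{equation*}
which is piecewise linear and, whenever $h_3\mu_3 < h_1\mu_1 + h_2\mu_2$, nonlinear. In particular, $\mathcal{C}(g_3) = \min\{(h_1\mu_1+h_2\mu_2)/\mu_3,\, h_3\}$.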
We will partition the set $\mathbb{N}_J$ into sets $\mathcal{S}^p$ and $\mathcal{S}^s$ corresponding to the set of \emph{primary jobs} and the set of \emph{secondary jobs} respectively, defined as follows: \begin{equation} \label{eq:primjobdef} \mathcal{S}^p \doteq \{j \in \mathbb{N}_J: \mathcal{C}(g_j) < h_j\}, \; \mathcal{S}^s \doteq \mathbb{N}_J\setminus \mathcal{S}^p. \end{equation} Intuitively, $\mathcal{S}^p$ corresponds to the set of jobs that we want to process first. Within the set of secondary jobs we will distinguish the set $\mathcal{S}^1$, introduced earlier, of all job types that use only one resource. Note that $\mathcal{S}^1$ is indeed a subset of $\mathcal{S}^s$ since for $j \in \mathcal{S}^1$, $\mathcal{Q}(g_j) = \{e_j\}$ and so \begin{equation*} \mathcal{C}(g_j) = \inf_{q\in \mathcal{Q}(g_j)} \{h \cdot q\} = h_j. \end{equation*} We now introduce the notion of \emph{minimal covering} sets associated with any $j \in \mathbb{N}_J$ and also define, for given $F \subset \mathbb{N}_J \setminus \{j\}$, minimal covering sets of $j$ that are not covering sets for any $j^{\prime }\in F$. \begin{definition} \label{def:minCovCollect} Given $E\subset\mathbb{N}_J$ and $k\in \mathbb{N}_J$ we define $\mathcal{M}^{E,k}$ to be the collection of all minimal sets of jobs in $E$ other than $k$ such that $N_k$ is contained in the set of all resources associated with the jobs in the set, namely, \begin{equation*} \mathcal{M}^{E,k} \doteq \left\{M\subset E\setminus \{k\}: N_{k}\subseteq \bigcup _{j\in M}N_{j} \mbox{ and } N_k \nsubseteq \bigcup _{j\in M\setminus \{l\}}N_{j} \mbox{ for all } l\in M\right\}. \end{equation*} In addition, given $F\subset \mathbb{N}_J$ define $\mathcal{M}_{F}^{E,k}$ to be the collection of all $M \in \mathcal{M}^{E,k}$ such that the set of resources associated with any job in $F$ is not contained in the set of resources associated with jobs in $M$, namely, \begin{equation*} \mathcal{M}_{F}^{E,k} \doteq \left\{M \in \mathcal{M}^{E,k}: N_{l}\nsubseteq \bigcup _{j\in M}N_{j} \mbox{ for any } l\in F\right\}. \end{equation*} \end{definition} Minimal covering sets will be used to determine the collection of jobs which do not have lower priority than any other job in a given subset of $\mathbb{N}_J$. For that we introduce the following definition. Let $\mathcal{S}^m \doteq \mathcal{S}^s\setminus \mathcal{S}^1$ be the collection of secondary jobs that use multiple resources and let $m \doteq |\mathcal{S}^m|$. Denote the $j$-th column of $K$ by $K_j$, i.e. $K = [K_1, \ldots , K_J]$. \begin{definition} \label{def:optJob} Given sets $E,F\subset \mathcal{S}^m$ define the set $\mathcal{O}_{F}^{E}\subset E$ by $j^{\prime }\in \mathcal{O}_{F}^{E}$ if and only if for all $M\in \mathcal{M}_{F}^{E\bigcup \mathcal{S}^1,j^{\prime }}$ \begin{equation} \label{eq:eq929} \mu _{j^{\prime }}h_{j^{\prime }}+\mathcal{C}\left( \sum_{j\in M}K_j-K_{j^{\prime }}\right) \leq \mathcal{C}\left(\sum_{j\in M}K_j\right), \end{equation} and the set $\mathcal{O}^{E}\subset E$ by $j^{\prime }\in \mathcal{O}^{E}$ if and only if \eqref{eq:eq929} holds for all $M\in \mathcal{M}^{E\bigcup \mathcal{S}^1,j^{\prime }}$. \end{definition} Note that since an $M\in \mathcal{M}_{F}^{E\bigcup \mathcal{S}^1,j^{\prime }}$ covers $j'$, $\sum_{j\in M} K_j - K_{j'}$ is a nonnegative vector. We now introduce the notion of a \emph{viable ranking} of jobs in $\mathcal{S}^m$.
\begin{definition} \label{def:viableRank}A viable ranking of jobs in $\mathcal{S}^m$ is a bijection $\rho: \mathbb{N}_m\rightarrow \mathcal{S}^m$, such that for all $k\in \mathbb{N}_m$, $\rho(k)\in \mathcal{O}_{F_k}^{E_k}$, where for $k \in \mathbb{N}_m$, $F_k \doteq \{\rho(1),...,\rho(k-1)\}$ and $ E_k \doteq \mathcal{S}^m \setminus F_k$, with the convention that $\mathcal{O} _{F_k}^{E_k}=\mathcal{O}^{\mathcal{S}^m}$ for $k=1$. \end{definition} For an interpretation of a viable ranking, see Remark \ref{rem:poldisc}. The following will be one of main assumptions that will be taken to hold throughout this work. This assumption (and Conditions \ref{cond:loctrafcond} and \ref{cond:HT1}) will not be noted explicitly in the statements of the results. \begin{condition} \label{cond:viableRankExists} There exists a viable ranking of jobs in $\mathcal{S}^m$. \end{condition} In Section \ref{sec:example} we illustrate through examples that this condition holds for a broad family of models. We can now present our dynamic rate allocation policy. \subsection{Resource Allocation Policy} \label{sec:secallostra} For $k\in \mathbb{N}_m$ let \begin{equation}\label{eq:eqzetaik} \zeta _{i}^{k}=\{j\in \mathbb{N}_{J}\setminus F_{k+1}:K_{i,j}=1\}. \end{equation} This class can be interpreted as the collection of jobs which impact node $i$ and have a higher processing priority than job $\rho(k)$ (see Remark \ref{eq:jobprior}). \ Let $0<\alpha <1/2$ and $0<c_{1}<c_{2}$. \ Define \begin{equation*} \sigma ^{r}(t)\doteq\left\{ j\in \mathbb{N}_{J}:Q_{j}^{r}(t)\geq c_{2}r^{\alpha }\right\} \end{equation*} to be the set of job-types whose queue length is at least $c_{2}r^{\alpha }$ at time $t$. \ Define \begin{equation*} \varpi ^{r}(t)\doteq \bigcup _{j\in \sigma ^{r}(t)}N_{j} \end{equation*} to be the subset of $\mathbb{N}_I$ consisting of resources associated with job-types in $\sigma ^{r}(t)$, namely with queue lengths at least $ c_{2}r^{\alpha }$. We will use the following work allocation scheme. \begin{definition} \label{def:workAllocScheme}Let $\delta =\frac{\min_{j}\varrho _{j}}{2J }$. For $t\ge 0$, define the vector $y(t) = (y_j(t))_{j \in \mathbb{N}_J}$ as follows.\newline \noindent \textbf{Primary jobs.} For $j\in \mathcal{S}^p$ \begin{equation} y_{j}(t)\doteq \left\{ \begin{array}{cc} \varrho _{j}+\delta, & \text{ if } j\in \sigma ^{r}(t) \\ \ & \\ \varrho _{j}- \frac{\delta }{ J2^{m +3} }, & \text{ if } j\notin \sigma ^{r}(t). \end{array} \right. \label{eq:A1prim} \end{equation} \newline \noindent \textbf{Jobs in $\mathcal{S}^m$.} For $k\in \mathbb{N}_m$ \begin{equation} y_{\rho (k)}(t)\doteq \left\{ \begin{array}{cc} \varrho _{\rho (k)}-2^{k-m -2}\delta, & \text{ if } \zeta _{i}^{k}\cap \sigma ^{r}(t)\neq \emptyset \mbox{ for all } i\in N_{\rho (k)} \\ \ & \\ \varrho _{\rho (k)}+2^{k-m -2}\delta, & \text{ if } \zeta _{i}^{k}\cap \sigma ^{r}(t)= \emptyset \mbox{ for some } i\in N_{\rho (k)} \mbox{ and } \rho (k)\in \sigma ^{r}(t) \\ \ & \\ \varrho _{\rho (k)}-2^{-k-m -2}\delta, & \text{ if } \zeta _{i}^{k}\cap \sigma ^{r}(t)= \emptyset \mbox{ for some } i\in N_{\rho (k)} \mbox{ and } \rho (k)\notin \sigma ^{r}(t). \end{array} \right. \label{eq:A2mult} \end{equation} \newline \noindent \textbf{Jobs in $\mathcal{S}^1$.} For $j \in \mathcal{S}^1$ \begin{equation} y_{j}(t)\doteq \left\{ \begin{array}{cc} C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t), & \text{ if } \hat i(j)\in \varpi^{r}(t) \\ \ & \\ \varrho _{j}-\delta, & \text{ if } \hat i(j)\notin \varpi^{r}(t). \end{array} \right. 
\label{eq:A3sing}
\end{equation}
For all $j \in \mathbb{N}_J$, define stopping times
\begin{equation*}
\tau _{1}^{j}=\inf \{t\geq 0:Q_{j}^{r}(t)<c_{1}r^{\alpha }\}\text{,}
\end{equation*}
\begin{equation*}
\tau _{2l}^{j}=\inf \{t\geq \tau _{2l-1}^{j}:Q_{j}^{r}(t)\geq c_{2}r^{\alpha }\}\text{,}
\end{equation*}
and
\begin{equation*}
\tau _{2l+1}^{j}=\inf \{t\geq \tau _{2l}^{j}:Q_{j}^{r}(t)<c_{1}r^{\alpha }\}
\text{,}
\end{equation*}
for all $l>0$. \ Define $\mathcal{E}^{r}(t)\in \{0,1\}^{J}$ by
\begin{equation*}
\mathcal{E}_{j}^r(t)\doteq \left\{
\begin{array}{cc}
1, & \text{ if } t\in \left[ \tau _{2l-1}^j,\tau _{2l}^j\right) \text{ for some } l>0 \\
\ & \\
0, & \text{ otherwise. }
\end{array}
\right.
\end{equation*}
Finally, define $x(t) \in \mathbb{R}^J$ as $x_j(t) \doteq y_j(t) 1_{\{ \mathcal{E}_{j}^r(t) =0\}}$ for $j \in \mathbb{N}_J$.
\end{definition}

We note that $y_j(t)$ and $x_j(t)$ depend on $r$ but this dependence is suppressed in the notation.
\begin{remark}
	\label{rem:poldisc}
	Roughly speaking, under the allocation policy in Definition \ref{def:workAllocScheme}, jobs are prioritized as follows:
	\begin{equation}
		\label{eq:jobprior}
		\mathcal{S}^p \succ \mathcal{S}^1 \succ \rho(m) \succ \rho(m-1) \succ \cdots \succ \rho(1).
	\end{equation}
	However the above priority order needs to be interpreted with some care.
	We will call the $j$-th queue {\em stocked} at time instant $t$ if $Q^r_j(t) \ge c_2 r^{\alpha}$ and we will call it {\em depleted} at time instant $t$ if $Q^r_j(t) < c_1 r^{\alpha}$.
	The last line of Definition \ref{def:workAllocScheme} says that any queue once depleted does not get any rate allocation until it gets stocked again.
	Beyond that, rate allocation by a typical resource $i$ is decided as follows.
	First we consider all the primary job-types associated with resource $i$, i.e. $j \in \mathcal{S}^p$ such that $K_{ij}=1$. If the associated queue is stocked then it gets a higher-than-nominal rate allocation according to the first line in \eqref{eq:A1prim}, and otherwise a lower-than-nominal allocation as in the second line of \eqref{eq:A1prim}.
	Next we look at all the job-types in $\mathcal{S}^m$ associated with resource $i$. Denote these as $\rho(j_1), \rho(j_2), \ldots, \rho(j_k)$ and assume without loss of generality that $j_1< j_2 < \cdots < j_k$.
	We consider the top ranked job $\rho(j_k)$ first and look at all the resources (including resource $i$) that process this job-type. If every associated resource has at least one job-type rated higher according to \eqref{eq:jobprior} with a stocked queue then the rate allocated to job-type $\rho(j_k)$ is lower than nominal as given in the first line of \eqref{eq:A2mult}.
	On the other hand, if there is at least one associated resource such that none of its job-types that are rated higher than $\rho(j_k)$ (according to \eqref{eq:jobprior}) has a stocked queue, we assign $\rho(j_k)$ a higher-than-nominal flow rate according to the second line in \eqref{eq:A2mult} if the queue for job-type $\rho(j_k)$ is stocked, and a lower-than-nominal flow rate according to the third line in \eqref{eq:A2mult} if it is not. Note that all resources processing job-type $\rho(j_k)$ allocate the same flow rate to it.
	We then successively consider $\rho(j_{k-1}), \rho(j_{k-2}), \ldots, \rho(j_{1})$ and allocate flow rates to them in a similar fashion.
Finally, if the queue of the unique job-type $\check j(i)\in \mathcal{S}^1$ associated with resource $i$ is stocked, we allocate it all remaining capacity of resource $i$ (this may be larger or smaller than the nominal allocation) and if this queue is not stocked we assign it the lower-than-nominal allocation given by the second line in \eqref{eq:A3sing}.

Lemma \ref{lem:admispol} will show that $B^r(t) \doteq \int_0^t x(s) ds$ is nonnegative, nondecreasing and satisfies the resource constraint \eqref{eq:resconst}. Also, clearly the associated $Q^r$ defined by \eqref{eq:queleneqn} satisfies \eqref{eq:qrnonneg}.
Finally, it can be checked that the process $B^r(t)$ is non-anticipative in the sense of Definition 2.6 (iv) of \cite{budgho2}. Thus $B^r$ is a resource allocation policy as defined in Section \ref{sec:backg}.

We remark that the formal priority ordering given in \eqref{eq:jobprior} is consistent with the UFO priority scheme proposed in Section 12 of \cite{harmandhayan} for 2LLN and 3LLN networks. However, the UFO scheme for the C3LN network in \cite{harmandhayan} appears to be of a different form.
\end{remark}

\subsection{Main Results}
Recall that we assume throughout that Conditions \ref{cond:loctrafcond}, \ref{cond:HT1} and \ref{cond:viableRankExists} are satisfied.
We now present the main results of this work. The first result considers the ergodic cost whereas the second considers the discounted cost. Recall $q^r$ introduced in \eqref{eq:queleneqn}.
\begin{theorem}
	\label{thm:thm6.5}
	Suppose $\hat q^r \doteq q^r/r$ satisfies $\sup_{r>0} |\hat q^r| <\infty$. Let $t_{r} \uparrow \infty$ as $r\to \infty$. Then, as $r\to \infty$,
	$\frac{1}{t_{r}}\int_{0}^{t_{r}} h\cdot \hat{Q}^{r}(t)\, dt$ converges in $L^1$ to $\int \mathcal{C}(y)\pi (dy)$. In particular, as $r\to \infty$,
	$$J_E^r(B^r, q^r) \to \mbox{HGI}_E.$$
\end{theorem}

\begin{theorem}
	\label{thm:thm6.5disc}
	Suppose that $\hat q^r \to q_0$ as $r\to \infty$. Let $w_0 = Gq_0$. Then
	\begin{equation*}
		\lim_{r\rightarrow \infty }J_D^r(B^r, q^r) = \mbox{HGI}_D(w_0).
	\end{equation*}
\end{theorem}
Proofs of the above theorems are given in Section \ref{sec:pfsmainthms}.

\section{Verification of Condition \ref{cond:viableRankExists}.}
\label{sec:example}
In this section we will give two more transparent sets of criteria which imply Condition \ref{cond:viableRankExists} and provide some examples of networks which satisfy them. \ Note that these alternative conditions are more restrictive and by no means necessary for Condition \ref{cond:viableRankExists} to hold. \ We present them because for certain types of networks they provide an easy way to verify Condition \ref{cond:viableRankExists}. \ We will then provide an example of a simple network which does not satisfy Condition \ref{cond:viableRankExists} and consequently does not fall within the family of systems analyzed here.

Verifying Condition \ref{cond:viableRankExists} and finding the optimal cost/queue length for a particular workload only involves jobs in $\mathcal{S}^{s}$ (see Theorem \ref{thm:restOptJobs}). For this reason, the sufficient conditions below impose restrictions only on jobs in $\mathcal{S}^{s}$. Finally, for notational convenience, in this section we will denote the job type $j$ that requires service from nodes $i_{1},...,i_{n}$ by $\chi _{i_{1},...,i_{n}}$. Similarly, we will use notation $h_{\chi _{i_{1},...,i_{n}}}$, $\mu _{\chi _{i_{1},...,i_{n}}}$, and $N_{\chi _{i_{1},...,i_{n}}}$ for the corresponding $h_j, \mu_j, N_j$.
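Although Condition \ref{cond:viableRankExists} is formulated through the combinatorial objects of Definitions \ref{def:minCovCollect}, \ref{def:optJob} and \ref{def:viableRank}, for small networks it can be checked by direct enumeration. The following sketch (in Python, with illustrative identifiers of our own choosing; it is not part of the formal development) evaluates $\mathcal{C}$ by solving the linear program in \eqref{eq:eq942}, enumerates minimal covering sets, checks \eqref{eq:eq929}, and searches over all orderings of $\mathcal{S}^m$ for a viable ranking. It assumes that every workload vector that arises is feasible for the linear program (which is the case under the local traffic condition), and it can serve as an independent check on the examples discussed below.
\begin{verbatim}
# Brute-force check of the viable-ranking condition on small examples.
# K: I x J incidence matrix, mu: service rates, h: holding costs.
from itertools import combinations, permutations
import numpy as np
from scipy.optimize import linprog

def workload_cost(w, K, mu, h):
    # C(w) = min{ h.q : G q = w, q >= 0 },  G = K diag(1/mu)
    G = K / mu
    res = linprog(h, A_eq=G, b_eq=w, bounds=[(0, None)] * K.shape[1])
    return res.fun

def N(j, K):                                   # N_j: resources used by job j
    return set(np.flatnonzero(K[:, j]))

def cover(M, K):                               # union of N_j over j in M
    out = set()
    for j in M:
        out |= N(j, K)
    return out

def minimal_covers(E, k, F, K):
    # M_F^{E,k}: minimal M in E\{k} covering N_k and not covering N_l, l in F
    cand, Nk, out = sorted(set(E) - {k}), N(k, K), []
    for size in range(1, len(cand) + 1):
        for M in combinations(cand, size):
            NM = cover(M, K)
            if not Nk <= NM:
                continue
            if any(Nk <= cover(set(M) - {l}, K) for l in M):
                continue                       # not a minimal cover
            if any(N(l, K) <= NM for l in F):
                continue
            out.append(set(M))
    return out

def in_O(jp, E, F, S1, K, mu, h, tol=1e-9):
    # membership of j' in O_F^E: check the defining inequality for all covers
    for M in minimal_covers(set(E) | set(S1), jp, F, K):
        KM = K[:, sorted(M)].sum(axis=1)
        lhs = mu[jp] * h[jp] + workload_cost(KM - K[:, jp], K, mu, h)
        if lhs > workload_cost(KM, K, mu, h) + tol:
            return False
    return True

def classify(K, mu, h):
    # S^p, S^1 and S^m; recall g_j = K_j / mu_j
    J = K.shape[1]
    Sp = {j for j in range(J)
          if workload_cost(K[:, j] / mu[j], K, mu, h) < h[j] - 1e-9}
    Ss = set(range(J)) - Sp
    S1 = {j for j in Ss if K[:, j].sum() == 1}
    return Sp, S1, Ss - S1

def viable_ranking(Sm, S1, K, mu, h):
    # try all orderings of S^m and return the first viable ranking, if any
    for rho in permutations(sorted(Sm)):
        if all(in_O(rho[k], set(Sm) - set(rho[:k]), set(rho[:k]),
                    S1, K, mu, h) for k in range(len(rho))):
            return list(rho)
    return None
\end{verbatim}
The enumeration above is exponential in $J$ and is intended only as an illustration for small networks; none of the results in this paper rely on it.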
\subsection{Some Simple Sufficient Conditions for Condition \protect\ref{cond:viableRankExists}}
\label{sec:simsuffcond}
We present below two basic sufficient (but not necessary) conditions for Condition \ref{cond:viableRankExists} to be satisfied in order to illustrate networks that are covered by our approach.
\begin{theorem}
	\label{thm:SuffSubsetJobs} If for all $j,k\in \mathcal{S}^m$ either $N_{j}\subset N_{k}$, $N_{k}\subset N_{j}$, or $N_{j}\cap N_{k}=\emptyset $ then Condition \ref{cond:viableRankExists} is satisfied.
\end{theorem}
\begin{proof}
	We will use the notation from Definition \ref{def:viableRank}, namely $F_{k}\doteq \{\rho (1),...,\rho (k-1)\}$ and $E_{k}\doteq \mathcal{S}^m\setminus F_{k}$.
	Take $\rho$ to be an arbitrary bijection from $\mathbb{N}_m$ onto $\mathcal{S}^m$ with the property that for all $j,k \in \mathbb{N}_m$ with $j<k$, either $N_{\rho (k)}\subset N_{\rho (j)}$ or $N_{\rho (j)}\cap N_{\rho (k)}=\emptyset $. Note that our assumption in the statement of the theorem ensures that such a bijection always exists.
	We now argue that this $\rho$ defines a viable ranking, namely Condition \ref{cond:viableRankExists} is satisfied.
	For this we need to show that for every $k \in \mathbb{N}_m$, $\rho(k) \in \mathcal{O}^{E_k}_{F_k}$, namely for all $M \in \mathcal{M}_{F_k}^{E_k\bigcup \mathcal{S}^1, \rho(k)}$
	\begin{align}
		\label{eq:eqmurhok}
		\mu_{\rho(k)} h_{\rho(k)} + \mathcal{C}\left(\sum_{j\in M} K_j - K_{\rho(k)}\right) \le \mathcal{C}\left(\sum_{j\in M} K_j \right).
	\end{align}
	Now consider such a $k$ and $M$. Note that $M \subset \{\rho(k+1), \ldots, \rho(m)\} \bigcup \mathcal{S}^1$. Since $M$ defines a minimal covering and, by the assumption of the theorem, any two of the sets $N_{\rho(l)}$, $N_{\rho(l')}$ are either nested or disjoint, if for $l\neq l'$, $\rho(l), \rho(l') \in M$, we must have that $N_{\rho(l)} \cap N_{\rho(l')} = \emptyset$. From minimality of $M$ we also have that, $(\bigcup_{j\in \mathcal{S}^1\cap M} N_j) \cap N_{\rho(l)} = \emptyset$ for every $l\ge k+1$ such that $\rho(l)\in M$. We thus have $\sum_{j\in M}|N_j| = |N_{\rho(k)}|$, which implies that
	\begin{equation}
		\label{eq:eqkjkk}
		\sum_{j\in M} K_j = K_{\rho(k)}.
	\end{equation}
	Therefore,
	\begin{align*}
		\mu_{\rho(k)} h_{\rho(k)} + \mathcal{C}\left(\sum_{j\in M} K_j - K_{\rho(k)}\right) = \mu_{\rho(k)} h_{\rho(k)} = \mu_{\rho(k)} \mathcal{C}(g_{\rho(k)}) = \mathcal{C}(\mu_{\rho(k)}g_{\rho(k)}) = \mathcal{C}(K_{\rho(k)}) = \mathcal{C}(\sum_{j\in M} K_j),
	\end{align*}
	where the first and last equality use \eqref{eq:eqkjkk} and the second equality uses the fact that $\rho(k)$ is a secondary job (so that $\mathcal{C}(g_{\rho(k)}) = h_{\rho(k)}$).
	This proves \eqref{eq:eqmurhok} (in fact with equality) and completes the proof of the theorem.
\end{proof}

\begin{remark}
	\label{rem:firsthm}
	One simple consequence of Theorem \ref{thm:SuffSubsetJobs} is that any network where $\mathcal{S}^m=\emptyset $ (meaning $\mathcal{S}^{s}=\mathcal{S}^1$) satisfies Condition \ref{cond:viableRankExists}. We note that the condition $\mathcal{S}^m=\emptyset $ does not rule out the existence of jobs that require service from multiple nodes. Here is one elementary example to illustrate this point. Suppose $I=3$ and $J=6$ with $\mu_j=1$ for all $j$. Also let $h_{\chi_{1}} = h_{\chi_{2}}= h_{\chi_{3}}=1$, $h_{\chi_{1,2,3}} = h_{\chi_{1,2}} = h_{\chi_{2,3}} =4$. It is easy to check that for this example $\mathcal{S}^m=\emptyset$.
	Another consequence of Theorem \ref{thm:SuffSubsetJobs} is that any network where $\mathcal{S}^m$ only contains one job (for instance a job which impacts all nodes) satisfies Condition \ref{cond:viableRankExists}.
\ In particular any $2$-node network satisfies Condition \ref{cond:viableRankExists}. \ Another basic network covered by Theorem \ref{thm:SuffSubsetJobs} is one with $2n$ nodes where $\mathcal{S}^m=\{\chi _{1,2,\ldots,2n},\chi _{1,2},\chi _{3,4},\ldots,\chi _{2n-1,2n}\}$. Many other examples can be given. In particular, the 2LLN and 3LLN networks of \cite{harmandhayan} satisfy the sufficient condition in Theorem \ref{thm:SuffSubsetJobs}.
\end{remark}

The following theorem provides another sufficient condition for a network to satisfy Condition \ref{cond:viableRankExists}. Recall that $\mathcal{O}^{\mathcal{S}^m}$ is the collection of all $j' \in \mathcal{S}^m$ that satisfy \eqref{eq:eq929} for all $M\subset \mathcal{S}^s\setminus \{j'\}$ that are minimal covering sets for $j'$.

\begin{theorem}
	\label{thm:SuffRemoveJobCond}If for all $j\in \left. \mathcal{S}^m\right\backslash \mathcal{O}^{\mathcal{S}^m}$ and $M\in \mathcal{M}_{\mathcal{O}^{\mathcal{S}^m}}^{\{ \left. \mathcal{S}^m\right\backslash \mathcal{O}^{\mathcal{S}^m} \} \bigcup \mathcal{S}^1,j}$ we have $\sum_{l\in M}|N_{l}|=|N_{j}|$ then Condition \ref{cond:viableRankExists} is satisfied.
\end{theorem}

\begin{proof}
	Consider the following ranking of jobs in $\mathcal{S}^m$. Assign the first $\bar m \doteq \left\vert \mathcal{O}^{\mathcal{S}^m}\right\vert $ ranks arbitrarily to jobs in $\mathcal{O}^{\mathcal{S}^m}$ and the remaining $m-\bar m$ ranks arbitrarily to jobs in $\mathcal{S}^m\left\backslash \mathcal{O}^{\mathcal{S}^m}\right. $. In particular $\rho (k)\in \mathcal{O}^{\mathcal{S}^m}$ for all $k\in \{1,...,\bar m \}$ and $\rho (k)\in \mathcal{S}^m\left\backslash \mathcal{O}^{\mathcal{S}^m}\right. $ for all $k\in \{\bar m +1,...,m \}$. \ Note that, for $k\in \{1,...,\bar m \}$ we have $\mathcal{O}^{\mathcal{S}^m} \subset \mathcal{O}_{F_{k}}^{E_{k}}$ which says that $\rho (k)\in \mathcal{O}_{F_{k}}^{E_{k}}$ for all $k\in \{1,...,\bar m \}$. \ Let now $k\in \{\bar m +1,...,m \}$ be arbitrary and note that $\mathcal{M}_{F_{k}}^{E_{k}\bigcup \mathcal{S}^1,\rho (k)}\subset \mathcal{M}_{\mathcal{O}^{\mathcal{S}^m}}^{ \{\mathcal{S}^m\backslash \mathcal{O}^{\mathcal{S}^m}\}\bigcup \mathcal{S}^1,\rho (k)}$ so for all $M\in \mathcal{M}_{F_{k}}^{E_{k}\bigcup \mathcal{S}^1,\rho (k)}$ we have $\sum_{l\in M}|N_{l}|=|N_{\rho (k)}|$. \ This implies that \eqref{eq:eqkjkk} is satisfied which, as in the proof of Theorem \ref{thm:SuffSubsetJobs}, shows that \eqref{eq:eq929} is satisfied for all $M\in \mathcal{M}_{F_{k}}^{E_{k}\bigcup \mathcal{S}^1,\rho (k)}$ and therefore $\rho (k)\in \mathcal{O}_{F_{k}}^{E_{k}}$ for all $k\in \{\bar m +1,...,m \}$. Thus $\rho$ defines a viable ranking and so Condition \ref{cond:viableRankExists} is satisfied.
\end{proof}

\begin{remark}
	\label{rem:secthm}
	The above theorem provides an easy way to check that Condition \ref{cond:viableRankExists} is satisfied. \ For instance, for $3$-node networks if $\chi _{1,2,3}\in \mathcal{S}^m$, from Theorem \ref{thm:SuffRemoveJobCond}, verification of Condition \ref{cond:viableRankExists} reduces to proving that $\chi _{1,2,3}\in \mathcal{O}^{\mathcal{S}^m}$. \ This is due to the fact that for a $3$-node network $\mathcal{S}^m\subset \{\chi _{1,2,3},\chi _{1,2},\chi _{1,3},\chi _{2,3}\}$, and consequently for any job $j\in \left. \mathcal{S}^m\right\backslash \{\chi _{1,2,3}\}$ and $M\in \mathcal{M}_{\mathcal{\{} \chi _{1,2,3}\}}^{\left. \mathcal{S}^m\right\backslash \mathcal{\{}\chi _{1,2,3}\}\bigcup \mathcal{S}^1,j} $ we must have $M\cap \mathcal{S}^m=\emptyset $ which says that $\sum_{l\in M}|N_{l}|=|N_{j}|$. \ In particular the C3LN network in \cite{harmandhayan} satisfies the sufficient condition in Theorem \ref{thm:SuffRemoveJobCond} with one viable ranking given as $\rho(1) = \chi _{1,2,3}$, $\rho(2) = \chi _{1,2}$, $\rho(3) = \chi _{2,3}$.
	Similarly, for a $4$-node network with $\mathcal{S}^m\subset \{\chi _{1,2,3,4},\chi _{1,2,3},\chi _{1,2,4},\chi _{1,3,4},\chi _{2,3,4}\}$, from Theorem \ref{thm:SuffRemoveJobCond}, verification of Condition \ref{cond:viableRankExists} reduces to proving that $\chi_{1,2,3,4}\in \mathcal{O}^{\mathcal{S}^m}$. Many other examples can be given. In general Theorem \ref{thm:SuffRemoveJobCond} can be useful for verifying Condition \ref{cond:viableRankExists} for networks with a large number of nodes when $\mathcal{S}^m$ has few elements.
	In particular the negative example in Section 13 of \cite{harmandhayan} satisfies the sufficient condition in the above theorem. In that example $J=9$, $I=6$ and $\mathcal{S}^m = \{\chi_{1,2,3}, \chi _{4,5,6}, \chi_{3,6}\}$. It is easy to see that with the values of holding costs and job sizes in the above paper, $\mathcal{O}^{\mathcal{S}^m} = \{\chi_{1,2,3}, \chi _{4,5,6}\}$ and $\mathcal{S}^m\left\backslash \mathcal{O}^{\mathcal{S}^m}\right. = \{\chi_{3,6}\}$, and so the only $M \in \mathcal{M}_{\mathcal{O}^{\mathcal{S}^m}}^{\{ \left. \mathcal{S}^m\right\backslash \mathcal{O}^{\mathcal{S}^m} \} \bigcup \mathcal{S}^1,j}$ for $j = \chi_{3,6}$ is the set $\{\chi_{3}, \chi_{6}\}$, which clearly satisfies the property $\sum_{l\in M}|N_{l}|=|N_{j}|$.
\end{remark}

It should be noted that Theorems \ref{thm:SuffSubsetJobs} and \ref{thm:SuffRemoveJobCond} are much more restrictive than necessary, meaning that the class of networks which satisfy Condition \ref{cond:viableRankExists} is much wider than those covered by Theorem \ref{thm:SuffSubsetJobs} or Theorem \ref{thm:SuffRemoveJobCond}. To illustrate this we provide a simple example of one such network.
\begin{example}
	\label{exam:out}
	Let $I=4$, $J=7$, and
	\begin{align*}\mu _{\chi _{1}}&=\mu _{\chi _{2}}=\mu _{\chi _{3}}=\mu _{\chi _{4}}=\mu _{\chi _{1,2}}=\mu _{\chi _{2,3}}=\mu _{\chi _{1,2,3,4}}=1\\
		h_{\chi _{1}}&=h_{\chi _{2}}=h_{\chi _{3}}=h_{\chi _{4}}=4, h_{\chi _{1,2}}=6, h_{\chi _{2,3}}=7, h_{\chi _{1,2,3,4}}=13.
	\end{align*}
	It is easy to verify that $\mathcal{S}^m = \{\chi _{1,2},\chi _{2,3}, \chi _{1,2,3,4}\}$ and there is exactly one viable ranking as in Definition \ref{def:viableRank} which is $\rho(1)=\chi_{1,2,3,4}, \rho(2)=\chi _{1,2}, \rho(3)=\chi _{2,3}$ (so Condition \ref{cond:viableRankExists} is satisfied). In particular this implies $\mathcal{O}^{\mathcal{S}^m}=\{\chi _{1,2,3,4}\}$. However, note that $N_{\chi _{1,2}}\not\subset N_{\chi _{2,3}}$, $N_{\chi _{2,3}}\not\subset N_{\chi _{1,2}}$, and $N_{\chi _{1,2}}\cap N_{\chi _{2,3}}\neq \emptyset$, so this network does not satisfy the conditions of Theorem \ref{thm:SuffSubsetJobs}. In addition, $\chi _{2,3} \in \left. \mathcal{S}^m\right\backslash \mathcal{O}^{\mathcal{S}^m}$ and $\{ \chi _{1,2}, \chi _{3} \}\in \mathcal{M}_{\mathcal{O}^{\mathcal{S}^m}}^{\{ \left. \mathcal{S}^m\right\backslash \mathcal{O}^{\mathcal{S}^m} \} \bigcup \mathcal{S}^1, \chi _{2,3}}$ but $|N_{\chi _{1,2}}|+|N_{\chi _{3}}|>|N_{\chi _{2,3}}|$ so the conditions of Theorem \ref{thm:SuffRemoveJobCond} are not satisfied either.
Consequently this simple network satisfies Condition \ref{cond:viableRankExists} although it is outside the scope of Theorems \ref{thm:SuffSubsetJobs} and \ref{thm:SuffRemoveJobCond}.
\end{example}

As seen in the last two theorems, Condition \ref{cond:viableRankExists} holds for a broad range of networks. However, there are many interesting cases that are not covered by this condition. We now illustrate this point through an example. In this example $I=3$, $J=6$, and $\mathcal{C}$ is a nondecreasing function; however, a viable ranking does not exist and therefore the techniques of this paper do not apply.
\begin{example}{\protect (An Example That Does Not Satisfy Condition \protect \ref{cond:viableRankExists})}
	\label{exam:outout}
	Suppose that
	\begin{align*}\mu _{\chi _{1}}&=\mu _{\chi _{2}}=\mu _{\chi _{3}}=\mu _{\chi _{1,2}}=\mu _{\chi _{2,3}}=\mu _{\chi _{1,2,3}}=1\\
		h_{\chi _{1}}&=h_{\chi _{2}}=h_{\chi _{3}}=5, h_{\chi _{1,2}}=7, h_{\chi _{2,3}}=8, h_{\chi _{1,2,3}}=11.
	\end{align*}
	It is easy to check that in this case $\mathcal{S}^m = \{\chi _{1,2},\chi _{2,3}, \chi _{1,2,3}\}$.
	This network does not satisfy Condition \ref{cond:viableRankExists}\ because $\mathcal{O}^{\left\{ \chi _{1,2},\chi _{2,3},\chi _{1,2,3}\right\} }=\varnothing $, since \eqref{eq:eq929} does not hold for $\chi _{1,2}$, $\chi _{2,3}$, or $\chi _{1,2,3}$. We leave the verification of this fact to the reader. Consequently a viable ranking cannot exist.

	{\em Workload cost and its minimizer.} The workload cost function $\mathcal{C}$ for this example can be given explicitly as follows. For $w \in \mathbb{R}_+^3$, let $w_{12} \doteq w_1 \wedge w_2$, $w_{23} \doteq w_2 \wedge w_3$, $w_{123} \doteq w_1 \wedge w_2 \wedge w_3$. For $w \in \mathbb{R}_+^3$
	\begin{equation*}
		\mathcal{C}(w)\doteq \left\{
		\begin{array}{cc}
			5w_{2}+2w_{1}+3w_{3}, & \text{ if } w_{2}\geq w_{1}+w_{3} \\
			3w_{1}+4w_{2}+4w_{3}, & \text{ if } w_{1}+w_{3}>w_{2}\geq w_{1} \vee w_{3}\\
			5(w_{1}+w_{2}+w_{3})+w_{123}-3w_{12}-2w_{23}, & \text{ if } w_{1} \vee w_{3}>w_{2}.
		\end{array}
		\right.
	\end{equation*}
	The optimal $q^*(w)$ in $\mathcal{Q}(w)$ is given as follows. Let $q^* = (q_{\chi _{1}}^{\ast}, q_{\chi _{2}}^{\ast}, q_{\chi _{3}}^{\ast}, q_{\chi _{1,2}}^{\ast}, q_{\chi _{2,3}}^{\ast}, q_{\chi _{1,2,3}}^{\ast})$. Then
	{\small
		\begin{align*}
			&q^*(w)= \\
			&\left\{
			\begin{array}{cc}
				(0,\,w_{2}-w_{1}-w_{3},\,0,\,w_{1},\,w_{3},\,0), & \text{ if } w_{2}\geq w_{1}+w_{3} \\
				( 0,\,0,\,0,\,w_{2}-w_{3},\,w_{2}-w_{1},\,w_{1}+w_{3}-w_{2}), & \text{ if } w_{1}+w_{3}>w_{2}\geq w_{1} \vee w_{3}\\
				(w_{1}-w_{12},\,w_{2}+w_{123}- w_{12}-w_{23},\, w_{3}- w_{23},\, w_{12}- w_{123},\, w_{23}-w_{123},\, w_{123}), & \text{ if } w_{1} \vee w_{3}>w_{2}.
			\end{array}
			\right.
	\end{align*}}
	Note that $\mathcal{C}$ and $q^*$ are continuous functions and $\mathcal{C}$ is nondecreasing. In particular the HGI performance in this case is also the optimal cost in the associated BCP. However, as noted above, there does not exist a viable ranking for this example. Thus the techniques developed in the current paper do not apply to this example.
\end{example}

\section{Some Properties of the Workload Cost Function}
\label{sec:secworkcost}
The following result on a continuous selection of a minimizer is well known (cf. Theorem 2 in \cite{boh1} or Proposition 8.1 in \cite{harmandhayan}).
\begin{theorem}
	\label{thm:costAchieved}
	There is a continuous map $\bar q: \mathbb{R}_+^I \to \mathbb{R}_+^J$ such that for every $w \in \mathbb{R}_+^I$, $\bar q(w) \in \mathcal{Q}(w)$ and
	\begin{equation*}
		h \cdot \bar q(w) = \mathcal{C}(w).
	\end{equation*}
\end{theorem}

Define for a given workload vector $w \in \mathbb{R}_+^I$ the set $\mathcal{Q}^s(w)$ consisting of all queue-length vectors that produce the workload $w$ and have zero coordinates for queue-lengths corresponding to primary jobs, namely,
\begin{equation*}
	\mathcal{Q}^{s}(w)=\left\{ q\in \mathcal{Q}(w): q_{j}=0\text{ for all }j\in \mathcal{S}^p\right\} \text{.}
\end{equation*}
The following theorem shows that in computing the infimum in \eqref{eq:eq942} we can replace $\mathcal{Q}(w)$ with $\mathcal{Q}^{s}(w)$.
\begin{theorem}
	\label{thm:restOptJobs} For all $w\in \mathbb{R}_+^I$, $\bar q(w) \in \mathcal{Q}^{s}(w)$. In particular,
	\begin{equation*}
		\mathcal{C}(w)=\inf_{q\in \mathcal{Q}^{s}(w)}\left\{ h\cdot q\right\} \text{.}
	\end{equation*}
\end{theorem}
\begin{proof}
	Fix $w\in \mathbb{R}_+^I$. With $\bar q$ as in Theorem \ref{thm:costAchieved}, we have $\mathcal{C}(w)=h \cdot\bar{q}(w)$. Assume, to the contrary, that $\bar{q}_{k}(w)>0$ for some $k\in \mathcal{S}^p$. Then with $q^* \doteq \bar q(g_k)$, we have from the definition of $\mathcal{S}^p$ that
	\begin{equation}
		\label{eq:hcqstar}
		h\cdot q^* = \mathcal{C}(g_k) < h_k.
	\end{equation}
	Define $\tilde q \in \mathbb{R}_+^J$ by $\tilde{q}_{k}=\bar{q}_{k}(w)\,{q}^*_{k}$ and $\tilde{q}_{j}=\bar{q}_{j}(w)+\bar{q}_{k}(w)\,{q}^*_{j}$ for $j\neq k$. \ Then for $i \in \mathbb{N}_I$, noting that
	\begin{equation*}
		\sum_{j=1}^J G_{ij} q^*_j = \sum_{j=1}^J G_{ij} \bar q_j(g_k) = (g_k)_i = G_{ik},
	\end{equation*}
	we have
	\begin{align*}
		w_{i} = \sum_{j\neq k}G_{ij}\bar{q}_{j}(w) +G_{ik}\bar{q}_{k}(w) =\sum_{j\neq k}G_{ij}\bar{q}_{j}(w)+\left(\sum_{j=1}^{J}G_{ij}{q}^*_{j}\right)\bar{q}_{k}(w) = (G\tilde q)_{i},
	\end{align*}
	namely $G\tilde q = w$, and consequently
	\begin{align*}
		\mathcal{C}(w) = \sum_{j\neq k}h_{j}\bar{q}_{j}(w)+h_{k}\bar{q}_{k}(w) >\sum_{j\neq k}h_{j}\bar{q}_{j}(w)+\bar{q}_{k}(w)\sum_{j=1}^{J}h_{j}{q}^*_{j} = h\cdot\tilde{q} \ge \mathcal{C}(w)
	\end{align*}
	where the strict inequality in the above display is from \eqref{eq:hcqstar} and from the fact that, by assumption, $\bar{q}_{k}(w)>0$, while the last inequality holds since $\tilde q \in \mathcal{Q}(w)$. Thus we have a contradiction and therefore $\bar{q}_{k}(w)=0$ for all $k\in \mathcal{S}^p$, which completes the proof.
\end{proof}

Hereafter we fix a viable ranking $\rho$.
As was noted in Theorem \ref{thm:costAchieved}, there exists a continuous selection of the minimizer in \eqref{eq:eq942}. We now show that using the ranking $\rho$, one can give a rather explicit representation for such a selection function.
Given $w \in \mathbb{R}_+^I$, define $q^*(w) \in \mathbb{R}_+^J$ as follows. Set $q^*_j(w) = 0$ for $j \in \mathcal{S}^p$. Define
\begin{equation}
	q_{\rho(1)}^{*}(w)=\min_{i\in N_{\rho(1)}}\{w_{i}\}\mu _{\rho(1)}\text{.}  \label{eq:eq834}
\end{equation}
For $k\in \{2,\ldots, m \}$ define, recursively,
\begin{equation}
	\label{eq:eq834b}
	{q}_{\rho(k)}^{*}(w)=\min_{i\in N_{\rho(k)}}\left\{w_{i}-\sum_{l=1}^{k-1}G_{i,\rho(l)} {q}_{\rho(l)}^{*}(w)\right\} \mu_{\rho(k)}\text{.}
\end{equation}
Finally, for $j\in \mathcal{S}^1$ define
\begin{equation}
	\label{eq:eq753}
	{q}_{j}^{*}(w)=\left\{ w_{\hat i(j)}-\sum_{k=1}^{m }G_{\hat i(j),\rho(k)} {q}_{\rho(k)}^{*}(w)\right\} \mu _{j}\text{,}
\end{equation}
where recall that $\hat i(j)$ is the unique resource processing the job $j$.
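Before proceeding we note that, recalling that $G_{ij}=K_{ij}/\mu _{j}$, the recursion \eqref{eq:eq834}--\eqref{eq:eq753} is straightforward to evaluate numerically. The following sketch (again in Python, with illustrative identifiers of our own choosing) computes $q^*(w)$ from $w$, the fixed viable ranking $\rho$ and the set $\mathcal{S}^1$; Theorem \ref{thm:restCostJobOrd} below identifies $h\cdot q^*(w)$ with $\mathcal{C}(w)$.
\begin{verbatim}
# Evaluation of q^*(w) via the recursion; rho = [rho(1),...,rho(m)] is a
# fixed viable ranking of S^m and S1 is the set of single-resource jobs.
import numpy as np

def q_star(w, K, mu, rho, S1):
    J = K.shape[1]
    G = K / mu                          # G_{ij} = K_{ij}/mu_j
    q = np.zeros(J)                     # q^*_j = 0 for primary jobs
    for job in rho:                     # rho(1),...,rho(m), in this order
        Nj = np.flatnonzero(K[:, job])  # N_{rho(k)}
        # w_i - sum_{l<k} G_{i,rho(l)} q^*_{rho(l)}(w), minimised over N_{rho(k)}
        q[job] = (w[Nj] - G[Nj, :] @ q).min() * mu[job]
    for job in S1:                      # single-resource secondary jobs
        i = int(np.flatnonzero(K[:, job])[0])   # the unique resource hat-i(j)
        q[job] = (w[i] - G[i, :] @ q) * mu[job]
    return q
\end{verbatim}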
By a recursive argument it is easy to check that $q^*(w)$ defined above is a non-negative vector in $\mathbb{R}^J$. The following theorem shows that $q^*$ defined above is a continuous selection of the minimizer in \eqref{eq:eq942}. \begin{theorem} \label{thm:restCostJobOrd}For any $w\in \mathbb{R}_+^I$, $q^*(w) \in \mathcal{Q}^{s}(w)$ and \begin{equation} \label{eq:eq755} \mathcal{C}(w)= h\cdot {q}^{*}(w) =\sum_{k=1}^{m }h_{\rho(k)}{q}_{\rho(k)}^{*}(w)+\sum_{j \in \mathcal{S}^1}h_{j}{q}_{j}^{*}(w). \end{equation} \end{theorem} \begin{proof} Fix $w\in \mathbb{R}_+^I$. Let $\bar q(w)$ be as in Theorem \ref{thm:costAchieved}. Then $\mathcal{C}(w) = h \cdot \bar q(w)$ and the proof of Theorem \ref{thm:restOptJobs} shows that $ \bar q(w) \in \mathcal{Q}^{s}(w)$. Define \begin{equation*} s_{1}=\sup \left\{ q_{\rho(1)}:q\in \mathcal{Q}^{s}(w) \mbox{ and } h\cdot q =\mathcal{C}(w)\right\}. \end{equation*} Clearly the supremum is achieved, namely there is a $\check{q} \in \mathcal{Q }^{s}(w)$ s.t. $h\cdot \check{q} =\mathcal{C}(w)$ and $\check{q}_{\rho(1)} = s_1$. We now show that $s_1 = q^*_{\rho(1)}(w)$. First note that $s_1 \le q^*_{\rho(1)}$ since from \eqref{eq:eq834} there is an $i^* \in N_{\rho(1)}$ such that \begin{equation*} q^*_{\rho(1)}(w) = w_{i^*}\mu_{\rho(1)} = (G \check q)_{i^*} \mu_{\rho(1)} \ge \check q_{\rho(1)} = s_1, \end{equation*} where the second equality holds since $\check q \in \mathcal{Q}^{s}(w)$ and the next inequality is a consequence of the fact that $i^* \in N_{\rho(1)}$. We now show that in fact the inequality can be replaced by equality. We argue by contradiction and suppose that $s_{1}<{q}_{\rho(1)}^{*}(w)$. For all $ i\in N_{\rho(1)}$ define \begin{equation} \label{eq:eq517} j^*(i)=\arg \max_{j \neq \rho(1):i\in N_{j}}\left\{ \frac{\check{q}_{j} }{\mu _{j}}\right\} \end{equation} and note that for any $i \in N_{\rho(1)}$ \begin{align*} \frac{\check q_{j^*(i)}}{\mu_{j^*(i)}} \ge \frac{1}{J-1} \left( \sum_{j: i \in N_j} \frac{\check q_j}{\mu_j} - \frac{\check q_{\rho(1)}}{\mu_{\rho(1)}} \right) > \frac{1}{J} \left( w_i - \frac{\check q_{\rho(1)}}{\mu_{\rho(1)}} \right) \ge \frac{1}{J} \left(\frac{q^*_{\rho(1)}(w)-\check q_{\rho(1)}}{ \mu_{\rho(1)}}\right), \end{align*} where the second inequality uses the fact that $\check q \in \mathcal{Q} ^{s}(w)$ while the third uses \eqref{eq:eq834} once more. Thus, \begin{equation} \min_{i\in N_{\rho(1)}}\left\{ \frac{\check{q}_{j^*(i)}}{\mu _{j^{\ast }(i)}}\right\} >\frac{q_{\rho(1)}^{*}(w)-s_{1}}{J\mu _{\rho(1)}}\text{.} \label{eq:miniinnp} \end{equation} We can choose a subset $M\in \mathcal{M}^{\mathcal{S}^s,\rho(1)}$ such that $M\subset \left\{ j^*(i):i\in N_{\rho(1)}\right\} $. From the definition of $M$, $\sum_{j\in M} K_j - K_{\rho(1)}$ is a nonnegative vector. Since $\rho(1) \in \mathcal{O}^{\mathcal{S}^m}$, due to Definition \ref{def:optJob} \begin{equation} \mu_{\rho(1)} h_{\rho(1)} + \mathcal{C}(\sum_{j\in M} K_j - K_{\rho(1)}) \le \mathcal{C}(\sum_{j\in M} K_j). \label{eq:eqmurho1} \end{equation} Thus there exists $v^{1}\in \mathcal{Q}^{s}\left( \sum_{j\in M}K_j-K_{\rho(1)}\right) $ i.e., \begin{equation} \sum_{j\in M}K_j-K_{\rho(1)} = G v^{1} = \sum_{j=1}^JK_j b^{1}_{j} \mbox{ where } v^{1}_{j} = b^{1}_{j}\mu_j \mbox{ for } j \in \mathbb{N}_J,\label{eq:eqkjkrho} \end{equation} such that \begin{equation} \sum_{j=1}^J h_{j}b^{1}_{j}\mu_j = h \cdot v^{1} = \mathcal{C}(\sum_{j\in M}K_j-K_{\rho(1)}). 
\label{eq:hjb1j} \end{equation} Furthermore, $b^{1}_{\rho(1)} = 0$, since if $b^{1}_{\rho (1)}>0$ then $ \sum_{j\in M}K_{i,j}-K_{i,\rho (1)}\geq 1 $ for all $i\in N_{\rho (1)}$, so that for any $l\in M$ we have $ \sum_{j\in M\setminus \{l\}}K_{j}-K_{\rho (1)}\geq 0 $ which means $M$ is not minimal and contradicts $M\in \mathcal{M}^{\mathcal{S}^{s},\rho (1)}$. From \eqref{eq:eqmurho1} and \eqref{eq:hjb1j} we have \begin{equation} h_{\rho(1)}\mu _{\rho(1)}+\sum_{j=1}^{J}h_{j}b^{1}_{j}\mu _{j}\leq \sum_{j\in M}h_{j}\mu _{j}\text{.}\label{eq:15.1} \end{equation} Let \begin{equation} \label{eq:eq651} u_{1}\doteq \min_{j\in M}\left\{ \frac{\check{q}_{j}}{\mu _{j}}\right\} \end{equation} Since $M \subset \left\{j^*(i):i\in N_{\rho(1)}\right\}$ , from \eqref{eq:miniinnp} $ u_1 \ge \frac{q_{\rho(1)}^{*}(w)-s_{1}}{J\mu _{\rho(1)}}. $ Define $\tilde{q}\in \mathbb{R}_+^J$ by \begin{equation} \label{eq:eq653} \tilde{q}_{\rho(1)}=\check{q}_{\rho(1)}+u_{1}\mu _{\rho(1)}, \mbox{ and } \tilde{q}_{j}=\check{q}_{j}-\mathbf{1}_{\{j\in M\}}u_{1}\mu _{j}+u_{1}b^1_{j}\mu _{j} \mbox{ for } j\neq \rho(1). \end{equation} By definition of $u_1$, $\tilde q \in \mathbb{R}_+^J$. Also, \begin{eqnarray*} w &=&\sum_{j=1}^{J}K_j\left( \frac{\check{q}_{j}}{\mu _{j}}-\mathbf{1} _{\{j\in M\}}u_{1}\right) +u_{1}\sum_{j\in M}K_j \\ &=&\sum_{j=1}^{J}K_j\left( \frac{\check{q}_{j}}{\mu _{j}}-\mathbf{1}_{\{j\in M\}}u_{1}\right) +u_{1}\sum_{j=1}^{J}K_jb^{1}_{j}+u_{1}K_{\rho(1)} \\ &=&\sum_{j=1}^{J}K_{j}\frac{\tilde{q}_{j}}{\mu _{j}}, \end{eqnarray*} where the second equality uses \eqref{eq:eqkjkrho} and last equality uses the observation that $b^{1}_{\rho(1)}=0$. Thus $ \tilde{q}\in \mathcal{Q}^{s}(w)$. Furthermore, \begin{eqnarray*} \mathcal{C}(w) &=&\sum_{j=1}^{J}h_{j}\left( \check{q}_{j}-\mathbf{1}_{\{j\in M\}}u_{1}\mu _{j}\right) +u_{1}\sum_{j\in M}h_{j}\mu _{j} \\ &\geq &\sum_{j=1}^{J}h_{j}\left( \check{q}_{j}-\mathbf{1}_{\{j\in M\}}u_{1}\mu _{j}\right) +u_{1}h_{\rho(1)}\mu _{\rho(1)}+u_{1}\sum_{j=1}^{J}h_{j}b^{1}_{j}\mu _{j} \\ &=&\sum_{j=1}^{J}h_{j}\tilde{q}_{j} \ge \mathcal{C}(w), \end{eqnarray*} where the second line is from \eqref{eq:15.1} and the last inequality holds since $\tilde{q}\in \mathcal{Q}^{s}(w)$. So $h\cdot \tilde q = \mathcal{C}(w)$ and by definition of $s_1$, $\tilde q_{\rho(1)} \le s_1$. However, since by assumption $s_1 < {q}_{\rho(1)}^{*}(w)$, \begin{equation} \tilde{q}_{\rho(1)}=s_{1}+ u_{1}\mu _{\rho(1)}\geq s_{1}+\frac{{q} _{\rho(1)}^{*}(w)-s_{1}}{J}>s_{1} \label{eq:eq655} \end{equation} which is a contradiction. \ Thus we have shown $s_{1}={q}_{\rho(1)}^{*}(w)$. Denote $\check q$ as $q^1$. Then $q^1_{\rho(1)} = q^*_{\rho(1)}(w)$. Note that \begin{equation*} \mathcal{C}(w) = h \cdot q^1 = h_{\rho(1)} q^*_{\rho(1)} + \sum_{i\neq \rho(1)} h_i q^1_i. \end{equation*} Let $w^1 = w - \frac{q^*_{\rho(1)}(w)}{\mu_{\rho(1)}}K_{\rho(1)}$. Then $ w^1 = G \left[q^1 - q^*_{\rho(1)}(w) e_{\rho(1)}\right] $ and if for any $\tilde q \in \mathbb{R}_+^J$, $G\tilde q = w^1$, we have $ G\left[\tilde q + q^*_{\rho(1)}(w)e_{\rho(1)}\right] = Gq^1 = w $ and so \begin{equation*} h\cdot (\tilde q + q^*_{\rho(1)}(w)e_{\rho(1)}) \ge \mathcal{C}(w) = h_{\rho(1)} q^*_{\rho(1)}(w) + \sum_{i\neq \rho(1)} h_i q^1_i. \end{equation*} Thus $h\cdot \tilde q \ge \sum_{i\neq \rho(1)} h_i q^1_i$ and since $\tilde q $ is arbitrary vector in $\mathbb{R}_+^J$ satisfying $G\tilde q = w^1$ \begin{equation*} \mathcal{C}(w^1) = h\cdot q^1 - h_{\rho(1)}q^*_{\rho(1)}(w) = \mathcal{C}(w) - h_{\rho(1)}q^*_{\rho(1)}(w). 
\end{equation*}
We now proceed via induction. Suppose that for some $k \in \{2, \ldots, m\}$ and all $w \in \mathbb{R}_+^I$
\begin{equation}
	\label{eq:eq651b}
	\mathcal{C}(w)=\sum_{l=1}^{k-1}h_{\rho(l)}{q}_{\rho(l)}^{*}(w)+\mathcal{C}\left( w^{k-1}\right)
\end{equation}
where
\begin{equation*}
	w^{k-1}=w-\sum_{l=1}^{k-1} \frac{{q}_{\rho(l)}^{*}(w)}{\mu_{\rho(l)}} K_{\rho(l)}\text{.}
\end{equation*}
Note that we have shown \eqref{eq:eq651b} for $k=2$.
With $\bar q$ as in Theorem \ref{thm:costAchieved}, $\bar{q}\left( w^{k-1}\right) \in \mathcal{Q}^{s}\left( w^{k-1}\right) $ and
\begin{equation*}
	\mathcal{C}\left( w^{k-1}\right) = \bar{q}\left( w^{k-1}\right)\cdot h \text{.}
\end{equation*}
Define
\begin{equation*}
	s_{k}=\sup \left\{ q_{\rho(k)}:q\in \mathcal{Q}^{s}\left( w^{k-1}\right) , q\cdot h =\mathcal{C}(w^{k-1})\right\}.
\end{equation*}
Then there is $\check q \in \mathcal{Q}^{s}(w^{k-1})$ such that $\check{q}_{\rho(k)}=s_{k}$, and $\check{q}\cdot h =\mathcal{C}(w^{k-1})$.
Also, using \eqref{eq:eq834b} we have for every $l<k$ an $i^* \in N_{\rho(l)} $ such that
$ \frac{q^*_{\rho(l)}(w)}{\mu_{\rho(l)}} = w_{i^*}^{l-1}. $ Thus,
\begin{equation*}
	0\le w^{k-1}_{i^*} \le w_{i^*} - \sum_{u=1}^l G_{i^*,\rho(u)} q^*_{\rho(u)}(w) = w_{i^*}^{l-1} - \frac{q^*_{\rho(l)}(w)}{\mu_{\rho(l)}} = 0.
\end{equation*}
Consequently,
\begin{equation}
	\label{eq:eq608}
	\mbox{ for every } l \in \{1, \ldots, k-1\} \mbox{ there is an } i \in N_{\rho(l)} \mbox{ such that } w_i^{k-1}=0.
\end{equation}
Since $G\check q = w^{k-1}$, this in turn says that $\check q_{\rho(l)} = 0$ for $l \in \{1, \ldots, k-1\}$.
Next, as for the case $k=1$, we can show that $s_k = {q}_{\rho(k)}^{*}(w)$. Indeed, the inequality $s_k \le q^*_{\rho(k)}(w)$ follows on noting from \eqref{eq:eq834b} that for some $i^* \in N_{\rho(k)}$
\begin{equation*}
	q^*_{\rho(k)}(w) = w_{i^*}^{k-1}\mu_{\rho(k)} = (G \check q)_{i^*} \mu_{\rho(k)} \ge \check q_{\rho(k)} = s_k.
\end{equation*}
Next suppose $s_{k}<{q}_{\rho(k)}^{*}(w)$. Define $j^*(i)$ as in \eqref{eq:eq517} replacing $\rho(1)$ with $\rho(k)$; then, as before (using \eqref{eq:eq834b} instead of \eqref{eq:eq834}),
\begin{equation}
	\min_{i\in N_{\rho(k)}}\left\{ \frac{\check{q}_{j^*(i)}}{\mu _{j^*(i)}} \right\} >\frac{q_{\rho(k)}^{*}(w)-s_{k}}{J\mu _{\rho(k)}}\text{.}  \label{eq:eq607}
\end{equation}
Thus from \eqref{eq:eq607} we have that $j^*(i) \notin \{\rho(1), \ldots, \rho(k)\}$.
We next claim that the set of resources associated with $\rho(l)$ for any $l<k$ is not a subset of the set of resources associated with $\{j^*(i): i \in N_{\rho(k)}\}$. Indeed, if that were the case for some $l<k$, then we would have
\begin{equation}
	\label{eq:eq612}
	\sum_{i \in N_{\rho(k)}} K_{j^*(i)} - K_{\rho(l)} \ge 0.
\end{equation}
From \eqref{eq:eq608} there is an $i^*$ such that $K_{i^*,\rho(l)}=1$ and $w_{i^*}^{k-1}=0$. Then from \eqref{eq:eq612} $K_{i^*, j^*(i)} =1$ for some $i \in N_{\rho(k)}$. Since from \eqref{eq:eq607} $\check q_{j^*(i)} >0$, we have $w_{i^*}^{k-1}>0$ which is a contradiction. This proves the claim, namely $N_{\rho(l)} \not \subset \bigcup_{i \in N_{\rho(k)}} N_{j^*(i)}$ for $l = 1, \ldots, k-1$.
We can now choose a subset $M^{k}\in \mathcal{M}_{F_k }^{\mathcal{S}^s\setminus F_k ,\rho(k)}$ such that $M^{k}\subset \left\{ j^{\ast}(i):i\in N_{\rho(k)}\right\} $.
Since by definition $\rho(k) \in \mathcal{O}^{E_k}_{F_k}$ and by our choice $M^k \in \mathcal{M}_{F_k }^{\mathcal{S}^s\setminus F_k ,\rho(k)}$, we have from Definition \ref{def:optJob}, arguing exactly as in the case $k=1$ (cf. \eqref{eq:eqmurho1}--\eqref{eq:15.1}), that there exists $b^{k}\in \mathbb{R}_+^J$ such that $b^k_{\rho(k)}=0$ and
\begin{equation*}
	K_{\rho(k)}+\sum_{j=1}^{J}K_{j}b^{k}_{j}=\sum_{j\in M^{k}}K_{j}, \mbox{ and } h_{\rho(k)}\mu _{\rho(k)}+\sum_{j=1}^{J}h_{j}b^{k}_{j}\mu _{j}\leq \sum_{j\in M^{k}}h_{j}\mu _{j}.
\end{equation*}
With $u_k$ as defined in \eqref{eq:eq651} with $M$ replaced by $M^k$ (and with $\check q$ as above)
\begin{equation*}
	u_{k} \ge \frac{q_{\rho(k)}^{*}(w)-s_{k}}{J\mu _{\rho(k)}}.
\end{equation*}
Define $\tilde q$ as in \eqref{eq:eq653} replacing $\rho(1)$ with $\rho(k)$, $u_{1}$ with $u_{k}$, and $M$ with $M^k$. Then as before $h\cdot \tilde q = \mathcal{C}(w^{k-1})$ and $G\tilde q = w^{k-1}$; and as in the proof of \eqref{eq:eq655} we see using \eqref{eq:eq607} that $\tilde q_{\rho(k)} > s_k$ which contradicts the definition of $s_k$. This completes the proof that $s_k = {q}_{\rho(k)}^{*}(w)$.

Setting $q^k = \check q$ we have that $q^k_{\rho(k)} = q^*_{\rho(k)}(w)$. Also, recalling that
\begin{equation*}
	w^k = w^{k-1} - \frac{q^*_{\rho(k)}(w)}{\mu_{\rho(k)}}K_{\rho(k)}
\end{equation*}
and since $G q^k = w^{k-1}$, we have $G[q^k - q^*_{\rho(k)}(w) e_{\rho(k)}] = w^k$ and $h\cdot (q^k - q^*_{\rho(k)}(w) e_{\rho(k)}) = \mathcal{C}(w^{k-1}) - q^*_{\rho(k)}(w) h_{\rho(k)}$. Furthermore, using the fact that $h\cdot q^k = \mathcal{C}(w^{k-1})$, we have that if for $\tilde q \in \mathbb{R}^J_+$, $G\tilde q = w^k$, then $h\cdot \tilde q \ge \mathcal{C}(w^{k-1}) - q^*_{\rho(k)}(w) h_{\rho(k)}$. Thus we have that $\mathcal{C}(w^k) = \mathcal{C}(w^{k-1}) - q^*_{\rho(k)}(w) h_{\rho(k)}$.
Combining this with the induction hypothesis \eqref{eq:eq651b}, we have that \eqref{eq:eq651b} holds with $k-1$ replaced with $k$. This completes the induction step and proves \eqref{eq:eq651b} for all $k=2, \ldots, m+1$, in particular
\begin{equation}
	\label{eq:eq651c}
	\mathcal{C}(w)=\sum_{l=1}^{m}h_{\rho(l)}{q}_{\rho(l)}^{*}(w)+\mathcal{C}\left( w^{m}\right)
\end{equation}
where
\begin{equation}
	\label{eq:eq750}
	w^{m}=w-\sum_{l=1}^{m}\frac{{q}_{\rho(l)}^{*}(w)}{\mu _{\rho(l)}}K_{\rho(l)}\text{.}
\end{equation}
Next, using \eqref{eq:eq608} with $k-1$ replaced with $m$ we see that for any $q \in \mathcal{Q}^s(w^m)$, $q_{\rho(l)} =0$ for all $l = 1, \ldots, m$. Consequently,
\begin{equation*}
	\mathcal{C}(w^m) = \sum_{j \in \mathcal{S}^1} h_j \mu_j w^m_{\hat i(j)}.
\end{equation*}
From the definition of $w^m$ in \eqref{eq:eq750} and the definition of $q^*_j(w)$ for $j \in \mathcal{S}^1$ in \eqref{eq:eq753} we then have that
\begin{equation*}
	\mathcal{C}(w^m) = \sum_{j\in \mathcal{S}^1}h_{j}{q}_{j}^{*}(w),\; w = \sum_{l=1}^{m}\frac{{q}_{\rho(l)}^{*}(w)}{\mu _{\rho(l)}}K_{\rho(l)} + \sum_{j \in \mathcal{S}^1} \frac{{q}_{j}^{*}(w)}{\mu _{j}}K_{j}.
\end{equation*}
This proves \eqref{eq:eq755} and the statement that $q^*(w) \in \mathcal{Q}^s(w)$, and completes the proof of the theorem.
\end{proof}
Analogous to $\zeta _{i}^{k}$ introduced in Section \ref{sec:secallostra}, let
\begin{equation}\label{eq:eqzetaiz}
	\zeta _{i}^{0}=\{j\in \mathcal{S}^p:K_{i,j}=1\}
\end{equation}
be the set of primary jobs which impact node $i$.
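Before turning to the next estimate we remark that, on small examples, the representation in Theorem \ref{thm:restCostJobOrd} can be checked numerically by comparing the recursion for $q^{*}$ with a direct linear-programming evaluation of \eqref{eq:eq942}. A minimal sketch, reusing the illustrative helpers \texttt{q\_star} and \texttt{workload\_cost} from the earlier sketches, is given below; it is of course no substitute for the proof above and merely serves to catch misconfigured inputs (for instance, a ranking that is not viable).
\begin{verbatim}
# Numerical sanity check of the representation of C(w) on random workloads.
import numpy as np

def check_recursion(K, mu, h, rho, S1, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    I = K.shape[0]
    for _ in range(trials):
        w = rng.uniform(0.0, 5.0, size=I)      # a random workload vector
        q = q_star(w, K, mu, rho, S1)          # the recursion above
        lp = workload_cost(w, K, mu, h)        # C(w) by linear programming
        assert abs(h @ q - lp) < 1e-6, (w, h @ q, lp)
    return True
\end{verbatim}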
\begin{theorem} \label{thm:costInefIneq} There exists $B\in (0,\infty)$ such that for any $q\in \mathbb{R}_+^{J}$ and the corresponding workload, $w=Gq$, we have \begin{equation*} \left\vert h\cdot q -\mathcal{C}(w)\right\vert \leq B\left( \sum_{k=1}^{m }\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}q_{j}\right\} +\sum_{i=1}^{I}\sum_{j\in \zeta _{i}^{0}}q_{j}\right) \text{.} \end{equation*} \end{theorem} \begin{proof} Recall from Theorem \ref{thm:restCostJobOrd}\ that with $q^{\ast} = q^{\ast}(w)$ \begin{equation*} \mathcal{C}(w)=q^{\ast }\cdot h =\sum_{k=1}^{m }h_{\rho (k)}q_{\rho (k)}^{\ast }+\sum_{j\in \mathit{\mathcal{S}}^{1}}h_{j}q_{j}^{\ast } \text{.} \end{equation*} Since \begin{equation*} \frac{q_{\rho (1)}^{\ast }}{\mu _{\rho (1)}} =\min_{i\in N_{\rho (1)}}\left\{ w_{i}\right\} =\min_{i\in N_{\rho (1)}}\left\{ \sum_{j\in \zeta _{i}^{1}} \frac{q_{j}}{\mu _{j}}\right\} + \frac{q_{\rho (1)}}{\mu _{\rho (1)}} \end{equation*} we have \begin{equation*} q_{\rho (1)}=q_{\rho (1)}^{\ast }-\min_{i\in N_{\rho (1)}}\left\{ \sum_{j\in \zeta _{i}^{1}}\frac{q_{j}}{\mu _{j}}\right\} \mu _{\rho (1)} \end{equation*} from which we have \begin{equation*} \frac{1}{\mu _{\rho (1)}}\left\vert q_{\rho (1)}^{\ast }-q_{\rho (1)}\right\vert \leq \min_{i\in N_{\rho (1)}}\left\{ \sum_{j\in \zeta _{i}^{1}} \frac{q_{j}}{\mu_{j}}\right\} \text{.} \end{equation*} In general, for $2\leq k\leq m $ we have \begin{eqnarray*} \frac{q_{\rho (k)}^{\ast }}{\mu _{\rho (k)}} &=&\min_{i\in N_{\rho (k)}}\left\{ w_{i}-\sum_{l=1}^{k-1}K_{i,\rho (l)} \frac{q^{*}_{\rho (l)}}{\mu _{\rho (l)}}\right\} \\ &=&\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}} \frac{q_{j}}{\mu_{j}} -\sum_{l=1}^{k-1}K_{i,\rho (l)}\frac{(q_{\rho (l)}^{\ast }-q_{\rho (l)})}{\mu _{\rho (l)}}\right\} + \frac{q_{\rho (k)}}{\mu _{\rho (k)}} \end{eqnarray*} which gives \begin{equation*} \frac{1}{\mu _{\rho (k)}}\left|q_{\rho (k)}^{\ast }-q_{\rho (k)}\right|\leq \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}} \frac{q_{j}}{\mu_j} \right\} +\sum_{l=1}^{k-1}\frac{\left\vert q_{\rho (l)}^{\ast }-q_{\rho (l)}\right\vert}{ \mu _{\rho (l)}}. 
\end{equation*} Consequently for $k\in \{2,...,m \}$ we have \begin{align*} \frac{1}{\mu _{\rho (k)}}\left\vert q_{\rho (k)}^{\ast }-q_{\rho (k)}\right\vert &\leq \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{q_{j}}{\mu_j} \right\} +\sum_{l=0}^{k-2}2^{l}\min_{i\in N_{\rho (k-1-l)}}\left\{ \sum_{j\in \zeta _{i}^{k-1-l}}\frac{q_{j}}{\mu _{j}}\right\} \text{.} \end{align*} For $j\in \mathit{S}^{1}$ we have with $i = \hat i(j)$ \begin{align*} \frac{q_{j}^{\ast }}{\mu _{j}} = w_{i}-\sum_{k=1}^{m }K_{i,\rho (k)}\frac{q_{\rho (k)}^{\ast }}{\mu _{\rho (k)}} = \sum_{j'\in \zeta _{i}^{0}} \frac{q_{j'}}{\mu_{j'}}-\sum_{k=1}^{m}K_{i,\rho (k)} \frac{(q_{\rho (k)}^{\ast }-q_{\rho (k)})}{\mu_{\rho (k)}} + \frac{q_{j}}{\mu _{j}} \end{align*} which gives \begin{equation*} \frac{1}{\mu _{j}}\left|q_{j}^{\ast }-q_{j}\right|\leq \sum_{j'\in \zeta _{i}^{0}} \frac{q_{j'}}{\mu_j'} +\sum_{l=1}^{m }\frac{\left\vert q_{\rho (l)}^{\ast }-q_{\rho (l)}\right\vert}{\mu _{\rho (l)}} \end{equation*} This, combined with our bounds on $\left\vert q_{\rho (k)}^{\ast }-q_{\rho (k)}\right\vert$ for $k\in \{1,...,m \}$, gives the following bound for $j \in \mathcal{S}^1$ \begin{align*} \frac{\left\vert q_{j}^{\ast }-q_{j}\right\vert}{\mu _{j}} \leq \sum_{j'\in \zeta _{\hat i(j)}^{0}} \frac{q_{j'}}{\mu _{j'}} +\sum_{l=0}^{m-1}2^{l}\min_{i\in N_{\rho (m -l)}}\left\{ \sum_{j'\in \zeta _{i}^{m -l}}\frac{q_{j'}}{\mu _{j'}}\right\} \text{.} \end{align*} Finally, for $j\in \mathcal{S}^p$ we have \begin{equation*} \frac{\left\vert q_{j}^{\ast }-q_{j}\right\vert}{\mu _{j}} = \frac{q_{j}}{\mu _{j}}\leq \min_{i\in N_{j}}\left\{ \sum_{j'\in \zeta _{i}^{0}} \frac{q_{j'}}{\mu _{j'}}\right\} \text{.} \end{equation*} Combining the above bounds \begin{align*} h\cdot q &= h\cdot q^{\ast} +h\cdot (q-q^{\ast}) \leq \mathcal{C}(w)+ \sum_{j \in \mathbb{N}_J} h_j |q_j-q^{\ast}| \\ &\leq \mathcal{C}(w)+\max_{j}\{h_{j}\}\sum_{j}\left\vert q_{j}-q^{\ast}_{j}\right\vert \\ &\leq \mathcal{C}(w)+\max_{j}\{h_{j}\}\max_{j}\{\mu_{j}\}J^22^{J}\left( \sum_{k=1}^{m }\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{q_{j}}{\mu _{j}}\right\} +\sum_{i=1}^{I}\sum_{j\in \zeta _{i}^{0}}\frac{q_{j}}{\mu _{j}}\right) \\ &\leq \mathcal{C}(w)+\frac{\max_{j}\{h_{j}\}\max_{j}\{\mu_{j}\}}{\min_{j}\{\mu _{j}\}}J^22^{J}\left( \sum_{k=1}^{m }\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}q_{j}\right\} +\sum_{i=1}^{I}\sum_{j\in \zeta _{i}^{0}}q_{j}\right) \text{.} \end{align*} Because $h\cdot q \geq \mathcal{C}(w)$ we have \begin{equation*} \left\vert h\cdot q -\mathcal{C}(w)\right\vert \leq B\left( \sum_{k=1}^{m }\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}q_{j}\right\} +\sum_{i=1}^{I}\sum_{j\in \zeta _{i}^{0}}q_{j}\right) \end{equation*} where $ B=\frac{\max_{j}\{h_{j}\}\max_{j}\{\mu_{j}\}}{\min_{j}\{\mu _{j}\}}J^22^{J}\text{.} $ \end{proof} \section{Some Properties of the Rate Allocation Policy} In this section we record some important properties of the rate allocation policy $x(\cdot)$ introduced in Definition \ref{def:workAllocScheme}. Throughout this section $y(t), x(t)$ and $\mathcal{E}_{j}^r(t)$ will be as in Definition \ref{def:workAllocScheme}. Our first result shows that $x$ satisfies basic conditions for admissibility, namely, it is nonnegative and satisfies the capacity constraint. \label{sec:ratallpol} \begin{lemma} \label{lem:admispol} For all $t\ge 0$, $x(t) \ge 0$ and $Kx(t) \le C$. 
\end{lemma}
\begin{proof}
	For the first statement in the lemma it suffices to show that $y_{j}(t)\geq 0 $ for all $j \in \mathbb{N}_J$ and $t\ge 0$.
	From the definition of $\delta $ it is clear that $y_{j}(t)\geq 0$ for all $j\in \mathbb{N}_{J}\setminus \mathcal{S}^1$ and for $j\in \mathcal{S}^1$ with $\hat i(j)\notin \varpi ^{r}(t)$. Consider now a $j\in \mathcal{S}^1$ for which $\hat i(j)\in \varpi ^{r}(t)$. Then
	\begin{equation*}
		y_{j}(t)=C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t)\text{.}
	\end{equation*}
	Also note that
	\begin{eqnarray*}
		\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t) \leq \sum_{l\neq j:K_{\hat i(j),l}=1}\left( \varrho _{l}+\delta \right) \leq \sum_{l\neq j:K_{\hat i(j),l}=1}\varrho _{l}+\frac{\min_{j'}\{\varrho _{j'}\}}{2}
	\end{eqnarray*}
	and thus since $K\varrho =C$
	\begin{eqnarray*}
		y_{j}(t) = C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t) \geq C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}\varrho _{l}-\frac{\min_{j'}\{\varrho _{j'}\}}{2} \geq \varrho _{j}-\frac{\min_{j'}\{\varrho _{j'}\}}{2} \geq 0.
	\end{eqnarray*}
	This completes the proof of the first statement in the lemma.

	We now show that $Kx(t) \le C$ for all $t\ge 0$.
	Let $i\in \mathbb{N}_{I}$ be arbitrary. It suffices to show that for all $t\ge 0$, $C_{i}\geq \sum_{j=1}^{J}K_{i,j}y_{j}(t)\text{.}$
	From the definition of $y_j(t)$ for $j \in \mathcal{S}^1$ in Definition \ref{def:workAllocScheme}, it is clear that when $i\in \varpi ^{r}(t)$, $C_{i}=\sum_{j=1}^{J}K_{i,j}y_{j}(t)\text{.}$
	Finally, if $i\notin \varpi ^{r}(t)$, then Definition \ref{def:workAllocScheme} gives $y_{j}(t)<\varrho _{j}$ for all $j$ with $K_{i,j}=1$ and so
	\begin{eqnarray*}
		\sum_{j=1}^{J}K_{i,j}y_{j}(t) <\sum_{j=1}^{J}K_{i,j}\varrho _{j} =C_{i}.
	\end{eqnarray*}
	This completes the proof.
\end{proof}
The following two results are used in the proof of Theorem \ref{thm:IdleTimeExp}.
\begin{lemma}
	For all $t\ge 0$ and $i\in \varpi ^{r}(t)$ such that $\sum_{j=1}^{J}K_{i,j}\mathcal{E}_{j}^r(t)=0, $ we have $C_{i}=\sum_{j=1}^{J}K_{i,j}x_{j}(t)\text{.} $
	\label{lem:fullWorkloadCond}
\end{lemma}
\begin{proof}
	Let $t\ge 0$ and $i\in \varpi ^{r}(t)$ satisfy $\sum_{j=1}^{J}K_{i,j}\mathcal{E}_{j}^r(t)=0$. Then for all $j$ with $K_{i,j}=1$ we have $x_{j}(t)=y_{j}(t)$ and so it suffices to prove that $C_{i}= \sum_{j=1}^{J}K_{i,j}y_{j}(t)\text{.} $
	However, this is an immediate consequence of the definition of $y_j(t)$ for $j \in \mathcal{S}^1$ with $\hat i(j) \in \varpi ^{r}(t)$ in Definition \ref{def:workAllocScheme}.
\end{proof}
From Condition \ref{cond:HT1} we can find $\hat{R} \in (0,\infty)$ such that for all $r\geq \hat{R}$ and $j\in \mathbb{N}_{J}$ we have
\begin{equation}
	\left\vert \varrho _{j}-\varrho _{j}^{r}\right\vert \leq 2^{-2m -6}\frac{\delta }{J }\text{,}\;\; 2\lambda _{j}\geq \lambda _{j}^{r}\geq \lambda _{j}/2\text{, and }\; 2\mu _{j}\geq \mu _{j}^{r}\geq \mu _{j}/2\text{.}\label{def:Rhat}
\end{equation}
For the rest of this work we will assume without loss of generality that $r\ge \hat R$.
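For concreteness, the allocation of Definition \ref{def:workAllocScheme} can also be written in algorithmic form. The following sketch (once more with illustrative identifiers of our own choosing) computes $y(t)$ and $x(t)$ from the current queue-length vector, the depletion indicators $\mathcal{E}^r(t)$ (treated as an input, since they depend on the past of the queue-length process), the ranking $\rho$ and the network data; it assumes that each resource processes exactly one job-type in $\mathcal{S}^1$ (cf. the use of $\check j(i)$ above).
\begin{verbatim}
# y(t) and x(t) of the work allocation scheme.  Q: queue lengths,
# E: depletion indicators, varrho: nominal rates, Cap: capacities C_i,
# rho = [rho(1),...,rho(m)], Sp / S1 as before.
import numpy as np

def allocation(Q, E, K, varrho, Cap, rho, Sp, S1, c2, r, alpha):
    I, J = K.shape
    m = len(rho)
    delta = varrho.min() / (2 * J)
    sigma = {j for j in range(J) if Q[j] >= c2 * r ** alpha}
    varpi = {i for j in sigma for i in np.flatnonzero(K[:, j])}
    y = np.zeros(J)
    for j in Sp:                                 # primary jobs
        y[j] = varrho[j] + delta if j in sigma \
            else varrho[j] - delta / (J * 2 ** (m + 3))
    for k, job in enumerate(rho, start=1):       # jobs rho(1),...,rho(m)
        Nk = np.flatnonzero(K[:, job])
        higher = set(range(J)) - set(rho[:k])    # N_J \ F_{k+1}
        blocked = all(any(K[i, l] and l in sigma for l in higher) for i in Nk)
        if blocked:
            y[job] = varrho[job] - 2.0 ** (k - m - 2) * delta
        elif job in sigma:
            y[job] = varrho[job] + 2.0 ** (k - m - 2) * delta
        else:
            y[job] = varrho[job] - 2.0 ** (-k - m - 2) * delta
    for j in S1:                                 # single-resource jobs
        i = int(np.flatnonzero(K[:, j])[0])      # hat-i(j)
        if i in varpi:
            others = [l for l in range(J) if l != j and K[i, l]]
            y[j] = Cap[i] - y[others].sum()      # all remaining capacity
        else:
            y[j] = varrho[j] - delta
    x = y * (np.asarray(E) == 0)                 # no rate while depleted
    return y, x
\end{verbatim}
Lemma \ref{lem:admispol} above shows that the resulting $x(t)$ is a feasible allocation; the sketch is only meant to make the case analysis in Definition \ref{def:workAllocScheme} explicit.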
\begin{lemma}
	\label{lem:posDriftCond}For all $t\ge 0$ and $j\in \mathbb{N}_{J}$ if $c_{1}r^{\alpha }\leq Q_{j}^{r}(t)<c_{2}r^{\alpha }$ then
	\begin{equation*}
		\lambda _{j}^{r}-\mu _{j}^{r}x_{j}(t)\geq \mu _{j}2^{-2m -5}\frac{\delta }{J }\text{.}
	\end{equation*}
\end{lemma}
\begin{proof}
	Note that if $\mathcal{E}_{j}^r(t)=1$ then $x_{j}(t)=0$ which, since $r \ge \hat R$, implies on recalling the definition of $\delta$ from Definition \ref{def:workAllocScheme} that
	\begin{equation*}
		\lambda _{j}^{r}-\mu _{j}^{r}x_{j}(t)=\lambda _{j}^{r}\geq \lambda _{j}/2=\mu _{j}\varrho _{j}/2\geq \mu _{j}\delta.
	\end{equation*}
	Thus the result holds in this case.
	We now consider the case $\mathcal{E}_{j}^r(t)=0$ so that $x_{j}(t)=y_{j}(t)$.
	If $j\in \mathbb{N}_{J}\setminus \mathcal{S}^1$ or $j\in \mathcal{S}^1$ and $\hat i(j)\notin \varpi ^{r}(t)$, Definition \ref{def:workAllocScheme} gives
	\begin{equation*}
		y_{j}(t)\leq \varrho _{j}-2^{-2m-3}\frac{\delta }{J }
	\end{equation*}
	which combined with \eqref{def:Rhat} implies
	\begin{eqnarray*}
		\lambda _{j}^{r}-\mu _{j}^{r}x_{j}(t) &\geq& \lambda _{j}^{r}-\mu _{j}^{r}\left( \varrho _{j}-2^{-2m -3}\frac{\delta }{J }\right) = \mu _{j}^{r}\left( \varrho _{j}^{r}-\varrho _{j}\right) +\mu _{j}^{r}2^{-2m -3} \frac{\delta }{J } \\
		&\geq& -\mu _{j}^{r}2^{-2m -6}\frac{\delta }{J }+\mu _{j}2^{-2m -4}\frac{\delta }{J } \ge \mu _{j}2^{-2m -5}\frac{\delta }{J }
	\end{eqnarray*}
	and the result again holds. \ Finally we consider the remaining case, namely $j\in \mathcal{S}^1$, $\mathcal{E}_{j}^r(t)=0$ and $\hat i(j)\in \varpi ^{r}(t)$. We will consider two sub-cases.\newline
	\noindent \textbf{Case 1:} $\zeta _{\hat i(j)}^{0}\cap \sigma ^{r}(t) \neq \emptyset$. Let $l^{\ast }\in \zeta _{\hat i(j)}^{0}\cap \sigma ^{r}(t)$. Then $y_{l^{\ast }}(t)=\varrho _{l^{\ast }}+\delta $ and
	\begin{align*}
		y_{j}(t) = C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t) = C_{\hat i(j)}-y_{l^{\ast }}(t)-\sum_{k=1}^m K_{\hat i(j),\rho (k)}y_{\rho (k)}(t)-\sum_{l\in \zeta _{\hat i(j)}^{0}:l\neq l^{\ast }}y_{l}(t)\text{.}
	\end{align*}
	Furthermore, since $\sum_{k=1}^{m }2^{k-m -2}\leq \frac12$,
	\begin{align*}
		-\sum_{k=1}^{m }K_{\hat i(j),\rho (k)}y_{\rho (k)}(t) \leq -\sum_{k=1}^{m }K_{\hat i(j),\rho (k)}\left( \varrho _{\rho (k)}-2^{k-m -2}\delta \right) \leq -\sum_{k=1}^{m }K_{\hat i(j),\rho (k)}\varrho _{\rho (k)}+\frac{\delta}{2}
	\end{align*}
	and
	\begin{align*}
		-\sum_{l\in \zeta _{\hat i(j)}^{0}:l\neq l^{\ast }}y_{l}(t) \leq -\sum_{l\in \zeta _{\hat i(j)}^{0}:l\neq l^{\ast }}\left( \varrho_{l}-2^{-m -3}\frac{\delta }{J }\right) \leq -\sum_{l\in \zeta _{\hat i(j)}^{0}:l\neq l^{\ast }}\varrho _{l}+2^{-m -3}\delta.
	\end{align*}
	Consequently
	\begin{eqnarray*}
		y_{j}(t) \leq C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}\varrho _{l}-\delta +\frac{\delta}{2} +2^{-m -3}\delta \leq \varrho _{j}-\delta 2^{-m -2}
	\end{eqnarray*}
	which combined with \eqref{def:Rhat}\ gives
	\begin{eqnarray*}
		\lambda _{j}^{r}-\mu _{j}^{r}y_{j}(t) &\geq &\lambda _{j}^{r}-\mu _{j}^{r}\left( \varrho _{j}-2^{-m -2}\delta \right) \geq \mu _{j}^{r}\left( \varrho _{j}^{r}-\varrho _{j}\right) +\mu _{j}^{r}2^{-m -2}\delta \geq -\mu _{j}^{r}2^{-2m -6}\frac{\delta }{J }+\mu _{j}2^{-m -3}\delta \\
		&\geq &-\mu _{j}2^{-2m -5}\frac{\delta }{J }+\mu _{j}2^{-m -3}\delta \geq \mu _{j}2^{-m -4}\delta
	\end{eqnarray*}
	and the result holds. \newline
	\noindent \textbf{Case 2:} $\zeta _{\hat i(j)}^{0}\cap \sigma ^{r}(t)=\emptyset $.
In this case the assumption $\hat i(j)\in \varpi ^{r}(t) $ implies that there exists some $k\in \mathbb{N}_m$ such that $K_{\hat i(j),\rho (k)}=1$ and $\rho (k)\in \sigma ^{r}(t)$. \ Let \begin{equation*} k^{\ast }=\max \{k\in \mathbb{N}_m:K_{\hat i(j),\rho (k)}=1\text{ and }\rho (k)\in \sigma ^{r}(t)\}\text{.} \end{equation*} Consequently $\zeta _{\hat i(j)}^{k^{\ast }}\cap \sigma ^{r}(t)=\emptyset $ and $\rho (k^{\ast })\in \sigma ^{r}(t)$ so $ y_{\rho (k^{\ast })}=\varrho _{\rho (k^{\ast })}+2^{k^{\ast }-m -2}\delta $. Recall that \begin{eqnarray*} y_{j}(t) &=&C_{\hat i(j)}-\sum_{l\neq j:K_{\hat i(j),l}=1}y_{l}(t) \\ &=&C_{\hat i(j)}-y_{\rho (k^{\ast })}-\sum_{k=1}^{k^{\ast }-1}K_{\hat i(j),\rho (k)}y_{\rho (k)}(t)-\sum_{k=k^{\ast }+1}^{m }K_{\hat i(j),\rho (k)}y_{\rho (k)}(t)-\sum_{l\in \zeta _{\hat i(j)}^{0}}K_{\hat i(j),l}y_{l}(t) \text{.} \end{eqnarray*} For the third term on the right side, we have \begin{eqnarray*} -\sum_{k=1}^{k^{\ast }-1}K_{\hat i(j),\rho (k)}y_{\rho (k)}(t) &\leq &-\sum_{k=1}^{k^{\ast }-1}K_{\hat i(j),\rho (k)}\left( \varrho _{\rho (k)}-2^{k-m -2}\delta \right) \\ &\leq &-\sum_{k=1}^{k^{\ast }-1}K_{\hat i(j),\rho (k)}\varrho _{\rho (k)}+\left( 1-2^{-k^{\ast }+1}\right) 2^{k^{\ast }-m -2}\delta \\ &\leq &-\sum_{k=1}^{k^{\ast }-1}K_{\hat i(j),\rho (k)}\varrho _{\rho (k)}+2^{k^{\ast }-m -2}\delta -2^{-m -1}\delta \text{.} \end{eqnarray*} By the definition of $k^{\ast }$ for all $k\in \{k^{\ast }+1,...,m \}$ if $ K_{\hat i(j),\rho (k)}=1$ we have $\rho (k)\notin \sigma ^{r}(t)$ and $\zeta^k_{\hat i(j)}\cap \sigma^r(t) = \emptyset$, consequently $ y_{\rho (k)}(t)=\varrho _{\rho (k)}-2^{-k-m -2}\delta \text{.} $ This gives \begin{align*} -\sum_{k=k^{\ast }+1}^{m }K_{\hat i(j),\rho (k)}y_{\rho (k)}(t) &= -\sum_{k=k^{\ast }+1}^{m }K_{\hat i(j),\rho (k)}\left( \varrho _{\rho (k)}-2^{-k-m -2}\delta \right) \\ &\le -\sum_{k=k^{\ast }+1}^{m }K_{\hat i(j),\rho (k)}\varrho _{\rho (k)}+2^{-k^{\ast }-m -2}\delta (1-2^{-m+k^*})\text{.} \end{align*} Finally, by assumption, $\zeta _{\hat i(j)}^{0}\cap \sigma ^{r}(t)=\emptyset $ and therefore \begin{align*} -\sum_{l\in \zeta _{\hat i(j)}^{0}}K_{\hat i(j),l}y_{l}(t) = -\sum_{l\in \zeta _{\hat i(j)}^{0}}K_{\hat i(j),l}\left( \varrho _{l}-2^{-m -3}\frac{ \delta }{J }\right) \leq -\sum_{l\in \zeta _{\hat i(j)}^{0}}K_{\hat i(j),l}\varrho _{l}+2^{-m -3}\delta \text{.} \end{align*} This gives \begin{eqnarray*} y_{j}(t) &\le C_{\hat i(j)}-\left(\sum_{l\neq j:K_{\hat i(j),l}=1}\varrho _{l}\right)-2^{k^{\ast }-m -2}\delta +2^{k^{\ast }-m -2}\delta -2^{-m -1}\delta +2^{-k^{\ast }-m -2}\delta +2^{-m -3}\delta \\ &\le \varrho _{j}-2^{-m -3}\delta \end{eqnarray*} which combined with \eqref{def:Rhat} implies \begin{eqnarray*} \lambda _{j}^{r}-\mu _{j}^{r}y_{j}(t) &\geq &\lambda _{j}^{r}-\mu _{j}^{r}\left( \varrho _{j}-2^{-m -3}\delta \right) = \mu _{j}^{r}\left( \varrho _{j}^{r}-\varrho _{j}\right) +\mu _{j}^{r}2^{-m -3}\delta \geq -\mu _{j}^{r}2^{-2m -6}\frac{\delta }{J }+\mu _{j}2^{-m -4}\delta \\ &\geq &-\mu _{j}2^{-2m -5}\frac{\delta }{J }+\mu _{j}2^{-m -4}\delta \geq \mu _{j}2^{-m -5}\delta \end{eqnarray*} and completes the proof. \end{proof} The following lemma will be used in the proofs of Propositions \ref{thm:initInef} and \ref{thm:runningInef}. \begin{lemma} \label{lem:lem3_4} (a) Let $t\ge 0$ and $k\in \mathbb{N}_m$ be such that $\zeta _{i'}^{k}\cap \sigma ^{r}(t)\neq \emptyset $ for all $i'\in N_{\rho (k)}$. 
Then for any $i\in N_{\rho (k)}$ satisfying $\sum_{j\in \zeta _{i}^{k}} \mathcal{E}_{j}^r(t)=0$, we have
\begin{equation}
	\sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-x_{j}(t)\right) \leq -2^{-m -2}\delta \text{.}\label{eq:eq636}
\end{equation}
(b) Let $i\in \mathbb{N}_{I}$ and $t\ge 0$ be such that $\zeta _{i}^{0}\cap \sigma ^{r}(t)\neq \emptyset $ and $\sum_{j\in \zeta _{i}^{0}}\mathcal{E}_{j}^{r}(t)=0$. Then, we have
\begin{equation*}
	\sum_{j\in \zeta _{i}^{0}}\left( \varrho _{j}^{r}-x_{j}(t)\right) \leq -2^{-2}\delta \text{.}
\end{equation*}
\end{lemma}
\begin{proof}
	(a) Recall that we assume $r\geq \hat{R}$ and consequently \eqref{def:Rhat} holds. \ Let $k\in \mathbb{N}_m$ and $t\ge 0$ be such that $\zeta _{i'}^{k}\cap \sigma^{r}(t)\neq \emptyset $ for all $i'\in N_{\rho (k)}$. Let $i\in N_{\rho (k)}$ be such that $\sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(t)=0$. We need to show that \eqref{eq:eq636} holds for such an $i$.
	Since $\zeta _{i'}^{k}\cap \sigma ^{r}(t)\neq \emptyset $ for all $i'\in N_{\rho (k)}$, Definition \ref{def:workAllocScheme} gives
	\begin{equation}
		y_{\rho (k)}(t)=\varrho _{\rho (k)}-2^{k-m -2}\delta \text{.}\label{eq:eq751}
	\end{equation}
	Since $\sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(t)=0$, for all $j\in \zeta _{i}^{k}$, $x_{j}(t)=y_{j}(t)$ so to prove \eqref{eq:eq636} it suffices to show
	\begin{equation}
		\sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-y_{j}(t)\right) \leq -2^{-m -2}\delta \text{.}\label{eq:eq752}
	\end{equation}
	Due to the assumption that $\zeta _{i}^{k}\cap \sigma ^{r}(t)\neq \emptyset $ we have $i\in \varpi ^{r}(t)$ and consequently Definition \ref{def:workAllocScheme} gives
	\begin{equation*}
		y_{\check j(i)}(t)=C_{i}-\sum_{j\neq \check j(i):K_{i,j}=1}y_{j}(t)\text{.}
	\end{equation*}
	Therefore
	\begin{eqnarray*}
		\sum_{j\in \zeta _{i}^{k}}y_{j}(t) &=&y_{\check j(i)}(t)+\sum_{j\in \zeta _{i}^{k}:j\neq \check j(i)}y_{j}(t) \\
		&=&C_{i}-\sum_{j\neq \check j(i):K_{i,j}=1}y_{j}(t)+\sum_{j\in \zeta _{i}^{k}:j\neq \check j(i)}y_{j}(t) \\
		&=&C_{i}-y_{\rho (k)}(t)-\sum_{v=1}^{k-1}K_{i,\rho (v)}y_{\rho (v)}(t)\text{.}
	\end{eqnarray*}
	However, from \eqref{eq:eq751} and Definition \ref{def:workAllocScheme}
	\begin{align*}
		C_{i}-y_{\rho (k)}(t)-\sum_{v=1}^{k-1}K_{i,\rho (v)}y_{\rho (v)}(t) &\geq C_{i}-\left( \varrho _{\rho (k)}-2^{k-m -2}\delta \right) -\sum_{v=1}^{k-1}K_{i,\rho (v)}\left( \varrho _{\rho (v)}+2^{v-m -2}\delta \right) \\
		&\geq C_{i}-\sum_{v=1}^{k}K_{i,\rho (v)}\varrho _{\rho (v)}+2^{k-m -2}\delta -2^{k-m -2}\delta +2^{-m -1}\delta \\
		&\geq \sum_{j\in \zeta _{i}^{k}}\varrho _{j}+2^{-m -1}\delta
	\end{align*}
	which gives
	\begin{equation*}
		\sum_{j\in \zeta _{i}^{k}}y_{j}(t)\geq \sum_{j\in \zeta _{i}^{k}}\varrho _{j}+2^{-m -1}\delta .
	\end{equation*}
	Combining this with \eqref{def:Rhat} gives
	\begin{align*}
		\sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-y_{j}(t)\right) = \sum_{j\in \zeta _{i}^{k}}\varrho _{j}^{r}-\sum_{j\in \zeta _{i}^{k}}y_{j}(t) \leq \sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-\varrho _{j}\right) -2^{-m -1}\delta \leq J 2^{-2m -6}\frac{\delta }{J }-2^{-m -1}\delta \leq -2^{-m -2}\delta \text{.}
	\end{align*}
	This proves \eqref{eq:eq752} and completes the proof of part (a).\\
	\noindent (b) Suppose now that $i\in \mathbb{N}_{I}$ and $t\ge 0$ are such that $\zeta _{i}^{0}\cap \sigma ^{r}(t)\neq \emptyset $ and $\sum_{j\in \zeta _{i}^{0}}\mathcal{E}_{j}^r(t)=0$.
From the latter property we have $x_{j}(t)=y_{j}(t)$ for all $j\in \zeta _{i}^{0}$, and because $\zeta _{i}^{0}\cap \sigma ^{r}(t)\neq \emptyset $ there exists $ l^{\ast }\in \zeta _{i}^{0}$ such that $l^{\ast }\in \sigma ^{r}(t)$. \ From Definition \ref{def:workAllocScheme} $y_{l^{\ast }}(t)=\varrho _{l^{\ast }}+\delta$ and \begin{align*} \sum_{j\in \zeta _{i}^{0}}y_{j}(t) &= y_{l^{\ast }}(t)+\sum_{j\in \zeta _{i}^{0}:j\neq l^{\ast }}y_{j}(t) \geq \varrho _{l^{\ast }}+\delta +\sum_{j\in \zeta _{i}^{0}:j\neq l^{\ast }}\left( \varrho _{j}-2^{-m -3}\frac{\delta }{J }\right) \\ &\geq \sum_{j\in \zeta _{i}^{0}}\varrho _{j}+\delta -2^{-m -3}\delta \geq \sum_{j\in \zeta _{i}^{0}}\varrho _{j}+\frac{\delta}{2}\text{.} \end{align*} This combined with \eqref{def:Rhat} gives \begin{align*} \sum_{j\in \zeta _{i}^{0}}\left( \varrho _{j}^{r}-x_{j}(t)\right) &= \sum_{j\in \zeta _{i}^{0}}\varrho _{j}^{r}-\sum_{j\in \zeta _{i}^{0}}y_{j}(t) \leq \sum_{j\in \zeta _{i}^{0}}\varrho _{j}^{r}-\sum_{j\in \zeta _{i}^{0}}\varrho _{j}-\frac{\delta}{2} \leq \sum_{j\in \zeta _{i}^{0}}\left( \varrho _{j}^{r}-\varrho _{j}\right) -\frac{\delta}{2} \\ &\leq J 2^{-2m -6}\frac{\delta }{J }-\frac{\delta}{2} \leq -2^{-2}\delta \text{.} \end{align*} This completes the proof of (b). \end{proof} \section{\protect Large Deviation Estimates} \label{sec:ldest} Recall the allocation scheme $x(\cdot)$ given by Definition \ref{def:workAllocScheme} and define processes $Q^r, B^r, T^r$ associated with this allocation scheme with $\dot B^r(t)=x(t)$, $t\ge 0$, as in Section \ref{sec:backg}. Also recall the other associated processes as defined in \eqref{eq:eq935} --\eqref{eq:eq939}. Note that the allocation scheme depends on a parameter $\alpha \in (0, 1/2)$ and $c_1, c_2\in (0,\infty)$. Let $ X^{r}(t)=\left( Q^{r}(t),\mathcal{E}^{r}(t)\right) $ and let \begin{equation} \label{eq:eqhatxrt} \hat{X}^{r}(t) =\left( \hat{Q}^{r}(t),\mathcal{E}^{r}(r^{2}t)\right) =\left( Q^{r}(r^{2}t)/r,\mathcal{E}^{r}(r^{2}t)\right),\; t\ge 0. \end{equation} Note that although $\hat Q^r$ is not Markovian, the pair $\hat{X}^{r}$ defines a strong Markov process with state space $\mathcal{S}^r \doteq (\mathbb{R}_+ \cap \frac{1}{r}\mathbb{N}_0)^J \times \{0,1\}^J$. Expectations of various functionals of the Markov process $\hat{X}^{r}$ when $\hat{X}^{r}(0)=x$ will be denoted as $E_x$ and the associated probabilities by $P_x$. The following theorem is a key step in estimating the idleness terms in state dynamics. \begin{theorem} \label{thm:IdleTimeExp}For any $\epsilon \in(0,\infty)$ and $j\in \mathbb{N}_{J}$ there exist $\hat{B}_{1},\hat{B}_{2},\hat{B}_{3},\hat{B}_{4},R \in (0,\infty) $ such that for all $r\geq R$, $t\geq 1$ and $x\in \mathcal{S}^r$ we have \begin{equation} P_x\left( \int_{0}^{tr^{1/2}}\mathit{I}_{\{\mathcal{E}_{j}^r(s)=1\}}ds\geq \epsilon r^{1/4+\alpha /2}t\right) \leq \hat{B}_{1}e^{-r^{1/4+\alpha /2}t \hat{B}_{2}}+ \left( 1+\frac{\hat{B}_{3}}{r^{1/4+\alpha /2}}\right) ^{-\hat{B}_{4}r^{1/2}t} \label{eq:rootIdleTimeResult} \end{equation} and \begin{equation} P_x\left( \int_{0}^{tr^{2}}\mathit{I}_{\{\mathcal{E}_{j}^r(s)=1\}}ds\geq \epsilon rt\right) \leq \hat{B}_{1}e^{-rt\hat{B}_{2}}+ \left( 1+\frac{ \hat{B}_{3}}{r^{1+\alpha }}\right) ^{-\hat{B}_{4}r^{2}t}\text{.} \label{eq:squareIdleTimeResult} \end{equation} \end{theorem} \begin{proof} Let $j\in \mathbb{N}_{J}$, $x \in \mathcal{S}^r$ and $\epsilon >0$ be arbitrary. Recall $c_1, c_2$ from Section \ref{sec:secallostra}. 
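In outline, the argument proceeds as follows. The time horizon is decomposed according to successive down- and up-crossings of $Q_{j}^{r}$ between the levels $\frac{c_{1}+c_{2}}{2}r^{\alpha }$ and $c_{2}r^{\alpha }$, introduced below; the number of such crossings is controlled by the number of type $j$ arrivals; the probability that a given down-crossing excursion actually reaches the level $c_{1}r^{\alpha }$ (which is necessary for $\mathcal{E}_{j}^{r}$ to take the value one on that excursion) is shown to be exponentially small in $r^{\alpha }$; and on the excursions where this does happen, the time spent with $\mathcal{E}_{j}^{r}=1$ is bounded by the time the arrival process needs to bring the queue length back up to $c_{2}r^{\alpha }$.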
\ Define \begin{equation*} \tau _{0}^{r,j}\doteq\inf \left\{ s\geq 0:Q_{j}^{r}(s)\geq c_{2}r^{\alpha }\right\} , \end{equation*} \begin{equation*} \tau _{2l-1}^{r,j}\doteq\inf \left\{ s\geq \tau _{2l-2}^{r,j}:Q_{j}^{r}(s)<r^{\alpha }\frac{c_{2}+c_{1}}{2}\right\} , \end{equation*} and \begin{equation*} \tau _{2l}^{r,j}\doteq \inf \left\{ s\geq \tau _{2l-1}^{r}:Q_{j}^{r}(s)\geq c_{2}r^{\alpha }\right\} \end{equation*} for all $l\geq 1$. Recall the functions $\mathcal{E}_j$ introduced in Definition \ref{def:workAllocScheme}. \ Define the indicator functions \[ \theta _{l}^{r,j} \doteq \left\{ \begin{array}{cc} 1, & \text{ if }\mathcal{E}_{j}^r(s)=1\text{ for some }s\in \left( \tau _{2l-1}^{r,j},\tau _{2l}^{r,j}\right] \\ \ & \\ 0, & \text{ otherwise.} \end{array} \right. \] For $t>0$ let \begin{equation} \eta _{t}^{r,j}=\max \left\{ l:\tau _{2l-1}^{r,j}\leq tr^{1/2}\right\},\; \hat{\eta}_{t}^{r,j}=\max \left\{ l:\tau _{2l-1}^{r,j}\leq tr^{2}\right\} \label{eq:etaDef} \end{equation} and $N_{k}^{r,j}=\sum_{l=1}^{k}\theta _{l}^{r,j}$. Consider the events, \begin{equation*} \mathcal{B}_{1}^{r,j}=\left\{ \eta _{t}^{r,j}\leq 2\lambda _{j}^{r}r^{1/2}t\right\} \text{,}\;\; \mathcal{\hat{B}}_{1}^{r,j}=\left\{ \hat{\eta} _{t}^{r,j}\leq 2\lambda _{j}^{r}r^{2}t\right\} \text{,} \end{equation*} \begin{equation*} \mathcal{B}_{2}^{r,j}=\left\{ N_{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil }^{r,j}\leq \frac{\lambda _{j}^{r}\epsilon }{ 2(c_{2}-c_{1})}r^{1/4-\alpha /2}t\right\}, \; \mathcal{\hat{B}}_{2}^{r,j}=\left\{ N_{\left\lceil 2\lambda _{j}^{r}tr^{2}\right\rceil }^{r,j}\leq \frac{\lambda _{j}^{r}\epsilon }{ 2(c_{2}-c_{1})}r^{1-\alpha }t\right\} \text{.} \end{equation*} Let $$\mathcal{C}^r \doteq \left\{\int_{0}^{r^{1/2}t}\mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds\geq \epsilon r^{1/4+\alpha /2}t\right\},\; \hat{\mathcal{C}}^r \doteq \left\{\int_{0}^{r^{2}t}\mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds\geq \epsilon rt\right\}.$$ Then \begin{equation} P\left( \mathcal{C}^r\right) \leq P\left( (\mathcal{B}_{1}^{r,j})^c\right) +P\left( (\mathcal{B}_{2}^{r,j})^c\right) + P\left( \mathcal{B}_{1}^{r,j}\cap \mathcal{B}_{2}^{r,j}\cap \mathcal{C}^r\right) \label{eq:mainIneqRoot} \end{equation} and \begin{equation} P\left( \hat{\mathcal{C}}^r\right) \leq P\left( (\hat{\mathcal{B}}_{1}^{r,j})^c\right) +P\left( (\hat{\mathcal{B}}_{2}^{r,j})^c\right) + P\left( \hat{\mathcal{B}}_{1}^{r,j}\cap \hat{\mathcal{B}}_{2}^{r,j}\cap \hat{\mathcal{C}}^r\right). \label{eq:mainIneqSquare} \end{equation} Noting that each occurrence of $\tau _{2l-1}^{r,j}$ requires an arrival of a job of type $j$, we have $$ P\left( ({\mathcal{B}}_{1}^{r,j})^c\right) = P\left(\eta _{t}^{r,j} > 2\lambda _{j}^{r}r^{1/2}t\right) \le P\left(A_j^r(tr^{1/2}) \ge 2\lambda _{j}^{r}r^{1/2}t\right).$$ Similarly, $$ P\left( (\hat{\mathcal{B}}_{1}^{r,j})^c\right) \le P\left(A_j^r(tr^{2}) \ge 2\lambda _{j}^{r}r^{2}t\right).$$ Thus from the first inequality in Theorem \ref{thm:LDP} in Appendix we can find $R_1 \in (0,\infty)$ and $\kappa_1, \kappa_2 \in (0,\infty)$ such that for all $r \ge R_1$, $t\ge 1$ and $j \in \mathbb{N}_J$ \begin{equation} P\left( ({\mathcal{B}}_{1}^{r,j})^c\right) \le \kappa_1 e^{-tr^{1/2}\kappa_2}, \; P\left( (\hat{\mathcal{B}}_{1}^{r,j})^c\right) \le \kappa_1 e^{-tr^{2}\kappa_2}. \label{eq:firstTermRoot} \end{equation} We now estimate $P\left( ({\mathcal{B}}_{2}^{r,j})^c\right)$, $P\left( (\hat{\mathcal{B}}_{2}^{r,j})^c\right)$. Note that the $\left\{ \theta _{l}^{r,j}\right\} _{l=1}^{\infty }$ are i.i.d. 
Bernoulli with parameter $p(r)$ where \begin{equation*} p(r) = P(\theta _{l}^{r,j}=1)=P\left( Q_{j}^{r}(\varsigma _{l}^{r,j})< c_{1}r^{\alpha }\right) \end{equation*} and \begin{equation} \varsigma _{l}^{r,j}\doteq \inf \left\{ s\geq \tau _{2l-1}^{r,j}:Q_{j}^{r}(s)< c_{1}r^{\alpha }\text{ or }Q_{j}^{r}(s)\geq c_{2}r^{\alpha }\right\} \text{.} \label{eq:bndryHitTime} \end{equation} The probability $p(r)$ can be estimated as follows. Note that from Lemma \ref{lem:posDriftCond}, for $\tau _{2l-1}^{r,j}\leq s<\varsigma _{l}^{r,j}$ \begin{equation}\label{eq:eq1023} \lambda _{j}^{r}-\mu _{j}^{r}x_{j}(s)\geq \mu _{j}\kappa \end{equation} where $ \kappa \doteq 2^{-2m -5}\frac{\delta }{ J }$. Letting $\bar C = \max_i\{C_i\}$ and $d_j \doteq (c_2-c_1)/(\mu_j\kappa)$, define \begin{align*} \mathcal{A}_{l}^{r,j} &=\Bigg\{ \sup_{0\leq s\leq d_jr^{\alpha } }\left\vert A_{j}^{r}(\tau _{2l-1}^{r,j}+s)-A_{j}^{r}(\tau _{2l-1}^{r,j})-\lambda _{j}^{r}s\right\vert \\ &+\sup_{0\leq s\leq \bar C d_jr^{\alpha} }\left\vert S_{j}^{r}(B_{j}^{r}(\tau _{2l-1}^{r,j})+s)-S_{j}^{r}(B_{j}^{r}(\tau _{2l-1}^{r,j}))-\mu _{j}^{r}s\right\vert \geq \frac{\left( c_{2}-c_{1}\right) r^{\alpha }}{4}\Bigg\}. \end{align*} From Theorem \ref{thm:LDP} and the strong Markov property there exist $\kappa_3, \kappa_4 \in (0, \infty)$ and $R_{2}\in \left[ R_{1},\infty \right) $ such that for all $ r\geq R_{2}$, $j \in \mathbb{N}_J$, and $l\geq 1$ \begin{equation*} P\left( \mathcal{A}_{l}^{r,j}\right) \leq \kappa_3 e^{-r^{\alpha }\kappa_4}. \end{equation*} We can also assume without loss of generality that for $r \ge R_2$, $ r^{\alpha }\frac{c_{2}-c_{1}}{4}>2\text{.} $ From \eqref{eq:eq1023}, on the event $\left( \mathcal{A}_{l}^{r,j}\right) ^{c}$, we have for $s\in \left[ \tau _{2l-1}^{r,j},\varsigma _{l}^{r,j}\wedge \left( \tau _{2l-1}^{r,j}+d_j r^{\alpha } \right) \right) $ \begin{eqnarray*} Q_{j}^{r}(s) &\geq &r^{\alpha }\frac{c_{2}+c_{1}}{2}-1+\left( A_{j}^{r}(s)-A_{j}^{r}(\tau _{2l-1}^{r,j})\right) -\left( S_{j}^{r}(B_{j}^{r}(s))-S_{j}^{r}(B_{j}^{r}(\tau _{2l-1}^{r,j}))\right) \\ &\geq &r^{\alpha }\frac{c_{2}+c_{1}}{2}-1-r^{\alpha }\frac{c_{2}-c_{1}}{4} +(s- \tau _{2l-1}^{r,j})\mu _{j}\kappa. \end{eqnarray*} Since the expression on the right side with $s=\tau_{2l-1}^{r,j}+d_j r^{\alpha }$ is larger than $c_2 r^{\alpha}$, we have that on $\left( \mathcal{A}_{l}^{r,j}\right) ^{c}$, $\varsigma _{l}^{r,j}<\tau _{2l-1}^{r,j}+d_j r^{\alpha }$ and so $Q_{j}^{r}(\varsigma _{l}^{r,j}) > c_1r^{\alpha}$. Thus $\left( \mathcal{A}_{l}^{r,j}\right) ^{c}\cap \{\theta_{l}^{r,j}=1\}=\emptyset$ and \begin{equation*} p(r) \le P\left( \mathcal{A}_{l}^{r,j}\right) \leq \kappa_3e^{-r^{\alpha }\kappa_4}\text{.} \end{equation*} Choose $R_{3}\in \left[ R_{2},\infty \right) $ such that for all $r\geq R_{3}$ we have \begin{equation} \epsilon /[10(c_{2}-c_{1})r^{1+\alpha }]\geq 2p(r)\text{,}\;\; \epsilon /[5(c_{2}-c_{1})r^{1/4+\alpha /2}]\leq 1/2\text{,}\;\; \left( 2\lambda _{j}^{r}r^{1/2}+1\right) /5\leq \lambda _{j}^{r}r^{1/2}/2. \label{eq:greaterThanPr} \end{equation} In particular, from the third inequality, for all $t\geq 1$, \begin{equation} \left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil \left( \epsilon /[5(c_{2}-c_{1})r^{1/4+\alpha /2}]\right) \leq \lambda _{j}^{r}tr^{1/4-\alpha /2}\epsilon /[2(c_{2}-c_{1})] \label{eq:rootIdleTimeProofIneq} \end{equation} and \begin{equation} \left\lceil 2\lambda _{j}^{r}tr^{2}\right\rceil \left( \epsilon /[5(c_{2}-c_{1})r^{1+\alpha }]\right) \leq \lambda _{j}^{r}tr^{1-\alpha }\epsilon/[2(c_{2}-c_{1})].
\label{eq:squareIdleTimeProofIneq} \end{equation} Note that if $Z\sim \mbox{Bin}(L,p)$ then, for all $u>0$ $$P(Z \ge u) \le (1+ p(e-1))^L e^{-u}.$$ Thus we have \begin{eqnarray*} P\left( N_{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil }^{r}\geq \frac{ \lambda _{j}^{r}\epsilon }{2(c_{2}-c_{1})}r^{1/4-\alpha /2}t\right) &\leq &e^{-\frac{\lambda _{j}^{r}\epsilon }{2(c_{2}-c_{1})}r^{1/4-\alpha /2}t}\left( 1+p(r)(e^{1}-1)\right) ^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil } \\ &\leq &\left( \frac{1+2p(r)}{e^{\epsilon /5(c_{2}-c_{1})r^{1/4+\alpha /2}}}\right) ^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil }\text{.} \end{eqnarray*} where the second line uses (\ref{eq:rootIdleTimeProofIneq}) and the fact that if for positive $a,b,c,d$, $ab\le c$, then $$e^{-c}(1+ d(e-1))^b \le \left(\frac{1+2d}{e^a}\right)^b.$$ For all $ r\geq R_{3}$ we have \begin{align} \left( \frac{1+2p(r)}{e^{\epsilon /[5(c_{2}-c_{1})r^{1/4+\alpha /2}]}} \right) ^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil } &\leq \left( \frac{1+\epsilon /[10(c_{2}-c_{1})r^{1/4+\alpha /2}]}{1+\epsilon /[5(c_{2}-c_{1})r^{1/4+\alpha /2}]} \right) ^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil } \notag \\ &\leq \left( \frac{1}{1+4\epsilon /[50(c_{2}-c_{1})r^{1/4+\alpha /2}]}\right) ^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil } \notag \\ &\leq \left( 1+\frac{4\epsilon /[50(c_{2}-c_{1})]}{r^{1/4+\alpha /2}} \right)^{-\lambda _{j}r^{1/2}t} \label{eq:secondTermRoot} \end{align} where the first line uses the inequality $e^x \ge 1+x$ and the first bound in (\ref{eq:greaterThanPr}), the second uses the second bound in (\ref{eq:greaterThanPr}) along with the inequality $(1+x)/(1+2x) \le 5/(5+4x)$ for $x \in [0,1/4]$, and the third uses \eqref{def:Rhat} to bound $\lambda _{j}^{r}$ by $\lambda _{j}$. Thus we have shown \begin{equation} \label{eq:eq1121} P\left( ({\mathcal{B}}_{2}^{r,j})^c\right) \le \left( 1+\frac{\hat{B}_{3}}{r^{1/4+\alpha /2}}\right) ^{-\hat{B}_{4}r^{1/2}t} \end{equation} where $\hat B_3 = 4\epsilon /[50(c_{2}-c_{1})]$ and $\hat B_4=1$. A similar calculation shows that \begin{equation} \label{eq:eq1125} P\left( (\hat{\mathcal{B}}_{2}^{r,j})^c\right) \le \left( 1+\frac{\hat{B}_{3}}{r^{1+\alpha}}\right) ^{-\hat{B}_{4}r^{2}t} \end{equation} Finally we estimate the third probability on the right sides of \eqref{eq:mainIneqRoot} and \eqref{eq:mainIneqSquare}. Note that \begin{equation*} \int_{0}^{tr^{1/2}}\mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds\leq \int_{0}^{\tau _{0}^{r,j}}\mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds+\sum_{l=1}^{\eta _{t}^{r,j}}\int_{\tau _{2l-1}^{r,j}}^{\tau _{2l}^{r,j}} \mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds \end{equation*} From (\ref{eq:bndryHitTime}) we see that \begin{equation}\label{eq:eq1136} \int_{\tau _{2l-1}^{r,j}}^{\tau _{2l}^{r,j}}\mathit{I}_{\left\{ \mathcal{E} _{j}(s)=1\right\} }ds=\tau _{2l}^{r,j}-\varsigma _{l}^{r,j}. \end{equation} Indeed, if $\theta _{l}^{r,j}=0$ then $\varsigma _{l}^{r,j}=\tau _{2l}^{r,j}$ and the integral on the left side is $0$. Also, if $\theta _{l}^{r,j}=1$ then $Q_{j}^{r}(\varsigma _{l}^{r,j})=\left\lceil c_{1}r^{\alpha }\right\rceil -1$, $\varsigma _{l}^{r,j}<\tau _{2l}^{r,j}$ and $\mathcal{E}_j(s) =1$ for all $s \in [\varsigma _{l}^{r,j}, \tau _{2l}^{r,j}]$, giving once more the identity in \eqref{eq:eq1136}. 
In the latter case we also have the representation \begin{equation} \tau _{2l}^{r,j}-\varsigma _{l}^{r,j} =\inf \left\{ s\geq 0:A_{j}^{r}(\varsigma _{l}^{r,j}+s)-A_{j}^{r}(\varsigma _{l}^{r,j})\geq \left\lceil c_{2}r^{\alpha }\right\rceil -\left\lceil c_{1}r^{\alpha }\right\rceil +1\right\} \text{.} \label{eq:idleTimeDefGen} \end{equation} Similarly, if we define \begin{equation*} \varsigma _{0}^{r,j}=\inf \left\{ s\geq 0:Q_{j}^{r}(s)\leq c_{1}r^{\alpha } \text{ or }Q_{j}^{r}(s)\geq c_{2}r^{\alpha }\right\} \end{equation*} then \begin{equation*} \int_{0}^{\tau _{0}^{r,j}}\mathit{I}_{\left\{ \mathcal{E}_{j}^r(s)=1\right\} }ds=\tau _{0}^{r,j}-\varsigma _{0}^{r,j} \end{equation*} where if $\varsigma _{0}^{r,j}<\tau _{0}^{r,j}$ we have \begin{equation} \tau _{0}^{r,j}-\varsigma _{0}^{r,j}=\inf \left\{ s\geq 0:A_{j}^{r}(\varsigma _{0}^{r,j}+s)-A_{j}^{r}(\varsigma _{0}^{r,j})\geq \left\lceil c_{2}r^{\alpha }\right\rceil -\left\lceil c_{1}r^{\alpha }\right\rceil +1\right\} \text{.} \label{eq:idleTimeDefInit} \end{equation} Consequently, since on $\mathcal{B}_{1}^{r,j}$, $\eta _{t}^{r,j} \le 2\lambda _{j}^{r}r^{1/2}t$, by taking $r$ suitably large \begin{align*} & P\left( \mathcal{B}_{1}^{r,j}\cap \mathcal{B}_{2}^{r,j}\cap \mathcal{C}^r \right) \\ & \quad \leq P\left( \mathcal{B}_{2}^{r,j}\cap \left\{ \tau _{0}^{r,j}-\varsigma _{0}^{r,j}+\sum_{l=1}^{\left\lceil 2\lambda _{j}^{r}tr^{1/2}\right\rceil }\left( \tau _{2l}^{r,j}-\varsigma _{l}^{r,j}\right) \geq \epsilon r^{1/4+\alpha /2}t\right\} \right) \\ & \quad \leq P\left( \inf \left\{ s\geq 0:\check{A}_{j}^{r}(s)\geq \left( \frac{3\lambda _{j}^{r}\epsilon }{4}r^{1/4+\alpha /2}t\right) \right\} \geq \epsilon r^{1/4+\alpha /2}t\right) \\ & \quad \leq P\left( \check{A}_{j}^{r}(\epsilon r^{1/4+\alpha /2}t) \le \frac{3\lambda _{j}^{r}\epsilon }{4}r^{1/4+\alpha /2}t\right) \end{align*} where $\check{A}_{j}^{r}$ is a Poisson process with rate $\lambda _{j}^{r}$ and the second inequality comes from the representations in (\ref{eq:idleTimeDefInit}) and (\ref{eq:idleTimeDefGen}). From Theorem \ref{thm:LDP} there exist $\kappa_5, \kappa_6 \in (0,\infty)$ and $R_{4}\in \left[ R_{3},\infty \right) $ such that for all $r\geq R_{4}$ \begin{align} P\left( \mathcal{B}_{1}^{r,j}\cap \mathcal{B}_{2}^{r,j}\cap \mathcal{C}^r \right) \leq P\left( \sup_{0\leq s\leq \epsilon r^{1/4+\alpha /2}t}\left\vert \check{A}_{j}^{r}(s)-\lambda _{j}^{r}s\right\vert >\frac{ \epsilon }{2}r^{1/4+\alpha /2}t\right) \leq \kappa_5e^{-r^{1/4+\alpha /2}t\kappa_6}. \label{eq:thirdTermRoot} \end{align} A similar calculation shows that \begin{align} P\left( \mathcal{\hat{B}}_{1}^{r,j}\cap \mathcal{\hat{B}}_{2}^{r,j}\cap \hat{\mathcal{C}}^r \right) \leq \kappa_5e^{-rt\kappa_6}\text{.} \label{eq:thirdTermSquare} \end{align} Finally (\ref{eq:firstTermRoot}), (\ref{eq:eq1121}), (\ref{eq:thirdTermRoot}), and (\ref{eq:mainIneqRoot}) prove (\ref{eq:rootIdleTimeResult}) while (\ref{eq:firstTermRoot}), \eqref{eq:eq1125}, \eqref{eq:thirdTermSquare} and \eqref{eq:mainIneqSquare} prove (\ref{eq:squareIdleTimeResult}). This completes the proof. \end{proof} Let $c_3 \doteq \frac{2Jc_{2}}{\min_{j}\mu_{j}}$ and recall that $\bar{C} \doteq \max_{i \in \mathbb{N}_I}\{C_i\}$. Note that if, for a given $s \ge 0$, $W^{r}_{i}(s)>c_{3}r^{\alpha}$ for some $i \in \mathbb{N}_{I}$ then we must have that $Q^r_j(s)\ge c_2 r^{\alpha}$ for some $j \in \mathbb{N}_{J}$ with $K_{ij}=1$, namely $i \in \hat \omega^r(s)$.
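To see this, recall that $W_{i}^{r}(s)=\sum_{j=1}^{J}K_{ij}Q_{j}^{r}(s)/\mu _{j}^{r}$ (cf. the relation $\hat{W}^{r}=KM^{r}\hat{Q}^{r}$ below) and suppose that $Q_{j}^{r}(s)<c_{2}r^{\alpha }$ for every $j$ with $K_{ij}=1$. Then, for all $r$ large enough that $\mu _{j}^{r}\geq \mu _{j}/2$ for every $j$, \begin{equation*} W_{i}^{r}(s)\leq \sum_{j=1}^{J}K_{ij}\frac{c_{2}r^{\alpha }}{\mu _{j}^{r}}\leq \frac{2Jc_{2}}{\min_{j}\mu _{j}}r^{\alpha }=c_{3}r^{\alpha }\text{,} \end{equation*} which contradicts $W_{i}^{r}(s)>c_{3}r^{\alpha }$.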
From Lemma \ref{lem:fullWorkloadCond} it then follows that for such an $s$, if $C_i > \sum_{j=1}^J K_{ij} x_j(s)$, then $\mathcal{E}_j^r(s)\neq 0$ for some $j$ with $K_{ij}=1$. From this it follows that for any $t\ge 0$ $$\int_0^t \mathit{I}_{\{W^{r}_{i}(s)>c_{3}r^{\alpha}\}}(s)dI^{r}_{i}(s) \le \bar{C} \sum_{j: K_{ij}=1}\int_0^t \mathit{I}_{ \{\mathcal{E}_j^r(s)=1\}} ds.$$ This, along with Theorem \ref{thm:IdleTimeExp}, implies that for any $\epsilon >0$ and $i\in \mathbb{N}_{I}$ there exist $\hat{B}_{1},\hat{B}_{2},\hat{B}_{3},\hat{B}_{4},R \in (0,\infty)$ such that for all $r\geq R$, $t\geq 1$ and $x\in \mathcal{S}^r$ we have \begin{equation} P_x\left( \int_{0}^{tr^{1/2}}\mathit{I}_{\{W^{r}_{i}(s)\ge c_{3}r^{\alpha}\}}(s)dI^{r}_{i}(s)\geq \epsilon r^{1/4+\alpha /2}t\right) \leq \hat{B}_{1}e^{-r^{1/4+\alpha /2}t \hat{B}_{2}}+\left( 1+\frac{\hat{B}_{3}}{r^{1/4+\alpha /2}}\right) ^{-\hat{B}_{4} r^{1/2} t}\label{eq:eqpxtr2} \end{equation} and \begin{equation} P_x\left( \int_{0}^{tr^{2}}\mathit{I}_{\{W^{r}_{i}(s)\ge c_{3}r^{\alpha}\}}(s)dI^{r}_{i}(s)\geq \epsilon rt\right) \leq \hat{B}_{1}e^{-rt\hat{B}_{2}}+\left( 1+\frac{ \hat{B}_{3}}{r^{1+\alpha }}\right) ^{-\hat{B}_{4}r^{2}t}\text{.} \label{eq:squareIdleTimeResultImplication} \end{equation} \subsection{Estimating holding cost through workload cost.} Recall the matrix $M$ introduced in Section \ref{sec:hgi}. Along with the process $\hat W^r = KM^r \hat Q^r$, it will be convenient to also consider the process $\tilde W^r \doteq KM \hat Q^r$. The following is the main result of this section. It says that, under the scheme introduced in Definition \ref{def:workAllocScheme}, the queue lengths are `asymptotically optimal' for the associated workload in a certain sense. This result will be key in showing that under our policy, property (II) of HGI holds asymptotically. \begin{theorem} \label{thm:discCostInefBnd} There exist $B,R \in (0,\infty) $ such that for all $r\geq R$, $x=(q,z) \in \mathcal{S}^r$, $\theta >0$ and $T\ge 1$, we have \begin{equation*} \left\vert E_x\left[ \int_{0}^{\infty }e^{-\theta t} h\cdot \hat{Q} ^{r}(t) dt\right] -E_x\left[ \int_{0}^{\infty }e^{-\theta t}\mathcal{C}\left( \tilde{W}^{r}(t)\right) dt\right] \right\vert \leq Br^{\alpha -1/2} \frac{1+|q|^2}{1-e^{-\theta}} \end{equation*} and \begin{equation*} \left\vert E_x\left[ \frac{1}{T}\int_{0}^{T} h \cdot \hat{Q} ^{r}(t) dt\right] -E_x\left[ \frac{1}{T}\int_{0}^{T} \mathcal{C}\left( \tilde{W}^{r}(t)\right) dt\right] \right\vert \leq Br^{\alpha -1/2} (1+|q|^2)\text{.} \end{equation*} \end{theorem} In order to prove the result we begin with the following two propositions. Recall the sets $\zeta^0_i$, $\zeta_i^k$ from \eqref{eq:eqzetaiz} and \eqref{eq:eqzetaik} and that $c_3 = \frac{2Jc_{2}}{\min_{j}\mu_{j}}$. For $\xi\geq 0$, $i\in \mathbb{N}_{I}$ and $k \in \mathbb{N}_m$ let \begin{equation} \hat{\tau}_{i}^{1}(\xi)\doteq \inf \left\{ t\geq \xi:\sum_{j\in \zeta _{i}^{0}} \frac{Q_{j}^{r}(t)}{\mu _{j}^{r}} <2c_{3}r^{\alpha }\right\},\; \hat{\tau}_{k}^{s}(\xi)\doteq \inf \left\{ t\geq \xi:\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\} <2c_{3} r^{\alpha }\right\}.
\label{eq:defTauS} \end{equation} \begin{proposition} \label{thm:initInef} There exist $R,B\in (0,\infty)$ such that for all $r\geq R$, $i\in \mathbb{N}_{I}$, $x=(q,z) \in \mathcal{S}^r$, and $k\in \mathbb{N}_m$ we have \begin{equation*} \frac{1}{r^{3}}E_x \int_{0}^{\hat{\tau}_{i }^{1}(0)}\sum_{j\in \zeta _{i}^{0}} \frac{Q_{j}^{r}(s)}{\mu _{j}}ds \leq B (1+|q|^2) r^{-1} \end{equation*} and \begin{equation*} \frac{1}{r^{3}}E_x \int_{0}^{\hat{\tau}_{k }^{s}(0)}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}} \frac{Q_{j}^{r}(s)}{\mu _{j}}\right\} ds \leq B(1+|q|^2)r^{-1}. \end{equation*} \end{proposition} \begin{proof} Let $k\in \mathbb{N}_m$ be arbitrary. Note that under $P_x$, $Q^{r}(0)=r\hat{Q}^{r}(0)=rq$ . \ Choose $\check{i}(0)\in N_{\rho (k)}$ such that \begin{equation*} \sum_{j\in \zeta _{\check{i}(0)}^{k}}r \frac{q_{j}}{\mu _{j}}=\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}r\frac{q_{j}}{\mu _{j}}\right\} \end{equation*} and define \begin{equation} d=\sum_{j\in \zeta _{\check{i}(0)}^{k}}\frac{q_{j}}{\mu _{j}} \mbox{ and } \Delta =2^{-m -2}\delta \text{,} \label{eq:defTriangle} \end{equation} where $\delta$ is as in Definition \ref{def:workAllocScheme}. If $rd<2c_{3}r^{\alpha }$ then $\hat{\tau}_{k }^{s}(0)=0$ and the result holds trivially. \ Consider now $rd\ge 2c_{3}r^{\alpha }$ so that $\hat{\tau}_{k }^{s}(0)>0$. We claim that for $t\in \left[ 0,\hat{\tau}_{k }^{s}(0)\right) $ and $i' \in N_{\rho(k)}$ we have $\zeta _{i'}^{k} \cap \sigma^r(t)\neq \emptyset$. To see the claim note that for such $t$, for all $i' \in N_{\rho(k)}$, from the definition of $\hat{\tau}_{k }^{s}(0)$ $$\sum_{j \in \zeta _{i'}^{k}} \frac{Q_j^r(t)}{\mu_j^r} \ge \min_{i \in N_{\rho(k)}} \sum_{j \in \zeta _{i}^{k}} \frac{Q_j^r(t)}{\mu_j^r} \ge 2c_3 r^{\alpha}.$$ Thus, from the definition of $c_3$ there is a $j \in \zeta_{i'}^k$ such that $$Q_j^r(t) \ge \frac{2c_3}{J} r^{\alpha}\mu_j^r \ge \frac{c_3}{J} r^{\alpha}\mu_j \ge c_2r^{\alpha},$$ namely $j\in \sigma^r(t)$. Thus we have $\zeta _{i'}^{k} \cap \sigma^r(t)\neq \emptyset$ proving the claim. From Lemma \ref{lem:lem3_4}(a) we now have that for $i\in N_{\rho (k)}$ and $t\in \left[ 0,\hat{\tau}_{k }^{s}(0)\right) $ such that\ $\sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(t)=0$ \begin{equation} \sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-x_{j}(t)\right) \leq -2^{-m -2}\delta =-\Delta \text{.}\label{eq:eq129} \end{equation} Recall that $\bar{C} = \max_{i} \{C_i\}$. Define for $y\ge 0$, the events \begin{equation*} \mathcal{A}_{y}^{r}=\left\{ \sum_{i\in \mathbb{N}_{I}}\int_{0}^{(2ry/\Delta ) \wedge \hat{\tau}_{k }^{s}(0) }\mathbf{1}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E} _{j}^r(s)>0\right\} }ds\geq \frac{yr}{4(\bar C\vee \Delta)}\right\} \end{equation*} and \begin{eqnarray*} \mathcal{B}_{y}^{r} &=& \bigcup_{j \in \mathbb{N}_J}\left\{ \sup_{0\leq t\leq 2ry/\Delta }\left\vert A_{j}^{r}(t)-t\lambda _{j}^{r}\right\vert +\sup_{0\leq t\leq 2\bar{C}ry/\Delta }\left\vert S_{j}^{r}(t)-t\mu _{j}^{r}\right\vert \geq \frac{y\bar{\mu}_{\min}r}{ 4J}\right\} . \end{eqnarray*} From Theorem \ref{thm:IdleTimeExp} (cf. 
\eqref{eq:rootIdleTimeResult} with $\frac{2r^{1/2}y}{\Delta}$ substituted in for $t$) and Theorem \ref{thm:LDP} there exist $B_{1},B_{2} \in (0,\infty) $ and $R_{1}\in \left[ \hat{R},\infty \right) $ (recall (\eqref{def:Rhat})) such that for all $r\geq R_{1}$ and $y\geq \max \{\frac{\Delta}{2},d,1\}$, \begin{equation*} P\left( \mathcal{A}_{y}^{r}\bigcup \mathcal{B}_{y}^{r}\right) \leq B_{1}e^{-B_{2}y}\text{.} \end{equation*} Also on the event $\left( \mathcal{A}_{y}^{r}\bigcup \mathcal{B} _{y}^{r}\right) ^{c}$ for all $t\in \left[ 0,\hat{\tau}_{k }^{s}(0)\wedge 2ry/\Delta \right) $ we have \begin{eqnarray*} \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}} \frac{Q_{j}^{r}(t)}{\mu_{j}^{r}}\right\} &\leq &\sum_{j\in \zeta _{\check{i}(0)}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}} \\ &\leq &rd+\sum_{j\in \zeta _{\check{i}(0)}^{k}} \frac{A_{j}^{r}(t)}{\mu _{j}^{r}}-\sum_{j\in \zeta _{\check{i}(0)}^{k}} \frac{S_{j}^{r}(B_{j}(t))}{\mu _{j}^{r}} \\ &\leq &rd+\sum_{j\in \zeta _{\check{i}(0)}^{k}}\frac{y\bar{\mu}_{\min}r}{ 4 J \mu _{j}^{r}}+\sum_{j\in \zeta _{\check{i}(0)}^{k}}\left( t\varrho _{j}^{r}-B_{j}(t)\right) \end{eqnarray*} where the last line follows from the definition of the event $\mathcal{B}_{y}^{r}$. Next note that $$B_j(t) = \int_0^t x_j(s) ds = \int_0^t x_j(s) \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)=0\right\} } ds + \int_0^t x_j(s) \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)>0\right\} } ds.$$ From \eqref{eq:eq129} , on the above event, for $t\in \left[ 0,\hat{\tau}_{k }^{s}(0)\wedge 2ry/\Delta \right)$ $$\sum_{j\in \zeta _{\check{i}(0)}^{k}}\int_0^t x_j(s) \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)=0\right\} } ds \ge \int_0^t (\sum_{j\in \zeta _{\check{i}(0)}^{k}} \varrho _{j}^{r})\mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)=0\right\} } ds + \Delta \int_0^t \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)=0\right\} } ds$$ Thus, recalling the definition of $\mathcal{A}_{y}^{r}$ \begin{align*} \sum_{j\in \zeta _{\check{i}(0)}^{k}}\left( t\varrho _{j}^{r}-B_{j}(t)\right) &\le \int_0^t \sum_{j\in \zeta _{\check{i}(0)}^{k}} (\varrho _{j}^{r}-x_j(s)) \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)\neq0\right\} } ds - \Delta t +\Delta \int_0^t \mathbf{1}_{\left\{ \sum_{j\in \zeta _{\check{i}(0)}^{k}}\mathcal{E}_{j}^r(s)\neq0\right\} } ds\\ &\le \frac{C_{\check{i}(0)}yr}{4\bar C}-\Delta t + \Delta \frac{yr}{4\Delta}\\ &\le r\frac{y}{2}-\Delta t\text{} \end{align*} and consequently on the event $\left( \mathcal{A}_{y}^{r}\bigcup \mathcal{B} _{y}^{r}\right) ^{c}$ for all $t\in \left[ 0,\hat{\tau}_{k }^{s}(0)\wedge 2ry/\Delta \right) $ we have (since $y\ge d$) \begin{eqnarray*} \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}} \frac{Q_{j}^{r}(t)}{\mu_{j}^{r}}\right\} &\leq r(d+y)-\Delta t \le 2ry - \Delta t . 
\end{eqnarray*} Since $2ry - \Delta t=0$ at $t = 2ry/\Delta$, we must have $\hat{\tau}_{k}^{s}(0)< 2ry/\Delta$, so that on the above event \begin{equation*} \int_{0}^{\hat{\tau}_{k }^{s}(0)}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\} dt\leq \frac{4}{\Delta } r^{2}y^{2}\text{.} \end{equation*} This gives for $r\geq R_{1}$ and $y\geq \max \{d,1\}$ \begin{equation*} P_x\left( \int_{0}^{\hat{\tau}_{k }^{s}(0)}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\} dt> \frac{4}{ \Delta }r^{2}y^{2}\right) \leq B_{1}e^{-B_{2}y}\text{.} \end{equation*} A straightforward calculation now shows that \begin{eqnarray*} E_x\left[ \int_{0}^{\hat{\tau}_{k }^{s}(0)}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\} dt\right] &\leq & r^{2}B_{3} (1+ |q|^2) \end{eqnarray*} where $B_3$ depends only on $B_1, B_2$ and $\delta$. This proves the second statement in the proposition. \ The proof of the first statement follows in a very similar manner and is omitted. \end{proof} The following proposition will be the second ingredient in the proof of Theorem \ref{thm:discCostInefBnd}. \begin{proposition} \label{thm:runningInef}There exist $H,R\in(0,\infty) $ such that for all $r\geq R$, $i\in \mathbb{N}_{I}$, $k\in\mathbb{N}_{m}$, and $0\leq T_{1}<T_{2}<\infty $ satisfying $T_{2}-T_{1}\geq 1$ we have \begin{equation*} \frac{1}{r^{3}}E\left[ \int_{\hat{\tau}_{i}^{1}(r^{2}T_{1})}^{\hat{\tau} _{i}^{1}(r^{2}T_{2})}\sum_{j\in \zeta _{i}^{0}}\frac{Q_{j}^{r}(s)}{\mu _{j}^{r}}ds \right] \leq (T_{2}-T_{1})H r^{\alpha -1/2} \end{equation*} and \begin{equation*} \frac{1}{r^{3}}E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}T_{1})}^{\hat{\tau} _{k}^{s}(r^{2}T_{2})}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(s)}{\mu _{j}^{r}}\right\} ds\right] \leq (T_{2}-T_{1})Hr^{\alpha -1/2}\text{.} \end{equation*} \end{proposition} \begin{proof} Once again we only prove the second statement since the proof of the first statement is similar. Many steps in the proof are similar to those in Proposition \ref{thm:initInef}, but we give details to keep the proof self-contained. Let $k\in \mathbb{N}_{m}$ be arbitrary. \ Recall that $\bar \mu_{\min}=\min_{j\in \mathbb{N}_{J}}\{\mu _{j}\}$ and $\bar C=\max_{i\in \mathbb{N}_{I}}\{C_{i}\}$. Also, for $k \in \mathbb{N}_{m}$, let \begin{equation} Z^r_k(t) \doteq \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\}.\label{eq:eqzrkt} \end{equation} Define the stopping times $\tau_0 \doteq r^2T_1$ and, for $l\in \mathbb{N}$, \begin{equation*} \tau _{2l-1}\doteq \inf \left\{ t\geq \tau _{2l-2}: Z^r_k(t) \geq 2c_{3}r^{\alpha }\right\},\; \tau _{2l}=\inf \left\{ t\geq \tau _{2l-1}:Z^r_k(t) <2c_{3}r^{\alpha }\right\} \text{.} \end{equation*} Let $\hat{l}\doteq \min \{l\geq 0:\tau _{2l+1}>r^{2}T_{2}\}$. Then recalling the definition of $\hat \tau_k^s(\xi)$ from (\ref{eq:defTauS}), $\hat{\tau}_{k}^{s}(r^{2}T_{2})=r^{2}T_{2}\vee \tau _{2\hat{l}}$.
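Indeed, if $\tau _{2\hat{l}}\leq r^{2}T_{2}$ then, since $\tau _{2\hat{l}+1}>r^{2}T_{2}$, we have $Z_{k}^{r}(r^{2}T_{2})<2c_{3}r^{\alpha }$ and hence $\hat{\tau}_{k}^{s}(r^{2}T_{2})=r^{2}T_{2}$; if instead $\tau _{2\hat{l}}>r^{2}T_{2}$ then $\hat{l}\geq 1$, $\tau _{2\hat{l}-1}\leq r^{2}T_{2}$ and $Z_{k}^{r}(s)\geq 2c_{3}r^{\alpha }$ for all $s\in \lbrack r^{2}T_{2},\tau _{2\hat{l}})$, so that $\hat{\tau}_{k}^{s}(r^{2}T_{2})=\tau _{2\hat{l}}$.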
\ Consequently we can write \begin{align} E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}T_{1})}^{\hat{\tau} _{k}^{s}(r^{2}T_{2})}Z^r_k(s) ds\right] &\leq E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}T_{1})}^{\tau _{1}\wedge r^{2}T_{2}}Z^r_k(s) ds\right] + E\left[ \sum_{l=1}^{\infty }\mathit{I}_{\{\tau _{2l}\leq r^{2}T_{2}\}}\int_{\tau _{2l}}^{\tau _{2l+1}\wedge r^{2}T_{2}}Z^r_k(s) ds\right] \notag\\ &\quad+E\left[ \sum_{l=0}^{\infty }\mathit{I}_{\{\tau _{2l+1}\leq r^{2}T_{2}\}}\int_{\tau _{2l+1}}^{\tau _{2l+2}}Z^r_k(s) ds \right] \text{.} \label{eq:runCostDecomp} \end{align} By definition, for all $s\in \left[ \hat{\tau}_{k}^{s}(r^{2}T_{1}),\tau _{1}\wedge r^{2}T_{2}\right) $ and $s\in \left[ \tau _{2l},\tau _{2l+1}\wedge r^{2}T_{2}\right) $ we have $ Z^r_k(s) \leq 2c_{3}r^{\alpha } $ which gives \begin{equation} E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}T_{1})}^{\tau _{1}\wedge r^{2}T_{2}}Z^r_k(s) ds\right] +E\left[ \sum_{l=1}^{\infty }\mathit{I}_{\{\tau _{2l}\leq r^{2}T_{2}\}}\int_{\tau _{2l}}^{\tau _{2l+1}\wedge r^{2}T_{2}}Z^r_k(s) ds\right] \leq 2c_{3}r^{\alpha +2}(T_{2}-T_{1})\text{.} \label{eq:runCostBndedComp} \end{equation} For all $l\in \mathbb{N}$ let $\check{i}(l)\in N_{\rho (k)}$ satisfy \begin{equation*} \sum_{j\in \zeta _{\check{i}(l)}^{k}}\frac{Q_{j}^{r}(\tau _{2l+1})}{\mu _{j}^{r}}=\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(\tau _{2l+1})}{\mu _{j}^{r}}\right\} = Z^r_k(\tau _{2l+1}) \end{equation*} and note that \begin{equation*} \sum_{j\in \zeta _{\check{i}(l)}^{k}}\frac{Q_{j}^{r}(\tau _{2l+1})}{\mu _{j}^{r}}\leq 2c_{3}r^{\alpha }+\frac{2}{\bar \mu_{\min}}\text{.} \end{equation*} Recall the definition of $\Delta$ in \eqref{eq:defTriangle} and define for $y \in \mathbb{R}_+$ and $l \in \mathbb{N}$, the events \begin{equation*} \mathcal{A}_{l,y}^{r}=\left\{ \sum_{i\in \mathbb{N}_{I}}\int_{\tau _{2l+1}}^{\left( \tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta \right) \wedge \tau _{2l+2}}\mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)>0\right\} }ds\geq \frac{r^{1/4+\alpha /2}}{4\left( \bar C\vee \Delta \right) }y\right\} \end{equation*} and \begin{eqnarray*} \mathcal{B}_{l,y}^{r} &=&\left\{ \sum_{j\in \mathbb{N}_{J}}\sup_{\tau _{2l+1}\leq t\leq \tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta }\left\vert A_{j}^{r}(t)-A_{j}^{r}(\tau _{2l+1})-(t-\tau _{2l+1})\lambda _{j}^{r}\right\vert \right. \\ &&\left. +\sum_{j\in \mathbb{N}_{J}}\sup_{\tau _{2l+1}\leq t\leq \tau _{2l+1}+\bar C2r^{1/4+\alpha /2}y/\Delta }\left\vert S_{j}^{r}(t)-S_{j}^{r}(\tau _{2l+1})-(t-\tau _{2l+1})\mu _{j}^{r}\right\vert \geq \frac{\bar \mu_{\min}r^{1/4+\alpha /2}}{8}y\right\} \end{eqnarray*} From the strong Markov property, Theorems \ref{thm:IdleTimeExp} (cf. \eqref{thm:IdleTimeExp}) and \ref{thm:LDP} there exist $B_{1},B_{2} \in (0,\infty) $ and $R_{1}\in \left[ \hat{R},\infty \right) $ such that for all $r\geq R_{1}$, $y\geq \Delta /2$, and $l\in \mathbb{N}$ we have \begin{equation} r^{1/4+\alpha /2}\Delta /2>\frac{2}{\bar \mu_{\min}} \mbox{ and } P\left( \mathcal{A}_{l,y}^{r}\bigcup \mathcal{B}_{l,y}^{r}\right) \leq B_{1}e^{-B_{2}y}\text{.} \label{eq:triangleRcond} \end{equation} We claim that for $t\in \left[ \tau_{2l+1},\tau_{2l+2} \right) $ we have $\zeta _{i'}^{k} \cap \sigma^r(t)\neq \emptyset$ for all $i' \in N_{\rho(k)}$. 
To see the claim note that for all $i' \in N_{\rho(k)}$, $\sum_{j \in \zeta _{i'}^{k}} \frac{Q_j^r(t)}{\mu_j^r} \ge \min_{i \in N_{\rho(k)}} \sum_{j \in \zeta _{i}^{k}} \frac{Q_j^r(t)}{\mu_j^r} \ge 2c_3 r^{\alpha}.$ Thus, from the definition of $c_3$ there is a $j \in \zeta_{i'}^{k}$ such that $$Q_j^r(t) \ge \frac{2c_3}{J} r^{\alpha}\mu_j^r \ge \frac{c_3}{J} r^{\alpha}\mu_j \ge c_2r^{\alpha},$$ namely $j\in \sigma^r(t)$. Thus we have $\zeta _{i'}^{k} \cap \sigma^r(t)\neq \emptyset$ proving the claim. From Lemma \ref{lem:lem3_4}(a) for $i\in N_{\rho (k)}$ and $t\in \left[ \tau _{2l+1},\tau _{2l+2}\right) $ such that\ $\sum_{j\in \zeta _{i}^{k}} \mathcal{E}_{j}^r(t)=0$ we now have \begin{equation} \sum_{j\in \zeta _{i}^{k}}\left( \varrho _{j}^{r}-x_{j}(t)\right) \leq -2^{-m -2}\delta =-\Delta \text{ .} \label{eq:arrProcRateDif} \end{equation} Consequently on the event $\left( \mathcal{A}_{l,y}^{r}\bigcup \mathcal{B} _{l,y}^{r}\right) ^{c}$ for all $t\in \left[ \tau _{2l+1},\tau _{2l+2}\wedge \left( \tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta \right) \right) $ we have \begin{eqnarray*} \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}\right\} &\leq &\sum_{j\in \zeta _{\check{i}(l)}^{k}}\frac{Q_{j}^{r}(\tau _{2l+1})}{\mu _{j}^{r}}+\sum_{j\in \zeta _{\check{i}(l)}^{k}}\frac{Q_{j}^{r}(t)}{\mu _{j}^{r}}-\sum_{j\in \zeta _{\check{i}(l)}^{k}}\frac{Q_{j}^{r}(\tau _{2l+1})}{\mu _{j}^{r}} \\ &\leq &2c_{3}r^{\alpha }+\frac{2}{\bar \mu_{\min}}+\sum_{j\in \zeta _{\check{i}(l)}^{k}} \frac{1}{\mu _{j}^{r}}\left[\left( A_{j}^{r}(t)-A_{j}^{r}(\tau _{2l+1})\right) + \left( S_{j}^{r}(B_{j}(t))-S_{j}^{r}(B_{j}(\tau _{2l+1}))\right)\right] \\ &\leq &2c_{3}r^{\alpha }+\frac{2}{\bar \mu_{\min}}+\frac{r^{1/4+\alpha /2}}{4}y+\sum_{j\in \zeta _{ \check{i}(l)}^{k}}\left( (t-\tau _{2l+1})\varrho _{j}^{r}-(B_{j}(t)-B_{j}(\tau _{2l+1}))\right) \end{eqnarray*} where the last line comes from the definition of the event $\mathcal{B} _{l,y}^{r}$. 
\ Note that for all $j\in \mathbb{N}_{J}$ and $t\geq \tau _{2l+1}$ we have \begin{eqnarray*} B_{j}(t)-B_{j}(\tau _{2l+1})&=&\int_{\tau _{2l+1}}^{t}x_{j}(s)ds \\ &=&\int_{\tau _{2l+1}}^{t}x_{j}(s)\mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)>0\right\} }ds+ \int_{\tau _{2l+1}}^{t}x_{j}(s)\mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)=0\right\} }ds\text{.} \end{eqnarray*} From \eqref{eq:arrProcRateDif}, on the above event and for $t\in \left[ \tau _{2l+1},\tau _{2l+2}\wedge \left( \tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta \right) \right]$ \begin{equation} \int_{\tau _{2l+1}}^{t}\sum_{j\in \zeta _{i}^{k}}x_{j}(s)\mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)=0\right\} }ds \geq \int_{\tau _{2l+1}}^{t} \left( \sum_{j\in \zeta_{i}^{k}}\varrho^{r}_{j} \right) \mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)=0\right\} }ds +\Delta\int_{\tau _{2l+1}}^{t}\mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)=0\right\} }ds \end{equation} so that \begin{eqnarray*} \sum_{j\in \zeta _{\check{i}(l)}^{k}}\left( (t-\tau _{2l+1})\varrho _{j}^{r}-(B_{j}(t)-B_{j}(\tau _{2l+1}))\right) &\leq &\int_{\tau _{2l+1}}^{t}\sum_{j\in \zeta _{\check{i}(l)}^{k}}\left( \varrho _{j}^{r}-x_{j}(s)\right) \mathit{I}_{\left\{ \sum_{j\in \zeta _{i}^{k}} \mathcal{E}_{j}^r(s)>0\right\} }ds \\ &&-\Delta (t-\tau _{2l+1})+\Delta \int_{\tau _{2l+1}}^{t}\mathit{I} _{\left\{ \sum_{j\in \zeta _{i}^{k}}\mathcal{E}_{j}^r(s)>0\right\} }ds \\ &\leq &\frac{C_{\check{i}(l)}r^{1/4+\alpha /2}}{4\bar C}y-\Delta (t-\tau _{2l+1})+\frac{r^{1/4+\alpha /2}}{4}y \\ &\leq &\frac{r^{1/4+\alpha /2}}{2}y-\Delta (t-\tau _{2l+1}) \end{eqnarray*} where the second line follows because we are on the set $\left( \mathcal{A}_{l,y}^{r}\right) ^{c}$. Consequently on the event $\left( \mathcal{A}_{l,y}^{r}\bigcup \mathcal{B} _{l,y}^{r}\right) ^{c}$ for $t\in \left[ \tau _{2l+1},\tau _{2l+2}\wedge \left( \tau _{2l+1}+2r^{1/4 +\alpha /2}y/\Delta \right) \right) $ \begin{equation} Z^r_k(t) \leq 2c_{3}r^{\alpha }+\frac{2}{\bar \mu_{\min}}+r^{1/4+\alpha /2}y-\Delta (t-\tau _{2l+1})\text{.} \label{eq:minUpperBnd} \end{equation} The right side of \eqref{eq:minUpperBnd} with $t=\tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta $ equals \begin{eqnarray*} 2c_{3}r^{\alpha }+\frac{2}{\bar \mu_{\min}}+r^{1/4+\alpha /2}y-\Delta (2r^{1/4+\alpha /2}y/\Delta) < 2c_{3}r^{\alpha } \end{eqnarray*} where the inequality is from \eqref{eq:triangleRcond}, and so we must have $\tau _{2l+2}<\tau _{2l+1}+2r^{1/4+\alpha /2}y/\Delta $. This combined with \eqref{eq:minUpperBnd} gives on the event $\left( \mathcal{A}_{l,y}^{r}\bigcup \mathcal{B} _{l,y}^{r}\right) ^{c}$ \begin{equation*} \int_{\tau _{2l+1}}^{\tau _{2l+2}}Z^r_k(s) ds\leq Ky^{2}r^{1/2+\alpha } \end{equation*} for a $K<\infty $ depending only on $c_3, \bar \mu_{\min}$ and $\Delta$. \ Then for $y\geq B_{3}=\max \{2c_{3}+\frac{2}{\bar \mu_{\min}},\frac{\Delta}{2}\}$ we have from \eqref{eq:triangleRcond} \begin{equation*} P_{X^{r}(\tau _{2l+1})}\left( \int_{\tau _{2l+1}}^{\tau _{2l+2}}Z^r_k(t) dt > Ky^{2}r^{1/2+\alpha }\right) \leq B_{1}e^{-B_{2}y} \end{equation*} and a standard argument now gives \begin{eqnarray} E_{X^{r}(\tau _{2l+1})}\left[ \int_{\tau _{2l+1}}^{\tau _{2l+2}}Z^r_k(t) dt\right] \leq B_{4}r^{1/2+\alpha } \label{eq:runCostInefBndStopTime} \end{eqnarray} where the constant $B_{4}$ depends only on $B_1, B_2, B_3$ and $K$. 
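For completeness, we indicate this standard argument. Writing, for brevity, $F\doteq \int_{\tau _{2l+1}}^{\tau _{2l+2}}Z_{k}^{r}(t)dt$ and substituting $u=Kr^{1/2+\alpha }y^{2}$ in the identity $E_{X^{r}(\tau _{2l+1})}\left[ F\right] =\int_{0}^{\infty }P_{X^{r}(\tau _{2l+1})}\left( F>u\right) du$, we obtain \begin{equation*} E_{X^{r}(\tau _{2l+1})}\left[ F\right] =\int_{0}^{\infty }P_{X^{r}(\tau _{2l+1})}\left( F>Kr^{1/2+\alpha }y^{2}\right) 2Kr^{1/2+\alpha }y\,dy\leq Kr^{1/2+\alpha }\left( B_{3}^{2}+2B_{1}\int_{B_{3}}^{\infty }ye^{-B_{2}y}dy\right) \text{,} \end{equation*} where the probability has been bounded by $1$ for $y<B_{3}$ and by $B_{1}e^{-B_{2}y}$ for $y\geq B_{3}$; the last integral is finite and depends only on $B_{2}$ and $B_{3}$.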
Let \begin{equation*} L^{r}=\max \{l\geq 1:\tau _{2l+1}\leq T_{2}r^{2}\}\text{.} \end{equation*} Note that for all $l\geq 1$ each occurence of $\tau _{2l+1}$ implies an arrival of a job of type $j\in \bigcup _{i\in N_{\rho (k)}}\zeta _{i}^{k}$ in the interval $\left( \tau _{2l},\tau _{2l+1}\right] $, so that for some $K_1 \in (0,\infty)$ \begin{equation*} E_x L^{r}\leq K_1r^2(T_2-T_1) \mbox{ for all } x \in \mathcal{S}^r \end{equation*} Consequently \begin{eqnarray*} E\left[ \sum_{l=0}^{\infty }\mathit{I}_{\{\tau _{2l+1} \le r^2T_2\}} \int_{\tau_{2l+1} }^{\tau_{2l+2}} Z^r_k(s) ds\right] \leq B_{4}r^{1/2+\alpha }E_x\left[ {L}^{r}\right] \leq B_{5}r^{2+1/2+\alpha }(T_{2}-T_{1}), \end{eqnarray*} where $B_5 \doteq K_1B_4$. This, combined with (\ref{eq:runCostDecomp}) and (\ref{eq:runCostBndedComp}) gives \begin{equation*} \frac{1}{r^{3}}E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}T_{1})}^{\hat{\tau} _{k}^{s}(r^{2}T_{2})}Z^r_k(s)ds\right] \leq \left(2c_{3}r^{\alpha -1}+B_{5}r^{\alpha -1/2}\right)(T_{2}-T_{1})\text{.} \end{equation*} The result follows. \end{proof} We can now complete the proof of Theorem \ref{thm:discCostInefBnd}. {\bf Proof of Theorem \ref{thm:discCostInefBnd}.} Let $R<\infty $ be given by the maximum of the two $R$ values from Propositions \ref{thm:initInef} and \ref{thm:runningInef}. \ Note that by (\ref{eq:eq942} ) , for all $t\ge 0$ \begin{equation}\label{eq:eq851} h \cdot \hat{Q}^{r}(t) \ge \mathcal{C}\left( \tilde{W}^{r}(t)\right) \end{equation} and by Theorem \ref{thm:costInefIneq}, there is a $B_1 \in (0,\infty)$ such that for all $t,r$, \begin{equation}\label{eq:eq853} h \cdot \hat{Q}^{r}(t) - \mathcal{C}\left( \tilde{W}^{r}(t)\right) \le B_1 \left(\sum_{k\in \mathbb{N}_{m}}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{\hat{ Q}_{j}^{r}(t)}{\mu^r_j}\right\} + \sum_{i=1}^{I}\sum_{j\in \zeta _{i}^{0}}\frac{\hat{Q}_{j}^{r}(t)}{\mu^r_j}\right). \end{equation} Let $Z_k^r$ be as in \eqref{eq:eqzrkt}. From monotone convergence we have for $\theta \ge 0$ \begin{equation} \lim_{n\rightarrow \infty }\frac{1}{r^{3}}E\left[ \int_{0}^{\hat{\tau} _{k}^{s}(r^{2}n)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] =E\left[ \int_{0}^{\infty }e^{-\theta t}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{\hat{ Q}_{j}^{r}(t)}{\mu^r_j}\right\} dt\right] \text{.}\label{eq:eqmct931} \end{equation} Note that \begin{eqnarray*} \frac{1}{r^{3}}E\left[ \int_{0}^{\hat{\tau}_{k}^{s}(r^{2}n)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] &=&\frac{1}{r^{3}}E\left[ \int_{0}^{ \hat{\tau}_{k}^{s}(0)}e^{-\theta t/r^{2}}Z^r_k(t)dt\right] \\ &&+\sum_{l=1}^{n}\frac{1}{r^{3}}E\left[ \int_{\hat{\tau} _{k}^{s}(r^{2}(l-1))}^{\hat{\tau}_{k}^{s}(r^{2}l)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] \text{.} \end{eqnarray*} From Proposition \ref{thm:initInef}, we have for some $B_2 \in (0,\infty)$, for $r\geq R$, $\theta\ge 0$ and $x \in \mathcal{S}^r$, \begin{equation} \label{eq:eq903} \frac{1}{r^{3}}E_x\left[ \int_{0}^{\hat{\tau}_{k}^{s}(0)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] \leq B_{2}r^{-1}(1+|q|^2). 
\end{equation} Also, from Proposition \ref{thm:runningInef}, there is $B_3 \in (0,\infty)$ such that for $k \in \mathbb{N}_m$, $r\geq R$ and any $l\in \mathbb{N}$ \begin{eqnarray*} \frac{1}{r^{3}}E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}(l-1))}^{\hat{\tau} _{k}^{s}(r^{2}l)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] &\leq &\frac{1}{ r^{3}}e^{-\theta (l-1)}E\left[ \int_{\hat{\tau}_{k}^{s}(r^{2}(l-1))}^{\hat{ \tau}_{k}^{s}(r^{2}l)}Z^r_k(t) dt\right] \\ &\leq &B_{3}e^{-\theta (l-1)}r^{\alpha -1/2}\text{.} \end{eqnarray*} Consequently, for $r\geq R$ \begin{equation*} \frac{1}{r^{3}}E\left[ \int_{0}^{\hat{\tau}_{k}^{s}(r^{2}n)}e^{-\theta t/r^{2}}Z^r_k(t) dt\right] \leq \left( B_{2} (1+|q|^2)+B_{3}\sum_{l=0}^{n-1}e^{-l\theta }\right) r^{\alpha -1/2}. \end{equation*} Sending $n\to \infty$, using \eqref{eq:eqmct931}, we have for $\theta >0$ and all $k \in \mathbb{N}_m$ \begin{eqnarray*} E\left[ \int_{0}^{\infty }e^{-\theta t} \min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{\hat{ Q}_{j}^{r}(t)}{\mu^r_j}\right\} dt \right] &\leq &\left( B_{2} (1+|q|^2)+\frac{B_{3}}{1-e^{-\theta }}\right) r^{\alpha -1/2}. \end{eqnarray*} A similar argument shows that there are $B_4,B_5 \in (0,\infty)$ such that for all $i\in \mathbb{N}_{I}$ and $r\geq R$ \begin{equation*} E\left[ \int_{0}^{\infty }e^{-\theta t}\sum_{j\in \zeta _{i}^{0}} \frac{\hat{ Q}_{j}^{r}(t)}{\mu^r_j}dt\right] \leq \left( B_{4} (1+|q|^2)+\frac{B_{5}}{ 1-e^{-\theta }}\right) r^{\alpha -1/2}. \end{equation*} Combining the above two estimates with \eqref{eq:eq851} and \eqref{eq:eq853} we have the first inequality in the theorem. For the second inequality, we write \begin{align*} E_x\left[ \frac{1}{T}\int_{0}^{T}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{\hat{Q}_{j}^{r}(t)}{\mu_j^r}\right\} dt\right] &= E_x\left[ \frac{1}{ Tr^{3}}\int_{0}^{Tr^{2}}Z^r_k(t) dt\right] \\ &\leq \frac{1}{r^{3}}E_x\left[ \frac{1}{T}\int_{0}^{\hat{\tau} _{k}^{s}(0)}Z^r_k(t) dt\right] +\frac{1}{r^{3}}E_x\left[ \frac{1}{T}\int_{\hat{\tau}_{k}^{s}(0)}^{\hat{\tau} _{k}^{s}(r^{2}T)}Z^r_k(t) dt\right]\text{.} \end{align*} Applying \eqref{eq:eq903} with $\theta=0$ we have for $T\ge 1$ \begin{equation} \label{eq:eq903b} \frac{1}{r^{3}}E_x\left[ \frac{1}{T}\int_{0}^{\hat{\tau}_{k}^{s}(0)} Z^r_k(t) dt\right] \leq B_{2}r^{-1}(1+|q|^2). \end{equation} Also, from Proposition \ref{thm:runningInef} for $ r\geq R$ we have, for some $\tilde B_3 \in (0,\infty)$ and all $T\ge 1$, $k \in \mathbb{N}_m$, \begin{eqnarray*} \frac{1}{r^{3}}E_x\left[ \frac{1}{T}\int_{\hat{\tau}_{k}^{s}(0)}^{\hat{\tau} _{k}^{s}(r^{2}T)}Z^r_k(t) dt\right] &\leq &\frac{1}{T} \tilde B_{3} r^{\alpha -1/2}T \leq \tilde B_{3} r^{\alpha -1/2}. \end{eqnarray*} Consequently, for all $T\ge 1$ and $k \in \mathbb{N}_m$ \begin{equation*} E_x\left[ \frac{1}{T}\int_{0}^{T}\min_{i\in N_{\rho (k)}}\left\{ \sum_{j\in \zeta _{i}^{k}}\frac{\hat{Q}_{j}^{r}(t)}{\mu_j^r}\right\} dt\right] \leq B_{2}r^{-1}(1+ |q|^2)+\tilde B_{3}r^{\alpha -1/2}\text{.} \end{equation*} A similar argument shows that for some $\tilde B_4, \tilde B_5 \in (0,\infty)$, and all $i\in \mathbb{N}_{I}$, $T\ge 1$ \begin{equation*} E_x\left[ \frac{1}{T}\int_{0}^{T}\sum_{j\in \zeta^{0}_{i}}\frac{\hat{Q}_{j}^{r}(t)}{\mu^r_j}dt \right] \leq \tilde B_{4}r^{-1} (1+|q|^2)+\tilde B_{5} r^{\alpha -1/2}\text{.} \end{equation*} Combining the above two estimates with \eqref{eq:eq851} and \eqref{eq:eq853} once more, we have the second inequality in the theorem.
\qed \subsection{Lyapunov function and uniform moment estimates.} \label{sec:unifmom} In this section we establish uniform in $t$ and $r$ moment bounds on $\hat W^r(t)$. The following is the main result of this section. \begin{theorem} \label{thm:WmomBnd}There exist $\beta, \gamma, R,H \in (0,\infty)$ such that for all $i \in \mathbb{N}_I$, $t\ge 0$ and $r\ge R$ $$E_x\left[e^{\gamma \hat W^r_i(t)}\right] \le H\left( 1 + e^{-\beta t} V_i(x)\right).$$ \end{theorem} The proof is given at the end of the section. Let \begin{equation} \check{\tau}_{i,\xi }^{r}\doteq \inf \left\{ t\geq \xi :\left\vert \hat{W }_{i}^{r}(t)\right\vert \leq 2c_{3}\right\},\label{eq:eq1012} \end{equation} where recall that $c_3 = \frac{2Jc_{2}}{\min_{j}\mu_{j}}$. We begin by establishing a bound on certain exponential moments of $\check{\tau}_{i,\xi }^{r}$. \begin{proposition} \label{thm:stopTimeExpMomBnd} There exist $\delta^{*},R\in (0,\infty)$ and $H_1: \mathbb{R}_+ \to \mathbb{R}_+$ such that for all $i\in \mathbb{N}_{I}$, $r\geq R$ and $0<\beta <\delta^{*} $ \begin{equation*} E_x\left[ e^{\beta \check{\tau}_{i,\xi }^{r}}\right] <H_{1}(\beta)e^{H_{1}(\beta)(w_{i}+ \xi )} \end{equation*} for all $x=(q,z)\in \mathcal{S}^r$ and $\xi \geq 0$, where $w=G^{r}q$. \end{proposition} \begin{proof} Fix $i \in \mathbb{N}_I$. Given $x=\left( q,z\right) \in \mathcal{S}^r$ let $w_{i}=(G^{r}q)_{i}$. \ Recall the definition of $v^{\ast }$ given in Condition \ref{cond:HT1}. \ Fix $\xi\ge 0$ and let \begin{equation}\label{eq:Mdef} t\geq \max \{2\xi ,8w_{i} /v_{i}^{\ast },1\}\doteq M_{\xi}. \end{equation} Consider the events \begin{equation*} \mathcal{A}_{i,t}^{r}=\left\{ \int_{0}^{r^{2}t}\mathit{I}_{\left\{ W_{i}^{r}(t) \ge c_{3}r^{\alpha }\right\} }(s)dI_{i}^{r}(s)\geq \frac{v_{i}^{\ast}}{32C_{i}}rt\right\} \end{equation*} and \begin{equation*} \mathcal{B}_{i,t}^{r}=\bigcup _{j\in \mathbb{N}_{J}}\left\{ \sup_{0\leq s\leq r^{2}t}\left\vert A_{j}^{r}(s)-s\lambda _{j}^{r}\right\vert +\sup_{0\leq s\leq C_{i}r^{2}t}\left\vert S_{j}^{r}(s)-s\mu _{j}^{r}\right\vert \geq \frac{\min \{1,\bar{\mu}_{\min}\}v_{i}^{\ast }}{256J} rt\right\} \text{.} \end{equation*} Using \eqref{eq:squareIdleTimeResultImplication} and Theorem \ref{thm:LDP} we can choose $\hat{H}_{1},\hat{H}_{2} \in (0,\infty)$ and $R \in (\hat{R}, \infty)$ such that for all $r\geq R$ and $t\geq 1$ \begin{equation*} P_x\left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) \leq \hat{H} _{1}e^{-t\hat{H}_{2}}\text{,} \end{equation*} where $\hat R$ was introduced above \eqref{def:Rhat}. Furthermore, we can assume that $R$ is large enough so that for all $r\ge R$, \begin{equation}\label{eq:larger1155} \frac{2v_{i}^{\ast }}{r}\ge C_{i}-\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}\geq \frac{v_{i}^{\ast }}{2r}\text{,}\;\;\; c_{3}r^{\alpha -1}+\frac{2}{\bar{\mu}_{\min}}r^{-1}\leq 2c_{3}. 
\end{equation} Then, for all $r\geq R$ and $s_{1},s_{2}\in \lbrack 0,r^{2}t]$ satisfying $s_{2}>s_{1}$, on the event $ \left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) ^{c} $, we have \begin{eqnarray*} \sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} S_{j}^{r}(B_{j}^{r}(s_{2}))-\sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} S_{j}^{r}(B_{j}^{r}(s_{1})) &\geq &\sum_{j=1}^{J}K_{i,j}\left( B_{j}^{r}(s_{2})-B_{j}^{r}(s_{1})\right) -\sum_{j=1}^{J}K_{i,j}\frac{\min \{1,\bar{\mu}_{\min}\}v_{i}^{\ast }}{128J\mu _{j}^{r}}rt \\ &\geq &\sum_{j=1}^{J}K_{i,j}\left( B_{j}^{r}(s_{2})-B_{j}^{r}(s_{1})\right) - \frac{v_{i}^{\ast }}{64 }rt \end{eqnarray*} and \begin{eqnarray*} \sum_{j=1}^{J} \frac{K_{i,j}}{\mu_{j}^{r}} A_{j}^{r}(s_{2})- \sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}}A_{j}^{r}(s_{1}) &\leq &\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}(s_{2}-s_{1})+\sum_{j=1}^{J}K_{i,j} \frac{\min \{1,\bar{\mu}_{\min}\}v_{i}^{\ast }}{128J\mu _{j}^{r}}rt \\ &\leq &\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}(s_{2}-s_{1})+\frac{v_{i}^{\ast }}{ 64}rt\text{.} \end{eqnarray*} Let $\sigma_0=0$ and for $k\ge 1$ \begin{equation*} \sigma _{2k-1}=\inf \{s\geq \sigma _{2k-2}:W_{i}^{r}(t)\geq c_{3}r^{\alpha }\},\; \sigma _{2k}=\inf \{s\geq \sigma _{2k-1}:W_{i}^{r}(t)<c_{3}r^{\alpha }\}. \end{equation*} \ Then, on the event $ \left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) ^{c} $, for any $\sigma _{2k-1}<\xi r^{2}$, $k\ge 1$, we have on noting that $W_{i}(\sigma _{2k-1}) \le c_{3}r^{\alpha }+\frac{2}{\bar{\mu}_{\min}}+w_{i}r$ \begin{align*} \sup_{\sigma _{2k-1}\leq s\leq \sigma _{2k}\wedge \xi r^{2}}W_{i}^{r}(s)& \leq \sup_{\sigma _{2k-1}\leq s\leq \sigma _{2k}\wedge \xi r^{2}}\left( W_{i}^r(\sigma _{2k-1})+\sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} A_{j}^{r}(s)-\sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} A_{j}^{r}(\sigma _{2k-1})\right . \\ & \qquad \qquad \qquad \quad \left. -\left( \sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} S_{j}^{r}(B_{j}^{r}(s))-\sum_{j=1}^{J}\frac{K_{i,j}}{\mu_{j}^{r}} S_{j}^{r}(B_{j}^{r}(\sigma _{2k-1}))\right) \right) \\ & \leq \sup_{\sigma _{2k-1}\leq s\leq \sigma _{2k}\wedge \xi r^{2}}\left( \sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}(s-\sigma _{2k-1})- \sum_{j=1}^{J}K_{i,j}\left( B_{j}^{r}(s)-B_{j}^{r}(\sigma _{2k-1})\right) \right. \\ & \qquad \qquad \quad \qquad \left. +c_{3}r^{\alpha }+\frac{2}{\bar{\mu}_{\min}}+w_{i}r+\frac{ v_{i}^{\ast }}{32}rt\right) \\ \qquad \qquad \qquad \qquad \quad & \leq c_{3}r^{\alpha }+\frac{2}{\bar{\mu}_{\min}}+w_{i}r +\frac{v_{i}^{\ast }}{16}rt \end{align*} where the third inequality follows from recalling that we are on the event $\left( \mathcal{A}_{i,t}^{r}\right) ^{c}$ so \begin{align*} \sum_{j=1}^{J}K_{i,j} \left[\rho _{j}^{r}(s-\sigma _{2k-1}) - ( B_{j}^{r}(s)-B_{j}^{r}(\sigma _{2k-1})\right] &\le \left(\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r} - C_i\right) (s-\sigma _{2k-1}) + \frac{v_{i}^{\ast }}{32}rt\\ &\le -\frac{v_{i}^{\ast }}{2r} (s-\sigma _{2k-1}) + \frac{v_{i}^{\ast }}{32}rt \le \frac{v_{i}^{\ast }}{32}rt. 
\end{align*} Thus on the event $\left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B} _{i,t}^{r}\right) ^{c}$ we have \begin{equation*} \hat{W}_{i}^{r}(\xi )\leq \frac{v_{i}^{\ast }}{16}t+w_{i}+c_{3}r^{\alpha -1}+\frac{2}{\bar{\mu}_{\min}}r^{-1}\text{.} \end{equation*} Consequently on the event $\left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B} _{i,t}^{r}\right) ^{c}\cap \{\check{\tau}_{i,\xi }^{r}> t\}$ we have, by a similar calculation, \begin{eqnarray*} \hat{W}_{i}^{r}(t) &=&\hat{W}_{i}^{r}(\xi )+\left( \hat{W}_{i}^{r}(t)-\hat{W} _{i}^{r}(\xi )\right) \\ &\leq &\frac{v_{i}^{\ast }}{16}t+w_{i}+c_{3}r^{\alpha -1}+\frac{2}{\bar{\mu}_{\min}}r^{-1}+r(t-\xi )\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}-rC_{i}(t-\xi )+\frac{v_{i}^{\ast }}{16} t \\ &\leq &\frac{v_{i}^{\ast }}{8}t+w_{i}+c_{3}r^{\alpha -1}+\frac{2}{\bar{\mu}_{\min}}r^{-1}-\frac{t}{2} \left( C_{i}-\sum_{j=1}^{J}K_{i,j}\rho _{j}^{r}\right)r \\ &\leq & \frac{v_{i}^{\ast }}{8}t-t\frac{v_{i}^{\ast }}{4}+w_{i}+c_{3}r^{\alpha -1}+\frac{2}{\bar{\mu}_{\min}}r^{-1} \\ &\leq &2c_{3}, \end{eqnarray*} where the third and the fourth inequalities follow from \eqref{eq:larger1155} and recalling that $t \ge \max\{2\xi, 8w_i/v^*_i\}$. Since on the set $\{\check{\tau}_{i,\xi }^{r}> t\}$ we must have $\hat{W}_{i}^{r}(t) >2c_3$ we have arrived at a contradiction. \ Consequently $\left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) ^{c}\cap \{\check{\tau}_{i,\xi }^{r}(x)> t\}=\emptyset $ and \begin{eqnarray*} P_x\left( \check{\tau}_{i,\xi }^{r}> t\right) &=&P_x\left( \left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) \cap \{\check{\tau} _{i,\xi }^{r}> t\}\right) \leq P_x\left( \mathcal{A}_{i,t}^{r}\bigcup \mathcal{B}_{i,t}^{r}\right) \le \hat{H}_{1}e^{-t\hat{H}_{2}}\text{.} \end{eqnarray*} Thus for $\beta <\hat{H}_{2}$ \begin{eqnarray*} E_x\left[ e^{\beta \check{\tau}_{i,\xi }^{r}}\right] \leq 1+ \beta e^{\beta M_{\xi}}+\frac{\beta}{\hat{H}_{2}-\beta}e^{(\beta -\hat{H}_{2})M_{\xi}} \leq H_{1}(\beta)e^{H_{1}(\beta)(\xi +w_{i})} \end{eqnarray*} for suitable $H_{1}(\beta)\in (0,\infty)$, where the last inequality comes from the definition of $M_{\xi}$ in \eqref{eq:Mdef}. \end{proof} We now establish a lower bound on an exponential moment of $\check{\tau}_{i,0}^{r}$. \begin{proposition} \label{thm:VlowerBnd}For all $i\in \mathbb{N}_{I}$ there exist $R,H_{1},H_{2},H_{3} \in (0,\infty)$ such that for all $r\geq R$, $\beta >0$ and $ x=(q,z)\in \mathcal{S}^r$ satisfying $w_{i}=(G^{r}q)_{i}\geq H_{1}$ we have \begin{equation*} E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\right] >H_{2}e^{H_{3}\beta w_{i}} \end{equation*} \end{proposition} \begin{proof} For $k\in (0,\infty )$ define the event \begin{equation*} \mathcal{B}_{i,k}^{r}=\bigcup_{j\in \mathbb{N}_{J}}\left\{ \sup_{0\leq s\leq k}\left\vert \frac{A_{j}^{r}(r^{2}s)}{r}-rs\lambda _{j}^{r}\right\vert +\sup_{0\leq s\leq C_{i}k}\left\vert \frac{S_{j}^{r}(r^{2}s)}{r}-rs\mu _{j}^{r}\right\vert \geq \frac{v_{i}^{\ast }\min \{1,\bar{\mu}_{\min}\}k}{4J}\right\} \text{.} \end{equation*} From Theorem \ref{thm:LDP} there exists $R \in (\hat{R},\infty)$ (recall \eqref{def:Rhat}) and $\hat{H}_{2},\hat{H}_{3} \in (0,\infty)$ such that for all $r\geq R$ and $k \in (0,\infty)$, we have \begin{equation*} P\left( \mathcal{B}_{i,k}^{r}\right) \leq \hat{H}_{2}e^{-k \hat{H}_{3}}\text{.} \end{equation*} We assume that $R$ is big enough so that \eqref{eq:larger1155} is satisfied for all $r \ge R$. Let $H_{1}=\max \left\{ 5c_{3},\frac{6v_{i}^{\ast }\log (2\hat{H}_{2})}{\hat{ H}_{3}}\right\} $. 
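(The second term in the maximum guarantees the bound \eqref{eq:eq852} below, since $w_{i}\geq 6v_{i}^{\ast }\log (2\hat{H}_{2})/\hat{H}_{3}$ gives $\hat{H}_{2}e^{-w_{i}\hat{H}_{3}/6v_{i}^{\ast }}\leq 1/2$, while the first term guarantees that $w_{i}/2>2c_{3}$, which is used at the end of the display following \eqref{eq:eq852}.)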
\ Then for $w_{i}\geq H_{1}$ we have \begin{align} P\left( \mathcal{B}_{i,w_{i}/(6v_{i}^{\ast })}^{r}\right) \leq \hat{H}_{2}e^{-w_{i}\hat{H}_{3}/6v_{i}^{\ast }} \leq \frac{1}{2} \label{eq:eq852} \end{align} and on the event $(\mathcal{B}_{i,w_{i}/6v_{i}^{\ast }}^{r})^{c}$ we have from \eqref{eq:queleneqn} and \eqref{eq:eq939}, that under $P_x$, \begin{eqnarray*} \inf_{0\leq s\leq w_{i}/6v_{i}^{\ast }}\hat{W}_{i}^{r}(s) &=&\inf_{0\leq s\leq w_{i}/6v_{i}^{\ast }}\left( w_{i}+\sum_{j=1}^{J} K_{i,j}\frac{A_{j}^{r}(r^{2}s)}{ r\mu _{j}^{r}}-\sum_{j=1}^{J}K_{i,j} \frac{S_{j}^{r}(r^{2}\bar{B}_{j}^{r}(s))}{r\mu _{j}^{r}}\right) \\ &\geq &\inf_{0\leq s\leq w_{i}/6v_{i}^{\ast }}\left( w_{i}-r\left( C_{i}-\sum_{j=1}^{J}K_{i,j}\varrho _{j}^{r}\right) s-\sum_{j=1}^{J}K_{i,j}\frac{ \min \{1,\bar{\mu}_{\min}\}w_{i}}{12J\mu _{j}^{r}}\right) \\ &\geq &\inf_{0\leq s\leq w_{i}/6v_{i}^{\ast }}\left( \frac{5w_{i}}{6} -2v_{i}^{\ast }s\right) \\ &\geq & \frac{w_{i}}{2} > 2c_{3} \end{eqnarray*} where the third line uses \eqref{def:Rhat} and \eqref{eq:larger1155}. Thus $\{\check{\tau}_{i,0}^{r} \leq w_{i}/6v_{i}^{\ast }\}\cap (\mathcal{B} _{i,w_{i}/6v_{i}^{\ast }}^{r})^c = \emptyset$, $P_x$ a.s.. \ This gives \begin{eqnarray*} E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\right] &=&E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\mathit{I}_{\mathcal{B}_{i,w_{i}/6v_{i}^{\ast }}^{r}}\right] +E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\mathit{I}_{( \mathcal{B}_{i,w_{i}/6v_{i}^{\ast }}^{r})^{c}}\right] \\ &\geq &e^{\beta \left( w_{i}/(6v_{i}^{\ast })\right) }P_x\left( \mathcal{B} _{i,w_{i}/6v_{i}^{\ast }}^{r}\right)^c \geq \frac{1}{2} e^{(\beta /(6v_{i}^{\ast }))w_{i}}\text{,} \end{eqnarray*} where the last inequality is from \eqref{eq:eq852}. Thus completes the proof. \end{proof} Recall $\delta^*$ from Proposition \ref{thm:stopTimeExpMomBnd} and fix $\beta \in (0, \delta^*)$. For $i\in \mathbb{N}_{I}$ let \begin{equation*} V_{i}(x)\doteq E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\right] \text{.} \end{equation*} Also recall the Markov process $\hat X^r$ in \eqref{eq:eqhatxrt}. The following result proves a Lyapunov function property for $V_i$. \begin{proposition} \label{thm:timeDecayV} There exist $H,R \in (0,\infty)$ such that for all $x=(q,z)\in \mathcal{S}^r$, $r\geq R$, $i \in \mathbb{N}_I$, and $t\in \lbrack 0,1]$ we have \begin{equation*} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] \leq e^{-\beta t} V_{i}(x)+H\text{.} \end{equation*} \end{proposition} \begin{proof} From the Markov property we have \begin{eqnarray*} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] =E_x\left[ e^{\beta \left( \check{\tau}_{i,t}^{r}-t\right) }\right] =E_x\left[ e^{\beta \left( \check{\tau}_{i,t}^{r}-t\right) }\mathit{I} _{\left\{ \check{\tau}_{i,0}^{r}\geq t\right\} }\right] +E_x\left[ e^{\beta \left( \check{\tau}_{i,t}^{r}-t\right) }\mathit{I}_{\left\{ \check{\tau}_{i,0}^{r}<t\right\} }\right] \text{.} \end{eqnarray*} Let $R$ be as in Proposition \ref{thm:stopTimeExpMomBnd}. \ Let $t\in \lbrack 0,1]$ and $r\geq R$ be arbitrary. 
\ Then from Proposition \ref{thm:stopTimeExpMomBnd}, for some $\hat H_1, \hat H_2 \in (0,\infty)$ \begin{eqnarray*} E_x\left[ e^{\beta \left( \check{\tau}_{i,t}^{r}-t\right) }\mathit{I} _{\left\{ \check{\tau}_{i,0}^{r}<t\right\} }\right] \leq \sup_{x':w_{i}\leq 2c_{3}}\sup_{0\le \xi \le 1}E_{x'}\left[ e^{\beta \check{\tau}_{i,\xi}^{r}}\right] \leq \hat{H}_{1}e^{\hat{H}_{2}(2c_{3}+1)} \end{eqnarray*} Furthermore, \begin{eqnarray*} E_x\left[ e^{\beta \left( \check{\tau}_{i,t}^{r}-t\right) }\mathit{I} _{\left\{ \check{\tau}_{i,0}^{r}\geq t\right\} }\right] = e^{-t\beta }E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\mathit{I} _{\left\{ \check{\tau}_{i,0}^{r}\geq t\right\} }\right] \leq e^{-t\beta }E_x\left[ e^{\beta \check{\tau}_{i,0}^{r}}\right] =e^{-t\beta }V_{i}(x)\text{.} \end{eqnarray*} Combining the two estimates we have the result. \end{proof} From the Lyapunov function property proved in the previous result we have the following moment estimate for all time instants. \begin{proposition} \label{thm:timeIndVbnd} There exist $ H_{1},H_{2},R \in (0,\infty) $ such that for all $t\geq 0$, $i \in \mathbb{N}_I$ and $ r\geq R$ we have \begin{equation*} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] \leq H_{1}e^{-\beta t}V_{i}(x)+H_{2} \end{equation*} \end{proposition} \begin{proof} Let $R, H$ be as in Proposition \ref{thm:timeDecayV}. \ Then for all $i \in \mathbb{N}_I$, $x=(q,z) \in \mathcal{S}^r$, $t \in [0,1]$ and $r\ge R$, we have \begin{equation*} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] \leq e^{-\beta t}V_{i}(x)+ {H}\text{.} \end{equation*} Then from the Markov property, for any $r\geq R$ and $t\ge 0$ we have \begin{eqnarray} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] = E_x\left[ E_x\left[ \left. V_{i}(\hat{X}^{r}(t))\right\vert \hat{X} ^{r}(\left\lfloor t\right\rfloor )\right] \right] \leq {H}+ e^{-\beta (t-\left\lfloor t\right\rfloor)} E_x\left[ V_{i}(\hat{X}^{r}(\left\lfloor t\right\rfloor ))\right]. \label{eq:VgenTimeFromIntTime} \end{eqnarray} Using the Markov property again \begin{eqnarray*} E_x\left[ V_{i}(\hat{X}^{r}(\left\lfloor t\right\rfloor ))\right] =E_x\left[ E\left[ V_{i}(\hat{X} ^{r}(1))\mid { \hat{X}^{r}(\left\lfloor t\right\rfloor -1)}\right] \right] \leq {H}+e^{-\beta}E_x\left[ V_{i}(\hat{X}^{r}(\left\lfloor t\right\rfloor -1))\right]. \end{eqnarray*} Iterating the above inequality we get \begin{eqnarray*} E_x\left[ V_{i}(\hat{X}^{r}(\left\lfloor t\right\rfloor ))\right] \leq e^{-\beta \left\lfloor t\right\rfloor} V_{i}(x)+ {H}\sum_{k=0}^{\left\lfloor t\right\rfloor -1}\left( e^{-\beta}\right) ^{k} \leq e^{-\beta \left\lfloor t\right\rfloor }V_{i}(x)+\frac{ {H}}{1-e^{-\beta}}\text{.} \end{eqnarray*} Combining this with (\ref{eq:VgenTimeFromIntTime}) we have for all $t\ge 0$ \begin{equation*} E_x\left[ V_{i}(\hat{X}^{r}(t))\right] \leq e^{-\beta t} V_{i}(x)+ {H}\left(1+\frac{1}{1-e^{-\beta}}\right) . \end{equation*} The result follows. \end{proof} {\bf Proof of Theorem \ref{thm:WmomBnd}.} This proof is immediate from Proposition \ref{thm:VlowerBnd} and Proposition \ref{thm:timeIndVbnd} on taking $\gamma = H_3 \beta $ where $H_3$ is as in the statement of Proposition \ref{thm:VlowerBnd} and $\beta$ is as fixed above Proposition \ref{thm:timeDecayV}. \section{\protect Path Occupation Measure Convergence} \label{sec:pathoccmzr} Let for $t\ge 0$ \begin{equation} \hat{Z}^{r}(t)=w^{r}+G^{r}(\hat{A}^{r}(t)-\hat{S}^{r}(\bar{B}^{r}(t))),\label{eq:zhrt} \end{equation} where $w^r = G^rq$. 
Consider the collection of random variables indexed by $T$ and $r$ taking values in $\mathcal{P}\left( D([0,1]:\mathbb{R}_+^{I}\times \mathbb{R}^{I})\right) $, defined by \begin{equation*} \theta _{T}^{r}(dx\times dy)=\frac{1}{T}\int_{0}^{T}\delta _{\hat{W} ^{r}(t+\cdot )}(dx)\delta _{\hat{Z}^{r}(t+\cdot )-\hat{Z}^{r}(t)}(dy)dt . \end{equation*} In this section we will prove the tightness of the collection $\{\theta _{T}^{r}, T>0, r>0\}$ of random path occupation measures and characterize limit points along suitable subsequences. We begin by noting the following monotonicity property of a one dimensional Skorohod map introduced in Section \ref{sec:hgi}. \begin{theorem} \label{thm:skorokIneq} Fix $T \in (0,\infty)$ and $f\in D([0,T]:\mathbb{R} )$ satisfying $f(0)=0$. Let $\varphi _{1}=\Gamma_1(f)$. Suppose $\varphi_{2}, \varphi_{3} \in D([0,T]:\mathbb{R})$ are such that \begin{itemize} \item $\varphi_2(t) = f(t) + h_2(t)$, $t\in [0,T]$, where $h_2 \in D([0,T]:\mathbb{R})$ is a nondecreasing function with $h_2(0)=0$ and $\int_{[0,T]} 1_{(0,\infty)} (\varphi_2(s)) dh_2(s) =0$. \item $\varphi_3(t) = f(t) + h_3(t)$, $t\in [0,T]$, where $h_3 \in D([0,T]:\mathbb{R})$ is a nondecreasing function with $h_3(0)=0$ and $\varphi_3(t)\ge 0$ for all $t \in [0,T]$. \end{itemize} Then for all $t\in [0,T]$, $\varphi_2(t) \le \varphi_1(t) \le \varphi_3(t)$. \end{theorem} \begin{proof} The proof of the second inequality is straightforward and is omitted. Consider now the first inequality. Note that $\varphi _{1}(t)=f(t)+h_{1}(t)$ where $h_{1}(t)=-\inf_{0\leq s\leq t}\{f(s)\}$ and thus it suffices to show that for any $t\in \lbrack 0,T]$, $ h_{2}(t)\leq -\inf_{0\leq s\leq t}\{f(s)\}$. \ Assume that there exists $ t_{2}^{\ast }\in \lbrack 0,T]$ such that $h_{2}(t_{2}^{\ast })>-\inf_{0\leq s\leq t_{2}^{\ast }}\{f(s)\} \doteq a$. \ Let \begin{equation*} t_{1}^{\ast }=\sup \{s\in \lbrack 0,t_{2}^{\ast }]:h_{2}(s)\leq a\} \end{equation*} and note that either $h_{2}(t_{1}^{\ast })>a$ or $h_{2}(t_{1}^{\ast })=a$ and $h_{2}(r)>a$ for all $r\in (t_{1}^{\ast },t_{2}^{\ast }]$. \ In the first case \begin{equation*} \varphi _{2}(t_{1}^{\ast })=f(t_{1}^{\ast })+h_{2}(t_{1}^{\ast })>f(t_{1}^{\ast })-\inf_{0\leq s\leq t_{2}^{\ast }}\{f(s)\}\geq 0 \end{equation*} so $\varphi _{2}(t_{1}^{\ast })>0$ and \begin{equation*} \int_{\{t_{1}^{\ast }\}}dh_{2}(s)=h_2(t_{1}^{\ast })-\lim_{s\uparrow t_{1}^{\ast }}h_2(s)>0 \end{equation*} which is a contradiction. \ In the second case for all $r\in (t_{1}^{\ast },t_{2}^{\ast }]$ \begin{equation*} \varphi _{2}(r)=f(r)+h_{2}(r)> f(r)-a \ge f(r)-\inf_{0\leq s\leq t_2^*}\{f(s)\}\geq 0 \end{equation*} so $\varphi _{2}(r)>0$ for all $r\in (t_{1}^{\ast },t_{2}^{\ast }]$ and \begin{equation*} \int_{(t_{1}^{\ast },t_{2}^{\ast }]}dh_{2}(s)=h_2(t_{2}^{\ast })-h_2(t_{1}^{\ast })=h_{2}(t_{2}^{\ast })-a >0 \end{equation*} which is also a contradiction. \ Therefore for any $t\in \lbrack 0,T]$ we have $h_{2}(t)\leq -\inf_{0\leq s\leq t}\{f(s)\}$ and the desired inequality follows. \end{proof} \begin{theorem} \label{thm:finTimeConvToRBM}For any $\epsilon >0$ and $T \in (0, \infty)$ there exists $R \in (0,\infty)$ such that for all $r\geq R$ and $x=(q,z) \in \mathcal{S}^r$, \begin{equation*} \sup_{s \in [0,\infty)}P_x\left( \sup_{0\leq t\leq T}\left\vert \Gamma \left( \hat W^r(s)+\hat{Z}^{r}(s+\cdot) - \hat{Z}^{r}(s) +r(K\rho ^{r}-C)\iota\right)(t) -\hat{W}^{r}(t+s)\right\vert >\epsilon \right) <\epsilon \text{. 
} \end{equation*} \end{theorem} \begin{proof} We will only prove the result without the outside supremum and in fact only when $s=0$. The general case follows on using the Markov property and the fact that the estimate in \eqref{eq:squareIdleTimeResultImplication} is uniform over all $x \in \Gamma^r$. Let \begin{equation*} \hat{\xi}^{r}_i(t)=\frac{1}{r}\int_{0}^{r^{2}t}\mathit{I}_{\left\{ {W} _{i}^{r}(s)\ge c_{3}r^{\alpha }\right\} }(s)dI^{r}_i(s), \; i \in \mathbb{N}_{I}. \end{equation*} Note that \begin{equation*} \hat{W}^{r}_i(t)-c_{3}r^{\alpha -1}=\hat{Z}^{r}_i(t)+tr(K\rho ^{r}-C)_i+\hat{\xi} ^{r}_i(t)-c_{3}r^{\alpha -1}+ \int_{0}^{t}\mathit{I}_{\left\{ \hat{W}_{i}^{r}(s)-c_{3}r^{\alpha -1}< 0\right\} }(s)d\hat I^{r}_i(s) \end{equation*} and consequently due to Theorem \ref{thm:skorokIneq}\ we have \begin{equation*} \hat{W}^{r}(t)-c_{3}r^{\alpha -1}\leq \Gamma \left( \hat{Z}^{r} + r(K\rho ^{r}-C)\iota +\hat{\xi}^{r}-c_{3}r^{\alpha -1}\right)(t), \; t \ge 0 \text{.} \end{equation*} In addition, \begin{equation*} \hat{W}^{r}(t)=\hat{Z}^{r}(t)+tr(K\rho ^{r}-C)+\hat{I}^{r}(t) \end{equation*} is a nonnegative function and $\hat{I}^{r}(0)$ is nondecreasing and satisfies $\hat{I}^{r}(0)=0$. Thus once more from Theorem \ref{thm:skorokIneq} \begin{equation*} \Gamma \left( \hat{Z}^{r}(\cdot)+ r(K\rho ^{r}-C)\iota\right) \leq \hat{W}^{r}(t) \text{,}\; t\ge 0. \end{equation*} Combining this gives for all $t\ge 0$ \begin{equation} \Gamma \left( \hat{Z}^{r} +r(K\rho ^{r}-C)\iota\right)(t) \leq \hat{W}^{r}(t)\leq \Gamma \left( \hat{Z}^{r} +r(K\rho ^{r}-C)\iota+\hat{\xi}^{r}(\cdot)-c_{3}r^{ \alpha -1}\right) +c_{3}r^{\alpha -1}. \label{eq:skorokIneq} \end{equation} Lipschitz property of the Skorokhod map gives that there is a $\kappa_1 \in (0,\infty)$ such that for all $T>0$ \begin{equation*} \sup_{0\leq t\leq T}\left\vert \Gamma \left( \hat{Z}^{r}(\cdot) +r(K\rho ^{r}-C)\iota\right)(t) -\hat{W}^{r}(t) \right\vert \leq \kappa_1\left( 2c_{3}r^{\alpha -1}+\left\vert \hat{\xi}^{r}(T)\right\vert \right). \end{equation*} From Theorem \ref{thm:IdleTimeExp} (see \eqref{eq:squareIdleTimeResultImplication}), for any $\epsilon >$ $0$ and $T \in (0,\infty)$, there exists $R \in (0,\infty)$ such that for all $r\geq R$ and $x \in \mathcal{S}^r$ \begin{equation*} P_x\left( \left\vert \hat{\xi}^{r}(T)\right\vert >\epsilon \right) <\epsilon . \end{equation*} The result follows. \end{proof} Recall the initial condition $q^r$ introduced in \eqref{eq:queleneqn}. \begin{theorem} \label{thm:occMeasTight} Suppose $\hat q^r \doteq q^r/r$ satisfies $\sup_{r>0} \hat q^r <\infty$. Let $\{t_r\}$ be an increasing sequence such that $t_r\uparrow\infty$ as $r\to \infty$. Suppose that $\hat w^r$ converges to some $w \in \mathbb{R}_+^I$. Then, the random variables $ \{\theta _{t_r}^{r}, r>0\}$ are tight in the space $\mathcal{P} \left( D([0,1]:\mathbb{R}_+^{I}\times\mathbb{R}^{I})\right) $. \end{theorem} \begin{proof} It suffices to show that the collection $$\left\{\left (\hat W^r(t+\cdot), \hat Z^r(t+\cdot)- \hat Z^r(t)\right),\; r> 0,\; t>0\right\}$$ is tight in $D([0,1]: \mathbb{R}_+^I \times \mathbb{R}^I)$. Let \begin{equation*} \mathcal{F}_{t}^{r}=\sigma \left( \hat{S}_{j}^{r}(\bar{B}^{r}(s)),\hat{A} _{j}^{r}(s):j\in \mathbb{N}_{J},0\leq s\leq t\right), \; t\ge 0. \end{equation*} and note that for all $j\in \mathbb{N}$ both $\hat{S}_{j}^{r}(\bar{B} ^{r}(t)) $ and $\hat{A}_{j}^{r}(t)$ are $\mathcal{F}_{t}^{r}$-martingales. 
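Recall that if $M$ is a square integrable martingale with predictable quadratic variation $\langle M\rangle$, then for stopping times $\tau _{1}\leq \tau _{2}\leq 1$ optional sampling gives
\begin{equation*}
E\left[ \left( M(\tau _{2})-M(\tau _{1})\right) ^{2}\right] =E\left[ \langle M\rangle (\tau _{2})-\langle M\rangle (\tau _{1})\right] \text{.}
\end{equation*}
With the scaling used here, the predictable quadratic variations of the martingales $\hat{A}_{j}^{r}$ and $\hat{S}_{j}^{r}(\bar{B}_{j}^{r}(\cdot))$ are bounded by constant multiples, uniform in $r$, of $t$ and $\bar{B}_{j}^{r}(t)$ respectively, and this is the only property used in the next estimate.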
\ Consequently, there are $\kappa_1, \kappa_2 \in (0,\infty)$ such that for any $r>0$, $\delta>0$ and $\mathcal{F}_{t}^{r}$-stopping times $ \tau _{1},\tau _{2}$ satisfying $\tau _{1}\leq \tau _{2}\leq \tau _{1}+\delta \le 1 $, \begin{eqnarray*} &&E\left[ \left( \hat{Z}_{i}^{r}(\tau _{2})-\hat{Z}_{i}^{r}(\tau _{1})\right) ^{2}\right] \\ &\leq &\kappa_1 \sum_{j=1}^{J}G_{i,j}^{r}\left( E\left[ (\hat{A}_{j}^{r}(\tau _{2})- \hat{A}_{j}^{r}(\tau _{1}))^{2}\right] +E\left[ (\hat{S}_{j}^{r}(\bar{B} ^{r}(\tau _{2}))-\hat{S}_{j}^{r}(\bar{B}^{r}(\tau _{1})))^{2}\right] \right) \\ &\leq & \kappa_1 \sum_{j=1}^{J} E\left[ \tau _{2}-\tau _{1} \right] +\sum_{j=1}^{J} E\left[ \bar{B}^{r}(\tau _{2})- \bar{B}^{r}(\tau _{1})\right] \\ &\leq & \kappa_2\delta . \end{eqnarray*} This proves the tightness of the collection $\{\hat Z^r(t+\cdot)- \hat Z^r(t),\; r> 0,\; t>0\}$. From the convergence $r(K\varrho^r-C) \to v^*$, Theorem \ref{thm:finTimeConvToRBM}, and Lipschitz property of the Skorohod map, to prove the tightness of $\{\hat W^r(t+\cdot),\; r> 0,\; t>0\}$ it now suffices to prove the tightness of $\{\hat W^r(t),\; r> 0,\; t>0\}$. However that is an immediate consequence of Propositions \ref{thm:WmomBnd} and \ref{thm:stopTimeExpMomBnd}. The result follows. \end{proof} Recall that the reflected Brownian motion $\{\check W^{w_0}\}_{w_0 \in \mathbb{R}_+^I}$ in \eqref{eq:eqrbm} has a unique invariant probability distribution which we denote as $\pi$. We will denote by $\Pi$ the unique measure on $C([0,1]:\mathbb{R}_+^I)$ associated with this Markov process with initial distribution $\pi$. The following theorem gives a characterization of the weak limit points of the sequence $\theta _{t_r}^{r}$ in Theorem \ref{thm:occMeasTight}. We denote the canonical coordinate processes on $D([0,1]:\mathbb{R}_+^I \times \mathbb{R}^I)$ as $(\mathbf{w}(t), \mathbf{z}(t))_{0\le t\le 1}$. Let $\mathcal{G}_t \doteq \sigma\{(\mathbf{w}(s), \mathbf{z}(s)): 0 \le s \le t\}$ be the canonical filtration on this space. \begin{theorem} \label{thm:limitMeasProp} Suppose $\hat q^r \doteq q^r/r$ satisfies $\sup_{r>0} \hat q^r <\infty$. Also suppose that $\theta _{t_r}^{r}$ converges in distribution, along some subsequence as $r\to \infty$, to a $\mathcal{P}(D([0,1]:\mathbb{R}_+^I \times \mathbb{R}^I))$ valued random variable $\theta$ given on some probability space $(\bar \Omega, \bar \mathcal{F}, \bar P)$. Then for $\bar P$ a.e. $\omega$, under $\theta(\omega)\equiv \theta_{\omega}$ the following hold. \begin{enumerate} \item $\theta_{\omega}(C([0,1]:\mathbb{R}_+^I\times \mathbb{R}^I))=1$. \item $\{\mathbf{z}(t)\}_{0\le t\le 1}$ is a $\mathcal{G}_t$-Brownian motion with covariance matrix $\Sigma = \Lambda \Lambda'$, where $\Lambda$ is as introduced above \eqref{eq:eqrbm}. \item $\{(\mathbf{w}(t),\mathbf{z}(t))\}_{0\le t\le 1}$ satisfy $\theta_{\omega}$ a.s. $$\mathbf{w}(t) = \Gamma(\mathbf{w}(0)- v^*\iota +\mathbf{z})(t),\; 0\le t \le 1.$$ \item $\theta_{\omega} \circ (\mathbf{w}(0))^{-1} = \pi$ and thus denoting the first marginal of $\theta_{\omega}$ on $C([0,1]:\mathbb{R}_+^I)$ as $\theta_{\omega}^1$, we have $\theta_{\omega}^1=\Pi$. \end{enumerate} \end{theorem} \begin{proof} For notational simplicity we denote the convergent subsequence of $\theta _{t_r}^{r}$ by the same symbol. For $(x,y)\in D([0,1]:\mathbb{R}_{+}^{I}\times \mathbb{R}^{I})$ define $ j(x,y)=\sup_{0\leq t<1} \left\Vert (x(t),y(t))-(x(t-),y(t-))\right\Vert. 
$ Then there is a $\kappa_1\in (0,\infty)$ such that for all $r$, $E \theta^r_{t_r} ((x,y): j(x,y) > \kappa_1/r) =0.$ Thus in particular, for every $\delta \in (0,\infty)$, as $r\to \infty$, $E \theta^r_{t_r} ((x,y): j(x,y) > \delta) \to 0.$ By weak convergence of $\theta^r_{t_r}$ to $\theta$ and Fatou's lemma we then have $E \theta ((x,y): j(x,y) > \delta) = 0$ which proves part (1) of the theorem. In what follows, we will denote the expected value under $\theta^r_{t_r}$ (resp. $\theta$) as $E_{\theta^r_{t_r}}$ (resp. $E_{\theta}$). Let $f: D([0,1]:\mathbb{R}_{+}^{I}\times \mathbb{R}^{I}) \to \mathbb{R}$ be a continuous and bounded function. We now argue that for all $0\le s<t\le 1$, and $i \in \mathbb{N}_I$ \begin{equation} \bar E \left(\left |E_{\theta}\left(f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{z}_i(t)-\mathbf{z}_i(s))\right)\right| \wedge 1\right) =0. \label{eq:eqismart} \end{equation} This will prove that $\{\mathbf{z}(t)\}_{0\le t\le 1}$ is a $\mathcal{G}_t$-martingale under $\theta_{\omega}$ for a.e. $\omega$. To see \eqref{eq:eqismart} note that \begin{align*} &E E_{\theta^r_{t_r}}\left[f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{z}_i(t)-\mathbf{z}_i(s))\right]^2\\ &= E\left[\frac{1}{t_r}\int_0^{t_r} f(\hat W^r(u+(\cdot \wedge s)), \hat Z^r(u+(\cdot \wedge s))- \hat Z^r(u)) [\hat Z^r_i(u+t) - \hat Z^r_i(u+s)] du\right]^2\\ &= \frac{2}{t_r^2}\int_0^{t_r}\int_0^u E(H_i(u)H_i(v)) dv du, \end{align*} where for $u\ge 0$ \begin{equation*} H_i(u)=f (\hat{W}^{r}(u+(\cdot \wedge s)),\hat{Z}^{r}(u+(\cdot \wedge s))-\hat{Z}^{r}(u))(\hat{Z}_i^{r}(u+t)- \hat{Z}_i^{r}(u+s))\text{.} \end{equation*} Since $\hat Z^r_i$ is a martingale, we have for $v<u-1$, $E(H_i(u)H_i(v))=0$. Also from properties of Poisson processes it follows that for every $p\ge 1$ \begin{equation} \sup_{r>0, u\ge 0, s,t\in [0,1]} E\left\|\hat Z^r(u+t) - \hat Z^r(u+s)\right\|^p \doteq m_p<\infty. \label{eq:eqlpzrin} \end{equation} Thus since $f$ is bounded , we have for some $\kappa_2 \in (0,\infty)$ $$\frac{2}{t_r^2}\int_0^{t_r}\int_0^u E(H_i(u)H_i(v)) dv du \le \frac{\kappa_2}{t_r}\to 0$$ as $r\to \infty$. Thus as $r\to \infty$ \begin{equation*} \bar E \left(\left |E_{\theta^r_{t_r}}\left(f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{z}_i(t)-\mathbf{z}_i(s))\right)\right| \wedge 1\right)\to 0. \end{equation*} The equality in \eqref{eq:eqismart} now follows on noting that from \eqref{eq:eqlpzrin}, for all $t\in [0,1]$, $\sup_{r>0} E E _{\theta^r_{t_r}}(\mathbf{z}_i(t))^2 < \infty .$ In order to argue that $\{\mathbf{z}(t)\}_{0\le t\le 1}$ is a $\mathcal{G}_t$-Brownian motion with covariance matrix $\Sigma$ it now suffices to show that defining $\mathbf{m}(t) \doteq \mathbf{z}(t)\mathbf{z}'(t) - t \Sigma$, $\{\mathbf{m}(t)\}_{0\le t\le 1}$ is a $I^2$ dimensional $\{\mathcal{G}_t\}$-martingale. Once more, it suffices to show that with $f$ as before, $0\le s<t\le 1$, and $i,l \in \mathbb{N}_I$, \begin{equation} \bar E \left(\left |E_{\theta}\left(f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{m}_{i,l}(t)-\mathbf{m}_{i,l}(s))\right)\right| \wedge 1\right) =0. 
\label{eq:eqismartqv} \end{equation} For this note that \begin{align} &\bar E E_{\theta^r_{t_r}}\left[f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{m}_{i,l}(t)-\mathbf{m}_{i,l}(s))\right]^2 \nonumber\\ &=E\left[\frac{1}{t_r}\int_0^{t_r} f(\hat W^r(u+(\cdot \wedge s)), \hat Z^r(u+(\cdot \wedge s))- \hat Z^r(u)) [\hat M^{r,u}_{i,l}(t) - \hat M^{r,u}_{i,l}(s)]du\right]^2\nonumber\\ &= \frac{2}{t_r^2}\int_0^{t_r}\int_0^u E(H^r_{i,l}(u)H^r_{i,l}(v)) dv du,\label{eq:a11052} \end{align} where for $u\ge 0$ $$\hat M^{r,u}_{i,l}(t)=\left( \hat{Z}^{r}_{i}(u+t)-\hat{Z}^{r}_i(u)\right) \left( \hat{Z} ^{r}_l(u+t)-\hat{Z}^{r}_l(u)\right) -t\Sigma_{il} $$ and $$ H^r_{i,l}(u) = f(\hat W^r(u+(\cdot \wedge s)), \hat Z^r(u+(\cdot \wedge s))- \hat Z^r(u))[\hat M^{r,u}_{i,l}(t) - \hat M^{r,u}_{i,l}(s)]. $$ Write $$\hat M^{r,u}_{i,l}(t)- \hat M^{r,u}_{i,l}(s) = \hat{\Psi}_{i,l}^{r}(u) + \bar{\xi}_{i,l}^{r}(u),$$ where \begin{eqnarray*} \hat{\Psi}_{i,l}^{r}(u) &=&(\hat{Z}_{i}^{r}(u+t)-\hat{Z}_{i}^{r}(u+s))(\hat{Z }_{l}^{r}(u+t)-\hat{Z}_{l}^{r}(u+s)) \\ &&-\sum\limits_{j=1}^{J}G_{i,j}^{r}K_{l,j}(\bar{B}_{j}^{r}(u+t)-\bar{B} _{j}^{r}(u+s)+(t-s)\varrho _{j}^{r}) \end{eqnarray*} and \begin{equation}\label{eq:defxiil} \bar{\xi}_{i,l}^{r}(u)=\sum\limits_{j=1}^{J}G_{i,j}^{r}K_{l,j}(\bar{B} _{j}^{r}(u+t)-\bar{B}_{j}^{r}(u+s)+(t-s)\varrho _{j}^{r})-(t-s)\Sigma_{i,l} \text{.} \end{equation} Then for $0\le v\le u\le t_r$ \begin{align} |E(H^r_{i,l}(u)H^r_{i,l}(v))| &\le |E(\hat H^r_{i,l}(u)\hat H^r_{i,l}(v))|+ \|f\|_{\infty}^2\sup_{u\ge 0}E(\bar{\xi}_{i,l}^{r}(u))^2\nonumber\\ &\quad+ 2\|f\|_{\infty}^2 \left[\sup_{u\ge 0} E(\hat{\Psi}_{i,l}^{r}(u))^2\right]^{1/2} \left[\sup_{u\ge 0} E(\bar{\xi}_{i,l}^{r}(u))^2\right]^{1/2}, \label{eq:eq1036} \end{align} where $$ \hat H^r_{i,l}(u) = f(\hat W^r(u+(\cdot \wedge s)), \hat Z^r(u+(\cdot \wedge s))- \hat Z^r(u))\hat{\Psi}_{i,l}^{r}(u). $$ From \eqref{eq:eqlpzrin}, for some $\kappa_3\in (0,\infty)$ $$\sup_{r>0, u,v>0} E|\hat H^r_{i,l}(u)\hat H^r_{i,l}(v)| \le \kappa_3.$$ Also, from martingale properties of $\hat A_j$ and $\hat S_j$ we see that for $v<u-1$, $$E(\hat H^r_{i,l}(u)\hat H^r_{i,l}(v))=0.$$ Combining the above two displays we now have that as $r\to \infty$ \begin{equation}\label{eq:qvdiag} \frac{2}{t_r^2}\int_0^{t_r}\int_0^u |E(\hat H^r_{i,l}(u)\hat H^r_{i,l}(v))| dv du \le \frac{\kappa_4}{t_r} \to 0. \end{equation} From \eqref{eq:eqlpzrin} once more, we have for some $\kappa_5 \in (0,\infty)$ \begin{equation} \sup_{u\ge 0, r>0} E(\hat{\Psi}_{i,l}^{r}(u))^2 \le \kappa_5.\label{eq:bdpsiil} \end{equation} We now argue that \begin{equation} \sup_{u\ge 0} E(\bar{\xi}_{i,l}^{r}(u))^2 \to 0 \mbox{ as } r\to \infty.\label{eq:bdpsiilxi} \end{equation} Note that once \eqref{eq:bdpsiilxi} is proved, it follows on combining \eqref{eq:a11052}, \eqref{eq:eq1036}, \eqref{eq:qvdiag} and \eqref{eq:bdpsiilxi} that $$E E_{\theta^r_{t_r}}\left[f(\mathbf{w}(\cdot \wedge s), \mathbf{z}(\cdot \wedge s)) (\mathbf{m}_{i,l}(t)-\mathbf{m}_{i,l}(s))\right]^2\to 0$$ as $r\to \infty$. Once more using the moment bound in \eqref{eq:eqlpzrin} we then have \eqref{eq:eqismartqv} completing the proof of (2). We now return to the proof of \eqref{eq:bdpsiilxi}. We note that for some $\kappa_6 \in (0,\infty)$ $$\sup_{u,r>0} |\bar{\xi}_{i,l}^{r}(u)| \le \kappa_6 \mbox{ a.s. }.$$ Thus for any $\epsilon \in (0,\infty)$ \begin{equation}\label{eq:eps2118} \sup_{u>0} E|\bar{\xi}_{i,l}^{r}(u)|^2 \le \epsilon^2 + \kappa_6^2 \sup_{u>0}P(|\bar{\xi}_{i,l}^{r}(u)|>\epsilon). 
\end{equation} Next from properties of Poisson processes it follows that for any $\tilde \epsilon \in (0,\infty)$, as $r\to \infty$ \begin{equation*} \sup_{u\ge 0}P\left( \left\vert \bar{A}_{j}^{r}(u+t)-\bar{A}_{j}^{r}(u+s)-(t-s)\lambda_{j}^r\right\vert >\tilde\epsilon \right) \to 0 \end{equation*} and \begin{equation*} \sup_{u\ge 0}P\left( \left\vert \bar{S}_{j}^{r}(\bar{B}_{j}^{r}(u+t))-\bar{S}_{j}^{r}( \bar{B}_{j}^{r}(u+s))-(\bar{B}_{j}^{r}(u+t)-\bar{B}_{j}^{r}(u+s))\mu_{j}^r\right\vert >\tilde \epsilon \right) \to 0 \text{.} \end{equation*} Also, using Theorem \ref{thm:WmomBnd}, as $r\to \infty$ \begin{align*} &\sup_{u\ge 0}P\left( \left\vert \bar{A}_{j}^{r}(u+t)-\bar{A}_{j}^{r}(u+s)-\left( \bar{S} _{j}^{r}(\bar{B}_{j}^{r}(u+t))-\bar{S}_{j}^{r}(\bar{B}_{j}^{r}(u+s))\right) \right\vert >\tilde \epsilon \right) \\ &=\sup_{u\ge 0}P\left( \left\vert \bar{Q}_{j}^{r}(u+t)-\bar{Q}_{j}^{r}(u+s)\right\vert >\tilde \epsilon \right) \to 0. \end{align*} Combining the above three convergence properties we have that as $r\to \infty$ \begin{equation} \sup_{u\ge 0} P\left( \left\vert (t-s)\lambda_{j}^r-(\bar{B}_{j}^{r}(u+t)-\bar{B} _{j}^{r}(u+s))\mu _{j}^r\right\vert >\tilde \epsilon \right) \to 0 \text{.} \label{eq:eeq1113} \end{equation} Recalling the definition of $\bar{\xi}_{i,l}^{r}(u)$ from \eqref{eq:defxiil} and noting that $2\sum_{j=1}^J G_{ij}K_{l,j}\varrho_j = \Sigma_{il}$, we see from \eqref{eq:eeq1113} that for any $\epsilon \in (0,\infty)$ $$\sup_{u>0}P(|\bar{\xi}_{i,l}^{r}(u)|>\epsilon) \to 0$$ as $r\to \infty$. Using this in \eqref{eq:eps2118} and sending $\epsilon\to 0$ we have \eqref{eq:bdpsiilxi}. As noted earlier this completes the proof of (2). We now prove (3). From Theorem \ref{thm:finTimeConvToRBM} and since $r(K\varrho^r-C)\to v^*$ as $r\to \infty$, we have for every $t \in [0,1]$, as $r\to \infty$ \begin{align*} & E E_{\theta^r_{t_r}}\left[\left\|\mathbf{w}(t) - \Gamma(\mathbf{w}(0) + \mathbf{z} -v^*\iota)(t)\right\|\wedge 1\right]\\ & = \frac{1}{t_r}\int_0^{t_r} E\left[\left\|\hat W^r(u+t) - \Gamma(\hat W^r(u) + \hat Z^r(u+\cdot)-\hat Z^r(u)- v^*\iota)(t)\right\|\wedge 1\right] du \to 0. \end{align*} Since $\theta^r_{t_r}\to \theta$ in distribution, we have from the continuous mapping theorem $$\bar E E_{\theta}\left[\left\|\mathbf{w}(t) - \Gamma(\mathbf{w}(0) + \mathbf{z} -v^*\iota)(t)\right\|\wedge 1\right]=0.$$ This proves (3). Finally, in order to prove (4) it suffices to show that for every continuous and bounded $g:\mathbb{R}_+^I \to \mathbb{R}$ and $t\in [0,1]$ \begin{equation} \label{eq:geq1129} \bar E\left| E_{\theta}(g(\mathbf{w}(t))) - E_{\theta}(g(\mathbf{w}(0)))\right| =0\text{.} \end{equation} Note that as $r\to \infty$ \begin{align*} &E\left| E_{\theta^r_{t_r}}(g(\mathbf{w}(t))) - E_{\theta^r_{t_r}}(g(\mathbf{w}(0)))\right|\\ &= E\left|\frac{1}{t_r}\int_0^{t_r} g(\hat W^r(u+t)) du - \frac{1}{t_r}\int_0^{t_r} g(\hat W^r(u)) du\right|\\ &\le \frac{2\|g\|_{\infty}}{t_r} \to 0 . \end{align*} The equality in \eqref{eq:geq1129} now follows on using the convergence of $\theta^r_{t_r}\to \theta$ and applying the continuous mapping theorem. This completes the proof of the theorem. \end{proof} \section{\protect Proofs of Theorems \ref{thm:thm6.5} and \ref{thm:thm6.5disc}.} \label{sec:pfsmainthms} Recall from \eqref{eq:eq942} the cost function in the EWF, namely $\mathcal{C}$.
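Both proofs below use the following elementary truncation bound, recorded here for ease of reference: for every $L\in (0,\infty )$ and $w\in \mathbb{R}_{+}^{I}$,
\begin{equation*}
0\leq \mathcal{C}(w)-\mathcal{C}(w)\wedge L\leq \mathcal{C}(w)\mathit{I}_{\{\mathcal{C}(w)>L\}}\leq \frac{\mathcal{C}^{2}(w)}{L}\text{,}
\end{equation*}
so that, in particular, $E|\mathcal{C}(\hat{W}^{r}(t))-\mathcal{C}(\hat{W}^{r}(t))\wedge L|\leq E\,\mathcal{C}^{2}(\hat{W}^{r}(t))/L$ for every $t$ and $r$.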
{\bf Proof of Theorem \ref{thm:thm6.5}.} From Theorem \ref{thm:discCostInefBnd} and noting that $h\cdot \hat Q^r(t) \ge \mathcal{C}(\tilde W^r(t))$ a.s., we have $$E\frac{1}{t_r}\int_{0}^{t_r} |h\cdot \hat{Q}^{r}(t) - \mathcal{C}(\tilde W^r(t))| dt \le B r^{\alpha -1/2}(1+ |\hat q^r|^2).$$ Next, from Theorem \ref{thm:restCostJobOrd} we see that $\mathcal{C}$ is a Lipschitz function. Let $L_{\mathcal{C}}$ denote the corresponding Lipschitz constant. Since $M^r\to M$, we can find $\eta_r \in (0,\infty)$ such that $\eta_r\to 0$ as $r\to \infty$ and \begin{equation} \label{eq:eqworkldfi} |\tilde W^r(t) - \hat W^r(t)| \le \eta_r |\hat Q^r(t)| \mbox{ for all } t \ge 0,\, r>0. \end{equation} From Theorem \ref{thm:WmomBnd} it then follows that, as $r\to \infty$ $$ E\frac{1}{t_r}\int_{0}^{t_r} |\mathcal{C}(\tilde W^r(t)) - \mathcal{C}(\hat W^r(t))| dt \le L_{\mathcal{C}} \eta_r \frac{1}{t_r}\int_{0}^{t_r} E|\hat Q^r(t)| dt \to 0.$$ Thus in order to complete the proof it suffices to show that \begin{equation}\label{eq:maintpt129} \frac{1}{t_r}\int_{0}^{t_r} \mathcal{C}(\hat W^r(t)) \to \int \mathcal{C}(w)\pi(dw), \mbox{ in } L^1, \mbox{ as } r \to \infty . \end{equation} From Theorems \ref{thm:occMeasTight} and \ref{thm:limitMeasProp}, for every $L\in (0,\infty)$, $$ \frac{1}{t_r}\int_{0}^{t_r} \mathcal{C}_L(\hat W^r(t)) \to \int \mathcal{C}_L(w)\pi(dw), \mbox{ in } L^1, \mbox{ as } r \to \infty $$ where $\mathcal{C}_L(w) \doteq \mathcal{C}(w)\wedge L$ for $w \in \mathbb{R}_+^I$. Also, from linear growth of $\mathcal{C}$ and Theorem \ref{thm:WmomBnd}, as $L\to \infty$, $$\sup_{r>0} \frac{1}{t_r}\int_{0}^{t_r} E|\mathcal{C}(\hat W^r(t)) - \mathcal{C}_L(\hat W^r(t))| dt \le \frac{1}{L} \sup_{r>0} \frac{1}{t_r}\int_{0}^{t_r} E\mathcal{C}^2(\hat W^r(t)) dt \to 0.$$ Theorem \ref{thm:limitMeasProp} and Fatou's lemma also show that $\int \mathcal{C}(w) \pi(dw)<\infty$. Combining this with the above two displays we now have \eqref{eq:maintpt129} and the result follows. \qed \\ \ \\ We now prove the convergence of the discounted cost. Proof is a simpler version of the argument in the proof of Theorem \ref{thm:thm6.5} and therefore we omit some details.\\ \ \\ {\bf Proof of Theorem \ref{thm:thm6.5disc}.} Minor modifications of the proof of Theorem \ref{thm:occMeasTight} together with Theorem \ref{thm:finTimeConvToRBM} show that for any $T<\infty$ $\hat W^r$ converges in $D([0,T]: \mathbb{R}_+^I)$ to $\check{W}^{w_{0}}$. 
Thus using continuity of $\mathcal{C}$, for every $L\in (0,\infty)$ and $\mathcal{C}_L$ as in the proof of Theorem \ref{thm:thm6.5}, for every $T<\infty$, \begin{equation*} \lim_{r\rightarrow \infty }E\left[ \int_{0}^{T}e^{-\theta t}\mathcal{C}_{L}\left( \hat{ W}^{r}(t)\right) dt\right] =E\left[ \int_{0}^{T}e^{-\theta t}\mathcal{C}_{L}\left( \check{W}^{w_{0}}(t)\right) dt\right] \text{.} \end{equation*} From Theorem \ref{thm:WmomBnd} we have, as $L\to \infty$, $$ \sup_{r>0} E \int_{0}^{\infty}e^{-\theta t}|\mathcal{C}( \hat{W}^{r}(t))-\mathcal{C}_{L}( \hat{ W}^{r}(t))| dt \le \frac{1}{L } \sup_{r>0} \int_{0}^{\infty}e^{-\theta t} E\mathcal{C}^2( \hat{W}^{r}(t)) dt \to 0$$ From Theorem \ref{thm:WmomBnd} we also see that as $T\to \infty$ $$\sup_{r>0} \int_{T}^{\infty}e^{-\theta t} E\mathcal{C}( \hat{W}^{r}(t)) dt \to 0, \; \int_{T}^{\infty}e^{-\theta t} E\mathcal{C}(\check{W}^{w_{0}}(t))dt \to 0.$$ Using the fact that $E \int_{0}^{\infty}e^{-\theta t}\mathcal{C}( \check{W}^{w_{0}}(t)) dt <\infty$ it then follows that for every $T\in (0,\infty)$ $$E \int_{0}^{\infty}e^{-\theta t} h \cdot\hat{Q}^{r}(t) dt \to E \int_{0}^{\infty}e^{-\theta t}\mathcal{C}(\check{W}^{w_{0}}(t))dt .$$ The result follows. \qed \setcounter{equation}{0} \appendix \numberwithin{equation}{section} \section{Large Deviation Estimates for Poisson Processes} The following result gives classical exponential tail bounds for Poisson processes. For the proof of the first estimate we refer the reader to \cite{kurtz1978strongapp} while the second result is a consequence of \cite[Section 4.1l1 3, Theorem 5]{LipShibook}. \begin{theorem} \label{thm:LDP}Let $N^{r}(t)$ be a Poisson process with rates $\lambda ^{r}$ such that $\lim_{r\rightarrow \infty }\lambda ^{r}=\lambda \in (0,\infty) $. \ Then for any $\epsilon \in (0,\infty)$ there exist $B_{1},B_{2}, R \in (0,\infty)$ such that for all $0<\sigma <\infty $ and $ r\geq R$ we have \begin{equation*} P\left( \sup_{0\leq t\leq 1}\left\vert \frac{N^{r}(\sigma t)}{\sigma } -\lambda ^{r}t\right\vert >\epsilon \right) \leq B_{1}e^{-\sigma B_{2}} \end{equation*} and for all $T \in (0,\infty)$ \begin{equation*} P\left( \sup_{0\leq t\leq T}\left\vert N^{r}(r^{2}t)-r^{2}t\lambda^{r}\right\vert \geq \epsilon rT\right) \leq B_{1}e^{-B_{2}T} \end{equation*} \end{theorem} \begin{bibdiv} \begin{biblist} \bib{atakum}{article}{ author={Ata, Baris}, author={Kumar, Sunil}, title={Heavy traffic analysis of open processing networks with complete resource pooling: asymptotic optimality of discrete review policies}, date={2005}, journal={The Annals of Applied Probability}, volume={15}, number={1A}, pages={331\ndash 391}, } \bib{belwil1}{article}{ author={Bell, Steven~L}, author={Williams, Ruth~J}, title={Dynamic scheduling of a system with two parallel servers in heavy traffic with resource pooling: asymptotic optimality of a threshold policy}, date={2001}, journal={The Annals of Applied Probability}, volume={11}, number={3}, pages={608\ndash 649}, } \bib{belwil2}{article}{ author={Bell, Steven~L}, author={Williams, Ruth~J}, title={Dynamic scheduling of a parallel server system in heavy traffic with complete resource pooling: Asymptotic optimality of a threshold policy}, date={2005}, journal={Electronic Journal of Probability}, volume={10}, pages={1044\ndash 1115}, } \bib{boh1}{article}{ author={B{\"o}hm, Volker}, title={On the continuity of the optimal policy set for linear programs}, date={1975}, journal={SIAM Journal on Applied Mathematics}, volume={28}, number={2}, pages={303\ndash 306}, } 
\bib{budgho1}{article}{ author={Budhiraja, Amarjit}, author={Ghosh, Arka~Prasanna}, title={A large deviations approach to asymptotically optimal control of crisscross network in heavy traffic}, date={2005}, journal={The Annals of Applied Probability}, volume={15}, number={3}, pages={1887\ndash 1935}, } \bib{budgho2}{article}{ author={Budhiraja, Amarjit}, author={Ghosh, Arka~Prasanna}, title={Diffusion approximations for controlled stochastic networks: An asymptotic bound for the value function}, date={2006}, journal={The Annals of Applied Probability}, volume={16}, number={4}, pages={1962\ndash 2006}, } \bib{harwil1}{article}{ author={Harrison, J.~M.}, author={Williams, R.~J.}, title={Brownian models of open queueing networks with homogeneous customer populations}, date={1987}, ISSN={0090-9491}, journal={Stochastics}, volume={22}, number={2}, pages={77\ndash 115}, url={http://dx.doi.org.libproxy.lib.unc.edu/10.1080/17442508708833469}, review={\MR{912049}}, } \bib{har1}{article}{ author={Harrison, J~Michael}, title={Brownian models of open processing networks: Canonical representation of workload}, date={2000}, journal={Annals of Applied Probability}, pages={75\ndash 103}, } \bib{harmandhayan}{article}{ author={Harrison, J~Michael}, author={Mandayam, Chinmoy}, author={Shah, Devavrat}, author={Yang, Yang}, title={Resource sharing networks: Overview and an open problem}, date={2014}, journal={Stochastic Systems}, volume={4}, number={2}, pages={524\ndash 555}, } \bib{harvan}{article}{ author={Harrison, J~Michael}, author={Van~Mieghem, Jan~A}, title={Dynamic control of brownian networks: state space collapse and equivalent workload formulations}, date={1997}, journal={The Annals of Applied Probability}, pages={747\ndash 771}, } \bib{kankelleewil}{article}{ author={Kang, WN}, author={Kelly, FP}, author={Lee, NH}, author={Williams, RJ}, title={State space collapse and diffusion approximation for a network operating under a fair bandwidth sharing policy}, date={2009}, journal={The Annals of Applied Probability}, pages={1719\ndash 1780}, } \bib{kurtz1978strongapp}{article}{ author={Kurtz, T.G.}, title={Strong approximation theorems for density dependent {M}arkov chains}, date={1978}, journal={Stochastic Process. Appl.}, volume={6}, pages={223\ndash 240}, } \bib{LipShibook}{book}{ author={Liptser, R.~Sh.}, author={Shiryayev, A.~N.}, title={Theory of martingales}, series={Mathematics and its Applications (Soviet Series)}, publisher={Kluwer Academic Publishers Group, Dordrecht}, date={1989}, volume={49}, } \bib{masrob}{article}{ author={Massoulie, Laurent}, author={Roberts, James~W}, title={Bandwidth sharing and admission control for elastic traffic}, date={2000}, journal={Telecommunication systems}, volume={15}, number={1-2}, pages={185\ndash 201}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \begin{abstract} We show the surjectivity of a restriction map on higher $(0,1)$-cycles for a smooth projective scheme over an excellent henselian discrete valuation ring. This gives evidence for a conjecture stated in \cite{KEW} saying that base change holds for such schemes in general for motivic cohomology in degrees $(i,d)$ for fixed $d$ being the relative dimension over the base. Furthermore, the restriction map we study is related to a finiteness conjecture for the $n$-torsion of $CH_0(X)$, where $X$ is a variety over a $p$-adic field. \end{abstract} \title{On a base change conjecture for higher zero-cycles} \section{Introduction} Let $\mathcal{O}_K$ be an excellent henselian discrete valuation ring with quotient field $K$ and residue field $k=\mathcal{O}_K/\pi\mathcal{O}_K$ and always assume that $1/n\in k$. Let $X$ be a regular scheme, flat and projective over Spec$\mathcal{O}_K$ of fibre dimension $d$. Let $X_K$ denote the generic fibre and $X_0$ the reduced special fibre. Let $\Lambda=\mathbb{Z}/n\mathbb{Z}$. In \cite[Cor. 9.5]{SS} and \cite[App.]{EWB} it is shown that for $X\rightarrow \text{Spec}\mathcal{O}_K$ smooth and projective and $k$ finite or algebraically closed, the restriction map $$CH_1(X)_{\Lambda}\xrightarrow{\simeq} CH_0(X_0)_{\Lambda}$$ is an isomorphism of Chow groups with coefficients in $\Lambda$. This result is reproven in \cite{KEW} for more general residue fields and generalised to the case that $X_0$ is a simple normal crossings divisor. In that case one needs to replace $CH_0(X_0)$ by $H^{2d}_{cdh}(X_0,\mathbb{Z}/n\mathbb{Z}(d))$, i.e. the hypercohomology of the motivic complex $\mathbb{Z}/n\mathbb{Z}(d)$ in the cdh-topology, which is isomorphic to $CH_0(X_0)$ for $X_0/k$ smooth. The result then says that if $k$ is finite, or algebraically closed, or $(d-1)!$ prime to $m$, or $A$ is of equal characteristic, or $X/\mathcal{O}_K$ is smooth with perfect residue field $k$, then there is an isomorphism $$CH_1(X)_{\Lambda}\xrightarrow{\simeq} H^{2d}_{cdh}(X_0,\mathbb{Z}/n\mathbb{Z}(d))$$ which is induced by restricting a one-cycle in general position to a zero-cycle on $X_0^{sm}$. Generalising this result, the following conjecture is stated in section $10$ of \cite{KEW}: \begin{conj}\label{conjkew} The restriction homomorphism $$res:H^{i,d}(X,\mathbb{Z}/n\mathbb{Z})\rightarrow H^{i,d}_{cdh}(X_0,\mathbb{Z}/n\mathbb{Z})$$ is an isomorphism for all $i\geq 0$. \end{conj} Here $H^{i,d}(X,\mathbb{Z}/m\mathbb{Z})=H^{i}(X,\mathbb{Z}/m\mathbb{Z}(d))$ are the motivic cohomology groups for schemes over Dedekind rings defined in \cite{Sp}. In this article we consider the corresponding restriction map on higher Chow groups of zero-cycles with coefficients in $\Lambda$ $$res^{CH}:CH^d(X,2d-i)_{\Lambda}\xrightarrow{} CH^d(X_0,2d-i)_{\Lambda}$$ for $X/\mathcal{O}_K$ smooth which we define to be induced by the following composition: $$res^{CH}:CH^n(X,m)\xrightarrow{}CH^n(X_K,m)\xrightarrow{\cdot(-\pi)}CH^{n+1}(X_K,m+1)\xrightarrow{\partial}CH^n(X_0,m).$$ Here $\cdot(-\pi)$ is the product with $-\pi\in CH^1(K,1)=K^{\times}$ defined in \cite[Sec. 5]{Bl}, $\pi$ is a local parameter for the discrete valuation on $K$ and $\partial$ is the boundary map coming from the localization sequence for higher Chow groups (see \cite{Le2}). We call the composition $$sp_{\pi}^{CH}:CH^n(X_K,m)\xrightarrow{\cdot(-\pi)}CH^{n+1}(X_K,m+1)\xrightarrow{\partial}CH^n(X_0,m)$$ a specialisation map for higher Chow groups. 
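To fix degrees, note that since $X$ has relative dimension $d$ over $\mathcal{O}_K$, for $i=2d$ the above map reads
\begin{equation*}
res^{CH}:CH^d(X,0)_{\Lambda}=CH_1(X)_{\Lambda}\rightarrow CH^d(X_0,0)_{\Lambda}=CH_0(X_0)_{\Lambda},
\end{equation*}
so it relates the same groups as the restriction map recalled at the beginning of the introduction, while the case treated below is $i=2d-1$, where the groups are $CH^d(X,1)_{\Lambda}$ and $CH^d(X_0,1)_{\Lambda}$.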
We note that $res^{CH}$ does not depend on the choice of $\pi$ whereas $sp_{\pi}^{CH}$ does. For a detailed discussion of the specialisation map see also \cite[Sec. 3]{ADIKMP}. Our main theorem is the following: \begin{theorem}\label{mt} Let $X/\mathcal{O}_K$ be smooth. Then the restriction map $$res^{CH}:CH^d(X,1)_{\Lambda}\twoheadrightarrow CH^d(X_0,1)_{\Lambda}$$ is surjective. This implies in particular the surjectivity part of conjecture \ref{conjkew} for the pair $(2d-1,d)$. \end{theorem} This implies the following corollary: \begin{corollary}\label{cormt} Let $X/\mathcal{O}_K$ be smooth. Then the specialisation map $$sp^{CH}_{\pi}:CH^d(X_K,1)_{\Lambda}\twoheadrightarrow CH^d(X_0,1)_{\Lambda}$$ is surjective. \end{corollary} The restriction map in the degree of theorem \ref{mt} is of particular interest since it is related to a conjecture on the finiteness of $CH^d(X_K)[n]$ for $K$ a $p$-adic field. This, as well as the injectivity for $d=2$, is shown in section \ref{remarks}. Furthermore, theorem \ref{mt} together with the main result of \cite{KEW} may be considered as a generalization to perfect residue fields of the vanishing of the Kato homology group $KH_3(X)$ defined in \cite{SS} where it was proven for $k$ finite or separably closed. \\ \paragraph{\textit{Acknowledgements.}} I would like to heartily thank my supervisor Moritz Kerz for his help and many ideas for this paper. I would also like to thank Johann Haas and Yigeng Zhao for helpful discussions and the Department of Mathematics of the University of Regensburg and the SFB 1085 ``Higher Invariants'' for the nice working environment. \section{Main result}\label{mainresult} Let $\mathcal{O}_K$ be an excellent henselian discrete valuation ring with quotient field $K$ and residue field $k=\mathcal{O}_K/\pi\mathcal{O}_K$ and always assume that $1/n\in k$. From now on let $X$ be a smooth and projective scheme over Spec$\mathcal{O}_K$ of fibre dimension $d$, in which case we also say that $X$ is of relative dimension $d$ over $\mathcal{O}_K$. Let $X_K$ denote the generic fibre and $X_0$ the reduced special fibre. By $X_{(p)}$ we denote the set of points $x\in X$ such that $\text{dim}(\overline{\{x\}})=p$, where $\overline{\{x\}}$ denotes the closure of $x$ in $X$. We are going to use the following notation for Rost's Chow groups with coefficients in Milnor K-theory (see \cite[Sec. 5]{Ro}): $$C_p(X,m)=\bigoplus_{x\in X_{(p)}}(K_{m+p}^Mk(x))\otimes \mathbb{Z}/n\mathbb{Z}$$ $$Z_p(X,m)=\text{ker}[\partial:C_p(X,m)\rightarrow C_{p-1}(X,m)]$$ $$A_p(X,m)=H_p(C_*(X,m))$$ We write $Z_k(X)$ for the group of $k$-cycles on $X$, i.e. the free abelian group generated by the integral closed subschemes of $X$ of dimension $k$. Let $\pi$ be a fixed local parameter of $\mathcal{O}_K$. We define the restriction map $$res_\pi: C_p(X,m)\rightarrow C_{p-1}(X_0,m+1)$$ to be the composition $$res_{\pi}: C_p(X,m)\rightarrow C_{p-1}(X_K,m+1)\xrightarrow{\cdot\{-\pi\}} C_{p-1}(X_K,m+2)\xrightarrow{\partial}C_{p-1}(X_0,m+1).$$ In the above composition the map $C_p(X,m)\rightarrow C_{p-1}(X_K,m+1)$ is defined to be the identity on all elements supported on $X_{(p)}\setminus {X_0}_{(p)}$ and zero on ${X_0}_{(p)}$. The map $\partial$ is defined to be the boundary map induced by the tame symbol on Milnor K-theory for discrete valuation rings. More precisely, $\partial$ is defined as follows: Let $\overline{\{x\}}$ be the subscheme corresponding to $x\in X_{(p)}$. Let us assume for simplicity that $\overline{\{x\}}$ is normal.
Otherwise we take the normalisation and use the norm map. Now if $y\in \overline{\{x\}}_{(p-1)}$, then $y$ defines a discrete valuation on $k(x)$. Let $\pi'$ be a local parameter of $k(x)$. Let $\partial^x_y:K^M_{n+1}k(x)\rightarrow K^M_{n}k(y)$ be the tame symbol defined by sending $\{\pi',u_1,...,u_n\}$ to $\{\bar{u_1},...,\bar{u_n}\},$ where the $u_i$ are units in the discrete valuation ring of $k(x)$ and the $\bar{u_i}$ their images in $k(y)$. $\partial$ is defined to be the sum of all $\partial^x_y$ taken over all $x\in X_{(p)}$ and all $y\in \overline{\{x\}}_{(p-1)}$. Note that the restriction map $res_\pi$ has to be distinguished from the specialisation map $$sp^x_{y,\pi'}=\partial^x_y\circ \{-\pi'\}:K_{n}^Mk(x)\rightarrow K_{n}^Mk(y).$$ $sp^x_{y,\pi'}$ sends $\{\pi'^{i_1}u_1,...,\pi'^{i_n}u_n\}$ to $\{\bar{u_1},...,\bar{u_n}\},$ where again the $u_i$ are units in the discrete valuation ring of $k(x)$ and the $\bar{u_i}$ their images in $k(y)$. The map $res_\pi$ depends on the choice of $\pi$ but the induced map on homology $$res: A_p(X,m)\rightarrow A_{p-1}(X_0,m+1)$$ is independent of the choice. This can be seen as follows: Let $u\in\mathcal{O}_K^\times$ and $\alpha\in C_p(X,m)$. Then $res_{u\pi}(\alpha)=\partial(\{-\pi u\}\cdot \alpha)=\partial(\{-\pi\}\cdot \alpha)+\partial(\{u\}\cdot \alpha)=res_{\pi}(\alpha)+\partial(\{u\}\cdot \alpha)$. Now if $\alpha\in A_p(X,m)$, then $\partial(\{u\}\cdot \alpha)=0$ and $res_{u\pi}(\alpha)=res_{\pi}(\alpha)$. In the following we will write $res$ for $res_\pi$, fixing a local parameter $\pi\in O_K$. We now turn to our principle interest of study, the restriction map $$res:C_2(X,-1)\rightarrow C_1(X_0,0).$$ We start with the following lemma: \begin{lemma}\label{Csurj} The map $res:C_2(X,-1)\rightarrow C_1(X_0,0)$, after having fixed $\pi$, is surjective. \end{lemma} \begin{proof} Let $\bar{u}\in K^M_1k(x)$ for some $x\in X_0^{(d-1)}$. As in the proof of \cite[Lem. 7.2]{SS} we can find a relative surface $Z\subset X$ containing $x$ and being regular at $x$ and such that $Z\cap X_0$ contains $\overline{\{x\}}$ with multiplicity $1$. Let $Z_0=\cup_{i\in I} Z_0^{(i)}\cup \overline{\{x\}}$ be the union of the pairwise different irreducible components of the special fiber of $Z$ with those irreducible components different from $\overline{\{x\}}$ indexed by $I$. Since all maximal ideals, $m_i$ corresponding to $Z_0^{(i)}$ and $m_x$ corresponding to $\overline{\{x\}}$, in the semi-local ring $\mathcal{O}_{Z,Z_0}$ are coprime, the map $\mathcal{O}_{Z,Z_0}\rightarrow \prod_{i\in I}\mathcal{O}_{Z,Z_0}/m_i\times \mathcal{O}_{Z,Z_0}/m_x$ is surjective. Therefore we can find a lift $u\in K^M_1k(z)$, $z$ being the generic point of $Z$, of $\bar{u}$ which specialises to $\bar{u}$ in $K(\overline{\{x\}})^{\times}$ and to $1$ in $K(Z_0^{(i)})^{\times}$ for all $i\in I$. \end{proof} The main result we are going to prove is the following: \begin{proposition}\label{Asurj} The restriction map $res:A_2(X,-1)\rightarrow A_1(X_0,0)$ is surjective. \end{proposition} It will be implied by the following key lemma: \begin{keylemma}\label{keylemma1} Let $\xi\in \text{ker}[Z_1(X)/n\overset{res}{\rightarrow} Z_0(X_0)/n]$, then there is a $\xi'\in \text{ker}[C_2(X,-1)\overset{res}{\rightarrow} C_1(X_0,0)]$ with $\partial(\xi')=\xi$. \end{keylemma} \begin{proof}(Proposition \ref{Asurj}) Let $\xi_0\in \text{ker}[C_1(X_0,0)\overset{\partial}{\rightarrow}C_0(X_0,0)]$. By lemma \ref{Csurj} there is a $\xi\in C_2(X,-1)$ with $res(\xi)=\xi_0$. 
As $res(\partial (\xi))=\partial(res(\xi))=0$, key lemma \ref{keylemma1} tells us that there is a $\xi'\in \text{ker}(C_2(X,-1)\rightarrow C_1(X_0,0))$ with $\partial\xi'=\partial\xi$. As $res$ is a homomorphism, it follows that $\xi_0= res(\xi-\xi')$ and $\partial(\xi-\xi')=0$. Hence $res:Z_2(X,-1)\rightarrow Z_1(X_0,0)$ is surjective and the commutativity of $\partial$ and $res$ implies that $res:A_2(X,-1)\rightarrow A_1(X_0,0)$ is surjective. \end{proof} \begin{proof} (Key lemma \ref{keylemma1}) We start with the case of relative dimension $d=1$, i.e. $X$ is a smooth fibered surface over $\mathcal{O}_K$, and consider the following diagram: $$\begin{xy} \xymatrix{ C_2(X,-1)=K(X)^*\otimes \mathbb{Z}/n\mathbb{Z} \ar[r]^{res} \ar[d]_{\partial} & C_1(X_0,0)=K(X_0)^*\otimes \mathbb{Z}/n\mathbb{Z} \ar[d]^{\partial} \\ Z_1(X)/n \ar[r]^{res} & Z_0(X_0)/n } \end{xy} $$ where we write $Z_i(X)/n$ for $C_{i}(X,-i)$ which are just the cycles of dimension $i$ modulo $n$. The restriction map in the lowest degree $res:Z_1(X)/n \rightarrow Z_0(X_0)/n$ agrees with the specialisation map on cycles defined by Fulton in \cite[Rem. 2.3]{Fu} since $X_0$ is a principal Cartier divisor and $\partial^x_y(\{-\pi\})=\text{ord}_{\mathcal{O}_{\overline{\{x\}},y}}(\pi)$. Modifying $\xi\in \text{ker}[Z_1(X)/n\overset{res}{\rightarrow} Z_0(X_0)/n]$ by elements equivalent to zero in $Z_1(X)/n$, we may represent it by an element $x\in\text{ker}[Z_1(X)\rightarrow Z_0(X_0)]$. We consider the following short exact sequence of sheaves: \begin{equation}\label{eq1} 0\rightarrow \mathcal{O}^*_{X;X_0}\rightarrow \mathcal{M}^*_{X;X_0}\rightarrow Div(X,X_0)\rightarrow 0, \end{equation} where $\mathcal{M}^*_{X;X_0} (\text{resp. }\mathcal{O}^*_{X;X_0})$ denotes the sheaf of invertible meromorphic functions (resp. invertible regular functions) relative to $\text{Spec}\mathcal{O}_K$ and congruent to $1$ in the generic point of $X_0$, i.e. in $\mathcal{O}_{X,{\mu}}$, where $\mu$ is the generic point of $X_0$, and $Div(X,X_0)$ is the sheaf associated to $\mathcal{M}^*_{X;X_0}/\mathcal{O}^*_{X;X_0}$. In other words, $Div(X,X_0)(U)$ is the set of relative Cartier divisors on $U\subset X$ which specialise to zero in $X_0$. For the concept of relative meromorphic functions and divisors see \cite[Sec. 20, 21.15]{EGA4}. We want to show that $(Div(X,X_0)(X)/\mathcal{M}^*_{X;X_0}(X))/n=0$. \begin{claim} $\text{Pic}(X,X_0)\cong Div(X,X_0)(X)/\mathcal{M}^*_{X;X_0}(X)$. \end{claim} The short exact sequence (\ref{eq1}) induces the following exact sequence: $$\mathcal{O}^*_{X;X_0}(X)\rightarrow\mathcal{M}^*_{X;X_0}(X)\rightarrow Div(X,X_0)(X)\rightarrow \text{Pic}(X,X_0)\rightarrow H^1(X,\mathcal{M}^*_{X;X_0})$$ Now $\text{Pic}(X,X_0)=H^1(X,\mathcal{O}^*_{X;X_0})$ can also be described as the group of isomorphism classes of pairs $(\mathcal{L},\psi)$ of an invertible sheaf $\mathcal{L}$ with a trivialisation $\psi:\mathcal{L}|_{X_0}\cong \mathcal{O}_{X_0}$ (see e.g. \cite[Lem. 2.1]{SV}). The following argument shows that the map $Div(X,X_0)(X)\rightarrow \text{Pic}(X,X_0)$ is surjective: Let $(\mathcal{L},\psi)\in \text{Pic}(X,X_0)$. The trivialisation $\psi$ gives an isomorphism $\psi:\mathcal{L}\otimes_{\mathcal{O}_X}\mathcal{O}_{X_0}\xrightarrow{\cong}\mathcal{O}_{X_0}$ and by localising an isomorphism $\psi_\mu:\mathcal{L}_{\mu}\otimes_{\mathcal{O}_{X,{\mu}}}\mathcal{O}_{X_0,{\mu}}\xrightarrow{\cong}\mathcal{O}_{X_0,{\mu}}$, where $\mu$ again denotes the generic point of $X_0$.
Let $s$ denote a lift of $\psi_{\mu}^{-1}(1)$ under the surjective map $\mathcal{L}_{\mu}\twoheadrightarrow \mathcal{L}_{\mu}\otimes_{\mathcal{O}_{X,{\mu}}}\mathcal{O}_{X_0,{\mu}}$. Then $s$ is a meromorphic section of $\mathcal{L}$ and the divisor div$(s)\in Div(X,X_0)(X)$ maps to $(\mathcal{L},\psi)$. It follows that $\text{Pic}(X,X_0)\cong Div(X,X_0)(X)/\mathcal{M}^*_{X;X_0}(X)$. \qed \begin{claim} $\text{Pic}(X,X_0)$ is uniquely $n$-divisible. \end{claim} Since $$\text{Pic}(X,X_0)\cong \varprojlim_m\text{Pic}(X_m,X_0)\cong\varprojlim_m H^1(X_0,1+\pi\mathcal{O}_{X_m}),$$ where the first isomorphism follows from \cite[Thm. 5.1.4]{EGA3}, it suffices to show that $H^1(X_0,1+\pi\mathcal{O}_{X_m})$ is uniquely $n$-divisible. This can be seen as follows: $$1+\pi\mathcal{O}_{X_m}\supset 1+\pi^{2}\mathcal{O}_{X_m}\supset ...\supset 1$$ defines a finite filtration on the sheaf $1+\pi\mathcal{O}_{X_m}$ with graded pieces $gr^n=(\pi)^n/(\pi)^{n+1}\cong \mathcal{O}_{X_0}\otimes (\pi)^n$. We use this filtration to define a filtration on $H^1(X_0,1+\pi\mathcal{O}_{X_m})$ by $$F^n:=\text{Im}(H^1(X_0,1+\pi^n\mathcal{O}_{X_m})\rightarrow H^1(X_0,1+\pi\mathcal{O}_{X_m})).$$ The unique divisibility of $H^1(X_0,1+\pi\mathcal{O}_{X_m})$ follows now by descending induction from the exact sequence $$0\rightarrow 1+\pi^{n+1}\mathcal{O}_{X_m}\rightarrow 1+\pi^{n}\mathcal{O}_{X_m}\rightarrow gr^n\rightarrow 0,$$ the unique divisibility of $H^i(X_0,\mathcal{O}_{X_0}\otimes \pi^n)$ as a finitely generated $k$-module and the five-lemma. \end{proof} It follows that $\text{Pic}(X,X_0)/n\cong (Div(X,X_0)(X)/\mathcal{M}^*_{X;X_0}(X))/n=0$ and therefore that the class of $x$ in $Z_1(X)/n$, i.e. $\xi$, is in the image of $\text{ker}[C_2(X,-1)\overset{res}{\rightarrow} C_1(X_0,0)]$ under $\partial$. We now do the induction step for $X$ of arbitraty relative dimension $d>1$ over Spec$\mathcal{O}_K$, assuming that the key lemma holds for relative dimension $d-1$, using an idea of Bloch put forward in \cite[App.]{EWB}. By a standard norm argument we may from now on assume that $k$ is infinite. As above we may represent $\xi$ by an element of $\text{ker}[Z_1(X)\rightarrow Z_0(X_0)]$ and as in the proof of \cite[Prop. 4.1]{KEW} we may assume that $\xi$ is represented by a cycle of the form $[x]-r[y]\in \text{ker}[Z_1(X)\rightarrow Z_0(X_0)]$ with $x$ and $y$ integral and such that $y$ is regular and has intersection number $1$ with $X_0$. Let us recall the argument: First note that one can lift a reduced closed point of $X_0$ to an integral horizontal one-cycle having intersection number $1$ with $X_0$. Now if $\xi=\sum_{i=1}^s n_i[x_i]\in \text{ker}[Z_1(X)\rightarrow Z_0(X_0)]$, then we lift $(x_i\cap X_0)_{\text{red}}$ to a one-cycle $y_i$ of the aforementioned type. Furthermore, we choose the same $y_i$ for all the $x_i$ intersecting $X_0$ in the same closed point. Let $r_i$ be the intersection multiplicity of $x_i$ with $X_0$. Then also $\sum_{i=1}^s n_ir_i[y_i]\in \text{ker}[Z_1(X)\rightarrow Z_0(X_0)]$ and it suffices to show the statement for each $x_i-r_iy_i$ separately, i.e. the claim follows. Let $\tilde x$ be the normalisation of $x$. Since $\mathcal{O}_K$ is excellent, $\tilde x$ is finite over $x$. 
This implies that there is an imbedding $\tilde x\hookrightarrow X':=X\times_{\text{Spec}\mathcal{O}_K}\mathbb{P}^N$ such that the following diagram commutes: $$\begin{xy} \xymatrix{ \tilde x \ar[r]^{} \ar[d]_{} & X'=X\times_{\text{Spec}\mathcal{O}_K}\mathbb{P}^N \ar[d]^{pr_X} \\ x \ar[r]^{} \ar[d]_{} & X \ar[d]^{} \\ \text{Spec}\mathcal{O}_K \ar[r]^{=} & \text{Spec}\mathcal{O}_K } \end{xy} $$ Let $[\tilde x\cap X_0']= r'[\bar z]$ for $\bar z$ an integral zero-dimensional subscheme of $X'_0$. We take a regular lift $z$ of $\bar z$ in $y\times \mathbb{P}^N\subset X'$ which has intersection number $1$ with $X_0'$ and get that $[\tilde x]-r'[z]\in ker[Z_1(X')\rightarrow Z_0(X_0')]$ and $pr_{X*}([\tilde x]-r'[z])=[x]-r[y]=\xi$. We now use a Bertini theorem by Altman and Kleiman to prove key lemma \ref{keylemma1} by an induction on the relative dimension of $X$ over $\mathcal{O}_K$. \begin{lemma}\label{subschemes} There exist smooth closed subschemes $Z,Z'\subset X'$ with the following properties: \begin{enumerate} \item $Z$ has fiber dimension one, $Z'$ has fiber dimension $d-1$. \item $Z$ contains $\tilde x$, $Z'$ contains $z$. \item The intersection $Z\cap Z'\cap X'_0$ consist of reduced points. \end{enumerate} \end{lemma} \begin{proof} First note that for a sheaf of ideals $\mathcal{J}\subset \mathcal{O}_{X'}$ we have the following short exact sequence: $$0\rightarrow \mathcal{J}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'}(-[X_0'])(M)\rightarrow \mathcal{J}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'}(M)\rightarrow \mathcal{J}\otimes_{\mathcal{O}_{X'}}i_*\mathcal{O}_{X_0'}(M)\rightarrow 0$$ for $i:X'_0\hookrightarrow X'$ and $M\in\mathbb{Z}$. For $M\gg 0$ Serre vanishing implies that $H^1(X', \mathcal{F}(M))=0$ for $\mathcal{F}$ coherent and therefore that the map $$\Gamma(\mathcal{J}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'}(M))\twoheadrightarrow \Gamma(\mathcal{J}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'_0}(M))$$ is surjective. This allows us to lift the sections on the right defining subvarieties of $X_0'$ to sections of a twisted sheaf of ideals on $X'$. Let $\mathcal{J}_{\tilde x}$ be the sheaf of ideals defining $\tilde x$ and $\mathcal{J}_z$ be the sheaf of ideals defining $z$. Let $p\in\tilde x\cap X'_0$ $(q\in z\cap X'_0)$. Then $\text{dim}_{X_0}(p)=d\geq 2$ and since $\tilde x$ (resp. $z$) is regular, we have that $e_{\tilde x\cap X_0'}(p)\leq e_{\tilde x}(p)=\text{dim}_{k(p)}(\Omega^1_{\tilde x}(p))=1<2$, where $e_{\tilde x}(p)$ is the embedding dimension of $\tilde x$ at $p$ and analogously for $q$. Therefore by \cite[Thm. 7]{AK} we can find sections in $\bar\sigma_1,...,\bar\sigma_{d+N-1}\in\mathcal{J}_{\tilde x}|_{X_0'}(M)$ (resp. $\bar\sigma'\in\mathcal{J}_{\tilde x}|_{X_0'}(M)$) defining smooth subschemes containing $p$ (resp. $q$) that intersect transversally. Let $\sigma_1,...,\sigma_{d+N-1}$ (resp. $\sigma'$) be liftings under the surjections $\Gamma(\mathcal{J}_{\tilde{x}}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'}(M))\twoheadrightarrow \Gamma(\mathcal{J}_{\tilde{x}}\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'_0}(M))$ and $\Gamma(\mathcal{J}_z\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'}(M))\twoheadrightarrow \Gamma(\mathcal{J}_z\otimes_{\mathcal{O}_{X'}}\mathcal{O}_{X'_0}(M))$. Then the complete intersections $Z:=V(\sigma_1,...,\sigma_{d+N-1})$ and $Z':=V(\sigma')$ have the desired properties. \end{proof} Using these subschemes, we can now do the induction step and finish the proof of the key lemma. 
Since $Z\cap Z'\cap X_0'$ consists of reduced points, the component $z'$ of $Z\cap Z'$ that contains $z\cap X_0'$ has intersection number $1$ with $X_0'$ and is a regular curve as it is regular over the closed point of Spec$\mathcal{O}_K$. Now since $Z'$ is of relative dimension $d-1$ and $z$ and $z'$ both lie in $Z'$ and satisfy $res([z']-[z])=0$, we get by the induction assumption that there is a $\xi$ with support on $Z'$ restricting to $0$ and with $\partial(\xi)=[z']-[z]$. By the relative dimension one case proved in the beginning we get that for $\tilde x,z'\subset Z$ and $[\tilde x]-r'[z']$, which also restricts to $0$, there is a $\xi'$ with support on $Z$ such that $res(\xi')=0$ and $\partial(\xi')=[\tilde x]-r'[z']$. It follows that $res(\xi'+r'\xi)=0$ and $\partial(\xi'+r'\xi)=[\tilde x]-r'[z]$. By the commutativity of the following diagram we get the result. $$\begin{xy} \xymatrix{ & C_2(X',-1) \ar[d]_{} \ar[ddl]_{} \ar[rr] & & C_1(X_0',0) \ar[d]_{} \ar[ddl]_{} \\ & Z_1(X')/n \ar[ddl]_{} \ar[rr] & & Z_0(X_0')/n \ar[ddl]_{} \\ C_2(X,-1) \ar[d]_{} \ar[rr]^{} & & C_1(X_0,0) \ar[d]_{} & \\ Z_1(X)/n \ar[rr]^{} & & Z_0(X_0)/n } \end{xy} $$ The commutativity of the diagram follows from \cite[Sec. 4]{Ro} since all the maps in question are defined in terms of the 'four basic maps' which are compatible. \qed \begin{corollary} The restriction map $$res^{CH}:CH^d(X,1)_{\Lambda}\rightarrow CH^d(X_0,1)_{\Lambda}$$ defined in the introduction is surjective. \end{corollary} \begin{proof} We first show that the homology of the sequence $$\oplus_{x\in X_0^{(d-2)}}K_{2}^Mk(x)\rightarrow \oplus_{x\in X_0^{(d-1)}}K_{1}^Mk(x)\rightarrow \oplus_{x\in X_0^{(d)}}K_{0}^Mk(x)$$ is isomorphic to $CH^d(X_0,1)$ which implies that $A_1(X_0,0)\cong CH^d(X_0,1)_{\Lambda}$. This follows from the spectral sequence \begin{equation}\label{niveauspectralseq} E^{p,q}_1=\oplus_{x\in X_0^{(p)}}CH^{r-p}(\text{Spec}k(x),-p-q)\Rightarrow CH^r(X_0,-p-q) \end{equation} (see \cite[Sec. 10]{Bl}) for $r=d=\text{dim} X_0$, the fact that $CH^r(k(x),r)\cong K^M_r(k(x))$ and the vanishing of $CH^{r}(\text{Spec}k(x),j)$ for $r>j$. Using a limit argument and the localization sequence for schemes over a regular noetherian base $B$ of dimension one constructed in \cite{Le2}, we also get the existence of spectral sequence (\ref{niveauspectralseq}) for $X/\mathcal{O}_K$. Now for the same reasons as above this spectral sequence implies that the homology of $$\oplus_{x\in X^{(d-2)}}K_{2}^Mk(x)\rightarrow \oplus_{x\in X^{(d-1)}}K_{1}^Mk(x)\rightarrow \oplus_{x\in X^{(d)}}K_{0}^Mk(x)$$ is isomorphic to $CH^d(X,1)$ which implies that $A_2(X,-1)\cong CH^d(X,1)_{\Lambda}$. The result now follows from proposition \ref{Asurj} and the compatibility of $res$ and $res^{CH}$. \end{proof} \begin{remark} The isomorphism $A_1(X_0,0)\cong CH^d(X_0,1)_{\Lambda}$ also follows from the isomorphism $CH^p(X,1)\cong H^{p-1}(X,\mathcal{K}_p)$ for $p\geq 0$ and $\mathcal{K}_p$ the K-theory sheaf (see e.g. \cite[Cor. 5.3]{M}). \end{remark} \section{Remarks on the injectivity of res}\label{remarks} In this section we prove the injectivity of the restriction map in our setting for $d=2$ and remark on implications of the conjectured injectivity. \begin{conj}\label{propinj} The map $res:A_2(X,-1)\rightarrow A_1(X_0,0)$ is injective. \end{conj} \begin{proposition}\label{propinj2} Conjecture \ref{propinj} holds for $X/\mathcal{O}_K$ of relative dimension $2$. \end{proposition} \begin{proof} Let $\Lambda:=\mathbb{Z}/n$ and $\Lambda(q):=\mu_n^{\otimes q}$.
We use the coniveau spectral sequence $$E_1^{p,q}(X,\Lambda(c))=\coprod_{x\in X^p}H^{p+q}_x(X,\Lambda(c))\Rightarrow H_{\text{\'et}}^{p+q}(X,\Lambda(c)),$$ where $H_x^*$ is \'etale cohomology with support in $x$. Cohomological purity (respectively absolute purity) gives isomorphisms $H^{p+q}_x(X,\Lambda(c))\cong H^{q-p}(k(x),\Lambda(c-p))$ which lets us write the above spectral sequence in the following form: $$E_1^{p,q}(X,\Lambda(c))=\coprod_{x\in X^p}H^{q-p}(k(x),\Lambda(c-p))\Rightarrow H_{\text{\'et}}^{p+q}(X,\Lambda(c)).$$ For more details see for example \cite{CHK}. Writing out this spectral sequence for $X$ and $X_0$ respectively and using the norm residue isomorphism $K^M_n(k)/m\cong H^n(k,\mu_m^{\otimes n})$ for $n\leq 2$ (see \cite{MS}), we get injective edge morphisms $A_2(X,-1)\hookrightarrow H^3_{\text{\'et}}(X,\Lambda(2))$ and $A_1(X_0,0)\hookrightarrow H^3_{\text{\'et}}(X_0,\Lambda(2))$ for dimensional reasons. The restriction map induces a map between these spectral sequences and therefore a commutative diagram $$\begin{xy} \xymatrix{ A_2(X,-1) \ar[r]^{} \ar@{^{(}->}[d]_{} & A_1(X_0,0) \ar@{^{(}->}[d]^{} \\ H^3_{\text{\'et}}(X,\Lambda(2)) \ar[r]^{\cong} & H^3_{\text{\'et}}(X_0,\Lambda(2)) } \end{xy} $$ whose lower horizontal morphism is an isomorphism by proper base change. It follows that $A_2(X,-1)\rightarrow A_1(X_0,0)$ is injective. \end{proof} \begin{remark}\label{remarkinj} The injectivity of $res$ would have implications for a finiteness conjecture on the $n$-torsion of $CH_0(X_K)$ for $X_K$ a smooth scheme over a $p$-adic field with finite residue field and good reduction (see for example \cite{Co}). More precisely, using the coniveau spectral sequence, we can see that the group $A_1(X_K,0)$ is isomorphic to $H^{2d-1}_{Zar}(X_K,\mathbb{Z}/n(d))$ and therefore surjects onto $CH_0(X_K)[n]$. Furthermore it fits into the exact sequence (see \cite[Sec. 5]{Ro}) $$A_2(X,-1)\rightarrow A_1(X_K,0)\rightarrow A_1(X_0,-1)\cong CH_1(X_0)/n.$$ Now conjecture \ref{propinj} implies that there is a sequence of injections $A_2(X,-1)\hookrightarrow A_1(X_0,0)\hookrightarrow H^{2d-1}_{\text{\'et}}(X_0,\mathbb{Z}/n(d))$ into the finite group $H^{2d-1}_{\text{\'et}}(X_0,\mathbb{Z}/n(d))$. Note that the second injection follows from the Kato conjectures. More precisely, there is an exact sequence $$KH_3(X_0,\mathbb{Z}/n\mathbb{Z})\rightarrow A_1(X_0,0)\cong CH^d(X_0,1)_{\Lambda}\rightarrow H^{2d-1}_{\text{\'et}}(X_0,\mathbb{Z}/n(d))$$ (see \cite[Lem. 6.2]{JS}) and the Kato homology group $KH_3(X_0,\mathbb{Z}/n\mathbb{Z})$ is zero due to the Kato conjectures (see \cite{KS}). Therefore the finiteness of $CH_0(X_K)[n]$ would follow from the finiteness of $CH_1(X_0)/n$. In the case of relative dimension $2$ the finiteness of $CH_1(X_0)/n\cong \text{Pic}(X_0)/n$ can be shown using the injection $\text{Pic}(X_0)/n\hookrightarrow H^2_{\text{\'et}}(X_0,\mu_n)$ and the finiteness of $H^2_{\text{\'et}}(X_0,\mu_n)$ (see e.g. \cite[VI.2.8]{Mi}). Therefore proposition \ref{propinj2} implies in particular the finiteness of $CH_0(X_K)[n]$ for $X_K$ a smooth surface over a $p$-adic field with finite residue field and good reduction, which is a well-known result of Bloch (see e.g. \cite[Thm. 3.3.2]{Co2}). \end{remark} \begin{remark} In the light of remark \ref{remarkinj} and the base change conjecture for higher zero-cycles stated in the introduction one might ask if $$CH^d(X_K,i)[n]$$ is finite for all $i\geq 0$ for smooth schemes over $p$-adic fields. \end{remark} \end{document}
arXiv
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}{Definition} \newtheorem{question}[theorem]{Question} \newcommand{{{\mathrm h}}}{{{\mathrm h}}} \numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{table}{section} \def\mathop{\sum\!\sum\!\sum}{\mathop{\sum\!\sum\!\sum}} \def\mathop{\sum\ldots \sum}{\mathop{\sum\ldots \sum}} \def\mathop{\int\ldots \int}{\mathop{\int\ldots \int}} \def\hbox{\rlap{$\sqcap$}$\sqcup$}{\hbox{\rlap{$\sqcap$}$\sqcup$}} \def\qed{\ifmmode\hbox{\rlap{$\sqcap$}$\sqcup$}\else{\unskip\nobreak\hfil \penalty50\hskip1em\null\nobreak\hfil\hbox{\rlap{$\sqcap$}$\sqcup$} \parfillskip=0pt\finalhyphendemerits=0\endgraf}\fi} \newfont{\teneufm}{eufm10} \newfont{\seveneufm}{eufm7} \newfont{\fiveeufm}{eufm5} \newfam\eufmfam \textfont\eufmfam=\teneufm \scriptfont\eufmfam=\seveneufm \scriptscriptfont\eufmfam=\fiveeufm \def\frak#1{{\fam\eufmfam\relax#1}} \newcommand{{\boldsymbol{\lambda}}}{{\boldsymbol{\lambda}}} \newcommand{{\boldsymbol{\mu}}}{{\boldsymbol{\mu}}} \newcommand{{\boldsymbol{\xi}}}{{\boldsymbol{\xi}}} \newcommand{{\boldsymbol{\rho}}}{{\boldsymbol{\rho}}} \def\mathfrak K{\mathfrak K} \def\mathfrak{T}{\mathfrak{T}} \def{\mathfrak A}{{\mathfrak A}} \def{\mathfrak B}{{\mathfrak B}} \def{\mathfrak C}{{\mathfrak C}} \def \balpha{\bm{\alpha}} \def \bbeta{\bm{\beta}} \def \bgamma{\bm{\gamma}} \def \blambda{\bm{\lambda}} \def \bchi{\bm{\chi}} \def \bphi{\bm{\varphi}} \def \bpsi{\bm{\psi}} \def\eqref#1{(\ref{#1})} \def\vec#1{\mathbf{#1}} \def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \newcommand{\rmod}[1]{\: \mbox{mod} \: #1} \def{\mathcal g}{{\mathcal g}} \def\mathbf r{\mathbf r} \def{\mathbf{\,e}}{{\mathbf{\,e}}} \def{\mathbf{\,e}}_p{{\mathbf{\,e}}_p} \def{\mathbf{\,e}}_r{{\mathbf{\,e}}_r} \def{\mathrm{Tr}}{{\mathrm{Tr}}} \def{\mathrm{Nm}}{{\mathrm{Nm}}} \def\widetilde{\cI}{\widetilde{{\mathcal I}}} \def\widetilde{\cJ}{\widetilde{{\mathcal J}}} \def{\mathrm{lcm}}{{\mathrm{lcm}}} \def\({\left(} \def\){\right)} \def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil} \def\rnorm#1{\langle#1\rangle_r} \def\qquad \mbox{and} \qquad{\qquad \mbox{and} \qquad} \newcommand{\commI}[1]{\marginpar{ \begin{color}{red} \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule I: #1\par \hrule\end{color}}} \newcommand{\commK}[1]{\marginpar{ \begin{color}{blue} \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule K: #1\par \hrule\end{color}}} \hyphenation{re-pub-lished} \mathsurround=1pt \defb{b} \overfullrule=5pt \def \F{{\mathbb F}} \def \K{{\mathbb K}} \def \Z{{\mathbb Z}} \def \Q{{\mathbb Q}} \def \R{{\mathbb R}} \def 
\C{{\mathbb C}} \def\F_p{\F_p} \def \fp{\F_p^*} \def \U{{\mathbf U}} \def \V{{\mathbf V}} \def\cK_p(m,n){{\mathcal K}_p(m,n)} \def\psi_p(m,n){\psi_p(m,n)} \def\cS_{a,p}(\cA,\cB;\cI,\cJ){{\mathcal S}_{a,p}({\mathcal A},{\mathcal B};{\mathcal I},{\mathcal J})} \def\cS_{a,r}(\cA,\cB;\cI,\cJ){{\mathcal S}_{a,r}({\mathcal A},{\mathcal B};{\mathcal I},{\mathcal J})} \def\cS_{a,p^k}(\cA,\cB;\cI,\cJ){{\mathcal S}_{a,p^k}({\mathcal A},{\mathcal B};{\mathcal I},{\mathcal J})} \def\cS_{a,p}(\cA;\cI,\cJ){{\mathcal S}_{a,p}({\mathcal A};{\mathcal I},{\mathcal J})} \def\cS_{a,r}(\cA;\cI,\cJ){{\mathcal S}_{a,r}({\mathcal A};{\mathcal I},{\mathcal J})} \def\cS_{a,p^k}(\cA,\cI,\cJ){{\mathcal S}_{a,p^k}({\mathcal A},{\mathcal I},{\mathcal J})} \def\cS_{a,p}(\cI,\cJ){{\mathcal S}_{a,p}({\mathcal I},{\mathcal J})} \def\cS_{a,r}(\cI,\cJ){{\mathcal S}_{a,r}({\mathcal I},{\mathcal J})} \def\cS_{a,p^k}(\cI,\cJ){{\mathcal S}_{a,p^k}({\mathcal I},{\mathcal J})} \defR_{a,p}(\cI,\cJ){R_{a,p}({\mathcal I},{\mathcal J})} \defR_{a,r}(\cI,\cJ){R_{a,r}({\mathcal I},{\mathcal J})} \defR_{a,r}(\cI,\cJ_0){R_{a,r}({\mathcal I},{\mathcal J}_0)} \defR_{a,r}(\cI,\cJ_j){R_{a,r}({\mathcal I},{\mathcal J}_j)} \defR_{a,p^k}(\cI,\cJ){R_{a,p^k}({\mathcal I},{\mathcal J})} \defT_{a,p}(\cI,\cJ){T_{a,p}({\mathcal I},{\mathcal J})} \defT_{a,r}(\cI,\cJ){T_{a,r}({\mathcal I},{\mathcal J})} \defT_{a,r}(\cI,\cJ_0){T_{a,r}({\mathcal I},{\mathcal J}_0)} \defT_{a,r}(\cI,\cJ_j){T_{a,r}({\mathcal I},{\mathcal J}_j)} \defT_{a,p^k}(\cI,\cJ){T_{a,p^k}({\mathcal I},{\mathcal J})} \def \xbar{\overline x_p} \title[Bounds for triple Exp. Sum with Exp. and Linear Function]{Bounds for triple exponential sums with mixed exponential and linear terms} \author[Kam Hung Yau] {Kam Hung Yau} \address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \email{[email protected]} \begin{abstract} We establish bounds for triple exponential sums with mixed exponential and linear terms. The method we use is due to Shparlinski, combined with a bound on additive energy due to Roche-Newton, Rudnev and Shkredov. \end{abstract} \maketitle \section{Introduction} Bounds for particular exponential sums were first studied in Number Theory as they produce arithmetic information about certain Diophantine problems. For example, by obtaining estimates of exponential sums over primes, Vinogradov~\cite{V} was able to establish that every sufficiently large odd integer can be written as a sum of three primes. Nowadays the study of bounds for exponential sums is of both intrinsic mathematical and arithmetic interest. Let $p$ be a prime and let $g$ be an arbitrary integer with $\gcd(g, p) =1$. We let $T$ denote the multiplicative order of $g$ modulo $p$. Given the intervals of consecutive integers $$ {\mathcal I} = \{ K+1, \ldots, K+M \}, \quad {\mathcal J} = \{ L+1, \ldots, L+N \} $$ and $$ {\mathcal K} = \{ 1, \ldots, H \} $$ with integers $H,K,L,M,N$ such that $0 < M \le p $, $0< N \le T$, $0 < H <T$ and a complex sequence ${\mathcal A} = (\alpha_{m})_{m \in {\mathcal I}}$, we define the exponential sum $$ {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal I}, {\mathcal J}, {\mathcal K} ) = \sum_{m \in {\mathcal I}} \sum_{n \in {\mathcal J}} \sum_{x \in {\mathcal K}} \alpha_m e_p(am g^x) e_T(nx) $$ for integers $a \in \mathbb{Z}$ with $\gcd(a,p)=1$, where $e_{h}(x)=\exp(2 \pi i x/h)$. In particular, when ${\mathcal I} = \mathbb{Z}_{p}$, we define $$ {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K} ) = {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal I}, {\mathcal J}, {\mathcal K} ).
$$ Similar double exponential sums have already been considered. In particular, sums of the form $$ S({\mathcal A}, {\mathcal B}; {\mathcal I} , {\mathcal J}) = \sum_{m \in {\mathcal I} } \sum_{n \in {\mathcal J}} \alpha_{m} \beta_{n} e_{p}(am g^{n}) $$ have been considered in the work of Shparlinski \& Yau~\cite{SY}. For the case when $g$ is not necessarily a primitive root of $p$, bounds have been established under the condition $ {\mathcal I} = \{ 1 \}$ and $\alpha_{m} =\beta_{n} =1$ by Kerr~\cite{K}, but the same method employed there also works for general ${\mathcal I}$ as the bound depends only on the norm. Similar sums with multiplicative characters have also been studied in~\cite{SY2}. We refer the reader to~\cite{KS} for a broader overview of this subject. In this paper we establish bounds for ${\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal I}, {\mathcal J}, {\mathcal K} )$ when ${\mathcal I} = \mathbb{Z}_{p}$; it is clear that the same method also works for general ${\mathcal I}$. Our approach follows Shparlinski as in the proof of~\cite[Theorem 2.1]{Shp}. In particular, after applying the triangle and H\"older inequalities to ${\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal I}, {\mathcal J}, {\mathcal K} )$, we obtain a mean fourth-moment of an exponential sum. By opening and changing the order of summation and appealing to the orthogonality of the exponential function, we can bound the sum by the number of solutions to a particular congruence (see Lemma~\ref{additive energy}). \section{Main Result} The statements $A \ll B$ and $A = O(B)$ are both equivalent to the inequality $|A| \le c B$ for some positive absolute constant $c$. We define for any real number $\sigma >0$, $$ \lVert {\mathcal A} \rVert_{\sigma} = \Big (\sum_{m \in {\mathcal I}} | \alpha_{m}|^{\sigma} \Big )^{1/\sigma}. $$ We state below a bound for ${\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K})$. \begin{theorem} \label{S-bound} For any prime $p$, we have \begin{equation*} \begin{split} {\mathcal S}_{a,T,p}&({\mathcal A}; {\mathcal J}, {\mathcal K} ) \ll \lVert {\mathcal A} \rVert_{1}^{1/2} \lVert {\mathcal A} \rVert_{2}^{1/2}p^{1/4} N^{3/8} T^{5/8}. \end{split} \end{equation*} \end{theorem} Using the same technique as in~\cite[Lemma 3.14]{Shp2} and the bound~\cite[Corollary 19]{R-NRS}, we obtain the trivial bound \begin{equation} \begin{split} \label{trivial-bound1} {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K} ) & \ll \lVert {\mathcal A} \rVert_{1} N \min \{ p^{1/8} H^{5/8}, p^{1/4} H^{3/8} \}. \end{split} \end{equation} Assuming $| \alpha_{m}| \le 1$ we have $\lVert {\mathcal A} \rVert_{1} \ll M$ and $\lVert {\mathcal A} \rVert_{2} \ll M^{1/2}$. We see that Theorem~\ref{S-bound} then gives $$ {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K} ) \ll M^{3/4}p^{1/4} N^{3/8} T^{5/8}, $$ which improves on~(\ref{trivial-bound1}), namely $$ {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K} ) \ll M N \min \{ p^{1/8} H^{5/8}, p^{1/4} H^{3/8} \}, $$ precisely when $$ pT^{5} < M^{2}N^{5}H^{5} \qquad \mbox{and} \qquad T^{5} < M^{2}N^{5}H^{3}. $$ \section{Preparation} For an integer $u$, we define $$ \langle u \rangle_{r} = \min_{k \in \mathbb{Z}} | u - kr| $$ as the distance to the nearest integral multiple of $r$. We recall a well-known bound from~\cite[Bound (8.6)]{IK}. \begin{lemma} \label{linearbound} For integers $u$, $W$ and $Z \ge 1$, we have $$ \sum_{n=W+1}^{W+Z} e_{r}(nu) \ll \min \left \{ Z, \frac{r}{ \langle u \rangle_{r}} \right \}. $$ \end{lemma} We recall that $T$ is the multiplicative order of $g$ modulo $p$.
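We note that Lemma~\ref{linearbound} follows from a short standard computation, which we sketch for completeness using only the notation above: the bound by $Z$ is the triangle inequality, while for $\langle u \rangle_{r} \neq 0$ summing the geometric series gives $$ \Big | \sum_{n=W+1}^{W+Z} e_{r}(nu) \Big | = \Big | \frac{e_{r}(Zu)-1}{e_{r}(u)-1} \Big | \le \frac{2}{|e_{r}(u)-1|} = \frac{1}{|\sin (\pi u /r)|} \le \frac{r}{2 \langle u \rangle_{r}} . $$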
For any positive integer $K \le T$, we define the \textit{additive energy} $E_{p}(K)$ as the number of solutions to the congruence \begin{equation} \label{addenergy} g^{x_{1}} + g^{x_{2}} \equiv g^{x_{3}} + g^{x_{4}} \pmod{p} \end{equation} where $$ \quad (x_{1}, x_{2}, x_{3}, x_{4}) \in \{1, \ldots, K \}^{4}. $$ Our approach to bounding ${\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal I}, {\mathcal J}, {\mathcal K} )$ is to reduce the problem to estimating $E_{p}(K)$. Note that $(v_{1}, v_{2}, v_{1}, v_{2}) \in \{1, \ldots, K \}^{4}$ is always a solution to~(\ref{addenergy}); hence we have the trivial lower bound $K^{2} \le E_{p}(K)$. If $(v_{1}, v_{2}, v_{3}, v_{4}) \in \{1, \ldots, K \}^{4}$ is a solution to~(\ref{addenergy}) then $v_{4}$ is determined by $v_{1}, v_{2}, v_{3}$ (since the map $x \mapsto g^{x} \pmod{p}$ is injective on $\{1, \ldots, K \}$ as $K \le T$), and we obtain the trivial upper bound $E_{p}(K) \le K^3$. We also note that $E_{p}(K)$ is an increasing function of $K$. Set $A = B = C = \{g, \ldots, g^{K} \}$; then we have the trivial bounds $|A|\le K$ and $|BC| \le 2K$. Appealing to~\cite[Theorem 6]{R-NRS}, we derive a non-trivial estimate on $E_{p}(K)$. \begin{lemma} \label{additive energy} For any positive integer $1 \le K \le T$, we have $$ E_{p}(K) \ll K^{5/2}. $$ \end{lemma} \begin{comment} We recall a bound in~\cite[Corollary 19]{R-NRS} \begin{lemma} \label{NRS:expbound} For $N < p^{2/3}$ and any $a \in \mathbb{Z}_{p}^{*}$, we have $$ \sum_{n=1}^{N} e_{p}(ag^{n}) \ll \min \{ p^{1/8}N^{5/8}, p^{1/4} N^{3/8} \}. $$ \end{lemma} \end{comment} \section{Proof of Theorem~\ref{S-bound}} We proceed similarly to the proof of~\cite[Theorem 2.1]{Shp}. Rearranging and then applying Lemma~\ref{linearbound}, we have \begin{align*} {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J}, {\mathcal K} ) & = \sum_{x=1}^{H} \sum_{m =0}^{p-1} \alpha_{m} e_{p}(amg^{x}) \sum_{n=L+1}^{L+N} e_{T}(nx) \\ & = \sum_{x=1}^{H} \sum_{m = 0}^{p-1} \alpha_{m} e_{p}(amg^{x}) \varphi_{x} \end{align*} where $$ | \varphi_{x} | \le \min \left ( N, \frac{T}{ \langle x \rangle_{T}} \right ). $$ Define $I = \lceil \log N \rceil$ and define the sets $$ {\mathcal L}_{0} = \{ x \in \mathbb{Z}: 0 < x \le T/N \} $$ and $$ {\mathcal L}_{i} = \{ x \in \mathbb{Z}: \min \{T ,e^{i}T/N \} \ge x > e^{i-1}T/N \} $$ for $i=1, \ldots, I$. Therefore, we obtain $$ {\mathcal S}_{a,T,p}({\mathcal A}; {\mathcal J} , {\mathcal K}) \ll \sum_{i=0}^{I} |S_{i}| $$ where $$ S_{i} = \sum_{x \in {\mathcal L}_{i}} \sum_{m=0}^{p-1} \alpha_{m} e_{p}(amg^{x}) \varphi_{x} $$ for $i=0, \ldots, I.$ Applying the triangle and H\"older inequalities, we obtain \begin{align} \label{Si} |S_{i}| &\le \sum_{m=0}^{p-1} |\alpha_{m}|^{1/2} |\alpha_{m}^{2}|^{1/4} \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big | \nonumber \\ & \le \Big ( \sum_{m=0}^{p-1} |\alpha_{m}| \Big )^{1/2} \Big ( \sum_{m=0}^{p-1} | \alpha_{m}|^{2} \Big )^{1/4} \Big ( \sum_{m=0}^{p-1} \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big |^{4} \Big )^{1/4} \\ & = \lVert {\mathcal A} \rVert_{1}^{1/2} \lVert {\mathcal A} \rVert_{2}^{1/2} \Big ( \sum_{m=0}^{p-1} \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big |^{4} \Big )^{1/4} \nonumber \end{align} which is valid for all $i = 0, \ldots , I$.
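To spell out the use of H\"older's inequality here (this is merely an expanded form of the step above, with $W_{m}$ denoting, for this remark only, the inner sum $\sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x}$): writing $|\alpha_{m}| = |\alpha_{m}|^{1/2} \cdot |\alpha_{m}|^{1/2}$ and applying the three-factor H\"older inequality with exponents $2$, $4$, $4$, we obtain $$ \sum_{m=0}^{p-1} |\alpha_{m}| |W_{m}| \le \Big ( \sum_{m=0}^{p-1} |\alpha_{m}| \Big )^{1/2} \Big ( \sum_{m=0}^{p-1} |\alpha_{m}|^{2} \Big )^{1/4} \Big ( \sum_{m=0}^{p-1} |W_{m}|^{4} \Big )^{1/4}, $$ which is exactly the second inequality in~(\ref{Si}).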
Opening the summation and changing the order of summation, we obtain \begin{align*} \sum_{m=0}^{p-1} & \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big |^{4} \\ & = \sum_{m=0}^{p-1} \underset{x_{1}, \ldots,x_{4} \in {\mathcal L}_{i}}{ \sum \cdots \sum} \varphi_{x_{1}} \varphi_{x_{2}} \overline{ \varphi_{x_{3}} \varphi_{x_{4 }} } e_{p}(am(g^{x_{1}} + g^{x_{2}} - g^{x_{3}} -g^{x_{4}} )) \\ & = \underset{x_{1}, \ldots,x_{4} \in {\mathcal L}_{i}}{ \sum \cdots \sum} \varphi_{x_{1}} \varphi_{x_{2}} \overline{ \varphi_{x_{3}} \varphi_{x_{4 }} } \sum_{m=0}^{p-1} e_{p}(am(g^{x_{1}} + g^{x_{2}} - g^{x_{3}} -g^{x_{4}} )). \end{align*} Since for all $x \in {\mathcal L}_{i}$ we have the bound $\varphi_{x} \ll e^{-i}N$, and the complete sum over $m$ is a non-negative real number, we get \begin{align*} \sum_{m=0}^{p-1} & \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big |^{4} \\ & \le \underset{x_{1}, \ldots,x_{4} \in {\mathcal L}_{i}}{ \sum \cdots \sum} |\varphi_{x_{1}} \varphi_{x_{2}} \overline{ \varphi_{x_{3}} \varphi_{x_{4 }} } |\sum_{m=0}^{p-1} e_{p}(am(g^{x_{1}} + g^{x_{2}} - g^{x_{3}} -g^{x_{4}} )) \\ & \ll e^{-4i}N^{4} \underset{x_{1}, \ldots,x_{4} \in {\mathcal L}_{i}}{ \sum \cdots \sum} \sum_{m=0}^{p-1} e_{p}(am(g^{x_{1}} + g^{x_{2}} - g^{x_{3}} -g^{x_{4}} )). \end{align*} By appealing to the orthogonality of the exponential function, we obtain $$ \sum_{m=0}^{p-1} \Big | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \Big |^{4} \ll p e^{-4i}N^{4} E_{p}( \lfloor e^{i} T/N \rfloor). $$ Therefore by Lemma~\ref{additive energy}, we obtain \begin{align*} \sum_{m=0}^{p-1} \left | \sum_{x \in {\mathcal L}_{i}} e_{p}(amg^{x}) \varphi_{x} \right |^{4} & \ll p e^{-4i}N^{4} (e^{i} T/N)^{5/2} \\ & \ll p e^{-3i/2} N^{3/2}T^{5/2}. \end{align*} Substituting this bound into~(\ref{Si}), we obtain $$ |S_{i}| \ll \lVert {\mathcal A} \rVert_{1}^{1/2} \lVert {\mathcal A} \rVert_{2}^{1/2}p^{1/4} e^{-3i/8}N^{3/8} T^{5/8}. $$ Finally, summing over $i$ and using $\sum_{i \ge 0} e^{-3i/8} = O(1)$, we have $$ \sum_{i=0}^{I} |S_{i}| \ll \lVert {\mathcal A} \rVert_{1}^{1/2} \lVert {\mathcal A} \rVert_{2}^{1/2}p^{1/4} N^{3/8} T^{5/8} $$ and the result follows immediately. \end{document}
arXiv
Properly $3$-realizable groups

Authors: R. Ayala, M. Cárdenas, F. F. Lasheras and A. Quintero
Journal: Proc. Amer. Math. Soc. 133 (2005), 1527-1535
MSC (2000): Primary 57M07; Secondary 57M10, 57M20
DOI: https://doi.org/10.1090/S0002-9939-04-07628-2
Published electronically: November 19, 2004
MathSciNet review: 2111954

Abstract: A finitely presented group $G$ is said to be properly $3$-realizable if there exists a compact $2$-polyhedron $K$ with $\pi _1(K) \cong G$ and whose universal cover $\tilde {K}$ has the proper homotopy type of a (p.l.) $3$-manifold with boundary. In this paper we show that, after taking wedge with a $2$-sphere, this property does not depend on the choice of the compact $2$-polyhedron $K$ with $\pi _1(K) \cong G$. We also show that (i) all $0$-ended and $2$-ended groups are properly $3$-realizable, and (ii) the class of properly $3$-realizable groups is closed under amalgamated free products (HNN-extensions) over a finite cyclic group (as a step towards proving that $\infty$-ended groups are properly $3$-realizable, assuming $1$-ended groups are).

R. Ayala. Affiliation: Departamento de Geometría y Topología, Universidad de Sevilla, Apdo 1160, 41080-Sevilla, Spain
M. Cárdenas
F. F. Lasheras. MR Author ID: 633766. Email: [email protected]
A. Quintero
Received by editor(s): September 29, 2003
Received by editor(s) in revised form: December 31, 2003
Additional Notes: This work was partially supported by the project BFM 2001-3195-C02
Communicated by: Ronald A. Fintushel
Article copyright: © Copyright 2004 American Mathematical Society
CommonCrawl
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{algol}{Algorithm} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{problem}[theorem]{Problem} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{cor}[theorem]{Corollary} \theoremstyle{remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem*{acknowledgement}{Acknowledgements} \numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{table}{section} \numberwithin{figure}{section} \allowdisplaybreaks \definecolor{olive}{rgb}{0.3, 0.4, .1} \definecolor{dgreen}{rgb}{0.,0.5,0.} \def\cc#1{\textcolor{red}{#1}} \defC_{d}{C_{d}} \def\widetilde{C}_{d}{\widetilde{C}_{d}} \definecolor{dgreen}{rgb}{0.,0.6,0.} \def\tgreen#1{\begin{color}{dgreen}{\it{#1}}\end{color}} \def\tblue#1{\begin{color}{blue}{\it{#1}}\end{color}} \def\tred#1{\begin{color}{red}#1\end{color}} \def\tmagenta#1{\begin{color}{magenta}{\it{#1}}\end{color}} \def\tNavyBlue#1{\begin{color}{NavyBlue}{\it{#1}}\end{color}} \def\tMaroon#1{\begin{color}{Maroon}{\it{#1}}\end{color}} \def\nmid{\nmid} \def \balpha{\bm{\alpha}} \def \bbeta{\bm{\beta}} \def \bgamma{\bm{\gamma}} \def \bdelta{\bm{\delta}} \def \blambda{\bm{\lambda}} \def \bchi{\bm{\chi}} \def \bphi{\bm{\varphi}} \def \bpsi{\bm{\psi}} \def \bnu{\bm{\nu}} \def \bomega{\bm{\omega}} \def\widetilde d{\widetilde d} \def \te{\widetilde e} \def\widetilde \alpha{\widetilde \alpha} \def \tbeta{\widetilde \beta} \def \tcA{\widetilde {{\mathcal A}}} \def \tcB{\widetilde{{\mathcal B}}} \def\vec{h}{\vec{h}} \def\vec{j}{\vec{j}} \def\vec{k}{\vec{k}} \def\vec{l}{\vec{l}} \def\vec{u}{\vec{u}} \def\vec{v}{\vec{v}} \def\vec{x}{\vec{x}} \def\vec{y}{\vec{y}} \def\qquad\mbox{and}\qquad{\qquad\mbox{and}\qquad} \def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \def\mathbb{C}{\mathbb{C}} \def\mathbb{F}{\mathbb{F}} \def\mathbb{K}{\mathbb{K}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{L}{\mathbb{L}} \def\textsf{M}{\textsf{M}} \def\mathbb{U}{\mathbb{U}} \def\mathbb{P}{\mathbb{P}} \def\mathbb{A}{\mathbb{A}} \def\mathfrak{p}{\mathfrak{p}} \def\mathfrak{q}{\mathfrak{q}} \def\mathfrak{n}{\mathfrak{n}} \def\mathcal{X}{\mathcal{X}} \def\textrm{\bf x}{\textrm{\bf x}} \def\textrm{\bf w}{\textrm{\bf w}} \def\overline{\Q}{\overline{\mathbb{Q}}} \def \Kab{\mathbb{K}^{\mathrm{ab}}} \def \Qab{\mathbb{Q}^{\mathrm{ab}}} \def \Qtr{\mathbb{Q}^{\mathrm{tr}}} \def \Kc{\mathbb{K}^{\mathrm{c}}} \def \Qc{\mathbb{Q}^{\mathrm{c}}} \def\Z_\K{\mathbb{Z}_\mathbb{K}} 
\def\Z_{\K,\cS}{\mathbb{Z}_{\mathbb{K},{\mathcal S}}} \def\Z_{\K,\cS_f}{\mathbb{Z}_{\mathbb{K},{\mathcal S}_f}} \defR_{\cS_{f}}{R_{{\mathcal S}_{f}}} \defR_{\cT_{f}}{R_{{\mathcal T}_{f}}} \def\mathcal{S}{\mathcal{S}} \def\vec#1{\mathbf{#1}} \def\ov#1{{\overline{#1}}} \def{\operatorname{S}}{{\operatorname{S}}} \def\G_{\textup{m}}{\G_{\textup{m}}} \def{\mathfrak A}{{\mathfrak A}} \def{\mathfrak B}{{\mathfrak B}} \def{\mathfrak M}{{\mathfrak M}} \def \brho{\bm{\rho}} \def \btau{\bm{\tau}} \def\house#1{{ \setbox0=\hbox{$#1$} \vrule height \dimexpr\ht0+1.4pt width .5pt depth \dp0\relax \vrule height \dimexpr\ht0+1.4pt width \dimexpr\wd0+2pt depth \dimexpr-\ht0-1pt\relax \llap{$#1$\kern1pt} \vrule height \dimexpr\ht0+1.4pt width .5pt depth \dp0\relax}} \newenvironment{notation}[0]{ \begin{list} {} {\setlength{\itemindent}{0pt} \setlength{\labelwidth}{1\parindent} \setlength{\labelsep}{\parindent} \setlength{\leftmargin}{2\parindent} \setlength{\itemsep}{0pt} } } {\end{list}} \newenvironment{parts}[0]{ \begin{list}{} {\setlength{\itemindent}{0pt} \setlength{\labelwidth}{1.5\parindent} \setlength{\labelsep}{.5\parindent} \setlength{\leftmargin}{2\parindent} \setlength{\itemsep}{0pt} } } {\end{list}} \newcommand{\Part}[1]{\item[\upshape#1]} \def\Case#1#2{ \paragraph{\textbf{\boldmath Case #1: #2.}}\hfil\break\ignorespaces} \def\Subcase#1#2{ \paragraph{\textit{\boldmath Subcase #1: #2.}}\hfil\break\ignorespaces} \renewcommand{\alpha}{\alpha} \renewcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \renewcommand{\delta}{\delta} \newcommand{\epsilon}{\epsilon} \newcommand{\varphi}{\varphi} \newcommand{\hat\varphi}{\hat\varphi} \newcommand{{\boldsymbol{\f}}}{{\boldsymbol{\varphi}}} \renewcommand{\lambda}{\lambda} \renewcommand{\kappa}{\kappa} \newcommand{\hat\lambda}{\hat\lambda} \newcommand{{\boldsymbol{\mu}}}{{\boldsymbol{\mu}}} \renewcommand{\omega}{\omega} \renewcommand{\rho}{\rho} \newcommand{{\ov\rho}}{{\ov\rho}} \newcommand{\sigma}{\sigma} \newcommand{{\ov\sigma}}{{\ov\sigma}} \renewcommand{\tau}{\tau} \newcommand{\zeta}{\zeta} \newcommand{{\mathfrak{a}}}{{\mathfrak{a}}} \newcommand{{\mathfrak{b}}}{{\mathfrak{b}}} \newcommand{{\mathfrak{n}}}{{\mathfrak{n}}} \newcommand{{\mathfrak{p}}}{{\mathfrak{p}}} \newcommand{{\mathfrak{P}}}{{\mathfrak{P}}} \newcommand{{\mathfrak{q}}}{{\mathfrak{q}}} \newcommand{{\ov A}}{{\ov A}} \newcommand{{\ov E}}{{\ov E}} \newcommand{{\ov k}}{{\ov k}} \newcommand{{\ov K}}{{\ov K}} \newcommand{{\ov P}}{{\ov P}} \newcommand{{\ov S}}{{\ov S}} \newcommand{{\ov T}}{{\ov T}} \newcommand{{\ov\gamma}}{{\ov\gamma}} \newcommand{{\ov\lambda}}{{\ov\lambda}} \newcommand{{\ov y}}{{\ov y}} \newcommand{{\ov\f}}{{\ov\varphi}} \newcommand{{\mathcal A}}{{\mathcal A}} \newcommand{{\mathcal B}}{{\mathcal B}} \newcommand{{\mathcal C}}{{\mathcal C}} \newcommand{{\mathcal D}}{{\mathcal D}} \newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{{\mathcal F}}{{\mathcal F}} \newcommand{{\mathcal G}}{{\mathcal G}} \newcommand{{\mathcal H}}{{\mathcal H}} \newcommand{{\mathcal I}}{{\mathcal I}} \newcommand{{\mathcal J}}{{\mathcal J}} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathcal L}}{{\mathcal L}} \newcommand{{\mathcal M}}{{\mathcal M}} \newcommand{{\mathcal N}}{{\mathcal N}} \newcommand{{\mathcal O}}{{\mathcal O}} \newcommand{{\mathcal P}}{{\mathcal P}} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{{\mathcal R}}{{\mathcal R}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{{\mathcal U}}{{\mathcal U}} \newcommand{{\mathcal V}}{{\mathcal V}} 
\newcommand{{\mathcal W}}{{\mathcal W}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\mathcal Y}}{{\mathcal Y}} \newcommand{{\mathcal Z}}{{\mathcal Z}} \renewcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{B}}{\mathbb{B}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{G}}{\mathbb{G}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{{\boldsymbol a}}{{\boldsymbol a}} \newcommand{{\boldsymbol b}}{{\boldsymbol b}} \newcommand{{\boldsymbol c}}{{\boldsymbol c}} \newcommand{{\boldsymbol d}}{{\boldsymbol d}} \newcommand{{\boldsymbol e}}{{\boldsymbol e}} \newcommand{{\boldsymbol f}}{{\boldsymbol f}} \newcommand{{\boldsymbol g}}{{\boldsymbol g}} \newcommand{{\boldsymbol i}}{{\boldsymbol i}} \newcommand{{\boldsymbol j}}{{\boldsymbol j}} \newcommand{{\boldsymbol k}}{{\boldsymbol k}} \newcommand{{\boldsymbol m}}{{\boldsymbol m}} \newcommand{{\boldsymbol p}}{{\boldsymbol p}} \newcommand{{\boldsymbol r}}{{\boldsymbol r}} \newcommand{{\boldsymbol s}}{{\boldsymbol s}} \newcommand{{\boldsymbol t}}{{\boldsymbol t}} \newcommand{{\boldsymbol u}}{{\boldsymbol u}} \newcommand{{\boldsymbol v}}{{\boldsymbol v}} \newcommand{{\boldsymbol w}}{{\boldsymbol w}} \newcommand{{\boldsymbol x}}{{\boldsymbol x}} \newcommand{{\boldsymbol y}}{{\boldsymbol y}} \newcommand{{\boldsymbol z}}{{\boldsymbol z}} \newcommand{{\boldsymbol A}}{{\boldsymbol A}} \newcommand{{\boldsymbol F}}{{\boldsymbol F}} \newcommand{{\boldsymbol B}}{{\boldsymbol B}} \newcommand{{\boldsymbol D}}{{\boldsymbol D}} \newcommand{{\boldsymbol G}}{{\boldsymbol G}} \newcommand{{\boldsymbol I}}{{\boldsymbol I}} \newcommand{{\boldsymbol M}}{{\boldsymbol M}} \newcommand{{\boldsymbol P}}{{\boldsymbol P}} \newcommand{{\boldsymbol X}}{{\boldsymbol X}} \newcommand{{\boldsymbol Y}}{{\boldsymbol Y}} \newcommand{{\boldsymbol{0}}}{{\boldsymbol{0}}} \newcommand{{\boldsymbol{1}}}{{\boldsymbol{1}}} \newcommand{{\textup{aff}}}{{\textup{aff}}} \newcommand{\operatorname{Aut}}{\operatorname{Aut}} \newcommand{{\textup{Berk}}}{{\textup{Berk}}} \newcommand{\operatorname{Birat}}{\operatorname{Birat}} \newcommand{\operatorname{char}}{\operatorname{char}} \newcommand{\operatorname{codim}}{\operatorname{codim}} \newcommand{\operatorname{Crit}}{\operatorname{Crit}} \newcommand{\operatorname{critwt}}{\operatorname{critwt}} \newcommand{\operatorname{Cycles}}{\operatorname{Cycles}} \newcommand{\operatorname{diag}}{\operatorname{diag}} \newcommand{\operatorname{Disc}}{\operatorname{Disc}} \newcommand{\operatorname{Div}}{\operatorname{Div}} \newcommand{\operatorname{Dom}}{\operatorname{Dom}} \newcommand{\operatorname{End}}{\operatorname{End}} \newcommand{\mathcal{EO}}{\mathcal{EO}} \newcommand{{\ov{F}}}{{\ov{F}}} \newcommand{\operatorname{Fix}}{\operatorname{Fix}} \newcommand{\operatorname{FOD}}{\operatorname{FOD}} \newcommand{\operatorname{FOM}}{\operatorname{FOM}} \newcommand{\operatorname{Gal}}{\operatorname{Gal}} \newcommand{\operatorname{genus}}{\operatorname{genus}} \newcommand{/\!/}{/\!/} \newcommand{\operatorname{GL}}{\operatorname{GL}} \newcommand{\operatorname{\mathcal{G\!R}}}{\operatorname{\mathcal{G\!R}}} \newcommand{\operatorname{Hom}}{\operatorname{Hom}} \newcommand{\operatorname{Index}}{\operatorname{Index}} \newcommand{\operatorname{Image}}{\operatorname{Image}} \newcommand{\operatorname{Isom}}{\operatorname{Isom}} \newcommand{{\hat h}}{{\hat 
h}} \newcommand{{\operatorname{ker}}}{{\operatorname{ker}}} \newcommand{K^{\textup{sep}}}{K^{\textup{sep}}} \newcommand{{\operatorname{lcm}}}{{\operatorname{lcm}}} \newcommand{{\operatorname{LCM}}}{{\operatorname{LCM}}} \newcommand{\operatorname{Lift}}{\operatorname{Lift}} \newcommand{\lim\nolimits^*}{\lim\nolimits^*} \newcommand{\limstarn}{\lim_{\hidewidth n\to\infty\hidewidth}{\!}^*{\,}} \newcommand{\log\log}{\log\log} \newcommand{\log^{\scriptscriptstyle+}}{\log^{\scriptscriptstyle+}} \newcommand{\operatorname{Mat}}{\operatorname{Mat}} \newcommand{\operatornamewithlimits{\textup{max}^{\scriptscriptstyle+}}}{\operatornamewithlimits{\textup{max}^{\scriptscriptstyle+}}} \newcommand{\MOD}[1]{~(\textup{mod}~#1)} \newcommand{\operatorname{Mor}}{\operatorname{Mor}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{{\operatorname{\mathsf{N}}}}{{\operatorname{\mathsf{N}}}} \newcommand{\nmid}{\nmid} \newcommand{\triangleleft}{\triangleleft} \newcommand{\operatorname{NS}}{\operatorname{NS}} \newcommand{\twoheadrightarrow}{\twoheadrightarrow} \newcommand{\operatorname{ord}}{\operatorname{ord}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\operatorname{Per}}{\operatorname{Per}} \newcommand{\operatorname{Perp}}{\operatorname{Perp}} \newcommand{\operatorname{PrePer}}{\operatorname{PrePer}} \newcommand{\operatorname{PGL}}{\operatorname{PGL}} \newcommand{\operatorname{Pic}}{\operatorname{Pic}} \newcommand{\operatorname{Prob}}{\operatorname{Prob}} \newcommand{\operatorname{Proj}}{\operatorname{Proj}} \newcommand{{\ov{\QQ}}}{{\ov{\mathbb{Q}}}} \newcommand{\operatorname{rank}}{\operatorname{rank}} \newcommand{\operatorname{Rat}}{\operatorname{Rat}} \newcommand{{\operatorname{Res}}}{{\operatorname{Res}}} \newcommand{\operatorname{Res}}{\operatorname{Res}} \renewcommand{\smallsetminus}{\smallsetminus} \newcommand{\operatorname{sgn}}{\operatorname{sgn}} \newcommand{\operatorname{SL}}{\operatorname{SL}} \newcommand{\operatorname{Span}}{\operatorname{Span}} \newcommand{\operatorname{Spec}}{\operatorname{Spec}} \renewcommand{{\textup{ss}}}{{\textup{ss}}} \newcommand{{\textup{stab}}}{{\textup{stab}}} \newcommand{\operatorname{Stab}}{\operatorname{Stab}} \newcommand{\operatorname{Supp}}{\operatorname{Supp}} \newcommand{\operatorname{Sym}}{\operatorname{Sym}} \newcommand{{\textup{tor}}}{{\textup{tor}}} \newcommand{\operatorname{Trace}}{\operatorname{Trace}} \newcommand{\mathbin{\triangle}}{\mathbin{\triangle}} \newcommand{{\textup{tr}}}{{\textup{tr}}} \newcommand{{\mathfrak{h}}}{{\mathfrak{h}}} \newcommand{\operatorname{Wander}}{\operatorname{Wander}} \newcommand{\langle}{\langle} \renewcommand{\rangle}{\rangle} \newcommand{\pmodintext}[1]{~\textup{(mod}~#1\textup{)}} \newcommand{\displaystyle}{\displaystyle} \newcommand{\lhook\joinrel\longrightarrow}{\lhook\joinrel\longrightarrow} \newcommand{\relbar\joinrel\twoheadrightarrow}{\relbar\joinrel\twoheadrightarrow} \newcommand{\SmallMatrix}[1]{ \left(\begin{smallmatrix} #1 \end{smallmatrix}\right)} \def\({\left(} \def\){\right)} \def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil} \title[Local--global principle and additive combinatorics] {An effective local--global principle and additive combinatorics in finite fields} \author[B.\ Kerr]{Bryce Kerr} \address{Max Planck Institute for Mathematics, Bonn, Germany} \email{[email protected]} \author[J. Mello] {Jorge Mello} \address{Max Planck Institute for Mathematics, Bonn, Germany} \email{[email protected]} \author[I. E. Shparlinski] {Igor E. 
Shparlinski} \address{School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia} \email{[email protected]} \subjclass[2010]{11D79, 11G25, 11P70} \keywords{Additive combinatorics, modular reduction of systems of polynomials} \begin{abstract} We use recent results about linking the number of zeros on algebraic varieties over $\mathbb{C}$, defined by polynomials with integer coefficients, and on their reductions modulo sufficiently large primes to study congruences with products and reciprocals of linear forms. This allows us to make some progress towards a question of B.~Murphy, G.~Petridis, O.~Roche-Newton, M.~Rudnev and I.~D.~Shkredov (2019) on an extreme case of the Erd\H{o}s--Szemer{\'e}di conjecture in finite fields. \end{abstract} \maketitle \tableofcontents \section{Introduction} \subsection{Description of our results} In this paper we give a new application of a recent result due to D'Andrea, Ostafe, Shparlinski and Sombra~\cite[Theorem~2.1]{DOSS}, which establishes an effective link between the number of points on zero dimensional varieties considered over $\mathbb{C}$ and also considered in the field $\mathbb{F}_p$, see Lemma~\ref{lem:TAMS} below. In particular, we give sharp upper bounds on the number of solutions to some multiplicative and additive congruences modulo primes with variables from sets with small doubling, see Section~\ref{sec:mult_eq}. These results complement those of Grosu~\cite{Gros}, who has previously applied a similar principle which allows one to study arithmetic in subsets of a finite field by lifting to zero characteristic. The results of Grosu~\cite{Gros} restrict one to consider sets ${\mathcal A}\subseteq \mathbb{F}_p$ of triple logarithmic size, see~\eqref{eq:GrosuCong} below. Our results (see Section~\ref{sec:mult_eq}) extend the cardinality of the sets considered in some applications (see~\cite[Section~4]{Gros}) to the range $|{\mathcal A}|\leqslant p^{\delta}$ for some fixed $\delta>0$ which is given explicitly and depends only on the size of $|{\mathcal A}+{\mathcal A}|$. We also obtain sharper quantitative bounds for $\delta$ which hold for almost all primes (in the sense of relative asymptotic density). For example, we prove that if such a set has small doubling, then its product set is of almost largest possible size, see Theorem~\ref{thm:main24} below. This provides some partial progress towards a question raised by Murphy, Petridis, Roche-Newton, Rudnev and Shkredov~\cite[Question~2]{MPRRS} which has also been considered by Shkredov~\cite[Corollary~2]{Shk} in a different context and can be considered a mod $p$ variant of a few sums many products estimate due to Elekes and Ruzsa~\cite{ER}, see Section~\ref{sec:appl} for more details. We note that some arithmetic applications of~\cite[Theorem~2.1]{DOSS} have already been given in~\cite{CDOSS, DOSS} (to periods of orbits of some dynamical systems) as well as~\cite{Shp2} (to torsions of some points on elliptic curves). \subsection{General notation} Throughout this work $\mathbb{N} = \left \{1, 2, \ldots \right \}$ is the set of positive integers. For a field $K$, we use $\ov K$ to denote the algebraic closure of $K$. For a prime $p$, we use $\mathbb{F}_p$ to denote the finite field of $p$ elements and $\mathbb{F}_p^{*}$ the multiplicative subgroup of $\mathbb{F}_p$. We freely switch between equations in $\mathbb{F}_p$ and congruences modulo $p$. 
The letters $k$, $\ell,$ $m$ and $n$ (with or without subscripts) are always used to denote positive integers; the letter~$p$ (with or without subscripts) is always used to denote a prime. As usual, for given quantities $U$ and $V$, the notations $U\ll V$, $V\gg U$ and $U=O(V)$ are all equivalent to the statement that the inequality $|U|\leqslant c V$ holds with some constant $c>0$, which may depend on the integer parameter $d$. Furthermore $V = U^{o(1)} $ means that $\log |V|/\log U\to 0$ as $U\to \infty$. We use $|{\mathcal S}|$ to denote the cardinality of a finite set ${\mathcal S}$. For a generic point $\vec{x}\in \mathbb{R}^d$, we write $x_i$ for the $i$-th coordinate of $\textrm{\bf x}$. For example, if $\balpha,\vec{h}\in \mathbb{R}^d$ then $$ \balpha=(\alpha_1,\ldots,\alpha_d) \qquad\mbox{and}\qquad \vec{h}=(h_1,\ldots,h_d). $$ Let $$ \langle \balpha, \vec{h} \rangle= \alpha_1 h_1 +\ldots +\alpha_d h_d $$ denote the Euclidian inner product and $\|\vec{h}\|$ the Euclidean norm of $\vec{h}$. For $\balpha \in \mathbb{R}^d$ and $\lambda \in \mathbb{C}$ we let $\lambda \balpha$ denote scalar multiplication $$\lambda \balpha=(\lambda \alpha_1,\ldots,\lambda \alpha_d).$$ Given a set ${\mathcal D}\subseteq \mathbb{R}^d$ and $\lambda>0$ we define $$\lambda {\mathcal D} =\{ \lambda x : ~ x\in {\mathcal D}\}.$$ \section{Main results} \subsection{Multiplicative equations over sets with small sumsets} \label{sec:mult_eq} Let $p$ be prime and for subsets ${\mathcal A},{\mathcal B}\subseteq \overline\mathbb{F}_p$ and $\lambda\in \overline\mathbb{F}_p$ we define $I_p({\mathcal A},{\mathcal B},\lambda)$ by \begin{equation} \label{eq:IpA} I_p({\mathcal A},{\mathcal B},\lambda)=\left| \left\{ (a,b)\in {\mathcal A}\times {\mathcal B} :~ ab = \lambda \right\} \right|, \end{equation} where the equation $ab = \lambda$ is in $\overline\mathbb{F}_p$. A generalised arithmetic progression ${\mathcal A}$ (defined in any group) is a set of the form $$ {\mathcal A}=\left \{ \alpha_0+\alpha_1h_1+\ldots+\alpha_dh_d :~1\leqslant h_i\leqslant A_i\right \}. $$ We define the rank of ${\mathcal A}$ to be $d$ and say ${\mathcal A}$ is proper if $$|{\mathcal A}|=A_1\ldots A_d.$$ It is convenient to define \begin{equation} \label{eq:gamma} \gamma_s = \frac{1}{(11s+15)2^{3s+5}}. \end{equation} Since our bound depend only on $\max\{A_1,\ldots,A_d,B_1,\ldots,B_e\}$, without loss of generality we now assume that $$ A_1=\ldots=A_d=B_1=\ldots =B_e = H. $$ We recall that an integer $k\ne 0$ is called $y$-smooth if all prime divisors of $k$ do not exceed $y$. \begin{theorem} \label{thm:main1} Let $H$, $d$, $e$ be positive integers with $e\leqslant d$. There exists a constant $b_d$ depending only on $d$, and an integer $Z$, which is $O(H^{1/\gamma_{d+e+1}})$-smooth and satisfies $$ \log{Z}\ll H^{(d+e)(d+e+2)^2/4} \log{H} , $$ such that for each prime number $p\nmid Z$ the following holds. For any generalised arithmetic progressions ${\mathcal A},{\mathcal B}\subseteq \overline\mathbb{F}_p$ of the form \begin{align*} {\mathcal A}&=\left \{\alpha_0+ \alpha_1h_1+\ldots+\alpha_dh_d :~ 1\leqslant h_i\leqslant H, \ i =1, \ldots, d\right \},\\ {\mathcal B}&=\left \{\beta_0+ \beta_1j_1+\ldots+\beta_ej_e :~ 1\leqslant j_i\leqslant H, \ i =1, \ldots, e\right \}, \end{align*} and $\lambda \in \overline\mathbb{F}_p^{*}$ we have $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(b_d \log{H}/\log\log{H}\right). 
$$ \end{theorem} The integer $Z$ in Theorem~\ref{thm:main1} is constructed explicitly in Lemma~\ref{lem:iter} below and is divisible by all primes $p \leqslant H^{d+e+o(1)}$ (note that $o(1)$ here denotes a negative quantity). This is established at the end of the proof of Lemma~\ref{lem:iter}. From Theorem~\ref{thm:main1} we may deduce an estimate which holds for all primes provided our generalised arithmetic progressions are not too large. We also obtain better results for almost all primes. In particular, using the fact that no primes $p\geqslant Z$ divide a $Z$-smooth integer, we obtain: \begin{cor} \label{cor:all-primes} Let notation be as in Theorem~\ref{thm:main1}. For any prime $p$ and integer $H$ satisfying $$H\leqslant C_0(d) p^{\gamma_{d+e+1}},$$ for some constant $C_0(d)$ depending only on $d$, we have $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(b_d \log{H}/\log\log{H}\right). $$ \end{cor} As a second application, using the fact that any integer $Z$ has at most $O\(\log{Z}/\log\log{Z}\)$ prime divisors, we obtain: \begin{cor} \label{cor:almost-all-primes} Let notation be as in Theorem~\ref{thm:main1}. For all but at most $O\(H^{(d+e)^3+(d+e)}\)$ primes $p$, we have $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(b_d \log{H}/\log\log{H}\right). $$ \end{cor} We note that an important feature of Corollary~\ref{cor:almost-all-primes} is that the set of primes is independent of the generalised arithmetic progressions ${\mathcal A},{\mathcal B}$. Corollaries~\ref{cor:all-primes} and~\ref{cor:almost-all-primes} immediately yield an estimate for equations with Kloosterman fractions and squares. Indeed, using the fact that over any field, if $\lambda \ne 0$ and $$ a^{-1}+ b^{-1} = \lambda $$ then $$ (a -\lambda^{-1})(b-\lambda^{-1}) = \lambda^{-2}, $$ and that over any algebraically closed field, if $\lambda\neq 0$ and $a,b$ satisfy $$a^2+b^2=\lambda,$$ then $$(a +ib )(a -ib)=\lambda,$$ where $i$ is a square root of $-1$, we obtain the following results. \begin{cor} \label{cor:main22} With notation and conditions as in either Corollary~\ref{cor:all-primes} or Corollary~\ref{cor:almost-all-primes}, the number of solutions to the equations $$a^{-1}+ b^{-1} = \lambda , \qquad a\in {\mathcal A}, b\in {\mathcal B}, $$ and $$a ^2+b^2 = \lambda , \qquad a\in {\mathcal A}, b\in {\mathcal B}, $$ in $\overline \mathbb{F}_p$ are bounded by $H^{o(1)}$. \end{cor} We remark that our method, with minor changes, allows us to extend our results to equations \begin{equation} \label{eq:multi-problem} a_1\ldots a_\nu = \lambda, \qquad a_i \in {\mathcal A}_i, \ i =1, \ldots, \nu, \end{equation} with any $\nu \geqslant 2$ and generalised arithmetic progressions ${\mathcal A}_1, \ldots, {\mathcal A}_\nu \subseteq \mathbb{F}_p$. A direct application of such techniques gives a poor dependence on the parameter $\nu$. An interesting problem is to determine the largest real numbers $\gamma_{\nu,d}$ such that the number of solutions to~\eqref{eq:multi-problem} is bounded by $(|{\mathcal A}_1|\ldots|{\mathcal A}_\nu|)^{o(1)}$ provided ${\mathcal A}_1,\ldots,{\mathcal A}_\nu$ are generalised arithmetic progressions of rank at most $d$ satisfying $$|{\mathcal A}_i|\ll p^{\gamma_{\nu,d}}.$$ \subsection{Applications to the Erd\H{o}s--Szemer{\'e}di conjecture in finite fields} \label{sec:appl} As usual, given a set ${\mathcal A}\subseteq {\mathcal G}$ with a group operation $\ast$, we write $$ {\mathcal A} \ast {\mathcal A} = \left \{a\ast b:~ a,b \in {\mathcal A}\right \}.
$$ Clearly for sets in rings we can use $\ast \in \left \{+, \times\right \}$. Here we also denote $$ {\mathcal A}^{-1} = \{a^{-1}:~a \in {\mathcal A}\}, \quad {\mathcal A}^2=\{ a^2:~a\in {\mathcal A}\}. $$ Combining the above results with some modern results~\cite[Theorem~4]{CS} of additive combinatorics towards the celebrated theorem of Freiman~\cite{Frei}, we, in particular verify the Erd\H{o}s--Szemer{\'e}di conjecture for sets with small sumset and small cardinality. This can be considered an extension of some ideas of Chang~\cite{Chang} into the setting of prime finite fields. \begin{theorem} \label{thm:main24} For any fixed $K \geqslant 2$ and $$ \delta=\frac{1}{(44K+26)2^{12K+8}}, $$ there exist some constants $b_0(K)$ and $c_0(K)$, depending only on $K$, such that for each prime $p$, if ${\mathcal A} \subseteq \mathbb{F}_p$ satisfies $$|{\mathcal A}+{\mathcal A}|\leqslant K|{\mathcal A}| \qquad\mbox{and}\qquad |{\mathcal A}|\leqslant c_0(K)p^{\delta}$$ then for any $\lambda \in \mathbb{F}_p^{*}$ the number of solutions to each of the equations $$ a_1a_2= \lambda , \qquad a_1^{-1}+ a_2^{-1} = \lambda, \qquad a_1^{2}+ a_2^{2} = \lambda $$ with variables $a_1,a_2\in {\mathcal A}$ is $\exp\left(b_0(K)\log{|{\mathcal A}|}/\log\log{|{\mathcal A}|}\right)$. \end{theorem} An immediate consequence of Theorem~\ref{thm:main24} is an estimate for the cardinality of sets related to the Erd\H{o}s--Szemer{\'e}di conjecture. Indeed, using Theorem~\ref{thm:main24} one has that $$ |{\mathcal A}|^2=\sum_{\lambda \in {\mathcal A}{\mathcal A}}I_p({\mathcal A},{\mathcal A},\lambda) \leqslant 2 |{\mathcal A}|+\sum_{\substack{\lambda\in {\mathcal A}{\mathcal A} \\ \lambda \not \equiv 0\mod{p}}}I_p({\mathcal A},{\mathcal A},\lambda)\leqslant |{\mathcal A}{\mathcal A}| |{\mathcal A}|^{o(1)}. $$ A similar argument also works for the sets ${\mathcal A}^{-1} + {\mathcal A}^{-1}$ and we obtain the following result. \begin{cor} \label{cor:main24} With notation and conditions as in Theorem~\ref{thm:main24}, for any fixed $K$ we have $$|{\mathcal A}{\mathcal A}|\geqslant |{\mathcal A}|^{2+o(1)} \qquad\mbox{and}\qquad |{\mathcal A}^{-1} + {\mathcal A}^{-1}| \geqslant |{\mathcal A}|^{2+o(1)}.$$ \end{cor} We note that Corollary~\ref{cor:main24} is a step towards a positive answer to a question raised by Murphy, Petridis, Roche-Newton, Rudnev and Shkredov~\cite[Question~2]{MPRRS} whether for any $\varepsilon > 0$ there exists some $\eta(\varepsilon)$ depending only on $\varepsilon$ with $\eta(\varepsilon) \to 0$ as $\varepsilon \to 0$, such that if ${\mathcal A} \subseteq \mathbb{F}_p$ satisfies $|{\mathcal A}+{\mathcal A}|\leqslant |{\mathcal A}|^{1+\varepsilon}$ then $$|{\mathcal A}{\mathcal A}| \geqslant |{\mathcal A}|^{2- \eta(\varepsilon)}. $$ Theorem~\ref{thm:main24} confirms this in the extreme case of rapidly decaying (as $|{\mathcal A}|$ grows) values of $\varepsilon$. In other words instead of fixed $K$ in Corollary~\ref{cor:main24} we can take $K$ as a very slowly growing function of $ |{\mathcal A}|$. We also recall that Shkredov~\cite[Corollary~2]{Shk} has shown that if \begin{equation} \label{eq:ShkAcond} |{\mathcal A}+{\mathcal A}|\ll |{\mathcal A}| \end{equation} for a set ${\mathcal A} \subseteq \mathbb{F}_p$ of cardinality $|{\mathcal A}| \ll p^{13/23}$ then the number of solutions to $$ a_1a_2 = \lambda, \qquad a_1,a_2\in {\mathcal A}, $$ is bounded by $|{\mathcal A}|^{149/156+o(1)}$. Clearly, this result and Theorem~\ref{thm:main24} are of similar spirit, however they are incomparable. 
In particular, the cardinality of the sets considered in~\cite[Corollary~2]{Shk} is uniform with respect to the implied constant in~\eqref{eq:ShkAcond}, which is a feature not present in our bound. We refer the reader to~\cite{MPW} for various incidence results related to counting solutions to multiplicative equations with variables belonging to sets with small sumset. Our result does give a direct improvement on Grosu~\cite[Section~4]{Gros}, who obtains similar estimates to Theorem~\ref{thm:main24} under the condition, which we state in slightly simplified form as \begin{equation} \label{eq:GrosuCong} |{\mathcal A}|\leqslant \frac{1}{\log 2} \log \log\log p-1-\varepsilon, \end{equation} for any $\varepsilon > 0$ provided that $p$ is large enough. However, the paper of Grosu~\cite{Gros} contains other interesting results which allow one to lift problems in $\mathbb{F}_p$ to $\mathbb{C}$ while preserving more arithmetic information than counting solutions to equations considered in Theorem~\ref{thm:main24}. We now obtain a version of Theorem~\ref{thm:main24} which holds for almost all primes. \begin{theorem} \label{thm:main24-AA} Let $ A \geqslant 3$ be sufficiently large and let $K \geqslant 2$ be a fixed integer. For all but at most $O\(A^{8K^3+4K^2} \log A/\log \log A\)$ primes $p$ with $$ p > c_0(K) A^{2K} $$ for some sufficiently large constant $c_0(K)$ depending only on $K$, the following holds. If ${\mathcal A} \subseteq \mathbb{F}_p$ satisfies $$|{\mathcal A}+{\mathcal A}|\leqslant K|{\mathcal A}| \qquad\mbox{and}\qquad |{\mathcal A}|\leqslant A$$ then for any $\lambda \in \mathbb{F}_p^{*}$ the number of solutions to each of the equations $$ a_1a_2= \lambda \qquad\mbox{and}\qquad a_1^{-1}+ a_2^{-1} = \lambda $$ with variables $a_1,a_2\in {\mathcal A}$ is $|{\mathcal A}|^{o(1)}$. \end{theorem} As before, we obtain a result towards the Erd\H{o}s--Szemer{\'e}di conjecture modulo almost all primes. \begin{cor} \label{cor:main24-AA} With notation and conditions as in Theorem~\ref{thm:main24-AA} we have $$|{\mathcal A}{\mathcal A}| \geqslant |{\mathcal A}|^{2+o(1)} \qquad\mbox{and}\qquad |{\mathcal A}^{-1} + {\mathcal A}^{-1}| \geqslant |{\mathcal A}|^{2+o(1)}.$$ \end{cor} \subsection{Overview of our approach} We first illustrate the main ideas of our paper in the setting of Corollary~\ref{cor:all-primes}. With $H$ as in Theorem~\ref{thm:main1}, let ${\mathcal A},{\mathcal B}\subseteq \mathbb{F}_{p}$ be generalised arithmetic progressions of rank $d$ and $e$ respectively, and recall that we aim to show $$I_p({\mathcal A},{\mathcal B})=H^{o(1)}.$$ Our main input is the following iterative inequality (see Lemma~\ref{lem:iter}): there exist generalised arithmetic progressions $\widetilde {\mathcal A}, \widetilde {\mathcal B}$ of rank $\widetilde d, \widetilde e$ respectively, satisfying $$I_p({\mathcal A},{\mathcal B})\ll H^{o(1)}I_p(\widetilde {\mathcal A},\widetilde {\mathcal B}),$$ and $$\widetilde d+\widetilde e<d+e, \qquad |\widetilde {\mathcal A}||\widetilde {\mathcal B}|\ll |{\mathcal A}||{\mathcal B}|.$$ Proceeding by induction on $d+e$, we see that the above properties are sufficient to establish the desired result. Suppose ${\mathcal A},{\mathcal B}\subseteq \mathbb{F}_p$ are given by \begin{align*} {\mathcal A}&=\left \{\alpha_0+ \alpha_1h_1+\ldots+\alpha_dh_d :~ 1\leqslant h_i\leqslant H, \ i =1, \ldots, d\right \},\\ {\mathcal B}&=\left \{\beta_0+ \beta_1j_1+\ldots+\beta_ej_e :~ 1\leqslant j_i\leqslant H, \ i =1, \ldots, e\right \}, \end{align*} and for simplicity assume ${\mathcal A},{\mathcal B}$ are proper.
Hence we aim to count the number of solutions to the equation \begin{equation} \begin{split} \label{eq:eqn-o} (\alpha_0+\alpha_1h_{1}+\dots+\alpha_d h_{d})&(\beta_0+\beta_1j_{1}+\dots+\beta_e j_{e})= \lambda, \\ 1\leqslant h_1,\ldots,h_d&, j_1, \ldots, j_e \leqslant H. \end{split} \end{equation} Fix a pair $$(h_{1,0},\dots,h_{d,0}) \in [1,H]^{d}, \quad (j_{1,0},\dots,j_{e,0}) \in [1,H]^{e},$$ satisfying $$(\alpha_0+\alpha_1h_{1,0}+\dots+\alpha_d h_{d,0})(\beta_0+\beta_1j_{1,0}+\dots+\beta_e j_{e,0})= \lambda,$$ and consider the variety $V_p\subseteq \overline{\mathbb{F}}_p^{d+e+2}$ defined by the system of equations \begin{align*} & (X_0+X_1h_{1}+\dots+X_d h_{d})(Y_0+Y_1j_{1}+\dots+Y_e j_{e})- \\ & \quad \quad \quad (X_0+X_1h_{1,0}+\dots+X_d h_{d,0})(Y_0+Y_1j_{1,0}+\dots+Y_e j_{e,0})=0, \end{align*} where $h_1,\dots,h_d,j_1,\dots,j_e$ run over the solutions to~\eqref{eq:eqn-o}, in the variables $X_0,\dots,Y_e$. Let $V$ denote the corresponding variety over $\mathbb{C}$. By assumption, we have $$(\alpha_0, \dots, \alpha_d, \beta_0,\dots,\beta_e)\in V_p.$$ Assuming $H$ is sufficiently small in terms of $p$, a local-global result of D'Andrea, Ostafe, Shparlinski and Sombra~\cite{DOSS}, see Lemma~\ref{lem:TAMS} below, implies that there exists $$(\rho_0,\dots,\rho_d,\gamma_0,\dots,\gamma_e)\in V.$$ Hence any solution $h_1,\dots,j_e$ to~\eqref{eq:eqn-o} also satisfies \begin{align*} (\rho_0+\rho_1h_{1}+\dots+\rho_d h_{d})(\gamma_0+\gamma_1j_{1}+\dots+\gamma_e j_{e})= \lambda_0, \end{align*} for some $\lambda_0\in \mathbb{C}$. A result of Chang~\cite{Chang}, see Lemma~\ref{lem:chang} below, implies that there are at most $H^{o(1)}$ possible values for either \begin{equation} \label{eq:rho-0} \rho_0+\rho_1h_{1}+\dots+\rho_d h_{d}=\mu_1, \end{equation} or $$\gamma_0+\gamma_1j_{1}+\dots+\gamma_e j_{e}=\mu_2.$$ Assuming~\eqref{eq:rho-0}, our set of solutions to~\eqref{eq:eqn-o} is restricted to the union of $H^{o(1)}$ cosets of a lattice ${\mathcal L}$ of rank smaller than $d$. After performing basis reduction on ${\mathcal L}$ and back-substitution, the desired iterative inequality follows. \section{Preliminaries} \subsection{Tools from Diophantine geometry} For a polynomial $G$ with integer coefficients, its \textit{height} is defined as the logarithm of the maximum of the absolute values of the coefficients of $G$. The height of an algebraic number $\alpha$ is defined as the height of its minimal polynomial (we also set it to $1$ for $\alpha=0$). We now recall the statement of~\cite[Theorem~2.1]{DOSS} which underlies our approach. \begin{lemma} \label{lem:TAMS} Let $G_i \in \mathbb{Z}[T_1,\ldots,T_n]$, $i=1,\ldots,s$, $n \geq 1$ be polynomials of degree at most $r \geq 2$ and height at most $h$, whose zero set in $\mathbb{C}^n$ has a finite number $\kappa$ of distinct points. Then there is an integer $\mathfrak{A} \geq 1$ with $$ \log \mathfrak{A} \leq (11n +4)r^{3n+1}h + (55r + 99)\log ((2n+5)s)r^{3n+2} $$ such that, if $p$ is a prime not dividing $\mathfrak{A}$, then the zero set in $\overline{\mathbb{F}}^n_p$ of the polynomials $G_i$ reduced modulo $p$, $ i=1,\ldots,s$, consists of exactly $\kappa$ distinct points. \end{lemma} Results of this type have previously appeared. For example, Chang~\cite[Lemma~2.14]{Chang} has shown the following result.
Let $$ {\mathcal V}=\bigcap_{j=1,\ldots,s}[F_j=0], $$ be an affine variety in $\mathbb{C}^{n}$ defined by polynomials $F_j \in \mathbb{Z}[X_1,\ldots,X_n]$, $j=1, \ldots, s$, of height at most $h$ and let $F\in \mathbb{Z}[X_1,\ldots,X_n]$ be a polynomial of height at most $h$ such that there is $\balpha \in {\mathcal V}$ with $F(\balpha)\neq 0$. Then there is $\bbeta \in {\mathcal V}$ with $F(\bbeta)\neq 0$ whose coordinates are algebraic numbers of height $O(h)$. There are also modulo $p$ analogues of~\cite[Lemma~2.14]{Chang} which allow one to lift solutions to $\mathbb{C}$ from a variety modulo $p$ and we refer the reader to~\cite{Gros} for results of this type. One may also use effective versions of the B\'{e}zout identity, and more generally the Hilbert Nullstellensatz, to lift points on a variety modulo $p$ to $\mathbb{C}$, and this idea has previously been used in~\cite{BBK,BGKS,CKSZ, KMSV, Shp2}. \subsection{Tools from geometry of numbers} Let $\left \{\vec{b}_1,\ldots,\vec{b}_m\right \}$ be a set of $m\leqslant d$ linearly independent vectors in ${\mathbb{R}}^d$. The set of vectors $$ {\mathcal L} = \left\{ \sum_{i=1}^m n_i \vec{b}_i :~ n_i \in \mathbb{Z} \right \},$$ is called an $d$-dimensional lattice of rank $m$. The set $\left \{ \vec{b}_1,\ldots, \vec{b}_m\right \}$ is called a \textit{basis} of ${\mathcal L}$. Each lattice has multiple sets of basis vectors, and we refer to any other set $\{\widetilde{\vec{b}}_1,\ldots,\widetilde{\vec{b}}_m\}$ of linearly independent vectors such that $$ {\mathcal L} = \left\{ \sum_{i=1}^m n_i \widetilde{\vec{b}}_i :~ n_i \in \mathbb{Z} \right \}$$ as a basis. We also define the determinant of ${\mathcal L}$ as $$ \det {\mathcal L} = \sqrt{\left|\det B\cdot B^T\right|}, $$ where $B$ is the $(m\times d)$-matrix with rows $\vec{b}_1,\ldots,\vec{b}_m$, and is independent of the choice of basis. We refer to~\cite{GrLoSch} for a background on lattices. The following is~\cite[Lemma~1]{HB}. \begin{lemma} \label{lem:HB} Let ${\mathcal L}\subseteq \mathbb{Z}^d$ be a lattice of rank $m$. Then ${\mathcal L}$ has a basis $\vec{b}_1,\ldots,\vec{b}_m$ such that, for each $\vec{x}\in {\mathcal L}$, we may write $$\vec{x}=\sum_{j=1}^{m}\lambda_j\vec{b}_j,$$ with $$\lambda_j\ll \frac{\|\vec{x}\|}{\|\vec{b}_j\|}.$$ We also have $$\det {\mathcal L}\ll \prod_{j=1}^{m}\|\vec{b}_i\|\ll \det {\mathcal L}.$$ \end{lemma} \begin{lemma} \label{lem:linear} Let $\alpha_1,\ldots,\alpha_d\in \mathbb{C}$ and let ${\mathcal L}$ denote the lattice $$ {\mathcal L}=\{ (n_1,\ldots,n_d)\in \mathbb{Z}^d :~ \alpha_1n_1+\ldots+\alpha_dn_d=0\}. $$ For integers $H_1,\ldots,H_d$ we consider the convex body $$ D=\{ (x_1,\ldots,x_d)\in \mathbb{R}^{d} :~ |x_i|\leqslant H_i \}. $$ If ${\mathcal L}\cap D$ contains $d-1$ linearly independent points and there exists some $1\leqslant \ell\leqslant d$ such that $\alpha_\ell\neq 0$, then there exists some $1\leqslant j \leqslant d$ such that for each $i = 1, \ldots, d$ there exist integers $a_i$ and $b_i$ satisfying $$ \frac{\alpha_i}{\alpha_j}=\frac{a_i}{b_i}, \qquad \gcd(a_i,b_i)=1, \qquad a_i, b_i \ll H^{d}, $$ where $$ H=\max_{1\leqslant i \leqslant d} H_i. 
$$ \end{lemma} \begin{proof} Choose $d-1$ linearly independent points $\vec{x}^{(1)},\ldots, \vec{x}^{(d-1)}$ satisfying $$\vec{x}^{(i)}=(x_{i,1},\ldots,x_{i,d})\in {\mathcal L}\cap D, \qquad 1\leqslant i \leqslant d-1.$$ Let $X$ denote the $(d-1)\times d$ matrix whose $i$-th row is $\vec{x}^{(i)}$ and let $X^{(j)}$ denote the $(d-1)\times (d-1)$ matrix obtained from $X$ by removing the $j$-th column. By assumption, the rank of $X$ equals $d-1$. Hence there exists some $1\leqslant j \leqslant d$ such that \begin{equation} \label{eq:det} \det X^{(j)}\neq 0. \end{equation} By symmetry we may suppose $j=d$. Since each $\vec{x}^{(i)}\in {\mathcal L}\cap D$, we have $$ X^{(d)}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_{d-1} \end{pmatrix}=-\alpha_d \begin{pmatrix} x_{1,d} \\ x_{2,d} \\ \vdots \\ x_{d-1,d} \end{pmatrix}. $$ Note that~\eqref{eq:det} and the assumption $\alpha_{\ell}\neq 0$ implies $\alpha_{d}\neq 0$. Let $Y^{(d)}$ denote the adjoint matrix of $X^{(d)}$, thus $$ X^{(d)} Y^{(d)}=\det X^{(d)} I_{d-1}, $$ where $I_{d-1}$ is the $(d-1)\times (d-1)$-identity matrix. Hence, the above implies $$ \det X^{(d)}\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_{d-1} \end{pmatrix}=-\alpha_d Y^{(d)}\begin{pmatrix} x_{1,d} \\ x_{2,d} \\ \vdots \\ x_{d-1,d} \end{pmatrix}. $$ By Hadamard's inequality and the definition of $H$, $$ \det X^{(d)}\ll H^{d}, $$ and $$ Y^{(d)}\begin{pmatrix} x_{1,d} \\ x_{2,d} \\ \vdots \\ x_{d-1,d} \end{pmatrix}=\begin{pmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{d-1} \end{pmatrix}, $$ for some integers $y_1,\ldots,y_{d-1} \ll H^{d},$ from which the result follows. \end{proof} \subsection{Tools from additive combinatorics} \label{sec:additivecomb} Our proof of Theorem~\ref{thm:main1} uses Lemma~\ref{lem:TAMS} to reduce to counting solutions to multiplicative equations over $\mathbb{C}$ to which the following result of Chang~\cite[Proposition~2]{Chang} may be applied, see also~\cite[Remark~1]{Chang}. \begin{lemma} \label{lem:chang} For each integer $d\geqslant 1$ there exist a constant $B_d$, depending only on $d$, such that the following holds. Let $\gamma_0,\ldots,\gamma_d\in \mathbb{C}$ and define the set ${\mathcal A}$ by $$ {\mathcal A}=\{ \gamma_0+\gamma_1h_1+\ldots+\gamma_dh_d :~ |h_i|\leqslant H_i\}. $$ For any $\lambda \in \mathbb{C}^{*}$ the number of solutions to $$ a_1a_2=\lambda, \quad a_1,a_2\in {\mathcal A}, $$ is bounded by $\exp\left(B_d\log{|{\mathcal A}|}/\log\log{|{\mathcal A}|}\right)$. \end{lemma} \section{An iterative inequality} \subsection{Formulation of the result} Our main input for the proof of Theorem~\ref{thm:main1} is the following iterative inequality which combines some ideas of Chang~\cite{Chang} with lattice basis reduction. Note that, as in~\cite{Chang}, it is not necessary to assume our generalised arithmetic progression is proper. Recall that for ${\mathcal A},{\mathcal B}\subseteq \mathbb{F}_p$ and $\lambda\in \mathbb{F}_p$ we define $I_p({\mathcal A},{\mathcal B},\lambda)$ by~\eqref{eq:IpA}. We also recall that an integer $n$ is called $y$-smooth if all prime divisors $p$ of $n$ satisfy $p \leqslant y$. \begin{lemma} \label{lem:iter} Let $H$, $d$, $e$ be positive integers with $e\leqslant d$ and let $H$ be sufficiently large. 
There exists a constant $B_d$ depending only on $d$, and an integer $Z_{d,e}$ which \begin{itemize} \item[(i)] is $O(H^{1/\gamma_{d+e+1}})$-smooth with $\gamma_{d+e+1}$ given by~\eqref{eq:gamma}, \item[(ii)] is divisible by all primes $p \leqslant H^{d+e+o(1)}$, \item[(iii)] satisfies $$ \log{Z_{d,e}}\ll H^{(d+e)(d+1)(e+1)} \log{H}, $$ \end{itemize} such that for any prime $p \nmid Z_{d,e}$ the following holds. Let $\lambda\in \overline\mathbb{F}_p^*$ and ${\mathcal A},{\mathcal B}\subseteq \overline\mathbb{F}_p$ generalised arithmetic progressions of the form \begin{equation} \label{eq:Aiter} {\mathcal A}=\{ \alpha_0+\alpha_1h_1+\ldots +\alpha_d h_d :~ |h_i|\leqslant H, \ i =1, \ldots, d\}, \end{equation} and \begin{equation} \label{eq:Biter} {\mathcal B}=\{ \beta_0+\beta_1j_1+\ldots +\beta_e j_e :~ |j_i|\leqslant H, \ i =1, \ldots, e\}, \end{equation} with $d,e\geqslant 2$ and $$\alpha_1,\ldots,\alpha_d,\beta_1,\ldots,\beta_e \in \overline\mathbb{F}_p^*. $$ There exists a constant $\widetilde{C}_{d}$ depending only on $d$ and $e$, integers $\widetilde d$ and $\te$ satisfying $$ \widetilde d \leqslant d, \qquad \te \leqslant e, \qquad \widetilde d+\te<d+e, $$ generalised arithmetic progressions $\tcA,\tcB$ of the form \begin{align*} & \tcA=\{ \widetilde \alpha_0+\widetilde \alpha_1h_1+\ldots +\widetilde \alpha_{\widetilde d} h_{\widetilde d} :~ |h_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \widetilde d\},\\ & \tcB=\{ \tbeta_0+ \tbeta_1j_1+\ldots + \tbeta_{\te} j_{\te} :~ |j_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \te \}, \end{align*} with $$\widetilde \alpha_1 \ldots,\widetilde \alpha_d, \tbeta_1 \ldots \tbeta_e \in \overline\mathbb{F}_p^{*},$$ and some $\mu \in \overline\mathbb{F}_p^*$ such that $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(B_d \log{H}/\log\log{H}\right)I_p(\tcA,\tcB,\mu ). $$ \end{lemma} We split the proof of Lemma~\ref{lem:iter} in a series of steps. \subsection{Elimination undesired primes} We first denote \begin{equation} \label{eq:Z0def} Z_0 =\prod_{p\leqslant C_d H^{d}}p \end{equation} for an appropriately large constant $C_d$, which depends only on $d$. We now fix $p \nmid Z_0$, thus \begin{equation} \label{eq:large p} p > C_d H^{d}. \end{equation} We first construct the integer $Z_{d,e}$. For $\vec{h},\vec{h}_0 \in \mathbb{Z}^{d}$ and $\vec{j},\vec{j}_0 \in \mathbb{Z}^{e}$ define the polynomial \begin{equation} \begin{split} \label{eq:Phhjj} &P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}\(\vec{X},\vec{Y} \)\\ &\qquad =(X_0+X_1h_{0,1}+\ldots+X_d h_{0,d})(Y_0+Y_1 j_{0,1}+\ldots+Y_e j_{0,e}) \\ & \qquad \qquad -(X_0+X_1h_1+\ldots+X_d h_d)(Y_0+Y_1 j_{1}+\ldots+Y_e j_{e}). \end{split} \end{equation} We may identify the polynomial $P_{\vec{h},\vec{h}_0 \vec{j},\vec{j}_0}(\vec{X},\vec{Y} )$ with a point in the vector space $\mathbb{C}^{\Delta}$ where $$ \Delta = (d+1)(e+1)-1 = de+d +e, $$ which is formed by its coefficients. Suppose ${\mathcal M}\subseteq \mathbb{Z}^{d}\times \mathbb{Z}^{e}$ satisfies $|{\mathcal M}|\leqslant \Delta $ and the set $$\{P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X},\vec{Y} ) :~ (\vec{h},\vec{j})\in {\mathcal M}\},$$ is linearly independent over $\mathbb{C}$. 
Let $M\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ denote the $|{\mathcal M}|\times \Delta$ matrix whose rows correspond to coefficients of the polynomials $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X},\vec{Y} )$ with $(\vec{h},\vec{j})\in {\mathcal M}.$ Define \begin{equation} \label{eq:Z1def} Z_1\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)=\det M_0\(\vec{h}_0,\vec{j}_0,{\mathcal M}\), \end{equation} where $M_0\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ is a $|{\mathcal M}|\times |{\mathcal M}|$ submatrix of $M\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ with nonzero determinant. If $$\vec{h}_0\in [-H,H]^{d}, \qquad \vec{j}_0\in [-H,H]^{e}, \qquad {\mathcal M}\subseteq [-H,H]^{d}\times [-H,H]^{e} $$ then for each $(\vec{h},\vec{j})\in {\mathcal M}$ the polynomial $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}$ has height at most $2\log{H}+O(1)$. Clearly there are \begin{equation} \label{eq:Choices} \begin{split} W = (2H+1)^d \cdot (2H+1)^e \cdot & \sum_{r=1}^{\Delta } \binom{(2H+1)^{d+e}}{r}\\ & \ll H^{d+e + \Delta (d+e)} = H^{(d+e)(\Delta +1)} \end{split} \end{equation} choices for the above triple $\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$. By Hadamard's inequality \begin{equation} \label{eq:Z1b} Z_1\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)\ll H^{2|{\mathcal M}|}\ll H^{2\Delta }. \end{equation} Define \begin{equation} \label{eq:Z1} Z_1=\prod_{\substack{\vec{h}_0\in [-H,H]^{d} \\ \vec{j}_0\in [-H,H]^{e} \\ {\mathcal M}\subseteq [-H,H]^{d}\times [-H,H]^{e} \\ |{\mathcal M}|\leqslant \Delta }}Z_1\(\vec{h}_0,\vec{j}_0,{\mathcal M}\), \end{equation} so, recalling~\eqref{eq:Choices} and~\eqref{eq:Z1b}, we see that $$ \log{Z_1}\ll W \log H \ll H^{(d+e)(\Delta +1)} \log{H}, $$ and that $Z_1$ is $O(H^{2\Delta })$-smooth. For each $\vec{h}_0,\vec{j}_0,{\mathcal M}$ as above, let $V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ denote the variety \begin{align*} V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)& =\bigcap_{(\vec{h}, \vec{j})\in {\mathcal M}}\left\{ (\vec{x},\vec{y})\in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1} :~ P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{x},\vec{y})= 0 \right \} \\ & \qquad \qquad \quad \bigcap \left\{ (\vec{x},\vec{y})\in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1} :~ x_1=1,~ y_1=1 \right \}, \end{align*} and let $V_p\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ be the reduction of $V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ modulo $p$: \begin{align*} V_p\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)& =\bigcap_{(\vec{h}, \vec{j})\in {\mathcal M}}\left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}_p^{d+1}\times \overline \mathbb{F}_p^{e+1} :~ P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{x},\vec{y})= 0 \right \} \\ & \qquad \qquad \quad \bigcap \left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}_p^{d+1}\times \overline \mathbb{F}_p^{e+1} :~ x_1=1,~ y_1=1 \right \}. \end{align*} If $|V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)|=\infty$ then define $$Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)=1.$$ Otherwise, that is, if $|V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)|<\infty$, by Lemma~\ref{lem:TAMS} there exists a positive integer $Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ satisfying $$ Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)\ll H^{1/\gamma_{d+e+1}}, $$ such that for each prime $p$ not dividing $Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$, we have $$|V\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)|=|V_p\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)|.$$ Denote \begin{equation} \label{eq:Z2} Z_2=\prod_{\substack{\vec{h}_0\in [-H,H]^{d} \\ \vec{j}_0\in [-H,H]^{e} \\ {\mathcal M}\subseteq [-H,H]^{d}\times [-H,H]^{e} \\ |{\mathcal M}|\leqslant \Delta }}Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\). 
\end{equation} With $Z_1$ and $Z_2$ as in~\eqref{eq:Z1} and~\eqref{eq:Z2} we define $$ Z_{d,e}=Z_0Z_1 Z_2. $$ Since $\gamma_{d+e+1} \leqslant \Delta^{-1}$ we see that $Z_{d,e}$ is $O(H^{1/\gamma_{d+e+1}})$-smooth and satisfies $$ \log{Z_{d,e}}\ll H^{(d+e)(\Delta +1 )} \log{H}. $$ From now on we only consider primes $p \nmid Z_{d,e}$. \subsection{Local to global lifting of rational points on some varieties} Fix some prime $p$ not dividing $Z_{d,e}$. With ${\mathcal A},{\mathcal B}$ as in~\eqref{eq:Aiter} and~\eqref{eq:Biter}, choose \begin{align*} & {\mathcal H} \subseteq [-H,H]^{d}\cap \mathbb{Z}^{d}, \\ & {\mathcal J}\subseteq [-H,H]^{e}\cap \mathbb{Z}^{e}, \end{align*} such that the points \begin{align*} & \alpha_0+h_1\alpha_1+\ldots+h_d\alpha_d, \quad (h_1,\ldots,h_d)\in {\mathcal H},\\ & \beta_0+j_1\beta_1+\ldots+j_e\beta_e, \quad (j_1,\ldots,j_e)\in {\mathcal J}, \end{align*} are distinct modulo $p$ and for each $a\in {\mathcal A}$ there exists some integer vector $(h_1,\ldots,h_d)\in {\mathcal H}$ such that $$a=\alpha_0+h_1\alpha_1+\ldots+h_d\alpha_d$$ and for each $b\in {\mathcal B}$ there exists some $(j_1,\ldots,j_e)\in {\mathcal J}$ such that $$b=\beta_0+j_1\beta_1+\ldots+j_e\beta_e.$$ Write $$ \vec{h} =(h_1,\ldots,h_d) \qquad\mbox{and}\qquad \vec{j}=(j_1,\ldots,j_e), $$ so that $I_p({\mathcal A},{\mathcal B},\lambda)$ is bounded by the number of solutions to \begin{equation} \label{eq:eqn} (\alpha_0+\alpha_1h_1+\ldots+\alpha_d h_d)(\beta_0+\beta_1 j_{1}+\ldots+\beta_e j_{e})\equiv \lambda \mod{p}, \end{equation} with $\vec{h}\in {\mathcal H}$ and $\vec{j}\in {\mathcal J}$. Dividing both sides of~\eqref{eq:eqn} by $\alpha_1\beta_1$ and modifying $\alpha_0,\ldots,\alpha_d,\beta_0,\ldots,\beta_e,\lambda$ if necessary, we may assume \begin{equation} \label{eq:alpha1} \alpha_1=\beta_1=1. \end{equation} This reduction allows for a convenient application of Lemma~\ref{lem:TAMS}. In what follows, we will construct a variety over $\overline \mathbb{F}_p$ which contains the point $(\alpha_0,\dots,\alpha_d,\beta_0,\dots,\beta_d)$ and the assumption that $\alpha_1=\beta_1=1$ allows us to obtain a nonzero point in the corresponding variety over $\mathbb{C}$ after applying Lemma~\ref{lem:TAMS}. Let ${\mathcal K}\subseteq {\mathcal H}\times {\mathcal J}$ denote the set $$ {\mathcal K}=\{ \(\vec{h}, \vec{j}\)\in {\mathcal H}\times {\mathcal J} :~ \vec{h}, \vec{j} \ \text{satisfy~\eqref{eq:eqn}}\}, $$ so that \begin{equation} \label{eq:IABK} I_p({\mathcal A},{\mathcal B},\lambda)=|{\mathcal K}|. \end{equation} Since we may assume ${\mathcal K}\neq \emptyset$, fix some $(\vec{h}_0, \vec{j}_0)\in {\mathcal K}$ and for each $(\vec{h}, \vec{j})\in {\mathcal K}$ consider the polynomial $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}\(\vec{X},\vec{Y} \)$, given by~\eqref{eq:Phhjj}. Clearly for $(\vec{h}_0, \vec{j}_0) \ne (\vec{h}, \vec{j})$ the polynomial $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}\(\vec{X},\vec{Y} \)$ is not identical to zero over $\mathbb{C}$. Indeed, it is enough to consider the specialisations $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}\(\(1, 0\ldots, 0\),\vec{Y} \)$ and $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X},\(1, 0\ldots, 0\))$ to see this. Furthermore, if $p > 2H$ (which is guaranteed by our assumption) then $(\vec{h}_0, \vec{j}_0) \ne (\vec{h}, \vec{j})$ implies $(\vec{h}_0, \vec{j}_0) \not \equiv (\vec{h}, \vec{j}) \mod p$ and we see that $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X},\vec{Y} )$ is also not identical to zero over $\overline \mathbb{F}_p$. 
Let $V_p\subseteq \overline \mathbb{F}^{d+e+2}_p$ denote the variety \begin{align*} V_p&=\bigcap_{(\vec{h},\vec{j})\in {\mathcal K}}\left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}^{d+1}_p\times \overline\mathbb{F}^{e+1}_p :~ P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{x},\vec{y})= 0 \right \} \\ & \qquad \qquad \qquad \quad \bigcap \left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}^{d+1}_p\times \overline\mathbb{F}^{e+1}_p :~ x_1=1, ~ y_1=1 \right \}. \end{align*} Let ${\mathcal M} \subseteq {\mathcal K}$ be a maximal set of $(\vec{h}, \vec{j})\in {\mathcal K}$, such that the polynomials $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X}, \vec{Y} )$ are linearly independent over $\mathbb{C}$. With $Z_1\(\vec{h}_0,\vec{j}_0,{\mathcal M}\)$ defined as in~\eqref{eq:Z1def}, with respect to such ${\mathcal K}$, since $$p \nmid Z_1\(\vec{h}_0,\vec{j}_0,{\mathcal M}\) $$ we conclude that ${\mathcal M} \subseteq {\mathcal K}$ is also a maximal set such that the polynomials $P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{X}, \vec{Y} )$ are linearly independent over $\overline \mathbb{F}_p$. Hence \begin{align*} V_p&=\bigcap_{(\vec{h},\vec{j})\in {\mathcal M}}\left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}^{d+1}_p\times \overline\mathbb{F}^{e+1}_p :~ P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{x},\vec{y})= 0 \right \} \\ & \qquad \qquad \qquad \quad \bigcap \left\{ (\vec{x},\vec{y})\in \overline \mathbb{F}^{d+1}_p\times \overline\mathbb{F}^{e+1}_p :~ x_1=1, ~ y_1=1 \right \} \end{align*} and $1 \leqslant |{\mathcal M}|\leqslant \Delta$. By definition of ${\mathcal K}$ and~\eqref{eq:alpha1}, we have \begin{equation} \label{eq:alphain} (\alpha_0,\ldots,\alpha_d,\beta_0,\ldots,\beta_e)\in V_p. \end{equation} Let $V\subseteq \mathbb{C}^{d+e+2}$ denote the variety \begin{equation} \begin{split} \label{eq:Vdef} V& =\bigcap_{(\vec{h}, \vec{j})\in {\mathcal M}}\left\{ (\vec{x},\vec{y})\in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1} :~ P_{\vec{h},\vec{h}_0, \vec{j},\vec{j}_0}(\vec{x},\vec{y})= 0 \right \} \\ & \qquad \qquad \quad \bigcap \left\{ (\vec{x},\vec{y})\in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1} :~ x_1=1,~ y_1=1 \right \}. \end{split} \end{equation} We next show there exists some $(\brho,\btau)=(\rho_0,\rho_1,\ldots,\rho_d,\tau_0,\tau_1,\ldots,\tau_e)\in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1}$ satisfying \begin{equation} \label{eq:rho} (\brho,\btau)\in V, \quad (\rho_1,\ldots,\rho_d)\neq \mathbf{0}, \quad (\tau_1,\ldots,\tau_e) \neq \mathbf{0}. \end{equation} Certainly it is enough to show that \begin{equation} \label{eq:V nonempty} |V|\geqslant 1, \end{equation} as the non-vanishing conditions in~\eqref{eq:rho} are obvious because any point $(\brho,\btau)\in V$ satisfies $\rho_1=\tau_1=1$. We may assume \begin{equation} \label{eq:Vbounded} |V|<\infty, \end{equation} since otherwise~\eqref{eq:rho} is trivial. We next apply Lemma~\ref{lem:TAMS}. The assumption~\eqref{eq:Vbounded} and that $$p \nmid Z_2\(\vec{h}_0,\vec{j}_0,{\mathcal M}\) $$ implies \begin{equation} \label{eq:VVp} |V|=|V_p|. \end{equation} We see from~\eqref{eq:alphain} that $$ |V_p|\geqslant 1. $$ Combining the above with~\eqref{eq:VVp}, we obtain~\eqref{eq:V nonempty}. Hence there exists some $(\brho,\btau) \in \mathbb{C}^{d+1}\times \mathbb{C}^{e+1}$ satisfying~\eqref{eq:rho}. Note that from~\eqref{eq:Vdef} we have \begin{equation} \label{eq:rho1} \rho_1=\tau_1=1. 
\end{equation} \subsection{Reduction to counting solutions to a multiplicative congruence on a complex line} We see that any solution to~\eqref{eq:eqn} satisfies \begin{equation} \label{eq:rhorhorho} (\rho_0+\rho_1h_1+\ldots+\rho_d h_d)(\tau_0+\tau_1 j_{1}+\ldots+\tau_e j_{e})=\vartheta, \end{equation} where $$ \vartheta=(\rho_0+\rho_1h_{0,1}+\ldots+\rho_d h_{0,d})(\tau_0+\tau_1 j_{0,1}+\ldots+\tau_e j_{0,e}). $$ Consider the following two cases \begin{itemize} \item If $\vartheta=0$, then either $$ \rho_0+\rho_1h_1+\ldots+\rho_d h_d=0, $$ or $$ \tau_0+\tau_1j_1+\ldots+\tau_ e j_e=0. $$ \item If $\vartheta\neq 0$, then by Lemma~\ref{lem:chang}, there exists a set of $$\exp\left(B_d\log{H}/\log\log{H}\right)$$ pairs $\Omega=\{(\omega_1, \omega_2)\}$ such that any solution to~\eqref{eq:rhorhorho} satisfies $$ \rho_0+\rho_1h_1+\ldots+\rho_d h_d=\omega_1, \quad \tau_0+\tau_1j_1+\ldots+\tau_e j_e=\omega_2, $$ for some $(\omega_1, \omega_2)\in \Omega$. \end{itemize} Taking a maximum over the above two cases, we see that there exists some $\xi \in \mathbb{C}$ and some $i=1,2$ such that $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(B_d\log{H}/\log\log{H}\right)J_i({\mathcal A},{\mathcal B},\lambda), $$ where $J_1({\mathcal A},{\mathcal B},\lambda)$ counts the number of solutions to \begin{equation} \label{eq:eqn1} (\alpha_0+\alpha_1h_1+\ldots+\alpha_d h_d)(\beta_0+\beta_1j_1+\ldots+\beta_e j_e)\equiv \lambda \mod{p}, \end{equation} and \begin{equation} \label{eq:zz1} \rho_1h_1+\ldots+\rho_dh_d=\xi, \end{equation} with variables $\vec{h}\in {\mathcal H},\vec{j}\in {\mathcal J}$ and $J_2({\mathcal A},{\mathcal B},\lambda)$ counts the number of solutions to~\eqref{eq:eqn1} and $$ \tau_1j_1+\ldots+\tau_ej_e=\xi, $$ with variables $\vec{h}\in {\mathcal H}$, $\vec{j}\in {\mathcal J}$. Suppose first that \begin{equation} \label{eq:IpJp} I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(B_d\log{H}/\log\log{H}\right)J_1({\mathcal A},{\mathcal B},\lambda), \end{equation} the case \begin{equation} \label{eq:IpJp1} I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(B_d\log{H}/\log\log{H}\right)J_2({\mathcal A},{\mathcal B},\lambda), \end{equation} may be treated with a similar argument which we indicate at the end of the proof. \subsection{Application of geometry of numbers to derive the desired inequality} Let ${\mathcal L}$ denote the lattice $$ {\mathcal L}=\left\{ (n_1,\ldots,n_d)\in \mathbb{Z}^{d} :~ \rho_1 n_1+\ldots+\rho_d n_d=0 \right\}, $$ and $D$ the convex body $$ D=\{ (n_1,\ldots,n_d) :~ |n_i|\leqslant H\}. $$ Assuming $J_1({\mathcal A},{\mathcal B},\lambda)\neq 0$, there exists some $\vec{h}^{*}=(h^{*}_1,\ldots,h^{*}_d) \in D\cap \mathbb{Z}^d$ such that if $\vec{h}=(h_1,\ldots,h_d)$ satisfies~\eqref{eq:zz1} then \begin{equation} \label{eq:h123} \vec{h}-\vec{h}^{*}\in {\mathcal L}\cap 2D. \end{equation} By~\eqref{eq:rho1} we have $$ \dim {\mathcal L}<d. $$ Hence we may consider two cases, either \begin{equation} \label{eq:case11} \dim ({\mathcal L}\cap 2D)<d-1, \end{equation} or \begin{equation} \label{eq:case12} \dim ({\mathcal L}\cap 2D)=d-1. \end{equation} Suppose that we have~\eqref{eq:case11}. Let ${\mathcal L}^{*}$ denote the lattice generated by ${\mathcal L}\cap 2D$, so that $\dim {\mathcal L}^{*}=r$ for some $r<d-1$. 
By Lemma~\ref{lem:HB} there exists a basis $\blambda_1,\ldots,\blambda_{r}$ for ${\mathcal L}^{*}$ such that each $\vec{h}$ satisfying~\eqref{eq:h123} may be expressed in the form \begin{equation} \label{eq:glattice1} \vec{h}-\vec{h}^{*}=k_1\blambda_1+\ldots+k_r \blambda_r, \end{equation} where from~\eqref{eq:h123} $$ k_j \ll \frac{\|\vec{h}-\vec{h}^{*}\|}{\|\blambda_j\|} \ll H, \qquad j=1, \ldots, r. $$ Substituting~\eqref{eq:glattice1} into~\eqref{eq:eqn1}, there exist $\widetilde \alpha_0,\ldots,\widetilde \alpha_r\in \mathbb{F}_p$ such that for any $\vec{h} \in {\mathcal H}$, $\vec{j}\in {\mathcal J}$ satisfying~\eqref{eq:eqn1} and~\eqref{eq:zz1} there exist $\ell_1,\ldots,\ell_r$ such that $$ (\widetilde \alpha_0+\widetilde \alpha_1\ell_1+\ldots + \widetilde \alpha_r\ell_r)(\beta_0 +\beta_1j_1+\ldots +\beta_ej_e)\equiv \lambda \mod{p}, $$ and $$ \widetilde \alpha_0+\widetilde \alpha_1\ell_1+\ldots + \widetilde \alpha_r\ell_r= \alpha_0+\alpha_1h_1+\ldots+\alpha_dh_d. $$ Let $\widetilde d=r$ and let $\tcA$ denote the generalized arithmetic progression $$ \tcA=\left \{ \widetilde \alpha_0+\widetilde \alpha_1 \widetilde h_1+\ldots +\widetilde \alpha_{\widetilde d} \widetilde h_{\widetilde d} :~ \left|\widetilde h_i\right |\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \widetilde d\right \}. $$ From the construction of ${\mathcal H}$, for each $a\in {\mathcal A}$ there exists a unique $\vec{h}\in {\mathcal H}$ such that $$ a=\alpha_0+\alpha_1h_1+\ldots+\alpha_dh_d. $$ For each $\vec{h}\in {\mathcal H}$, there exists some $\widetilde{\vec{h}}$ satisfying $$ \left|\widetilde h_i\right|\leqslant \widetilde{C}_{d} H, \quad i=1,\ldots, \widetilde d,$$ such that $$ \alpha_0+\alpha_1h_1+\ldots+\alpha_d h_d=\widetilde \alpha_0+\widetilde \alpha_1\widetilde h_1+\ldots +\widetilde \alpha_{\widetilde d} \widetilde h_{\widetilde d}.$$ The above implies we may choose a set $$\widetilde {\mathcal H}\subseteq [-\widetilde{C}_{d} H,\widetilde{C}_{d} H]^{\widetilde d},$$ such that the points $$\widetilde \alpha_0+\widetilde \alpha_1\widetilde h_1+\ldots +\widetilde \alpha_{\widetilde d}\widetilde h_{\widetilde d}, \quad \widetilde{\vec{h}}\in \widetilde{\mathcal H},$$ are distinct and for each $\vec{h}\in {\mathcal H}$ satisfying~\eqref{eq:eqn1} and~\eqref{eq:zz1} there exists some $\widetilde{\vec{h}}\in \widetilde{\mathcal H}$ such that $$ \alpha_0+\alpha_1h_1+\ldots+\alpha_d h_d=\widetilde \alpha_0+\widetilde \alpha_1\widetilde h_1+\ldots +\widetilde \alpha_{\widetilde d} \widetilde h_{\widetilde d}.$$ The above combined with~\eqref{eq:IABK} implies that $$J_1({\mathcal A},{\mathcal B},\lambda)\leqslant I_p(\tcA,{\mathcal B},\lambda).$$ Hence, recalling~\eqref{eq:IpJp}, we obtain the desired result provided~\eqref{eq:case11} holds. If we are in the case of~\eqref{eq:case12}, then by Lemma~\ref{lem:linear} there exist integers $a_i$ and $b_i$ satisfying $$ \frac{\rho_i}{\rho_d}=\frac{a_i}{b_i}, \quad \gcd(a_i,b_i)=1, \quad a_i, b_i \ll H^{d}, \qquad 1\leqslant i \leqslant d, $$ where by symmetry we assume $j=d$ and also use that $\rho_1=1$ in our application of Lemma~\ref{lem:linear}. By~\eqref{eq:large p}, provided that $C_d$ is large enough, we see that $$ \text{if} \quad a_i,b_i\neq 0 \quad \text{then} \quad a_i,b_i\not \equiv 0 \mod{p}. 
$$ By~\eqref{eq:zz1} and~\eqref{eq:h123} $$ h_d-h_d^{*}=\frac{\rho_1}{\rho_d}(h_1^{*}-h_1)+\ldots+\frac{\rho_{d-1}}{\rho_d}(h_{d-1}^{*}-h_{d-1}), $$ which combined with the above implies \begin{equation} \label{eq:gdcase2} h_d\equiv h_d^{*}-a_1\overline{b_1}(h_1-h_1^{*})-\ldots - a_{d-1}\overline{b_{d-1}}(h_{d-1}-h_{d-1}^{*}) \mod{p}, \end{equation} where $\overline{x}$ denotes the multiplicative inverse of $x$ modulo $p$. As before, substituting~\eqref{eq:gdcase2} into~\eqref{eq:eqn1}, there exists a generalized arithmetic progression $$ \tcA=\{ \widetilde \alpha_0+\widetilde \alpha_1\ell_1+\ldots +\widetilde \alpha_{\widetilde d} \ell_{\widetilde d} :~ |\ell_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \widetilde d \}, $$ with $\widetilde d<d$ such that $$ J_1({\mathcal A},{\mathcal B},\lambda)\leqslant I_p(\tcA,{\mathcal B},\lambda), $$ and the result follows combining this with~\eqref{eq:IpJp}. In the case of~\eqref{eq:IpJp1}, we apply a similar argument as before, except with the lattice $$ {\mathcal L}=\left\{ (n_1,\ldots,n_e)\in \mathbb{Z}^{e} :~ \tau_1 n_1+\ldots+\tau_e n_e=0 \right\}, $$ and convex body $$ D=\{ (n_1,\ldots,n_e) :~ |n_i|\leqslant H, \ i =1, \ldots, e\}, $$ to obtain $$ J_2({\mathcal A},{\mathcal B},\lambda)\leqslant I_p({\mathcal A},\tcB,\lambda), $$ for some generalized arithmetic progression $\tcB$ of the form $$ \tcB=\{ \tbeta_0+\tbeta_1h_1+\ldots +\tbeta_{\te} h_{\te} :~ |h_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \te\}, $$ with $\te<e$. Combining this with~\eqref{eq:IpJp1} we obtain the desired inequality under the assumption~\eqref{eq:large p}. To conclude the proof it remains to verify~(ii), about the divisibility of $Z_{d,e}$. \subsection{Prime divisors of $Z_{d,e}$} We now show that $Z_{d,e}$ is divisible by all primes $p \leqslant H^{d+e+o(1)}$. Fix some small $\varepsilon>0$ and consider generalised arithmetic progressions of the form~\eqref{eq:Aiter} and~\eqref{eq:Biter}. We next use the Dirichlet pigeon-hole principle to show that the statement of Lemma~\ref{lem:iter} fails for any prime $H^d < p \leqslant H^{d+e-\varepsilon}$ provided that $H$ is large enough. This is sufficient from~\eqref{eq:Z0def} and the fact that $Z_0\mid Z_{d,e}$ (provided $C_d>1$). Indeed, since the value of $Z_{d,e}$ does not depend on the generalised arithmetic progressions ${\mathcal A}_0$ and ${\mathcal B}_0$, we can choose $$ \alpha_0 = 0 \qquad\mbox{and}\qquad \alpha_i = (2H+1)^{i-1}, \quad i =1, \ldots, d, $$ and $$ \beta_0 = 0 \qquad\mbox{and}\qquad \beta_j = (2H+1)^{j-1}, \quad j =1, \ldots, e. $$ Hence ${\mathcal A}_0$ and ${\mathcal B}_0$ are proper and in fact contain $(2H+1)^d$ and $(2H+1)^e$ distinct residues modulo $p$, respectively. Next, there are $ (2H+1)^{d+e}$ products in $\overline{\mathbb{F}}_p$ $$ \(\alpha_0+\alpha_1h_1+\ldots +\alpha_d h_d\) \(\beta_0+\beta_1j_1+\ldots +\beta_e j_e\) $$ over all choices of $|h_i|\leqslant H$, $i =1, \ldots, d$ and $|j_i|\leqslant H$, $i =1, \ldots, e$, except for at most $O\(H^{d+e-1}\)$ choices for which this product is divisible by $p$. Hence, there exists a non-zero residue class $\lambda_0$ modulo $p$ into which at least $$ (2H+1) ^{d+e} \(1+ O\(H^{-1}\)\)/p \gg H^{\varepsilon} $$ of such products fall, thus giving $$ I_p({\mathcal A}_0,{\mathcal B}_0,\lambda_0) \gg H^{\varepsilon} $$ contradicting the assumed bound. Hence we have $p \mid Z_{d,e}$ for such primes. This also implies that the assumption~\eqref{eq:large p} holds for any prime $p\nmid Z_{d,e}$. 
\section{Proofs of results on factorisation in generalised arithmetic progressions} \subsection{Proof of Theorem~\ref{thm:main1}} We proceed by induction on $d+e$ with base case $$d+e=1.$$ In this case, there exists some $\lambda_0\in \mathbb{F}_p$ such that $$I_{p}({\mathcal A},{\mathcal B},\lambda)=|\{ 1\leqslant h\leqslant H \ : \ h=\lambda_0 \}|,$$ for which there is at most $1$ solution provided $H\leqslant p$. Hence the result follows by taking $$Z=\prod_{p\leqslant H}p.$$ Let $C_*(\ell) $ be sufficiently large depending only on the implied constants in Lemma~\ref{lem:iter}. We next set up some notation related to our induction hypothesis. Let $H\gg 1,$ $\ell\geqslant 2$ and for each pair of positive integers $d,e$ satisfying $$e\leqslant d \qquad\mbox{and}\qquad d+e \leqslant \ell,$$ let $Z_{d,e}$ be as in Lemma~\ref{lem:iter}. Define $$ Z_{\ell}=\prod_{\substack{0<e\leqslant d \\ d+e\leqslant \ell \\}}Z_{d,e}, $$ so that $Z_{\ell}$ is $O(H^{1/\gamma_{\ell}})$-smooth and satisfies $$ \log{Z_{\ell}}\ll \log{H}\sum_{3\leqslant j \leqslant \ell}\sum_{\substack{0<e\leqslant d \\ d+e=j \\}}H^{j(d+1)(e+1)}\ll H^{(\ell-1)(\ell+2)^2/4} \log H $$ where we have used $$ j(d+1)(e+1) \leqslant j \(\frac{d+e+2}{2}\)^2 \leqslant \ell \frac{(\ell+2)^2}{4}. $$ We formulate our induction hypothesis as follows. There exists a constant $b_{\ell-1}$ such that for any positive integers $e\leqslant d$ satisfying $d+e \leqslant \ell-1$ and prime \begin{equation} \label{eq:p-large} p \geqslant C_*(\ell) H^{\ell-1} \end{equation} not dividing $Z_{\ell-1}$ (which by Lemma~\ref{lem:iter}~(ii) holds for any $p \nmid Z_{\ell-1}$), for any $\lambda \in \mathbb{F}_p$ and generalized arithmetic progressions \begin{align*} {\mathcal A}&=\left \{\alpha_0+ \alpha_1h_1+\ldots+\alpha_dh_d :~ 1\leqslant h_i\leqslant H, \ i =1, \ldots, d\right \},\\ {\mathcal B}&=\left \{\beta_0+ \beta_1j_1+\ldots+\beta_ej_e :~ 1\leqslant j_i\leqslant H, \ i =1, \ldots, e\right \}, \end{align*} we have $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(b_{\ell-1}\log{(|{\mathcal A}||{\mathcal B}|)}/\log \log{(|{\mathcal A}||{\mathcal B}|)}\right). $$ Let $e\leqslant d$ be positive integers satisfying $$d+e=\ell$$ and $H\gg 1$. By Lemma~\ref{lem:iter}, for any prime $p \nmid Z_\ell$, and thus satisfying $$ p\gg H^{d+e} $$ the following holds. Let $\lambda\in \overline\mathbb{F}_p^*$ and ${\mathcal A},{\mathcal B}\subseteq \overline\mathbb{F}_p$ generalised arithmetic progressions as in~\eqref{eq:Aiter} and satisfying \eqref{eq:Biter} with $d,e\geqslant 2$ and $$\alpha_1,\ldots,\alpha_d,\beta_1,\ldots,\beta_e \in \overline \mathbb{F}_p^{*}. $$ There exists a constant $\widetilde{C}_{d}$ depending only on $d,e$, integers $\widetilde d$ and $\te$ satisfying $$ \widetilde d \leqslant d, \qquad \te \leqslant e, \qquad \widetilde d+\te<d+e, $$ generalised arithmetic progressions $\tcA,\tcB$ of the form \begin{align*} & \tcA=\{ \widetilde \alpha_0+\widetilde \alpha_1h_1+\ldots +\widetilde \alpha_{\widetilde d} h_{\widetilde d} :~ |h_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \widetilde d\},\\ & \tcB=\{ \tbeta_0+ \tbeta_1j_1+\ldots + \tbeta_{\te} j_{\te} :~ |j_i|\leqslant \widetilde{C}_{d} H, \ i =1, \ldots, \te \}, \end{align*} with $$\widetilde \alpha_1 \ldots,\widetilde \alpha_d, \tbeta_1 \ldots \tbeta_e \in \overline \mathbb{F}_p^{*},$$ and some $\mu \in \overline\mathbb{F}_p^*$ such that $$ I_p({\mathcal A},{\mathcal B},\lambda)\leqslant \exp\left(B_d \log{H}/\log\log{H}\right)I_p(\tcA,\tcB,\mu ). 
$$ If $p \nmid Z_{\ell}$ then we obviously have $p \nmid Z_{\ell-1}$ (since $Z_{\ell-1} \mid Z_\ell$) and also $p$ satisfies~\eqref{eq:p-large}. Therefore, by our induction hypothesis (where we can also assume that $C_{d} \geqslant C_*(d+e-1)$) $$ I_p({\mathcal A},{\mathcal B},\lambda) \ll \exp\left((B_d+b_{\ell-1}) \log{H}/\log\log{H}\right). $$ The result now follows by taking $$b_\ell =\max_{d \leqslant \ell} B_d+b_{\ell-1}, \quad Z=Z_{\ell}$$ and noting that $Z_\ell$ is $O(H^{1/\gamma_{\ell+1}})$-smooth and satisfies $$ \log Z_\ell\ll H^{\ell(\ell+2)^2/4} \log H. $$ \section{Proofs of results towards the Erd\H{o}s--Szemer{\'e}di conjecture} \subsection{Proof of Theorem~\ref{thm:main24}} The celebrated theorem of Freiman~\cite{Frei} states that if ${\mathcal A}\subseteq \mathbb{Z}$ is a finite set satisfying $$|{\mathcal A}+{\mathcal A}|\leqslant K|{\mathcal A}|,$$ then there exist constants $b(K)$ and $d(K)$ depending only on $K$, and some generalised arithmetic progression ${\mathcal B}$ of rank $d(K)$ and size $$|{\mathcal B}|\leqslant b(K) |{\mathcal A}|,$$ such that $${\mathcal A}\subseteq {\mathcal B}.$$ The theorem of Freiman~\cite{Frei} has gone through a number of improvements and generalisations to sets from arbitrary abelian groups. A version of this result convenient for our application is due to Cwalina and Schoen~\cite[Theorem~4]{CS}, which states that we may take ${\mathcal B}$ proper, $$ b(K) \leqslant \exp\(c K^4\log(K+2)\) \qquad\mbox{and}\qquad d(K) \leqslant 2K, $$ for some absolute constant $c$ (note the additive group of $\mathbb{F}_p$ has no proper non-trivial subgroups, so only the first alternative of~\cite[Theorem~4]{CS} applies). Thus, using Corollaries~\ref{cor:all-primes} and~\ref{cor:main22} with $H = |{\mathcal A}|$, $d = e = d(K)$, $$ \delta=\gamma_{2d(K)+1}=\frac{1}{(44K+26)2^{12K+8}} \quad \text{and}\quad c_0(K) = C_0(2K), $$ where $C_0(d)$ is as in Corollary~\ref{cor:all-primes} (which we can assume to be monotonically increasing with respect to both $d$ and $e$), we obtain that for each $\lambda \in \mathbb{F}_p^{*}$, the number of solutions to each of the equations $$ a_1a_2 = \lambda , \qquad a_1^{-1}+ a_2^{-1}= \lambda, \qquad a_1^{2}+ a_2^{2}= \lambda , $$ over $\mathbb{F}_p$ with variables $a_1,a_2\in {\mathcal A}$ is $|{\mathcal A}|^{o(1)}$ since we assume that $K$ is fixed, from which the desired result follows. \subsection{Proof of Theorem~\ref{thm:main24-AA}} We follow the proof of Theorem~\ref{thm:main24} but apply Corollary~\ref{cor:almost-all-primes} instead of Corollary~\ref{cor:all-primes}. \end{document}
arXiv
What axioms does ZF have, exactly? While trying to find the list of axioms of ZF on the Web and in literature I noticed that the lists I had found varied quite a bit. Some included the axiom of empty set, while others didn't. That is perfectly understandable - the statement of the axiom is provable from the axiom schema of specification. Some lists also contained the axiom of pairing, while others didn't - I've heard here on MSE that the statement of this axiom is also provable. I was wondering: are there other axioms of ZF, statements of which are also provable, that I don't know of? What is the true commonly accepted list of ZF axioms which doesn't contain any redundant axioms included just for emphasis?
Does it matter what axioms you have? It matters what you can prove from them. And if you show that you can prove one list from the other and vice versa, then it doesn't matter anymore. For brevity, I prefer to think of ZF as the following: Extensionality, power set, union, regularity, replacement schema, infinity, choice. [Note that the replacement schema has two versions, one requires that we add the specification axiom as well; the other proves specification.] – Asaf Karagila♦ Sep 1 '14 at 16:19
@AsafKaragila it's always nice to reduce the number of employed axioms, even though two different lists of axioms may be equivalent. And I also think it was you who told me that the axiom of pairing was not needed :-) – user132181 Sep 1 '14 at 16:22
Well, pairing is provable from replacement + power set + empty set; and empty set is provable from a myriad of axioms (infinity, for example). – Asaf Karagila♦ Sep 1 '14 at 16:27
Of course in my first comment, choice is an explicit addition to $\sf ZF$. :-) – Asaf Karagila♦ Sep 1 '14 at 16:37
@AsafKaragila I thought that asking the question in terms of ZFC instead of ZF would be... redundant (pun intended) :-D – user132181 Sep 1 '14 at 16:43
Here is my preferred list of axioms, they are written in the language of $\in$, and $=$ is a logical symbol. Extensionality. $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$. Two sets are equal if and only if they have the same elements. Union. $\forall x\exists y\forall u(u\in y\leftrightarrow\exists v(v\in x\land u\in v))$. If $x$ is a set, then $\bigcup x$ is a set. Regularity. $\forall x(\exists y(y\in x)\rightarrow\exists y(y\in x\land\forall z(z\in x\rightarrow z\notin y)))$. The $\in$ relation is well-founded. Power set. $\forall x\exists y\forall z(z\in y\leftrightarrow\forall u(u\in z\rightarrow u\in x))$. If $x$ is a set, then $\mathcal P(x)$ is a set. Replacement schema. If $\varphi(x,y,p_1,\ldots,p_n)$ is a formula in the language of set theory, then: $$\forall p_1\ldots\forall p_n\\ \forall u(\forall x(x\in u\rightarrow(\exists y\varphi(x,y,p_1,\ldots,p_n)\rightarrow\exists y(\varphi(x,y,p_1,\ldots,p_n)\land\forall z(\varphi(x,z,p_1,\ldots,p_n)\rightarrow z=y))))\rightarrow\exists v\forall y(y\in v\leftrightarrow\exists x(x\in u\land\varphi(x,y,p_1,\ldots,p_n)))).$$ For any fixed parameters $p_1,\ldots,p_n$ and for every set $u$, if for every $x\in u$ there is at most one $y$ such that $\varphi(x,y,p_1,\ldots,p_n)$ (that is, the formula with the fixed parameters defines a partial function on $u$), then there is some $v$ which is exactly the range of this function. Infinity. 
$$\exists x(\exists y(y\in x\land\forall z(z\notin y))\land\forall u(u\in x\rightarrow\exists v(v\in x\land\forall w(w\in v\leftrightarrow w\in u\lor w=u))))\text{.}$$ There exists a set $x$ which has the empty set as an element, and whenever $y\in x$, then $y\cup\{y\}\in x$ as well. I wrote those purely in the language of $\in$, as you can see, to avoid any claims that I need to use $\subseteq$ or $\mathcal P$ or $\bigcup$. I will now allow myself these additions to the language. From these axioms we can easily: Prove there is an empty set: it is the element of the set guaranteed to exist in the infinity axiom. Prove the pairing axiom: By the power set axiom, $\mathcal P(\varnothing)$ exists, and its power set $\{\varnothing,\{\varnothing\}\}$ exists too. Now consider the formula $\varphi(x,y,a,b,c,d)$ whose content is $$(x=a\land y=c)\lor(x=b\land y=d).$$ Given two sets $u,v$, consider the replacement axiom for $\varphi$ with the parameters: $\varphi(x,y,\varnothing,\mathcal P(\varnothing),u,v)$, and the domain $\mathcal{P(P(\varnothing))}$. Then there is a set which is the range of the function $\varphi$ defines here, which is exactly $\{u,v\}$. Specification schema: Suppose that $\varphi(x,p_1,\ldots,p_n)$ is a formula in the language of set theory, and $A$ is a set which exists. Define $\psi(x,y,p_1,\ldots,p_n)$ to be $\varphi(x,p_1,\ldots,p_n)\land x=y$. Easily we can prove that given any element of $A$ there is at most one element satisfying $\psi(x,y,p_1,\ldots,p_n)$ (with the fixed parameters). And therefore the range of the function defined is $\{x\in A\mid\varphi(x,p_1,\ldots,p_n)\}$ as wanted. And so on and so forth. The choice of axiomatization usually doesn't matter. But it does matter when one has to verify the axioms by hand for one reason or another; then it might be fortuitous to add explicit axioms or it might be better to keep it minimal. Depending on the situation. It is also an important question what axioms you keep, or add, when you consider weakenings of $\sf ZF$. You can remove replacement, but add specification, or perhaps specification for a particular class of formulas; or you can remove extensionality and then the choice whether to use Replacement or Collection schemas really makes a big difference; and so on.
Asaf Karagila♦
I'm extremely grateful for this very detailed answer, thank you very much. – user132181 Sep 1 '14 at 17:39
For rule two I think you meant x instead of z – DanielV Sep 1 '14 at 18:01
@Daniel: Thanks for noticing! – Asaf Karagila♦ Sep 1 '14 at 20:44
In replacement, you presumably intended to say something about $y=z$ at the end of the second line. Also, there's an alternative proof of pairing using infinity and replacement. – Andreas Blass Sep 1 '14 at 23:30
@Andreas: D'oh, of course. – Asaf Karagila♦ Sep 2 '14 at 4:12
CommonCrawl
In acute triangle $ABC$ points $P$ and $Q$ are the feet of the perpendiculars from $C$ to $\overline{AB}$ and from $B$ to $\overline{AC}$, respectively. Line $PQ$ intersects the circumcircle of $\triangle ABC$ in two distinct points, $X$ and $Y$. Suppose $XP=10$, $PQ=25$, and $QY=15$. The value of $AB\cdot AC$ can be written in the form $m\sqrt n$ where $m$ and $n$ are positive integers, and $n$ is not divisible by the square of any prime. Find $m+n$. Let $AP=a$, $AQ=b$, and $\cos\angle A = k$. Therefore $AB= \frac{b}{k}$ and $AC= \frac{a}{k}$. By power of a point, we have $AP\cdot BP=XP\cdot YP$ and $AQ\cdot CQ=YQ\cdot XQ$, which simplify to $400= \frac{ab}{k} - a^2$ and $525= \frac{ab}{k} - b^2$, or $a^2= \frac{ab}{k} - 400$, $b^2= \frac{ab}{k} - 525$. (1) Hence $k= \frac{ab}{a^2+400} = \frac{ab}{b^2+525}$. Let $u=a^2+400=b^2+525$. Then $a=\sqrt{u-400}$, $b=\sqrt{u-525}$, and $k=\frac{\sqrt{(u-400)(u-525)}}{u}$. In triangle $APQ$, by the law of cosines, $25^2= a^2 + b^2 - 2abk$. Plugging in (1), $625= \frac{ab}{k} - 400 + \frac{ab}{k} - 525 -2abk$, or $\frac{ab}{k} - abk =775$. Substituting everything in terms of $u$, $u- \frac{(u-400)(u-525)}{u} =775$. The quadratic terms cancel after simplification, which gives $u=1400$. Plugging back in, $a= \sqrt{1000}$, $b=\sqrt{875}$. Then $AB\cdot AC= \frac{a}{k}\cdot \frac{b}{k} = \frac{ab}{\frac{ab}{u} \cdot\frac{ab}{u} } = \frac{u^2}{ab} = \frac{1400 \cdot 1400}{ \sqrt{ 1000\cdot 875 }} = 560 \sqrt{14}$. So the final answer is $560 + 14 = \boxed{574}$.
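A quick numerical sanity check of the algebra above can be done with the minimal Python sketch below; it assumes only the standard library and the value $u=1400$ derived in the solution.
\begin{verbatim}
from math import sqrt, isclose

XP, PQ, QY = 10, 25, 15
u = 1400                              # u = a^2 + 400 = b^2 + 525
a, b = sqrt(u - 400), sqrt(u - 525)   # a = AP, b = AQ
k = sqrt((u - 400) * (u - 525)) / u   # k = cos(angle A)

# power of the point A:  AP*BP = XP*YP  and  AQ*CQ = YQ*XQ
assert isclose(a * b / k - a**2, XP * (PQ + QY))   # 10 * 40 = 400
assert isclose(a * b / k - b**2, QY * (XP + PQ))   # 15 * 35 = 525

# law of cosines in triangle APQ:  PQ^2 = a^2 + b^2 - 2*a*b*k
assert isclose(a**2 + b**2 - 2 * a * b * k, PQ**2)

# AB*AC = (a/k)*(b/k) = u^2/(a*b) should equal 560*sqrt(14)
assert isclose(u**2 / (a * b), 560 * sqrt(14))
print(560 + 14)   # 574
\end{verbatim}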
Math Dataset
\begin{definition}[Definition:Induced Homomorphism between Localizations of Ring] Let $A$ be a commutative ring with unity. Let $S, T \subseteq A$ be multiplicatively closed subsets. Let $S$ be a subset of the saturation of $T$. The '''induced homomorphism''' between localizations $A_S \to A_T$ is the unique $A$-algebra homomorphism between them. \end{definition}
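For a concrete illustration, one may take for example $A = \mathbb{Z}$, $T = \mathbb{Z} \setminus \{0\}$ and $S = \{2^n : n \geq 0\}$. Both subsets are multiplicatively closed and $S$ lies in the saturation of $T$. Here $A_S \cong \mathbb{Z}[1/2]$ and $A_T \cong \mathbb{Q}$, and the induced homomorphism $A_S \to A_T$ is the unique $A$-algebra homomorphism $\mathbb{Z}[1/2] \to \mathbb{Q}$, namely the inclusion $a/2^n \mapsto a/2^n$.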
ProofWiki
\begin{document} \title{Enumeration of Lifts of Commuting Elements of a Group} \section{Introduction} Frobenius \cite{Fr} computed the number of pairs of commuting elements of a finite group $\Gamma$ to be $ c\,\vert \Gamma\vert $, where $c$ is the number of conjugacy classes in $\Gamma$ and the vertical bars denote the cardinality of a set. This formula was generalized by Mednykh \cite{Me} who computed the number of homomorphisms from the fundamental group $\pi=\pi_1(W)$ of an arbitrary closed connected oriented surface $W$ to $\Gamma$: \begin{equation}\label{FroMed}\vert {{\rm Hom}} (\pi , \Gamma)\vert\, =\, \vert \Gamma \vert \, \sum_{\rho\in {\rm Irr} (\Gamma)} \,\left (\frac{\vert \Gamma \vert}{\dim \, \rho}\right )^{2d-2}\, . \end{equation} Here ${\rm Irr} (\Gamma)$ is the set of equivalence classes of irreducible complex linear representations of $\Gamma$, and $d$ is the genus of $W$. For $W=S^1\times S^1$, this gives the Frobenius formula since $ \pi_1(S^1\times S^1)=\FAT{Z}^2$ and $\vert {\rm Irr} (\Gamma)\vert=c$. Formula (\ref{FroMed}) can be proved by algebraic means, see Jones \cite{Jo}, or deduced from Topological Quantum Field Theory in dimension 2, see Freed and Quinn \cite{FQ}. The Frobenius-Mednykh formula (\ref{FroMed}) was generalized by Turaev \cite{T} as follows. Consider a group epimorphism $q:G'\to G$ with finite kernel $\Gamma$. Fix a homomorphism $g:\pi=\pi_1(W)\to G$. Let ${\rm Hom}_g(\pi, G')$ be the set of all lifts of $g$ to $G'$, that is the set of homomorphisms $g':\pi\to G'$ such that $q g'=g$. Since $\pi$ is finitely generated, the set ${\rm Hom}_g(\pi, G')$ is finite. Note that the action of $G$ on $\Gamma$ by outer automorphisms induces an action of $G$ on ${\rm Irr} (\Gamma)$. The stabilizer of $\rho \in {\rm Irr} (\Gamma)$ under this action is denoted $G_\rho$. Then \begin{equation}\label{Turaev1} |{{\rm Hom}}_g(\pi, G')|=\, \vert \Gamma \vert \, \sum_{\rho\in {\rm Irr} (\Gamma), G_{\rho} \supset {\rm Im}(g)} \,\left (\frac{\vert \Gamma \vert}{\dim \, \rho}\right )^{2d-2} \,g^*(\zeta_\rho) ([W])\, , \end{equation} where $\zeta_\rho\in H^2(G_\rho; \FAT{C}^\times) $ is a cohomology class introduced in \cite{T}, and $g^*(\zeta_\rho) ([W])\in \FAT{C}^\times$ is the evaluation of $g^*(\zeta_\rho) \in H^2(\pi; \FAT{C}^\times)$ on the fundamental class $[W]\in H_2(W{})=H_2(\pi{})$ of $W$. Here and below the group of coefficients in homology is $\FAT{Z}$. The evaluation in question is induced by the bilinear form $\FAT{C}^\times \times \FAT{Z}\to \FAT{C}^\times, (z,n)\mapsto z^n$. For $G=1, G'=\Gamma$, we recover Formula (\ref{FroMed}). For $W=S^1\times S^1$, Formula (\ref{Turaev1}) computes the number of pairs of commuting elements of $G'$ projecting to a given pair of commuting elements of $G$. More precisely, let $\alpha, \beta \in G$ be such that $\alpha \beta = \beta \alpha$. Set $$L(\alpha, \beta, q) = \{a,b \in G'\, | \, q(a) = \alpha, q(b) = \beta, \mbox{ and } ab=ba\}\, .$$ Clearly, $L(\alpha, \beta, q) = {{\rm Hom}}_g(\FAT{Z}^2, G') $, where $g:\FAT{Z}^2=\pi_1(S^1\times S^1)\longrightarrow G$ carries the generators of $ \FAT{Z}^2$ to $\alpha$ and $\beta$, respectively. Formula (\ref{Turaev1}) implies that \begin{equation}\label{Turaev1-} |L(\alpha, \beta, q)|=\, \vert \Gamma \vert \, \sum_{\rho\in {\rm Irr} (\Gamma), G_{\rho} \supset \{\alpha, \beta\}} \, \,g^*(\zeta_\rho) ([W])\, . \end{equation} We use this equality to compute $|L(\alpha, \beta, q)|$ for several small groups $\Gamma$. 
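Frobenius' count $c\,\vert \Gamma\vert$ of commuting pairs, that is, Formula~(\ref{FroMed}) for $W=S^1\times S^1$, is easy to confirm by a brute-force computation for a small group. The following short Python sketch is only an illustration; it realises the quaternion group $Q_8$ considered below by integer quaternion $4$-tuples, a choice made purely for convenience.
\begin{verbatim}
from itertools import product

def qmul(x, y):
    # Hamilton product of quaternions written as 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(x):
    # the inverse of a unit quaternion is its conjugate
    a, b, c, d = x
    return (a, -b, -c, -d)

units = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
Q8 = [tuple(s*t for t in u) for u in units for s in (1, -1)]

commuting = sum(1 for g, h in product(Q8, repeat=2)
                if qmul(g, h) == qmul(h, g))
classes = {frozenset(qmul(qmul(g, x), qinv(g)) for g in Q8) for x in Q8}

assert commuting == len(classes) * len(Q8)   # 40 = 5 * 8
print(commuting, len(classes))
\end{verbatim}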
We focus on the case where $\Gamma=Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$ is the quaternion group. \begin{thm}\label{Theorem_Q_8(1)} Let $1 \rightarrow Q_8 \rightarrow G' \stackrel{q} \rightarrow G \rightarrow 1$ be a short exact sequence of groups. For any commuting elements $\alpha, \beta $ of $ G$, the set $L(\alpha, \beta, q) $ has $0$, $8$, $16$, $24$, or $40$ elements. \end{thm} \begin{cor} Let $G'$ be a group with normal subgroup $Q_8 \triangleleft G'$. Let $a,b $ be commuting elements of $ G'$. Then the number of pairs $\gamma_1, \gamma_2 \in Q_8$ such that $a\gamma_1$ commutes with $ b\gamma_2 $ is equal to $8$, $16$, $24$, or $40$. \end{cor} \begin{thm}\label{Theorem_Q_8(2)} For each $n \in \{0,8,16,24, 40\}$, there is a short exact sequence of the form $Q_8 \hookrightarrow G' \stackrel{q}{\twoheadrightarrow} G$ and commuting elements $\alpha, \beta $ of $ G$ such that $|L(\alpha, \beta, q)| = n$. \end{thm} The existence of commuting lifts of $\alpha, \beta\in G$ to $G'$ may be approached from a homological viewpoint. Consider for simplicity the case where $\alpha, \beta$ generate $G$. Let $g:\FAT{Z}^2\to G$ be the homomorphism carrying the generators of $ \FAT{Z}^2$ to $\alpha$ and $\beta$, respectively. Let $\kappa_{\alpha, \beta} \in H_2(G{})$ be the image of a generator of $H_2(\FAT{Z}^2{})=\FAT{Z}$ under the induced homomorphism $g_*: H_2(\FAT{Z}^2{}) \to H_2(G{})$. The homology class $\kappa_{\alpha, \beta}$ is well defined, at least up to multiplication by $-1$. It is clear that if $\alpha, \beta $ lift to commuting elements of $G'$, then $\kappa_{\alpha, \beta}$ lies in the image of the homomorphism $q_*: H_2(G'{})\to H_2(G{})$. This yields a homological obstruction to the existence of such a lift. It is easy to give examples showing that this obstruction may be non-trivial. The following theorem shows that, generally speaking, this obstruction is insufficient. \begin{thm}\label{Theorem_Q_8(2+)} There is a short exact sequence of the form $Q_8 \hookrightarrow G' \stackrel{q}{\twoheadrightarrow} G$ and commuting generators $\alpha, \beta $ of $ G$ such that $\kappa_{\alpha, \beta}$ lies in the image of the homomorphism $q_*: H_2(G'{})\to H_2(G{})$ and $ L(\alpha, \beta, q) = \emptyset$. \end{thm} Applying Formula (\ref{Turaev1}) to surfaces of genus $\geq 2$, we obtain the following theorem. \begin{thm}\label{ed} Let $1 \rightarrow Q_8 \rightarrow G' \stackrel{q} \rightarrow G \rightarrow 1$ be a short exact sequence of groups. Let $d\geq 2$ and $\alpha_i, \beta_i \in G$, $1 \leq i \leq d$ be such that $\prod_{i=1}^d [\alpha_i, \beta_i] = 1$. Then there is a family $\{a_i, b_i \}_{i=1}^d$ of elements of $G'$ such that $q(a_i) = \alpha_i, q(b_i) = \beta_i$ for all $i$ and $ \prod_{i=1}^d [a_i, b_i] = 1$. The number of such families is equal to $8^{2d-1}(N \pm 2^{2-2d})$, where $N\in \{1,2,4\}$. \end{thm} It would be interesting to find out what numbers $8^{2d-1}(N \pm 2^{2-2d})$ with $N\in \{1,2,4\}$ are realizable as the number of families as in this theorem. The work of the second named author was partially supported by the NSF grant DMS-0707078. \section{Representations and cohomology classes}\label{section2} We define in this section the cohomology classes $\zeta_\rho$ used in the formulas above. Let $q:G'\to G$ be a group epimorphism with kernel $\Gamma$ (not necessarily finite). The group $G$ acts on ${\rm Irr} (\Gamma)$ as follows. For each $\alpha \in G$, choose $ \widetilde{\alpha} \in q^{-1}(\alpha)\subset G'$. 
Let $\rho : \Gamma \to GL_n(\FAT{C})$ be an irreducible representation of degree $n\geq 1$. For $\alpha \in G$, the map $ \Gamma \to GL_n(\FAT{C}), \gamma \mapsto \rho(\widetilde{\alpha} \gamma \widetilde{\alpha}^{-1}) $ is an irreducible representation of $\Gamma$ denoted $\rho \alpha$. The equivalence class of $\rho\alpha$ does not depend on the choice of $ \widetilde{\alpha} $. This defines a right action of $G$ on ${\rm Irr} (\Gamma)$. Given an irreducible representation $\rho:\Gamma\to GL_n(\FAT{C})$, let $ G _\rho = \{ \alpha \in G | \, \rho \alpha \sim \rho \} $ be the stabilizer of its equivalence class. We now define $\zeta_\rho \in H^2(G_\rho; \FAT{C}^\times)$ following \cite{T}. For each $\alpha \in G_\rho $, there is a matrix $M_\alpha\in GL_n(\FAT{C})$ such that \begin{equation} \label{bh} \rho(\widetilde{\alpha} \gamma \widetilde{\alpha}^{-1}) = M_\alpha \rho (\gamma) M_\alpha^{-1} \end{equation} for all $\gamma \in \Gamma$. The irreducibility of $\rho$ implies that $M_\alpha$ is unique up to multiplication by a non-zero scalar. Fix $M_\alpha$ for all $\alpha\in G_\rho$. For any $\alpha, \beta\in G_\rho$, there a unique $\zeta_\rho(\alpha, \beta) \in \FAT{C}^\times$ such that \begin{equation} \label{cohomology class_1} M_{\alpha \beta} \, \rho ( (\widetilde{\alpha\beta})^{-1} \widetilde \alpha \widetilde \beta) = \zeta_\rho(\alpha, \beta) \, M_\alpha\, M_\beta. \end{equation} One checks that $\zeta_\rho:G_\rho \times G_\rho \to \FAT{C}^\times$ is a 2-cocycle. Its cohomology class in $ H^2(G_\rho; \FAT{C}^\times)$ depends only on the equivalence class of $\rho$ and does not depend on the choice of the conjugating matrices $\{M_\alpha\}_\alpha$. By abuse of notation, we denote this cohomology class by the same symbol $\zeta_\rho$. The arguments below in this section show that the cohomology class $q^*(\zeta_\rho)\in H^2(q^{-1}(G_\rho); \FAT{C}^\times)$ has finite order. If $\Gamma$ is finite, then we can deduce that $\zeta_\rho$ has finite order in $ H^2(G_\rho; \FAT{C}^\times)$ so that its evaluation on any 2-dimensional homology class of $G_\rho$ is a root of unity. If $n=1$, then we can take $M_\alpha=1\in \FAT{C}$ for all $\alpha\in G$ and obtain $\zeta_\rho(\alpha, \beta)=\rho ( (\widetilde{\alpha\beta})^{-1} \widetilde \alpha \widetilde \beta)$ for all $\alpha, \beta\in G$. In particular, if $\rho$ is the trivial 1-dimensional representation of $\Gamma$, then $\zeta_\rho=1$. A similar construction produces cohomology classes of certain subgroups of the group ${\mathcal A}={\mathop{\rm Aut}\nolimits}(\Gamma)$ of automorphisms of $\Gamma$. We define a right action of ${\mathcal A}$ on $\mathop{\rm Irr}\nolimits(\Gamma)$ by $\rho \varphi = \rho \circ \varphi $ for $\varphi \in {\mathcal A}$ and $\rho\in \mathop{\rm Irr}\nolimits(\Gamma)$. For an irreducible representation $\rho:\Gamma\to GL_n(\FAT{C})$, the stabilizer ${\mathcal A}_\rho\subset {\mathcal A} $ of the equivalence class of $\rho$ consists of all $\varphi \in {\mathcal A} $ such that there is ${\mathcal M}_\varphi \in GL_n(\FAT{C})$ satisfying \begin{equation} \label{matrix} \rho\varphi(\gamma) = {\mathcal M}_\varphi \, \rho (\gamma)\, {\mathcal M}_\varphi^{-1}\end{equation} for all $\gamma \in \Gamma$. The matrix ${\mathcal M}_\varphi$ is unique up to multiplication by an element of $\FAT{C}^\times$. 
We fix ${\mathcal M}_\varphi$ for all $\varphi \in {\mathcal A}_\rho$ and define a 2-cocycle $\eta_\rho:{\mathcal A}_\rho \times {\mathcal A}_\rho \to \FAT{C}^\times$ by \begin{equation} \label{cohomology class} {\mathcal M}_{\varphi \psi} = \eta_\rho(\varphi, \psi) \, {\mathcal M}_\varphi\, {\mathcal M}_\psi\, , \end{equation} where $\varphi, \psi \in {\mathcal A}_\rho$. The class of this cocycle in $ H^2({\mathcal A}_\rho; \FAT{C}^\times)$ depends only on the equivalence class of $\rho$ and does not depend on the choice of the conjugating matrices $\{{\mathcal M}_\varphi\}_\varphi$. This cohomology class is also denoted $\eta_\rho$. Taking the determinant on both sides of (\ref{cohomology class}), we obtain that $\eta_\rho^n=1$, where $n=\dim\, \rho$. The constructions of $\zeta_\rho$ and $\eta_\rho$ are related by the following lemma. \begin{lemma}\label{le1} Let $q:G'\to G$ be a group epimorphism with kernel $\Gamma$. Let $\omega: G' \to {\mathcal A}={\mathop{\rm Aut}\nolimits}(\Gamma) $ be the homomorphism carrying $a\in G'$ to the automorphism $\gamma\mapsto a\gamma a^{-1}$ of $\Gamma$. For any irreducible representation $\rho:\Gamma \to GL_n(\FAT{C})$ of $\Gamma$, we have $\omega^{-1}({\mathcal A}_\rho)=q^{-1} (G_\rho)$. If $\omega(G')\subset {\mathcal A}_\rho$ (or equivalently, if $G_\rho=G$), then \begin{equation}\label{Turaev_2++} q^*(\zeta_\rho)={\omega}^*(\eta_\rho)\, , \end{equation} where $q^* : H^2(G;\FAT{C}^\times) \to H^2(G';\FAT{C}^\times)$ and ${\omega}^* : H^2({\mathcal A}_\rho;\FAT{C}^\times) \to H^2(G';\FAT{C}^\times)$ are the homomorphisms induced by $q$ and $\omega$, respectively. \end{lemma} {\it Proof.} The equality $\omega^{-1}({\mathcal A}_\rho)=q^{-1} (G_\rho)$ follows from the definitions. For each $\alpha \in G$ fix $ \widetilde{\alpha} \in q^{-1}(\alpha)\subset G'$ and set $M_\alpha= \mathcal M_{\omega(\widetilde {\alpha})}$. Formula (\ref{matrix}) implies (\ref{bh}). For $a\in G'$, set $\gamma_a= (\widetilde {q(a)})^{-1} a \in \Gamma$ and $M^+_a=M_{q(a)} \rho(\gamma_a)\in GL_n(\FAT{C})$. Given $\gamma\in \Gamma$, we deduce from (\ref{bh}) that $$\rho(a\gamma a^{-1})=\rho (\widetilde {q(a)} \gamma_a \gamma \gamma_a^{-1} (\widetilde {q(a)})^{-1})= M^+_a\, \rho(\gamma)\, (M^+_a)^{-1} \, .$$ Comparing with (\ref{matrix}) we obtain that $M^+_a=k_a \, \mathcal M_{\omega(a)} $ for some $k_a\in \FAT{C}^\times$. The 2-cocycle $\{\eta_\rho(\omega(a), \omega(b))\}_{a,b\in G'}$ can be computed from the matrices $\{\mathcal M_{\omega(a)}\}_a$ via the identity $\mathcal M_{\omega(ab)}= \eta_\rho(\omega(a), \omega(b)) \, \mathcal M_{\omega(a)} \, \mathcal M_{\omega(b)}$ for $a,b\in G'$. The 2-cocycle $\{\zeta_\rho({q(a)}, {q(b)})\}_{a,b\in G'}$ derived from the matrices $\{M_\alpha\}_{\alpha\in G}$ can be computed from $\{M^+_a\}_a$. 
Indeed, for $a,b\in G'$, we have $$\gamma_{ab}= ({\widetilde {q(ab)}})^{-1} \, {\widetilde {q(a)}}\, \gamma_a \, {\widetilde {q(b)}}\, \gamma_b\, $$ and $$M^+_{ab}=M_{q(ab)}\, \rho(\gamma_{ab})= M_{q(a)q(b)}\, \rho(({\widetilde {q(ab)}})^{-1} \, {\widetilde {q(a)}}\, \gamma_a \, {\widetilde {q(b)}}\, \gamma_b)$$ $$= M_{q(a)q(b)}\, \rho(({\widetilde {q(ab)}})^{-1} \, {\widetilde {q(a)}}\, {\widetilde {q(b)}})\, \rho( {\widetilde {q(b)}}^{-1} \gamma_a {\widetilde {q(b)}})\, \rho( \gamma_b)$$ $$= \zeta_\rho({q(a)}, {q(b)}) \, M_{q(a)}\, M_{q(b)} M_{q(b)}^{-1} \,\rho(\gamma_a) M_{q(b)}\, \rho (\gamma_b)= \zeta_\rho({q(a)}, {q(b)}) M^+_a M^+_b\, .$$ We conclude that the 2-cocycles $\{\zeta_\rho({q(a)}, {q(b)})\}_{a,b\in G'}$ and $\{\eta_\rho(\omega(a), \omega(b))\}_{a,b\in G'}$ differ by a coboundary.\qed Formula (\ref{Turaev_2++}) allows us to rewrite Formulas (\ref{Turaev1}) and (\ref{Turaev1-}) as follows. Suppose that $g(\pi)=G$ and $g_*([W]) = q_*(\Delta)$ for some $\Delta \in H_2(G'{})$, where $q_*: H_2(G'{}) \to H_2(G{})$ and $g_*: H_2(\pi{}) \to H_2(G{})$ are the homomorphisms induced by $q:G'\to G$ and $g:\pi=\pi_1(W)\to G$, respectively. Then we can replace $g^*(\zeta_\rho) ([W])$ in (\ref{Turaev1}) and (\ref{Turaev1-}) by ${\omega}^*(\eta_\rho) (\Delta)$. Indeed, $$g^*(\zeta_\rho) ([W])= \zeta_\rho (g_*( [W]))=\zeta_\rho (q_*(\Delta))=q^*(\zeta_\rho) (\Delta)={\omega}^*(\eta_\rho) (\Delta)\, .$$ In particular, if $\Gamma$ is finite and $G$ is generated by commuting elements $\alpha, \beta$, then (\ref{Turaev1-}) gives \begin{equation}\label{Turaev 2} |L(\alpha, \beta, q)|=\, \vert \Gamma \vert \, \sum_{\rho\in {\rm Irr} (\Gamma), \,{\mathcal A}_\rho \supset \omega(G')} \, \,{\omega}^*(\eta_\rho) (\Delta) \, . \end{equation} Note that $\Delta$ as above necessarily exists if $L(\alpha, \beta, q)\neq \emptyset$ because in this case $g$ lifts to a homomorphism $g': \pi\to G'$ and we may take $\Delta =g'_*([W])\in H_2(G'{})$. Note also that if $\dim \rho=1$, then $\eta_\rho=1$ and ${\omega}^*(\eta_\rho) (\Delta)=1$ for any $\Delta \in H_2(G'{})$. If $\dim \rho=2$, then ${\omega}^*(\eta_\rho) (\Delta)=\pm 1$ for any $\Delta \in H_2(G'{})$. \section{Proof of Theorems 1 and 5} {\it Proof of Theorem \ref{Theorem_Q_8(1)}.} It is well known that in the generators $a=i$, $b=j$, the group $Q_8 $ has the presentation $\{a,b\, | \, a^4=1, a^2=b^2, bab^{-1}=a^{-1}\}$. Therefore the abelianization of $Q_8$ gives $\FAT{Z}_2 \times \FAT{Z}_2$. Consequently, $Q_8$ has four homomorphisms to $\FAT{C}^\times=GL_1(\FAT{C})$, one trivial $\chi_0$ and three non-trivial $\chi_1, \chi_2, \chi_3$. The equality $\sum_{\rho\in \mathop{\rm Irr}\nolimits Q_8} (\dim \rho)^2 =8$ implies that besides the 1-dimensional representations, the set $\mathop{\rm Irr}\nolimits Q_8$ has only one element which is the equivalence class of an irreducible 2-dimensional representation $\rho_0$. This representation can be described explicitly by $$i\mapsto \left( \begin{array}{cc} i & 0 \\ 0 & -i \\ \end{array} \right), \quad j\mapsto \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right), \quad \mbox{and\ } \,\,\, k=ij\mapsto \left( \begin{array}{cc} 0 & i \\ i & 0 \\ \end{array} \right). $$ Consider the group ${\mathcal A}=\mathop{\rm Aut}\nolimits Q_8$ and its subgroups $\{{\mathcal A}_{\chi_i}\}_{i=0}^3$ and ${\mathcal A}_{\rho_0}$. Clearly, $ {\mathcal A}_{\chi_0}= {\mathcal A}_{\rho_0}={\mathcal A}$. 
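For completeness, we record the routine verification behind the second of these equalities. Up to equivalence, $\rho_0$ is the unique element of $\mathop{\rm Irr}\nolimits Q_8$ of dimension greater than 1, so its class is fixed by every automorphism of $Q_8$ and hence by the action of $G$. That the matrices displayed above do define such a representation is checked against the presentation of $Q_8$ in the generators $a=i$, $b=j$: $$\rho_0(i)^2= \left( \begin{array}{cc} -1 & 0 \\ 0 & -1 \\ \end{array} \right)=\rho_0(j)^2, \qquad \rho_0(j)\, \rho_0(i)\, \rho_0(j)^{-1}= \left( \begin{array}{cc} -i & 0 \\ 0 & i \\ \end{array} \right)=\rho_0(i)^{-1}, \qquad \rho_0(i)^4=1,$$ and $\rho_0$ is irreducible because $\rho_0(i)$ and $\rho_0(j)$ have no common eigenvector in $\FAT{C}^2$. 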
The obvious equality $\chi_3=\chi_1 \chi_2$ implies that the intersection of any two of the sets $\{{\mathcal A}_{\chi_i}\}_{i=1,2,3}$ is equal to the intersection of all three of these sets. Replacing if necessary $G$ by its subgroup generated by $\alpha, \beta$ and replacing $G'$ by the pre-image of this subgroup, we can assume that $\alpha$ and $\beta$ generate $G$. Let $\omega: G' \to {\mathcal A}=\mathop{\rm Aut}\nolimits Q_8$ be the homomorphism defined in Lemma \ref{le1}. The remarks at the end of Section \ref{section2} show that ${\omega}^*(\eta_{\chi_i}) (\Delta)=1$ for $i=0,1,2,3$ and ${\omega}^*(\eta_{\rho_0}) (\Delta)=\pm 1$ for any $\Delta\in H_2(G'{})$. If $L(\alpha, \beta, q)= \emptyset$, then $|L(\alpha, \beta, q)|=0$ and we are done. If $L(\alpha, \beta, q)\neq \emptyset$, then by (\ref{Turaev 2}), \begin{equation} \label{formula_Q_8} |L(\alpha, \beta, q)| = 8( 1 + \left\{ \begin{array}{ll} 0, & {\omega(G') \nsubseteq {\mathcal A}_{\chi_i} \mbox{ \rm for }i=1,2,3;} \\ 1, & {\omega(G') \subseteq {\mathcal A}_{\chi_i} \mbox{ \rm for only one }i\in \{1,2,3\};} \\ 3, & {\omega(G') \subseteq {\mathcal A}_{\chi_i} \mbox{ \rm for }i=1,2,3.} \end{array} \right\} \pm 1 ). \end{equation} The possible values for the right-hand side are $0,8,16,24$, and $40$.\qed {\it Proof of Theorem \ref{ed}.} Replacing if necessary $G$ by its subgroup generated by $\{\alpha_i, \beta_i\}_{i=1}^d$ and replacing $G'$ by the pre-image of this subgroup, we can assume that the set $\{\alpha_i, \beta_i\}_{i=1}^d$ generates $G$. Let $W$ be a closed connected oriented surface of genus $d$. Let $g:\pi_1(W)\to G$ be the epimorphism carrying the standard generators of $\pi_1(W)$ to $\alpha_1, \beta_1, \dots, \alpha_d, \beta_d$, respectively. Formula (\ref{Turaev1}) yields that \begin{equation}\label{formula_Q_8_genus_d1} |{\rm Hom}_g(\pi, G')| = 8^{2d-1}( 1 + \sum_{{i=1,2,3},\, G_{\chi_i}=G} \, g^*(\zeta_{\chi_i}) ([W])\, + \frac{1}{2^{2d-2}} \, g^*(\zeta_{\rho_0}) ([W]) ). \end{equation} Since all values of the homomorphisms $\{\chi_i:Q_8\to \FAT{C}^\times\}_{i=1,2,3}$ are $\pm 1$, the same is true for the cocycles $\{\zeta_{\chi_i}\}_{i=1,2,3}$. Hence $g^*(\zeta_{\chi_i}) ([W]) = \pm 1$ for all $i$. Then the sum $ 1 + \sum_i g^*(\zeta_{\chi_i}) ([W])$ on the right-hand side of (\ref{formula_Q_8_genus_d1}) is an integer. Since $g^*(\zeta_{\rho_0}) ([W])$ is a root of unity and $d\geq 2$, the number $|{\rm Hom}_g(\pi, G')|$ is non-zero. The same arguments as in the proof of Theorem~\ref{Theorem_Q_8(1)} show that $$ |{\rm Hom}_g(\pi, G')| = 8^{2d-1}( 1 + \left\{ \begin{array}{ll} 0 \\ 1 \\ 3 \end{array} \right\} \pm \left(\frac{1}{2}\right)^{2d-2} ).$$\qed \section{Proof of Theorems 3 and 4} The group ${\mathcal A}={\mathop{\rm Aut}\nolimits}(Q_8)$ is known to be isomorphic to the symmetric group $S_4$. We shall identify both ${\mathcal A}$ and $S_4$ with the group of rotations of a 3-dimensional cube as follows. Let us label the vertices of the cube by $\{1,2,3,4\}$ such that the vertices of each main diagonal have the same label, see Figure~1. Let us label the faces of the cube by $\pm i, \pm j, \pm k$ so that the opposite faces have opposite labels. Then any rotation of the cube defines both a permutation in $S_4$ and an automorphism of $Q_8$. This establishes the identification of these groups mentioned above. 
\begin{center} \begin{picture}(160,130) \put(102,107){\bf 1} \put(2,107){\bf 2} \put(52,132){\bf 3}\put(152,132){\bf 4} \put(102,-5){\bf 3} \put(2,-5){\bf 4} \put(52,20){\bf 1}\put(152,20){\bf 2} \put(80,115){$i$} \put(130,70){{\it j}} \put(60,55){{\it k}} \put(80,15){$-i$} \put(25,70){{\it --j}} \put(90,75){{\it --k}} \put(5,5){\line(1,0){100}} \put(5,105){\line(1,0){100}} \put(55,130){\line(1,0){100}} \put(55,30){\line(1,0){45}} \put(110,30){\line(1,0){45}} \put(5,5){\line(0,1){100}} \put(105,5){\line(0,1){100}} \put(155,30){\line(0,1){100}} \put(55,30){\line(0,1){70}} \put(55,110){\line(0,1){20}} \qbezier(5,5)(15,10)(55,30) \qbezier(105,5)(115,10)(155,30) \qbezier(5,105)(15,110)(55,130) \qbezier(105,105)(115,110)(155,130) \put(65,-25){Figure 1} \end{picture} \end{center} The stabilizers in ${\mathcal A}$ of the irreducible representations $\{\chi_i\}_{i=0}^3$ and $ \rho_0$ of $Q_8$ can be explicitly computed. As mentioned above, ${\mathcal A}_{\chi_0}= {\mathcal A}_{\rho_0}={\mathcal A}$. \begin{lemma}\label{stabilizer} Let $\{{\mathcal A}_{\chi_i}\subset \mathcal A\}_{i=1}^{3}$ be the stabilizers of the non-trivial 1-dimensional representations $\{\chi_i\}_{i=1}^{3}$ of $Q_8$. Then \begin{itemize} \item[a.] Every permutation of order $4$ in ${\mathcal A}= S_4$ belongs to exactly one of the groups $\{{\mathcal A}_{\chi_i}\}_{i=1}^{3}$. \item[b.] A permutation of order $3$ in ${\mathcal A}=S_4$ belongs to none of the groups $\{{\mathcal A}_{\chi_i}\}_{i=1}^{3}.$ \end{itemize} \end{lemma} {\it Proof.} The action of ${\mathcal A}$ on the set of non-trivial 1-dimensional representations of $ Q_8 $ is equivalent to the action on the kernels of these representations. These kernels are precisely the order 4 subgroups $\langle i \rangle$, $\langle j \rangle$, and $\langle k \rangle$ of $Q_8$. The action of ${\mathcal A}$ on these subgroups is equivalent to the action of ${\mathcal A}$ on the pairs of opposite faces $\{ \pm i\} , \{ \pm j\}, \{ \pm k\}$ of the cube. Now, a permutation of order 4 in $S_4$ corresponds to a rotation of the cube about a line connecting the centers of two opposite faces. Such a rotation stabilizes this pair of faces and permutes the other two pairs. This implies the first claim of the lemma. A permutation of order 3 in $S_4$ corresponds to a rotation of the cube about a main diagonal. Such a rotation permutes cyclically the pairs of opposite faces. This implies the second claim of the lemma. \qed {\it Proof of Theorem \ref{Theorem_Q_8(2)}.} For each $n=0,8,16,24,$ and 40 we produce two integers $k, m\geq 2$ and a short exact sequence of groups \begin{equation}\label{n=16} Q_8 \hookrightarrow G' \stackrel{q}{\twoheadrightarrow} \FAT{Z}_k(\alpha) \times \FAT{Z}_m(\beta) \end{equation} such that $|L(\alpha,\beta,q)|=n$. Here $\FAT{Z}_k(\alpha)$ denotes the cyclic group $\langle \alpha \, | \, \alpha^k = 1 \rangle$. \noindent {\bf Case $n=40$.} Set $G = \FAT{Z}_2(\alpha) \times \FAT{Z}_2(\beta)$ and $G'=Q_8\times G$. The map $q:G'\to G$ is the projection. Clearly, $L(\alpha,\beta,q)= \{ \gamma_1, \gamma_2 \in Q_8\, |\, \gamma_1 \gamma_2 = \gamma_2 \gamma_1\}$. By Formula (\ref{FroMed}), we have $|L(\alpha,\beta,q)|=40$. For $n= 8,16,24$, we proceed as follows. We first produce a sequence (\ref{n=16}) and commuting lifts $\widetilde{\alpha}, \widetilde{\beta}\in G'$ of $\alpha$ and $\beta$. This allows us to use Formula (\ref{formula_Q_8}) for $|L(\alpha,\beta,q)|$. Set $\Delta=\kappa_{\widetilde{\alpha}, \widetilde{\beta}}\in H_2(G')$. 
The sign ${\omega}^*(\eta_{\rho_0}) (\Delta)=\pm 1$ in (\ref{formula_Q_8}) will be computed from the equation \begin{equation} \label{pairing} {\mathcal M}_{\omega(\widetilde{\beta})} {\mathcal M}_{\omega(\widetilde{\alpha})} = {\omega}^*(\eta_{\rho_0}) (\Delta) {\mathcal M}_{\omega(\widetilde{\alpha})} {\mathcal M}_{\omega(\widetilde{\beta})}\, , \end{equation} where ${\mathcal M}_{\omega(\widetilde{\alpha})}, {\mathcal M}_{\omega(\widetilde{\beta})} \in GL_2(\FAT{C})$ are any matrices satisfying \begin{equation} \label{matrix_Q_8} \rho_0 (\widetilde{\alpha} \gamma \widetilde{\alpha}^{-1}) = {\mathcal M}_{\omega(\widetilde{\alpha})} \rho_0 (\gamma) {\mathcal M}_{\omega(\widetilde{\alpha})}^{-1}, \mbox{ and } \rho_0 (\widetilde{\beta} \gamma \widetilde{\beta}^{-1}) = {\mathcal M}_{\omega(\widetilde{\beta})} \rho_0 (\gamma) {\mathcal M}_{\omega(\widetilde{\beta})}^{-1}, \end{equation} for all $\gamma \in Q_8$ (cf.\ (\ref{matrix})). Formula (\ref{pairing}) follows from the definition of $\eta_{\rho_0}$ and the equalities $ \widetilde{\alpha}\widetilde{\beta} = \widetilde{\beta}\widetilde{\alpha} $ and $${\omega}^*(\eta_{\rho_0}) (\Delta) ={\omega}^*(\eta_{\rho_0}) (\kappa_{\widetilde{\alpha}, \widetilde{\beta}}) = \frac{ \eta_{\rho_0} (\omega(\widetilde{\alpha}), \omega(\widetilde{\beta}))}{ \eta_{\rho_0} (\omega(\widetilde{\beta}), \omega( \widetilde{\alpha}))}\, .$$ \noindent {\bf Case $n=16$.} Set $G = \FAT{Z}_3(\alpha) \times \FAT{Z}_3(\beta)$ and $G'=Q_8\rtimes (\FAT{Z}_3(y) \times \FAT{Z}_3(z))$, where $y$ acts on $Q_8$ by $y(i)=j, y(j)=-k$, and $z$ acts trivially. The map $q:G'\to G$ is given by $q(Q_8)=1$, $q(y)=\alpha$, and $q(z)=\beta$. Clearly, $\widetilde{\alpha} = y$ and $\widetilde{\beta} = z$ commute in $G'$. Since $y$ acts as an automorphism of order 3 on $Q_8$, we have that $\omega(G') \nsubseteq {\mathcal A}_{\chi_i}$ for $ i=1,2,3$ by Lemma \ref{stabilizer}. Since the action of $\widetilde{\beta}=z$ on $Q_8 $ is trivial, we have $ \omega(\widetilde{\beta})=1$ and hence ${\omega}^*(\eta_{\rho_0}) (\Delta) = 1$. Therefore $|L(\alpha,\beta,q)| = 8(1 +1)=16$. \noindent {\bf Case $n=24$.} Set $G = \FAT{Z}_2(\alpha) \times \FAT{Z}_2(\beta)$ and $G'=Q_8\rtimes (\FAT{Z}_2(y) \times \FAT{Z}_2(z))$, where $\FAT{Z}_2(y) \times \FAT{Z}_2(z)$ acts on $Q_8$ by inner automorphisms: \begin{equation}\label{inner} y(\gamma)=j\gamma j^{-1}\,\,\, \mbox{ and } \,\,\, z(\gamma)=i\gamma i^{-1}\,\,\, \mbox{ for all } \, \gamma \in Q_8. \end{equation} The map $q:G'\to G$ is given by $q(Q_8)=1$, $q(y)=\alpha$, and $q(z)=\beta$. Clearly, $\widetilde{\alpha} = y$ and $\widetilde{\beta} = z$ commute in $G'$. We have $\omega(G') \subset \mathop{\rm Inn}\nolimits(Q_8) \subset {\mathcal A}_{\chi_i}$ for $i=1,2,3$. It follows from (\ref{inner}) that the matrices $${\mathcal M}_{\omega(y)} = \rho_0(j) = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right) \quad \mbox{ and } \quad {\mathcal M}_{\omega(z)} = \rho_0(i) = \left( \begin{array}{cc} i & 0 \\ 0 & -i \\ \end{array} \right) $$ satisfy (\ref{matrix_Q_8}). By (\ref{pairing}), ${\omega}^*(\eta_{\rho_0}) (\Delta) = -1$. Thus, $|L(\alpha,\beta,q)|=8(1+3-1) = 24$. \noindent {\bf Case $n=8$.} Set $G = \FAT{Z}_2(\alpha) \times \FAT{Z}_2(\beta)$ and $G' = Q_{16}\rtimes \FAT{Z}_2(z)$, where $Q_{16}$ is generated by $a,b$ subject to the relations $a^8 =1, a^4=b^2, bab^{-1}=a^{-1}$, and $z$ acts on $Q_{16}$ by $z(a) = a^3$ and $z(b)=b$. We identify the group $\langle a^2,b\rangle \subset G'$ with $Q_8$ via $a^2=i$ and $b=j$. 
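This identification is legitimate: in $Q_{16}$ we have $(a^2)^4=a^8=1$, $(a^2)^2=a^4=b^2$ and $b\, a^2\, b^{-1}=a^{-2}$, so $a^2$ and $b$ satisfy the defining relations of $Q_8$ recorded in the proof of Theorem \ref{Theorem_Q_8(1)}, and $\langle a^2,b\rangle$ is a subgroup of $Q_{16}$ of order $8$ isomorphic to $Q_8$. 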
The map $q:G'\to G$ is given by $q(a)=\alpha$, $q(b)= 1$, and $q(z)=\beta$. It is easy to check that $\widetilde{\alpha}=ab$ and $\widetilde{\beta}= a^2bz$ commute and $q(\widetilde{\alpha})=\alpha$, $q(\widetilde{\beta})=\beta$. Note that the conjugation by $a$ induces an automorphism of order 4 of $Q_8$, and $z$ acts by an inner automorphism on $Q_8$: $$ z(\gamma)=j\gamma j^{-1}\,\,\, \mbox{ for all } \, \gamma \in Q_8.$$ It follows by Lemma \ref{stabilizer} that $\omega(G')\subset {\mathcal A}_{\chi_i} $ for exactly one $i\in \{1,2,3\}$. Since $$\widetilde{\alpha} i \widetilde{\alpha}^{-1} = -i \quad \mbox{ and } \quad\widetilde{\alpha} j \widetilde{\alpha}^{-1} = k,$$ the matrix ${\mathcal M}_{\omega(\widetilde{\alpha})}$ has to satisfy $$\rho_0 (-i) = {\mathcal M}_{\omega(\widetilde{\alpha})} \rho_0 (i) {\mathcal M}_{\omega(\widetilde{\alpha})}^{-1} \quad \mbox{ and } \quad \rho_0 (k) = {\mathcal M}_{\omega(\widetilde{\alpha})} \rho_0 (j) {\mathcal M}_{\omega(\widetilde{\alpha})}^{-1}. $$ We take the following solution: ${\mathcal M}_{\omega(\widetilde{\alpha})} = \left( \begin{array}{cc} 0 & 1 \\ i & 0 \\ \end{array} \right).$ Similarly, since $$\widetilde{\beta} i \widetilde{\beta}^{-1} = i \quad \mbox{ and } \quad \widetilde{\beta} j \widetilde{\beta}^{-1} = -j,$$ we can take ${\mathcal M}_{\omega(\widetilde{\beta})} = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right).$ By (\ref{pairing}), ${\omega}^*(\eta_{\rho_0}) (\Delta)= -1$. Hence $|L(\alpha,\beta,q)| = 8(1+1-1) = 8$. \noindent {\bf Case $n=0$.} Set $G'' = Q_{8}\rtimes \FAT{Z}_6(y)$, where $y$ acts on $Q_{8}$ by $y(i) = j$ and $y(j)=-k$. Set $G' = G'' \rtimes \FAT{Z}_2(z) $, where $z$ acts on $G'' $ by $z(i)=i$, $z(j) = -j$ and $z(y) = k y$. Clearly, $Q_8$ is a normal subgroup of $G'$ and $G'/Q_{8}=\FAT{Z}_6 \times \FAT{Z}_2$. We have a short exact sequence \begin{equation}\label{op} Q_8 \hookrightarrow G' \stackrel{q}{\twoheadrightarrow} G= \FAT{Z}_6(\alpha) \times \FAT{Z}_2(\beta)\, , \end{equation} where $q(Q_8)=1$, $q(y)=\alpha$, and $q(z)=\beta$. We use Formula (\ref{Turaev1-}) to prove that $L(\alpha,\beta,q)=\emptyset$. By Lemma \ref{stabilizer}, since the conjugation by $y$ induces an automorphism of $Q_8$ of order 3, it can not stabilize non-trivial 1-dimensional representations of $Q_8$. It follows that only $\chi_0$ and $\rho_0$ contribute to (\ref{Turaev1-}): \begin{equation} |L(\alpha,\beta,q)|= 8 \, \left( g^*(\zeta_{\chi_0}) ([W]) + g^*(\zeta_{\rho_0}) ([W])\right). \end{equation} As mentioned in Section~\ref{section2}, we have $ g^*(\zeta_{\chi_0}) ([W])=1$. Since $g^*(\zeta_{\rho_0}) ([W])$ is a root of unity and $|L(\alpha,\beta,q)|$ is an integer, $ g^*(\zeta_{\rho_0}) ([W]) = \pm 1$. We compute $g^*(\zeta_{\rho_0}) ([W]) $ from the definition of $\zeta_{\rho_0}$ in Section~\ref{section2}. Put $\widetilde{\alpha} = y$, $\widetilde{\beta} = z$, and $\widetilde{\alpha\beta} = \widetilde{\alpha}\widetilde{\beta}$. Since $$\widetilde{\alpha} i \widetilde{\alpha}^{-1} = j\ \mbox{\ \ and \ \ }\ \widetilde{\alpha} j \widetilde{\alpha}^{-1} = -k,$$ the matrix $M_{\alpha}\in GL_2(\FAT{C})$ must satisfy (cf. (\ref{bh})) $$M_{\alpha} \rho_0 (i) = \rho_0 (j) M_{\alpha} \ \mbox{\ \ and \ \ }\ M_{\alpha} \rho_0 (j) = \rho_0 (-k) M_{\alpha}. $$ We can take $M_{\alpha} = \left( \begin{array}{cc} 1 & -1 \\ i & i \\ \end{array} \right).$ Note that $\widetilde \beta \gamma \widetilde \beta^{-1}=z(\gamma)=i\gamma i^{-1}$ for all $\gamma \in Q_8$. 
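Indeed, $z(i)=i=i\, i\, i^{-1}$ and $z(j)=-j=i\, j\, i^{-1}$, and $i$ and $j$ generate $Q_8$; consequently any scalar multiple of $\rho_0(i)$ may serve as $M_{\beta}$. 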
We can take $M_{\beta} = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right)$. The conjugation by $\widetilde{\alpha\beta} = \widetilde{\alpha}\widetilde{\beta}$ induces the automorphism $\displaystyle \left\{ \begin{array}{l} i \mapsto j \\ j \mapsto k \end{array} \right. $ of $Q_8$, and hence $M_{\alpha\beta}$ must satisfy $$M_{\alpha\beta} \rho_0 (i) = \rho_0 (j) M_{\alpha\beta} \ \mbox{\ \ and \ \ }\ M_{\alpha\beta} \rho_0 (j) = \rho_0 (k) M_{\alpha\beta}. $$ We can take $M_{\alpha \beta} = \left( \begin{array}{cc} 1 & 1 \\ i & -i \\ \end{array} \right)$. We have $ (\widetilde{\alpha\beta})^{-1} \widetilde{\alpha} \widetilde{\beta} = 1$ and $ (\widetilde{\beta\alpha})^{-1} \widetilde{\beta} \widetilde{\alpha} = z^{-1}y^{-1}zy = b = j \in Q_8$. Thus, $\zeta_{\rho_0}(\alpha, \beta)$ and $ \zeta_{\rho_0}(\beta, \alpha)$ satisfy $$ \left( \begin{array}{cc} 1 & 1 \\ i & -i \\ \end{array}\right) = \zeta_{\rho_0}(\alpha, \beta) \left( \begin{array}{cc} 1 & -1 \\ i & i \\ \end{array}\right) \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array}\right) $$ and $$ \left( \begin{array}{cc} 1 & 1 \\ i & -i \\ \end{array}\right) \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array}\right) = \zeta_{\rho_0}(\beta, \alpha) \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array}\right) \left( \begin{array}{cc} 1 & -1 \\ i & i \\ \end{array}\right).$$ Therefore $\zeta_{\rho_0}(\alpha, \beta)=1$, $\zeta_{\rho_0}(\beta, \alpha)= -1$, and $$g^*(\zeta_{\rho_0}) ([W]) =\zeta_{\rho_0}(\kappa_{\alpha, \beta})= \frac{\zeta_{\rho_0}(\alpha, \beta)}{\zeta_{\rho_0}(\beta, \alpha)} = -1.$$ Thus $|L(\alpha,\beta,q)| = 8(1 -1)=0$. \qed \noindent {\bf Remark.} It is possible to obtain $|L(\alpha,\beta,q)|= 24$ as a combination $ 8(1+1+1)$ in Formula (\ref{formula_Q_8}). Indeed, set $G = \FAT{Z}_2(\alpha) \times \FAT{Z}_2(\beta)$ and $G' = Q_{16}\times \FAT{Z}_2(z)$. The map $q:G'\to G$ is given by $q(a)=\alpha$, $q(b)=1$, and $q(z)=\beta$. As in Case $n=8$ above, $\omega(G')$ lies in exactly one of the stabilizers $\{{\mathcal A}_{\chi_i}\}_{i=1}^{3}$. We choose $\widetilde{\alpha} = a$ and $\widetilde{\beta} = z$. Since conjugation by $\widetilde{\beta}=z$ is the identity on $Q_8={\rm Ker}\, q$, we have $\omega^*(\eta_{\rho_0})(\kappa_{\widetilde \alpha, \widetilde \beta})= 1$. Therefore $|L(\alpha,\beta,q)|=8(1+1+1) $. {\it Proof of Theorem \ref{Theorem_Q_8(2+)}.} Consider the short exact sequence (\ref{op}). As we know, $L(\alpha, \beta, q)=\emptyset$. We claim that the induced homomorphism $q_* : H_2(G'{}) \rightarrow H_2(G{})$ is an epimorphism so that $\kappa_{\alpha, \beta}\in {\rm Im} (q_*)$. Set $\Gamma=Q_8$ and consider the exact sequence (see \cite{Br}) $$H_2(G') \stackrel{q_*} \rightarrow H_2(G) \rightarrow \Gamma/[\Gamma,G'] \rightarrow H_1(G') \rightarrow H_1(G) \rightarrow 1\, .$$ The group $ \Gamma/[\Gamma,G']$ is trivial because $[\Gamma,\Gamma] = \{ \pm 1\}$, and, as we saw, there is an element of $G'$ whose action on $\Gamma$ cyclically permutes the non-trivial cosets of $[\Gamma,\Gamma]$. Therefore $q_*$ is an epimorphism.\qed \section{Some other non-abelian groups} In this section we state analogues of Theorem \ref{Theorem_Q_8(1)} for the symmetric group $S_3$, the dihedral group of order eight $ D_4 $, the alternating group of order twelve $A_4$, and the extra-special 2-groups. \noindent {\bf 1.} Let $1 \rightarrow S_3 \rightarrow G' \stackrel{q}\rightarrow G \rightarrow 1$ be a short exact sequence of groups. 
For any commuting elements $\alpha, \beta$ of $G$, we have $|L(\alpha, \beta, q)|=18$. Indeed, the group $S_3$ has two 1-dimensional representations $\chi_0$ and $\chi_1$, and one 2-dimensional irreducible representation $\rho_0$. Clearly, $G$ stabilizes them all. Formula (\ref{Turaev1-}) gives $$ |L(\alpha, \beta, q)| = |S_3|\, ( 1 + g^*(\zeta_{\chi_1}) ([W])\, + g^*(\zeta_{\rho_0}) ([W]) ). $$ Since $(S_3)_{ab} \cong \FAT{Z}_2 $, the values of $\chi_1$ are $\pm 1$. Hence $g^*(\zeta_{\chi_1}) ([W]) = \pm 1$, and, since $g^*(\zeta_{\rho_0}) ([W])$ is a root of unity, $|L(\alpha, \beta, q)|\neq 0$. We can therefore apply Formula (\ref{Turaev 2}). Since ${\mathop{\rm Aut}\nolimits}(S_3)=S_3$ and $H^2(S_3; \FAT{C}^\times) = 1$, the cohomology class $\eta_\rho$ arising from any irreducible representation $\rho$ of $S_3$ is trivial. Hence $|L(\alpha, \beta, q)|=6(1+1+1)=18$. \noindent {\bf 2.} Let $1 \rightarrow D_4 \rightarrow G' \stackrel{q}\rightarrow G \rightarrow 1$ be a short exact sequence of groups and $\alpha, \beta $ be commuting elements of $G$. Then $|L(\alpha, \beta, q)|=24$ or 40. Indeed, $\mathop{\rm Aut}\nolimits (D_4)=\FAT{Z}_2 \times \FAT{Z}_2$, and all automorphisms of $D_4$ are inner. Hence the action of $G$ on $\mathop{\rm Irr}\nolimits (D_4)$ is trivial. The group $D_4$ has four 1-dimensional representations $\{\chi_i\}_{i=0}^3$ and one 2-dimensional irreducible representation $\rho_0$. By (\ref{Turaev1-}), $$|L(\alpha, \beta, q)| = |D_4|\, ( 1 + \sum_{i=1,2,3} \, g^*(\zeta_{\chi_i}) ([W])\, + \, g^*(\zeta_{\rho_0}) ([W]) ),$$ where $g^*(\zeta_{\chi_i}) ([W])=\pm 1$ and $g^*(\zeta_{\rho_0})$ is a root of unity. Consequently, $L(\alpha, \beta, q) \neq \emptyset$ and we can apply Formula (\ref{Turaev 2}), where ${\omega}^*(\eta_{\chi_i}) (\Delta)=1$ for all $i=1,2,3,$ and ${\omega}^*(\eta_{\rho_0}) (\Delta)=\pm 1$. Therefore $|L(\alpha, \beta, q)|=8(4 \pm 1)=24$ or 40. \noindent {\bf 3.} Let $1 \rightarrow A_4 \rightarrow G' \stackrel{q}\rightarrow G \rightarrow 1$ be a short exact sequence of groups and $\alpha, \beta $ be commuting elements of $G$. Then $\vert L(\alpha, \beta, q)\vert=0,24,$ or 48. Indeed, the group $A_4$ has three 1-dimensional representations and a 3-dimensional irreducible representation $\rho_0$. Clearly, if a subgroup of ${\mathop{\rm Aut}\nolimits}(A_4)=S_4$ stabilizes a non-trivial 1-dimensional representation of $A_4$, then it stabilizes all of them. If $\vert L(\alpha, \beta, q)\vert \neq 0$, then by (\ref{Turaev 2}), we have $|L(\alpha, \beta, q)|=12(1+ \left\{ \begin{array}{l} 0 \\ 2 \end{array} \right\} \pm 1) $ so that $|L(\alpha, \beta, q)|=24 $ or 48. Theorem \ref{Theorem_Q_8(1)} and Example 2 above in this section can be generalized as follows. Recall that a finite group $\Gamma$ is extra-special if its order is a power of a prime integer $p$, its center $Z(\Gamma)$ is isomorphic to $\FAT{Z}_p$, and the quotient $\Gamma/Z(\Gamma)$ is a direct product of several copies of $\FAT{Z}_p$. An extra-special $p$-group $\Gamma$ satisfies $[\Gamma, \Gamma]=Z(\Gamma)$ and $\vert \Gamma\vert=p^{2r+1}$ for some $r$, see \cite[Theorem 5.2]{G}. For example, both $Q_8$ and $D_4$ are extra-special 2-groups. \noindent {\bf 4.} Let $\Gamma$ be an extra-special 2-group of order $2^{2r+1}$ with $r\geq 1$. Then $\Gamma$ has $| \, \Gamma/[\Gamma,\Gamma] \, | = 2^{2r}$ one-dimensional representations. Since $Z(\Gamma)=\FAT{Z}_2$, the group $\Gamma$ has two conjugacy classes of size $1$. All other conjugacy classes in $\Gamma$ have exactly two elements. 
Indeed, if $Z(\Gamma) = \{1,\sigma\} $ with $\sigma\in \Gamma$, then the conjugacy class of $\gamma \in \Gamma- Z(\Gamma)$ is equal to $ \{\gamma, \sigma\gamma \}$. It follows that $\Gamma$ has $2^{2r}+1$ conjugacy classes. Hence $\Gamma$ has a unique irreducible representation $\rho_0$ of dimension greater than 1. The equality $\sum_{\rho\in \mathop{\rm Irr}\nolimits\Gamma} (\dim \rho)^2 =|\Gamma|$ implies that $\dim \rho_0=2^r$. Let $1 \rightarrow \Gamma \rightarrow G' \stackrel{q}\rightarrow G \rightarrow 1$ be a short exact sequence of groups and $\alpha, \beta $ be commuting elements of $G$. Formula (\ref{Turaev 2}) implies that either $\vert L(\alpha, \beta, q)\vert = 0$ or $$|L(\alpha, \beta, q)| = 2^{2r+1}(1+ N \pm 1) $$ where $N$ is the number of non-trivial 1-dimensional representations of $\Gamma$ stable under the action of $G$ by conjugation. \end{document}
arXiv
Current (mathematics) In mathematics, more particularly in functional analysis, differential topology, and geometric measure theory, a k-current in the sense of Georges de Rham is a functional on the space of compactly supported differential k-forms on a smooth manifold M. Currents formally behave like Schwartz distributions on a space of differential forms, but in a geometric setting, they can represent integration over a submanifold, generalizing the Dirac delta function, or more generally even directional derivatives of delta functions (multipoles) spread out along subsets of M. Definition Let $\Omega _{c}^{m}(M)$ denote the space of smooth m-forms with compact support on a smooth manifold $M.$ A current is a linear functional on $\Omega _{c}^{m}(M)$ which is continuous in the sense of distributions. Thus a linear functional $T:\Omega _{c}^{m}(M)\to \mathbb {R} $ is an m-dimensional current if it is continuous in the following sense: If a sequence $\omega _{k}$ of smooth forms, all supported in the same compact set, is such that all derivatives of all their coefficients tend uniformly to 0 when $k$ tends to infinity, then $T(\omega _{k})$ tends to 0. The space ${\mathcal {D}}_{m}(M)$ of m-dimensional currents on $M$ is a real vector space with operations defined by $(T+S)(\omega ):=T(\omega )+S(\omega ),\qquad (\lambda T)(\omega ):=\lambda T(\omega ).$ Much of the theory of distributions carries over to currents with minimal adjustments. For example, one may define the support of a current $T\in {\mathcal {D}}_{m}(M)$ as the complement of the biggest open set $U\subset M$ such that $T(\omega )=0$ whenever $\omega \in \Omega _{c}^{m}(U).$ The linear subspace of ${\mathcal {D}}_{m}(M)$ consisting of currents with support (in the sense above) that is a compact subset of $M$ is denoted ${\mathcal {E}}_{m}(M).$ Homological theory Integration over a compact rectifiable oriented submanifold M (with boundary) of dimension m defines an m-current, denoted by $[[M]]$: $[[M]](\omega )=\int _{M}\omega .$ If the boundary ∂M of M is rectifiable, then it too defines a current by integration, and by virtue of Stokes' theorem one has: $[[\partial M]](\omega )=\int _{\partial M}\omega =\int _{M}d\omega =[[M]](d\omega ).$ This relates the exterior derivative d with the boundary operator ∂ on the homology of M. In view of this formula we can define a boundary operator on arbitrary currents $\partial :{\mathcal {D}}_{m+1}\to {\mathcal {D}}_{m}$ via duality with the exterior derivative by $(\partial T)(\omega ):=T(d\omega )$ for all compactly supported m-forms $\omega .$ Certain subclasses of currents which are closed under $\partial $ can be used instead of all currents to create a homology theory, which can satisfy the Eilenberg–Steenrod axioms in certain cases. A classical example is the subclass of integral currents on Lipschitz neighborhood retracts. Topology and norms The space of currents is naturally endowed with the weak-* topology, which will be further simply called weak convergence. A sequence $T_{k}$ of currents converges to a current $T$ if $T_{k}(\omega )\to T(\omega ),\qquad \forall \omega .$ It is possible to define several norms on subspaces of the space of all currents. One such norm is the mass norm. 
If $\omega $ is an m-form, then define its comass by $\|\omega \|:=\sup\{\left|\langle \omega ,\xi \rangle \right|:\xi {\mbox{ is a unit, simple, }}m{\mbox{-vector}}\}.$ So if $\omega $ is a simple m-form, then its mass norm is the usual L∞-norm of its coefficient. The mass of a current $T$ is then defined as $\mathbf {M} (T):=\sup\{T(\omega ):\sup _{x}|\vert \omega (x)|\vert \leq 1\}.$ The mass of a current represents the weighted area of the generalized surface. A current such that M(T) < ∞ is representable by integration of a regular Borel measure by a version of the Riesz representation theorem. This is the starting point of homological integration. An intermediate norm is Whitney's flat norm, defined by $\mathbf {F} (T):=\inf\{\mathbf {M} (T-\partial A)+\mathbf {M} (A):A\in {\mathcal {E}}_{m+1}\}.$ Two currents are close in the mass norm if they coincide away from a small part. On the other hand, they are close in the flat norm if they coincide up to a small deformation. Examples Recall that $\Omega _{c}^{0}(\mathbb {R} ^{n})\equiv C_{c}^{\infty }(\mathbb {R} ^{n})$ so that the following defines a 0-current: $T(f)=f(0).$ In particular every signed regular measure $\mu $ is a 0-current: $T(f)=\int f(x)\,d\mu (x).$ Let (x, y, z) be the coordinates in $\mathbb {R} ^{3}.$ Then the following defines a 2-current (one of many): $T(a\,dx\wedge dy+b\,dy\wedge dz+c\,dx\wedge dz):=\int _{0}^{1}\int _{0}^{1}b(x,y,0)\,dx\,dy.$ See also • Georges de Rham • Herbert Federer • Differential geometry • Varifold Notes References • de Rham, Georges (1984). Differentiable manifolds. Forms, currents, harmonic forms. Grundlehren der mathematischen Wissenschaften. Vol. 266. Translated by Smith, F. R. With an introduction by S. S. Chern. (Translation of 1955 French original ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-642-61752-2. ISBN 3-540-13463-8. MR 0760450. Zbl 0534.58003. • Federer, Herbert (1969). Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften. Vol. 153. Berlin–Heidelberg–New York: Springer-Verlag. doi:10.1007/978-3-642-62010-2. ISBN 978-3-540-60656-7. MR 0257325. Zbl 0176.00801. • Griffiths, Phillip; Harris, Joseph (1978). Principles of algebraic geometry. Pure and Applied Mathematics. New York: John Wiley & Sons. doi:10.1002/9781118032527. ISBN 0-471-32792-1. MR 0507725. Zbl 0408.14001. • Simon, Leon (1983). Lectures on geometric measure theory. Proceedings of the Centre for Mathematical Analysis. Vol. 3. Canberra: Centre for Mathematical Analysis at Australian National University. ISBN 0-86784-429-9. MR 0756417. Zbl 0546.49019. • Whitney, Hassler (1957). Geometric integration theory. Princeton Mathematical Series. Vol. 21. Princeton, NJ and London: Princeton University Press and Oxford University Press. doi:10.1515/9781400877577. ISBN 9780691652900. MR 0087148. Zbl 0083.28204.. • Lin, Fanghua; Yang, Xiaoping (2003), Geometric Measure Theory: An Introduction, Advanced Mathematics (Beijing/Boston), vol. 1, Beijing/Boston: Science Press/International Press, pp. x+237, ISBN 978-1-57146-125-4, MR 2030862, Zbl 1074.49011 This article incorporates material from Current on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia
\begin{document} \title[]{Real class sizes} \author[H. P. Tong-Viet]{Hung P. Tong-Viet} \address{Department of Mathematical Sciences, Binghamton University, Binghamton, NY 13902-6000, USA} \email{[email protected]} \subjclass[2010]{Primary 20E45; Secondary 20D10} \date{\today} \keywords{Real conjugacy classes, real class sizes, prime graphs} \begin{abstract} In this paper, we study the structures of finite groups using some arithmetic conditions on the sizes of real conjugacy classes. We prove that a finite group is solvable if the prime graph on the real class sizes of the group is disconnected. Moreover, we show that if the sizes of all non-central real conjugacy classes of a finite group $G$ have the same $2$-part and the Sylow $2$-subgroup of $G$ satisfies a certain condition, then $G$ is solvable. \end{abstract} \maketitle \section{Introduction} Let $G$ be a finite group. An element $x\in G$ is said to be real if there exists an element $g\in G$ such that $x^g=x^{-1}$. We denote by $\Real(G)$ the set of all real elements of $G$. A conjugacy class $x^G$ containing $x\in G$ is said to be real if $x$ is a real element of $G$ or equivalently $x^G=(x^{-1})^G$. The size of a real conjugacy class is called a real class size. Several arithmetic properties of the real class sizes can be conveniently stated using graph theoretic language. The prime graph on the real class sizes of a finite group $G$, denoted by $\Delta^*(G)$, is a simple graph with vertex set $\rho^*(G)$, the set of primes dividing the size of some real conjugacy class of $G$, and there is an edge between two vertices $p$ and $q$ if and only if the product $pq$ divides some real class size. Now a prime $p$ is not a vertex of $\Delta^*(G)$, that is, $p\not\in\rho^*(G)$, if and only if $p$ divides no real class size of $G$. In \cite{DNT}, the authors show that $2$ is not a vertex of $\Delta^*(G)$ if and only if $G$ has a normal Sylow $2$-subgroup $S$ (i.e., $G$ is $2$-closed) and $\Real(S)\subseteq\mathbf{Z}(S)$. For odd primes, a similar result is not that satisfactory. Combining results in \cite{GNT}, \cite{IN} and \cite{Tiep}, we can show that if an odd prime $p$ is not a vertex of $\Delta^*(G)$, and if in addition $\SL_3(2)$ is not a composition factor of $G$ when $p=3$, then $\mathbf{O}^{2'}(G)$ has a normal Sylow $p$-subgroup and $\mathbf{O}^{p'}(G)$ is solvable, in particular, $G$ is $p$-solvable (see Lemma \ref{lem: Ito-Michler theorem for conjugacy classes}). The proofs of the aforementioned results, especially for odd primes, are quite involved and depend heavily on the classification of finite simple groups. This is in contrast to the similar result for all conjugacy classes, that is, if a prime $p$ does not divide the size of any conjugacy class of $G$, then $G$ has a normal central Sylow $p$-subgroup. The proof of this classical result is just an application of Jordan's theorem on the existence of derangements in finite permutation groups. It is proved in \cite{DNT} that $\Delta^*(G)$ has at most two connected components. In our first result, we will show that if $\Delta^*(G)$ is disconnected, then $G$ is solvable. \begin{thmA} Let $G$ be a finite group. If the prime graph on the real class sizes of $G$ is disconnected, then $G$ is solvable. \end{thmA} We next study in more detail the real class sizes of finite groups with disconnected prime graph on real class sizes. \begin{thmB} Let $G$ be a finite group. Suppose that $\Delta^*(G)$ is disconnected. 
Then $2$ divides some real class size and one of the following holds. \begin{enumerate} \item[$(1)$] $G$ has a normal Sylow $2$-subgroup. \item[$(2)$] $\Delta^*(\mathbf{O}^{2'}(G))$ is disconnected and the real class sizes of $\mathbf{O}^{2'}(G)$ are either odd or powers of $2$. \end{enumerate} \end{thmB} It follows from Theorem B that if the prime graph $\Delta^*(G)$ of a finite group $G$ is disconnected, then $2$ must be a vertex of $\Delta^*(G)$. This confirms once again the importance of the prime $2$ in the study of real conjugacy classes of finite groups. In the second conclusion of Theorem B, both connected components of $\Delta^*(\mathbf{O}^{2'}(G))$ are complete and one of the components of this graph contains the prime $2$ only. (See Theorem \ref{th: 2 powers or odd}). We should mention that it was proved in \cite{DPS} that a finite group all of whose real class sizes are either odd or powers of $2$ is solvable. However, in the proof of (2), we will need the solvability from Theorem A. So, if one can prove part (2) of Theorem B without using the solvability of the group, then we would have another proof of Theorem A. In \cite{DPS}, it is shown that if all non-central real conjugacy classes of a group have prime sizes, then the group has a normal Sylow $2$-subgroup or a normal $2$-complement. The next example shows that this is not the case if we only assume that all real class sizes are prime powers. \begin{ex} Let $G={\mathrm {Alt}}_4:C_4$ be a solvable group of order $48$. We have $G=\mathbf{O}^{2'}(G)$, $G/\mathbf{Z}(G)\cong\Sym_4$, $G/\mathbf{O}_2(G)\cong\Sym_3$ and the real class sizes of $G$ are $1,3$ or $8$. Clearly, $G$ has neither a normal Sylow $2$-subgroup nor a normal $2$-complement. \end{ex} It follows from \cite{BHM} that if the prime graph defined on all class sizes of a finite group $G$ is disconnected, then $G/\mathbf{Z}(G)$ is a Frobenius group with abelian kernel and complement. By our example above, this does not hold for the prime graph on real class sizes. In our last result, we provide further evidence for a conjecture proposed in \cite{Tong}. We will prove Conjecture C in \cite{Tong} under some condition on the Sylow $2$-subgroups. \begin{thmC} Let $G$ be a finite group. Suppose that the sizes of all non-central real conjugacy classes of $G$ have the same $2$-part. Assume further that $G$ has a Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then $G$ is solvable and $\mathbf{O}^{2'}(G)$ has a normal $2$-complement. \end{thmC} A conjecture due to G. Navarro states that a finite group $G$ is solvable if $G$ has at most two real class sizes. Clearly, our Theorem C implies this conjecture under the additional assumption that $\Real(S)\subseteq\mathbf{Z}(S)$ for some Sylow $2$-subgroup $S$ of $G$. Finite $2$-groups $S$ with $\Real(S)\subseteq\mathbf{Z}(S)$ have been studied by Chillag and Mann \cite{CM}. These are exactly the finite $2$-groups $S$ for which if $x,y\in S$ and $x^2=y^2$, then $x\mathbf{Z}(S)=y\mathbf{Z}(S)$. \section{Real conjugacy classes} Our notation is more or less standard. If $n$ is a positive integer, then $\pi(n)$ is the set of prime divisors of $n$. If $\pi(n)\subseteq \sigma$ for some set of primes $\sigma$, then $n$ is said to be a $\sigma$-number. If $n>1$ is an integer and $p$ is a prime, then the $p$-part of $n$, denoted by $n_p$, is the largest power of $p$ dividing $n$. Recall that $\Real(G)$ is the set of all real elements of $G$. We collect some properties of real elements and real class sizes in the following lemma. 
\begin{lem}\label{lem: real elements} Let $G$ be a finite group and let $N\unlhd G$. \begin{enumerate} \item[$(1)$] If $x\in \Real(G)$, then every power of $x$ is also real. \item[$(2)$] If $x\in \Real(G)$, then $x^t=x^{-1}$ for some $2$-element $t\in G$. \item[$(3)$] If $x\in \Real(G)$ and $|x^G|$ is odd, then $x^2=1.$ \item[$(4)$] If $x,y\in \Real(G)$, $xy=yx$ and $(|x^G|,|y^G|)=1$, then $xy\in \Real(G)$. Furthermore, if $(o(x),o(y))=1$, then $\pi(|x^G|)\cup\pi(|y^G|)\subseteq \pi(|(xy)^G|)$. \item[$(5)$] If $|G:N|$ is odd, then $\Real(G)=\Real(N)$. \item[$(6)$] Suppose that $Nx$ is a real element in $G/N$. If $|N|$ or the order of $Nx$ in $G/N$ is odd, then $Nx=Ny$ for some real element $y\in G$ (of odd order if the order of $Nx$ is odd). \end{enumerate} \end{lem} \begin{proof} Let $x\in G$ be a real element. Then $x^g=x^{-1}$ for some $g\in G$. If $k$ is any integer, then $(x^k)^g=(x^g)^k=(x^{-1})^k=(x^{k})^{-1}$ so $x^k$ is real, which proves (1). Write $o(g)=2^am$ with $(2,m)=1$ and let $t=g^m$. Then $t$ is a $2$-element and $x^t=x^{g^m}=x^g=x^{-1}$ as $g^2\in\mathbf{C}_G(x)$ and $m$ is odd. This proves (2). Parts (3)--(5) are in Lemma~6.3 of \cite{DNT}. Finally, (6) is Lemma 2.2 in \cite{GNT}. \end{proof} Fix $1\neq x\in \Real(G)$ and set $\mathbf{C}^*_G(x)=\{g\in G\:|\:x^g\in \{x,x^{-1}\}\}.$ Then $\mathbf{C}^*_G(x)$ is a subgroup of $G$ containing $\mathbf{C}_G(x)$. If $g\in G$ is such that $x^g=x^{-1}$, then $x^{g^2}=x$ so $g^2\in\mathbf{C}_G(x)$. Assume that $x$ is not an involution. We see that $g\in\mathbf{C}_G^*(x)\setminus \mathbf{C}_G(x)$ and if $h\in \mathbf{C}_G^*(x)\setminus \mathbf{C}_G(x)$, then $x^h=x^{-1}=x^g$ so that $hg^{-1}\in\mathbf{C}_G(x)$ or equivalently $h\in\mathbf{C}_G(x)g$. Thus $\mathbf{C}_G(x)$ has index $2$ in $\mathbf{C}_G^*(x)$ and hence $|x^G|$ is even. In particular, this is the case if $x$ is a nontrivial real element of odd order. The next lemma is well-known. We will use this lemma freely without further reference. \begin{lem}\label{lem:div} Let $G$ be a finite group and let $N\unlhd G$. Then \begin{enumerate} \item[$(1)$] If $x\in N$, then $|x^N|$ divides $|x^G|$. \item[$(2)$] If $Nx\in G/N$, then $|(Nx)^{G/N}|$ divides $|x^G|$. \end{enumerate} \end{lem} The following lemma shows that a finite group $G$ has no nontrivial real element of odd order if and only if $G$ has a normal Sylow $2$-subgroup. \begin{lem}\label{lem:even order real} \emph{(\cite[Proposition 6.4]{DNT}).} The following are equivalent: \begin{enumerate} \item[$(1)$] Every nontrivial element in $\Real(G)$ has even order. \item[$(2)$] Every element in $\Real(G)$ is a $2$-element. \item[$(3)$] $G$ has a normal Sylow $2$-subgroup. \end{enumerate} \end{lem} The next lemma determines the number of connected components of the prime graphs on real class sizes. \begin{lem}\label{lem:real prime graph components}\emph{(\cite[Theorem ~6.2]{DNT}).} For any finite group $G$, $\Delta^*(G)$ has at most two connected components. \end{lem} If a finite group $G$ is of even order, then it has a real element of order $2$. For an odd prime $p$ dividing $|G|$, $G$ may not have a real element of order $p$. However, if $G$ has no proper normal subgroup of odd index and $G$ is $p$-solvable, Dolfi, Malle and Navarro \cite{DMN} show that $G$ must contain a real element of order $p$. \begin{lem}\label{lem:real element of order p}\emph{(\cite[Corollary~B]{DMN}).} Let $G$ be a finite group with $\mathbf{O}^{2'}(G)=G$. Suppose that $p$ is an odd prime dividing $|G|$. 
If $G$ is $p$-solvable, then $G$ has a real element of order $p$. \end{lem} In the next two lemmas, we state the It\^{o}-Michler theorem for real conjugacy classes. \begin{lem}\label{lem:odd size}\emph{(\cite[Theorem~6.1]{DNT}).} Let $G$ be a finite group and let $P$ be a Sylow $2$-subgroup of $G$. Then all real classes of $G$ have odd size if and only if $P\unlhd G$ and $\Real(P)\subseteq \mathbf{Z}(P)$. \end{lem} \begin{lem}\label{lem: Ito-Michler theorem for conjugacy classes} Let $G$ be a finite group and $p$ be an odd prime. If $p=3$, assume in addition that $G$ has no composition factor isomorphic to $\SL_3(2)$. If $p$ does not divide $|x^G|$ for all real elements of $G$, then $G$ is $p$-solvable and $\mathbf{O}^{p'}(G)$ is solvable. Furthermore, $\mathbf{O}^{2'}(G)$ has a normal Sylow $p$-subgroup $P$ and $P'\leq \mathbf{Z}(\mathbf{O}^{2'}(G))$. \end{lem} \begin{proof} The first claim is Theorem A in \cite{GNT}. Now, Theorem B in that reference implies that $p$ does not divide $\chi(1)$ for all real-valued irreducible characters $\chi\in\Irr(G)$. By Theorem A in \cite{Tiep}, we know that $\mathbf{O}^{p'}(G)$ is solvable. Finally, the last statement follows from Theorem A in \cite{IN}. \end{proof} \iffalse The following is an opposite of the situation in Lemma \ref{lem:odd size}. \begin{lem}\label{lem:2 power}\emph{(\cite[Theorem~3.4]{NST}).} Let $G$ be a finite group. Then all real class sizes of $G$ are $2$-powers if and only if $G$ has a normal $2$-complement $K$ and $\mathbf{C}_G(K)$ contains all the real elements of $G$. \end{lem} \fi \section{Proofs of Theorems A and B} Let $G$ be a finite group. Suppose that $\Delta^*(G)$ is disconnected. Then $\Delta^*(G)$ has exactly two connected components by Lemma \ref{lem:real prime graph components}. The following will be used frequently in our proofs. \begin{lem}\label{lem:2-closed} Let $G$ be a finite group and suppose that $\Delta^*(G)$ has two connected components with vertex sets $\pi_1$ and $\pi_2$, where $2\not\in\pi_2$. Then there exists an involution $i\in G$ such that $|i^G|>1$ is a $\pi_2$-number and $\mathbf{C}_G(i)$ has a normal Sylow $2$-subgroup. \end{lem} \begin{proof} Let $p$ be a prime in $\pi_2$. Then $p$ must divide $|i^G|$ for some nontrivial real element $i\in G$. Clearly, every prime divisor of $|i^G|$ is adjacent to $p\in\pi_2$, this implies that $|i^G|$ is a nontrivial $\pi_2$-number. Hence $|i^G|>1$ is odd and so $i^2=1$ by Lemma \ref{lem: real elements}(3) and thus $i$ is an involution of $G$. Assume that $\mathbf{C}_G(i)$ has a nontrivial real element $x$ of odd order. Then $xi=ix$ and $|x^G|$ is even so $|x^G|$ is a $\pi_1$-number and thus $(|x^G|,|i^G|)=1$. Lemma \ref{lem: real elements}(4) implies that $xi$ is a real element. Furthermore, since $(o(x),o(i))=1$, $2p$ divides $|(ix)^G|$ by Lemma \ref{lem: real elements}(4) again. This means that $2\in \pi_1$ and $p\in\pi_2$ are adjacent in $\Delta^*(G)$, which is impossible. Therefore, $\mathbf{C}_G(i)$ has no nontrivial real element of odd order and thus it has a normal Sylow $2$-subgroup by Lemma \ref{lem:even order real}. \end{proof} Notice that if $N\unlhd G$, then $\Delta^*(N)$ is a subgraph of $\Delta^*(G)$ by Lemma \ref{lem:div}(1) and the fact that $\Real(N)\subseteq \Real(G)$. However, in general, it is not true that $\Delta^*(G/N)$ is a subgraph of $\Delta^*(G)$. The involutions in $G/N$ might produce extra vertices as well as edges in $\Delta^*(G/N)$. However, this is the case if $|N|$ is odd. 
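For example, let $G=\langle a,b\:|\: a^6=1,\, b^2=a^3,\, bab^{-1}=a^{-1}\rangle$ be the dicyclic group of order $12$ and let $N=\mathbf{Z}(G)=\langle a^3\rangle$, a subgroup of order $2$. The real elements of $G$ are exactly the elements of $\langle a\rangle$, so every real class of $G$ has size $1$ or $2$ and $\rho^*(G)=\{2\}$, whereas $G/N\cong\Sym_3$ has real class sizes $1$, $2$ and $3$; thus $3$ is a vertex of $\Delta^*(G/N)$ but not of $\Delta^*(G)$, and $\Delta^*(G/N)$ is not a subgraph of $\Delta^*(G)$. 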
\begin{lem}\label{lem:subgraph} Let $G$ be a finite group and let $N\unlhd G$ with $|N|$ odd. Then $\Delta^*(G/N)$ is a subgraph of $\Delta^*(G)$. \end{lem} \begin{proof} We first show that $\rho^*(G/N)\subseteq\rho^*(G)$. Indeed, let $p\in\rho^*(G/N)$ and let $Nx\in G/N$ be a real element such that $p$ divides $|(Nx)^{G/N}|$. By Lemma \ref{lem: real elements}(6), there exists a real element $y\in G$ such that $Nx=Ny$. Since $|(Nx)^{G/N}|=|(Ny)^{G/N}|$ divides $|y^G|$, $p$ divides $|y^G|$, so $p\in\rho^*(G)$. With a similar argument, we can show that if $p\neq q\in\rho^*(G/N)$ which are adjacent in $\Delta^*(G/N)$, then $p,q$ are adjacent in $\Delta^*(G)$ by using Lemma \ref{lem: real elements}(6) again. Thus $\Delta^*(G/N)$ is a subgraph of $\Delta^*(G)$. \end{proof} Let $X$ be a subgroup or a quotient of a finite group $G$ and suppose that $\Delta^*(X)$ is a subgraph of $\Delta^*(G)$. Assume that $\Delta^*(G)$ is disconnected having two connected components with vertex sets $\pi_1$ and $\pi_2$, respectively. To show that $\Delta^*(X)$ is disconnected, it suffices to show that $\rho^*(X)\cap\pi_i\neq\emptyset$ for $i=1,2$ or equivalently $X$ has two real elements $u_i,i=1,2$ which both lift to real elements of $G$ and $\pi(|u_i^X|)\cap\pi_i\neq\emptyset$ for $i=1,2$. For a finite group $G$ and a prime $p,$ $G$ is said to be $p$-closed if it has a normal Sylow $p$-subgroup and it is $p$-nilpotent if it has a normal $p$-complement. \begin{prop}\label{prop:normal and quotient} Let $G$ be a finite group. Suppose that $\Delta^*(G)$ has two connected components with vertex sets $\pi_1$ and $\pi_2$ where $2\not\in\pi_2$. Then \begin{enumerate} \item[$(1)$] If $G$ is not $2$-closed and $M\unlhd G$ with $|G:M|$ odd, then $\Delta^*(M)$ is disconnected. \item[$(2)$] If $N\unlhd G$ with $|N|$ odd and assume further that $G=\mathbf{O}^{2'}(G)$ is not $2$-nilpotent, then $\Delta^*(G/N)$ is disconnected. \end{enumerate} \end{prop} \begin{proof} By Lemma \ref{lem:2-closed}, $G$ has an involution $i$ such that $|i^G|>1$ is a $\pi_2$-number and $\mathbf{C}_G(i)$ has a normal Sylow $2$-subgroup $S$. For (1), let $M\unlhd G$ with $|G:M|$ being odd. Notice that $\rho^*(M)\subseteq\rho^*(G)=\pi_1\cup\pi_2$ and that if $M$ is $2$-closed, then $G$ is also $2$-closed. Thus we assume that $M$ is not $2$-closed. By Lemma \ref{lem:odd size}, $2$ divides some real class size of $M$ and hence $2\in\rho^*(M)\cap\pi_1$. Now $M$ contains every real element of $G$ by Lemma \ref{lem: real elements}(5), in particular, $i\in M$. Moreover, $\Delta^*(M)$ is a subgraph of $\Delta^*(G)$. If $M\leq \mathbf{C}_G(i)$, then $S\unlhd M$ as $|G:M|$ is odd, so $M$ is $2$-closed, a contradiction. Thus, $|i^M|>1$ and hence $\rho^*(M)\cap\pi_2\neq\emptyset$. Therefore, $\Delta^*(M)$ is disconnected as $\rho^*(M)\cap \pi_i$ is non-empty for each $i=1,2$. For (2), suppose that $N\unlhd G$ with $|N|$ odd. By Lemma \ref{lem:subgraph}, $\Delta^*(G/N)$ is a subgraph of $\Delta^*(G)$. In particular, $\rho^*(G/N)\subseteq \rho^*(G)=\pi_1\cup \pi_2$. If $G/N$ is $2$-closed, then $SN\unlhd G$ is of odd index and thus $G=SN$ since $G=\mathbf{O}^{2'}(G)$. However, this would imply that $G$ is $2$-nilpotent with a normal $2$-complement $N$. Therefore, we assume that $G/N$ is not $2$-closed. Clearly, $Ni\in G/N$ is an involution. If $Ni$ is central in $G/N$, then $G/N=\mathbf{C}_{G/N}(Ni)=\mathbf{C}_G(i)N/N,$ where the latter equality follows from \cite[Lemma 7.7]{Isaacs-1}. Hence $G/N$ has a normal Sylow $2$-subgroup $SN/N$, a contradiction. 
Thus $Ni$ is not central in $G/N$ and $|(Ni)^{G/N}|>1$, so $\rho^*(G/N)\cap\pi_2\neq\emptyset$. Finally, as $G/N$ is not $2$-closed, $2\in \rho^*(G/N)\cap\pi_1$ by applying Lemma \ref{lem:odd size} again. Therefore, $\Delta^*(G/N)$ is disconnected. \end{proof} We are now ready to prove Theorem A which we restate here. \begin{thm}\label{th:disconnected real class sizes graph} Let $G$ be a finite group. If $\Delta^*(G)$ is disconnected, then $G$ is solvable.\end{thm} \begin{proof} Let $G$ be a counterexample with minimal order. Then $G$ is non-solvable and $\Delta^*(G)$ is disconnected. By Lemma \ref{lem:real prime graph components}, $\Delta^*(G)$ has exactly two connected components with vertex sets $\pi_1$ and $\pi_2$, respectively. If $G$ has a normal Sylow $2$-subgroup, then it is clearly solvable by Feit-Thompson theorem. Therefore, we can assume that $G$ has no normal Sylow $2$-subgroup. Now it follows from Lemma \ref{lem:even order real} that $G$ has a nontrivial real element $x$ of odd order. Then $|x^G|$ is divisible by $2$ and hence $2$ is always a vertex of $\Delta^*(G)$. We assume that $2\in \pi_1$. Hence all vertices in $\pi_2$ are odd primes. (1) By Lemma \ref{lem:2-closed}, $G$ has an involution $i$ such that $|i^G|>1$ is a $\pi_2$-number and $\mathbf{C}_G(i)$ has a normal Sylow $2$-subgroup, say $S$. Clearly, $S$ is also a Sylow $2$-subgroup of $G$ as $|i^G|$ is odd. Notice that $\mathbf{C}_G(i)$ is solvable. Now $G$ has a nontrivial real element $x$ of odd order by Lemma \ref{lem:even order real}. Clearly $|x^G|$ is even so $|x^G|$ must be a $\pi_1$-number. Thus $(|x^G|,|i^G|)=1$; therefore, $G$ is not a nonabelian simple group by \cite[Theorem 2]{FA}. (2) $G=\mathbf{O}^{2'}(G)$. Since $G$ is not $2$-closed, $\Delta^*(\mathbf{O}^{2'}(G))$ is disconnected by Proposition \ref{prop:normal and quotient}(1). If $\mathbf{O}^{2'}(G)<G$, then $\mathbf{O}^{2'}(G)$ is solvable by the minimality of $|G|$, hence $G$ is solvable since $G/\mathbf{O}^{2'}(G)$ is solvable. This contradiction shows that $G=\mathbf{O}^{2'}(G)$. (3) $\mathbf{O}_{2'}(G)=1$. By (2) above, we have $G=\mathbf{O}^{2'}(G)$ and since $G$ is not solvable, $G$ is not $2$-nilpotent so that by Proposition \ref{prop:normal and quotient}(2) $\Delta^*(\overline{G})$ is disconnected, where $\overline{G}=G/\mathbf{O}_{2'}(G)$. If $\mathbf{O}_{2'}(G)$ is nontrivial, then $|\overline{G}|<|G|$ and thus by the minimality of $|G|$, $\overline{G}$ is solvable and so is $G$. Hence $\mathbf{O}_{2'}(G)=1$ as required. (4) If $M$ is a maximal normal subgroup of $G$, then $|G:M|=2$ and $G=M\langle i\rangle$. Let $M$ be a maximal normal subgroup of $G$. Then $G/M$ is a simple group. (a) Assume first that $G/M$ is abelian; then $G/M\cong C_r$ for some prime $r$. Since $\mathbf{O}^{2'}(G)=G$ by (2), we deduce that $r=2$. Clearly, $M$ is not solvable. We next claim that $i\not\in M$ and hence we have that $G=M\langle i\rangle$. Suppose by contradiction that $i\in M$. Then either $|i^M|>1$ or $M=\mathbf{C}_G(i)$. If the latter case holds, then $M$ is solvable by (1) and thus $G$ is solvable, a contradiction. Assume that $|i^M|>1$. Notice that $\Delta^*(M)$ is a subgraph of $\Delta^*(G)$. We see that $\rho^*(M)\cap \pi_2\neq\emptyset$ as every prime divisor of $|i^M|$ is in $\pi_2$ and $i\in\Real(M)$. Observe next that $M$ is not $2$-closed so it has a nontrivial real element $z$ of odd order by Lemma \ref{lem:even order real} and thus $|z^M|$ is even. In other words, $2\in\rho^*(M)$. Therefore $\rho^*(M)\cap\pi_1\neq\emptyset$. 
Hence we have shown that $\Delta^*(M)$ is disconnected. Thus by induction, $M$ is solvable, which is a contradiction. (b) Now, assume that $G/M$ is a non-abelian simple group. Set $\overline{G}=G/M$. Assume first that $i\not\in M$. Then $\overline{i}$ is an involution in $\overline{G}$ and $|\overline{i}^{\overline{G}}|$ divides $|i^G|$. Let $\overline{y}$ be a nontrivial real element of $\overline{G}$ of odd order (such an element exists by Lemma \ref{lem:even order real}). By Lemma \ref{lem: real elements}(6), $\overline{y}$ lifts to a real element $z\in G$ of odd order. Therefore $|\overline{y}^{\overline{G}}|$ divides $|z^G|$. Since $(|z^G|,|i^G|)=1$, we deduce that $(|\overline{y}^{\overline{G}}|,|\overline{i}^{\overline{G}}|)=1$, contradicting \cite[Theorem 2]{FA}. Assume that $i\in M$. If $G=M\mathbf{C}_G(i)$, then $G/M\cong \mathbf{C}_G(i)/(M\cap\mathbf{C}_G(i))$ is a non-abelian simple group, which is impossible as $\mathbf{C}_G(i)$ is solvable by (1). Thus $H:=M\mathbf{C}_G(i)<G$ and $|G:H|=|\overline{G}:\overline{H}|$ divides $|G:\mathbf{C}_G(i)|=|i^G|$. Let $\overline{y}\in \overline{G}$ be a real element of odd order and $z\in G$ be a real element of odd order such that $\overline{y}=\overline{z}$. Then $|\overline{y}^{\overline{G}}|=|\overline{z}^{\overline{G}}|$ divides $|z^G| $ so $(|\overline{G}:\mathbf{C}_{\overline{G}}(\overline{z})|,|\overline{G}:\overline{H}|)=1$ as $(|z^G|,|i^G|)=1$. Therefore, $\overline{G}=\overline{H}\mathbf{C}_{\overline{G}}(\overline{z})$, where $\overline{H}$ has odd index in $\overline{G}$ and $\overline{H}$ has a normal Sylow $2$-subgroup $\overline{S}$. As every nontrivial real element of odd order of $\overline{G}$ lifts to a nontrivial real element of odd order of $G$ by Lemma \ref{lem: real elements}(6), every prime divisor $s$ of $|\overline{G}:\overline{H}|$ (which lies in $\pi_2$) divides the size of no nontrivial real element of odd order of $\overline{G}$, so $\overline{G}\cong \PSL_2(q)$ with $q=2^r-1$ a Mersenne prime and $s\mid (q-1)/2$ or $\overline{G}\cong\textrm{M}_{23}$ with $s=5$ by \cite[Theorem 4.1]{GNT}. If the latter case holds, then $|\overline{G}:\overline{H}|$ must be a power of $5$. Inspecting \cite{Atlas} shows that this is not the case. Thus $\overline{G}\cong \PSL_2(q)$ with $q=2^r-1$ a Mersenne prime. It follows that $r\ge 3$ and $q\ge 7$ as $\overline{G}$ is non-solvable. Observe that $\overline{S}$ is a Sylow $2$-subgroup of $\overline{G}$ of order $q+1=2^r\ge 8$ and so $\overline{S}\cong \textrm{D}_{q+1}$, which is a maximal subgroup of $\overline{G}$ unless $q=7$. Suppose first that $q>7$. It implies that $\overline{H}=\overline{S}$ so that $|\overline{G}:\overline{H}|=q(q-1)/2$. In particular, $q$ divides $|\overline{G}:\overline{H}|$ and thus $q\mid (q-1)/2$ by the claim above, which is impossible. Now assume that $q=7$. Notice that the Sylow $2$-subgroup of $\overline{G}\cong\PSL_2(7)$ is self-normalizing but not maximal. Since $\overline{S}\unlhd \overline{H}$, we must have that $\overline{H}=\overline{S}$ and we will get a contradiction as above. (5) $G=G'\langle i\rangle$ and $|G:G'|=2$. Since $G$ is not non-abelian simple, $G$ possesses a maximal normal subgroup $W$. It follows from (4) that $G=W\langle i\rangle$ and $|G:W|=2$. It also follows from (4) that $i$ does not lie in any maximal normal subgroup of $G$ so that $G=\langle i^G\rangle.$ Clearly $G'\leq W$ as $G/W\cong C_2$ is abelian. 
Moreover, $G=\langle i^G\rangle \leq G'\langle i\rangle\leq W\langle i\rangle=G.$ It follows that $W=G'$ and the claim follows. (6) $\mathbf{O}^{2'}(G')$ is a $\pi_1$-group. Since $|G:G'|=2$ and $G$ is non-solvable, $G'$ is non-solvable and thus $\Delta^*(G')$ is a connected subgraph of $\Delta^*(G)$ with $2\in\rho^*(G')\subseteq \pi_1$. Observe that $\pi(G)=\pi(G')$. Let $\sigma=\pi(G')\setminus\pi_1$. Now, if $q\in \sigma$, then $q$ is odd and divides the size of no nontrivial real conjugacy classes of $G'$. Let $K=\mathbf{O}^{2'}(G')$. If $q\in \sigma$ and $q>3$ or $q=3$ and $\SL_3(2)$ is not a composition factor of $G'$, then $K$ has a normal Sylow $q$-subgroup $Q$ by Lemma \ref{lem: Ito-Michler theorem for conjugacy classes}. Clearly $Q\unlhd G$ and thus $Q=1$ by (3). Hence $q$ does not divide $ |K|$. We now suppose that $q=3$ and $\SL_3(2)$ is isomorphic to a composition factor of $G'$. As $G'/K$ is solvable, $\SL_3(2)$ is isomorphic to a composition factor of $K$. Let $L$ be a subnormal subgroup of $K$ and $U\unlhd L$ such that $L/U\cong\SL_3(2)$. Using \cite{Atlas}, we see that $L/U$ has a self-normalizing Sylow $2$-subgroup $T/U$ and a real element $Uz\in L/U$ of order $3$ with $|(Uz)^{L/U}|=7\cdot 8$. There exists a real element $y\in L$ of $3$-power order with $Uz=Uy$ (see \cite[Lemma 2.6]{Tong}). Since $|(Uz)^{L/U}|$ divides $|y^L|$ and $|y^L|$ divides $|y^G|$, we see that $7\in \pi_1$. By (1), $\mathbf{C}_G(i)$ has a normal Sylow $2$-subgroup $S$ with $S\in{\mathrm {Syl}}_2(G)$. As $L$ is subnormal in $G$, $S\cap L$ is a Sylow $2$-subgroup of $L$ and thus $(S\cap L)U/U$ is a Sylow $2$-subgroup of $L/U$. Observe that $\mathbf{C}_{G}(i)$ contains a Sylow $7$-subgroup, say $Q$, of $G$. Hence $(Q\cap L)U/U$ is a Sylow $7$-subgroup of $L/U$. Since $Q$ normalizes $S$, we can see that $(Q\cap L)U/U$ normalizes $(S\cap L)U/U$, which is impossible as the Sylow $2$-subgroup of $L/U$ is self-normalizing. Thus we have shown that $K=\mathbf{O}^{2'}(G')$ is a $\pi_1$-group. \textbf{The final contradiction}. By (5), we have $G=G'\langle i\rangle$ so $\mathbf{C}_G(i)=\mathbf{C}_{G'}(i)\langle i\rangle$ which implies that $|i^G|=|G:\mathbf{C}_G(i)|=|G':\mathbf{C}_{G'}(i)|$. Since $|i^G|$ is a $\pi_2$-number and $\mathbf{C}_{G}(i)$ is solvable, $\mathbf{C}_{G'}(i)$ is also solvable and possesses a Hall $\pi_1$-subgroup $T$ which is also a Hall $\pi_1$-subgroup of $G'$ (as $|G':\mathbf{C}_{G'}(i)|$ is a $\pi_2$-number). As $\mathbf{O}^{2'}(G')\unlhd G'$ is a $\pi_1$-subgroup by (6), we deduce that $\mathbf{O}^{2'}(G')\leq T$ and thus $\mathbf{O}^{2'}(G')$ is solvable since $T\leq \mathbf{C}_{G'}(i)$ is solvable. Clearly, $G'/\mathbf{O}^{2'}(G')$ is solvable by Feit-Thompson theorem, which implies that $G'$ is solvable and hence $G$ is solvable. This contradiction finally proves the theorem. \end{proof} In the next result, we show that if $G=\mathbf{O}^{2'}(G)$ and $\Delta^*(G)$ is disconnected, then each connected component of $\Delta^*(G)$ is complete and one of the components is just $\{2\}$. In particular, every real class size of $G$ is either odd or a $2$-power. \begin{thm}\label{th: 2 powers or odd} Let $G$ be a finite group. Suppose that $G=\mathbf{O}^{2'}(G)$ and $\Delta^*(G)$ is disconnected with vertex sets $\pi_1$ and $\pi_2$ where $2\not\in\pi_2$. Then $\pi_1=\{2\}$ and $\pi_2=\pi(|i^G|)$ for some non-central involution $i\in G$. \end{thm} \begin{proof} By Theorem \ref{th:disconnected real class sizes graph}, we know that $G$ is solvable. 
Let $i\in G$ be an involution as in Lemma \ref{lem:2-closed} and let $S$ be a normal Sylow $2$-subgroup of $\mathbf{C}_G(i)$. Let $\sigma$ be the set of odd prime divisors $p$ of $|\mathbf{C}_G(i)|$ such that $p$ does not divide $ |i^G|$. Since $|i^G|$ is a $\pi_2$-number and $|G|=|i^G|\cdot |\mathbf{C}_G(i)|$, we see that $$\pi_1\setminus\{2\}\subseteq \pi(G)\setminus (\pi_2\cup\{2\})\subseteq \sigma=\pi(G)\setminus (\{2\}\cup \pi(|i^G|)).$$ Assume that $\sigma=\emptyset$. Then $\pi_1\subseteq \{2\}$ and $\pi_2\subseteq\pi(|i^G|)$. As $G=\mathbf{O}^{2'}(G)$ and $\Delta^*(G)$ is disconnected, $2$ divides some real class size of $G$, hence $2\in\pi_1$. Moreover $\pi(|i^G|)\subseteq \pi_2$; therefore $\pi_1=\{2\}$ and $\pi(|i^G|)=\pi_2$ as wanted. Assume next that $\sigma$ is not empty and let $p\in\sigma$. Then $p$ is odd and $p$ divides $|\mathbf{C}_G(i)|$ but does not divide $ |i^G|$. By Lemma \ref{lem:real element of order p}, $G$ has a real element $x$ of order $p$. Let $P$ be a Sylow $p$-subgroup of $\mathbf{C}_G(i)$. Since $|G:\mathbf{C}_G(i)|=|i^G|$ is not divisible by $p$, $P$ is also a Sylow $p$-subgroup of $G$. Replacing $x$ by its $G$-conjugates, we can assume $x\in P\leq \mathbf{C}_G(i)$ by Sylow theorem. We have that $xi=ix$, $(o(x),o(i))=1$ and $(|x^G|,|i^G|)=1$ so that by applying Lemma \ref{lem: real elements}(4), $xi$ is real in $G$ and $$\pi(|x^G|)\cup\pi(|i^G|)\subseteq \pi(|(xi)^G|).$$ As $|i^G|>1$, we can find a prime $r$ dividing $|i^G|$ and so $r\in\pi_2$. Now the previous inclusion would imply that $2\in\pi(|x^G|)\subseteq \pi_1$ and $r\in\pi_2$ are adjacent in $\Delta^*(G)$, which is impossible. \end{proof} We now consider the situation when $2$ is not a vertex of $\Delta^*(G)$, where $G$ is a finite group. By Lemma \ref{lem:odd size}, $G$ has a normal Sylow $2$-subgroup $S$ with $\Real(S)\subseteq\mathbf{Z}(S)$. \begin{lem}\label{lem:2 is a vertex} Let $G$ be a finite group. Suppose that $G$ has a normal Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then $\Delta^*(G)$ is connected. \end{lem} \begin{proof} If $\Delta^*(G)$ has at most one vertex, then we are done. So, assume that $\Delta^*(G)$ has at least two vertices, i.e., $|\rho^*(G)|\ge 2$. We argue by contradiction. Suppose that $\Delta^*(G)$ is not connected. Then $\Delta^*(G)$ has two connected components with vertex sets $\pi_1$ and $\pi_2$. Then we can find two non-central real elements $x,y\in\Real(G)$ such that $\pi(|x^G|)\subseteq \pi_1$ and $\pi(|y^G|)\subseteq\pi_2$ so $(|x^G|,|y^G|)=1$. Since $S\unlhd G$ and $\Real(S)\subseteq \mathbf{Z}(S)$, $\Real(G)\subseteq \Real(S)\subseteq \mathbf{Z}(S)$ by using Lemma \ref{lem: real elements}(5). Thus all nontrivial real elements of $G$ are involutions in $\mathbf{Z}(S)$. Let $E$ be the set of all involutions of $\mathbf{Z}(S)$ together with the identity. Then $E$ is an elementary abelian subgroup of $\mathbf{Z}(S)$. Indeed, $E=\Omega_1(\mathbf{Z}(S))$ and thus $E\unlhd G.$ Clearly $x,y\in E$ so that $E$ is not central. In particular, $\mathbf{C}_G(E)\unlhd G$ is a proper subgroup of $G$. Notice that $E\leq \mathbf{Z}(S)$ and hence $S\leq \mathbf{C}_G(E)$. Therefore $A:=G/\mathbf{C}_G(E)$ is a nontrivial group of odd order and we can consider $E$ as an $\mathbb{F}_2A$-module. By Maschke's theorem, $E$ is a completely reducible $A$-module. Observe that $|x^A|=|A:\mathbf{C}_A(x)|=|G:\mathbf{C}_G(x)|=|x^G|$ and $|y^A|=|y^G|$. Therefore, the two $A$-orbits $x^A$ and $y^A$ have coprime sizes. 
By Theorem 1.1 in \cite{DGPS}, the $A$-orbit $(xy)^A$ has size $|x^A|\cdot |y^A|$. Hence $|(xy)^G|=|x^G|\cdot |y^G|$, where $xy\in E$ is an involution. This implies that there is an edge between a prime in $\pi_1$ and a prime in $\pi_2$, which is impossible. \end{proof} We are now ready to prove Theorem B. \begin{thm}\label{th:structure} Let $G$ be a finite group. Suppose that $\Delta^*(G)$ is disconnected. Then $2$ divides some real class size and one of the following holds. \begin{enumerate} \item[$(1)$] $G$ has a normal Sylow $2$-subgroup. \item[$(2)$] $\Delta^*(\mathbf{O}^{2'}(G))$ is disconnected and the real class sizes of $\mathbf{O}^{2'}(G)$ are either odd or powers of $2$. \end{enumerate} \end{thm} \begin{proof} Suppose that $\Delta^*(G)$ is disconnected and let the vertex sets of the connected components be $\pi_1$ and $\pi_2$, respectively. Assume that $2\not\in\pi_2$. We see that $\Delta^*(G)$ has at least two vertices. We first claim that $2\in\pi_1$. It suffices to show that $2$ divides some real class size of $G$. Suppose by contradiction that $2$ divides no real class size. Then $G$ has a normal Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$ by Lemma \ref{lem:odd size}. However, Lemma \ref{lem:2 is a vertex} implies that $\Delta^*(G)$ is connected, which is a contradiction. Next, suppose that $G$ has no normal Sylow $2$-subgroup. We claim that part (2) of the conclusion holds. Let $K:=\mathbf{O}^{2'}(G)$. By Proposition \ref{prop:normal and quotient}(1), $\Delta^*(K)$ is disconnected. Since $\mathbf{O}^{2'}(K)=K$ and $\Delta^*(K)$ is disconnected, the result follows from Theorem \ref{th: 2 powers or odd}. \end{proof} We suspect that if a finite group $G$ has a normal Sylow $2$-subgroup $S$, then $\Delta^*(G)$ is connected, that is, case (1) in Theorem \ref{th:structure} cannot occur. However, we are unable to prove or disprove this yet. In view of Lemma \ref{lem:2 is a vertex}, this is true if $\Real(S)\subseteq \mathbf{Z}(S)$. There are many examples of finite groups whose prime graphs on real class sizes are disconnected. For the first example, let $m>1$ be an odd integer; the dihedral group $\textrm{D}_{2n}$ of order $2n$, where $n=m$ or $n=2m$, has a disconnected prime graph on real class sizes as its real class sizes are just $1,2$ and $m$. For another example, let $G$ be a Frobenius group with Frobenius kernel $F$ and complement $H$, where both $F$ and $H$ are abelian and $|H|$ is even. In this case, all the nontrivial real class sizes of $G$ are $|F|$ and $|H|$, and since $(|F|,|H|)=1$, $\Delta^*(G)$ is disconnected. \section{Proof of Theorem C} Let $G$ be a finite group. Observe that if $x\in \mathbf{Z}(G)$ is a real element of $G$, then $x^2=1$. We begin with the following lemma. \begin{lem}\label{lem:real 2-elements} Let $G$ be a finite group and let $S\in{\mathrm {Syl}}_2(G)$. Suppose that $\Real(S)\subseteq\mathbf{Z}(S)$ and $|x^G|_2=2^a\ge 2$ for all non-central real elements $x\in G$. If $y$ is a nontrivial real element of $G$ whose order is a $2$-power, then $y$ is a central involution of $G$. \end{lem} \begin{proof} Let $y$ be a nontrivial real element of $G$ whose order is a $2$-power. Then $y^t=y^{-1}$ for some $2$-element $t\in G$ by Lemma \ref{lem: real elements}(2). As $t$ normalizes $\langle y\rangle$, $U:=\langle y,t\rangle$ is a $2$-subgroup of $G$. By Sylow theorem, $U^g \leq S$ for some $g\in G.$ If $y^g$ is a central involution of $G$, then so is $y$. Thus we can assume that $U\leq S$.
Since $y\in\Real(U)$, we have $y\in \Real(S)\subseteq \mathbf{Z}(S)$ and so $y^2=1$. Hence $y$ is an involution. Finally, since $y\in\mathbf{Z}(S)$, $|y^G|$ is odd which forces $y\in\mathbf{Z}(G)$ as by assumption $|x^G|$ is even for all non-central real elements $x$ of $G$. \end{proof} The following lemma is obvious. \begin{lem}\label{lem:reduction} Let $G$ be a finite group and let $S\in{\mathrm {Syl}}_2(G)$. Suppose that $|x^G|_2=2^a\ge 2$ for all non-central real elements $x\in G$. Let $K\unlhd G$ be a normal subgroup of odd index. Then $|x^K|_2=2^a$ for all non-central real elements $x\in K$. \end{lem} \begin{proof} Let $K$ be a normal subgroup of $G$ of odd index. By Lemma \ref{lem: real elements}(5), we have $\Real(G)\subseteq\Real(K)$. Now let $x\in K$ be a non-central real element of $K$ and let $C:=\mathbf{C}_G(x)$. Let $P\in{\mathrm {Syl}}_2(C)$ and let $S\in{\mathrm {Syl}}_2(G)$ such that $P\leq S$. Since $|G:K|$ is odd, we have $P\leq S\leq K$. In particular, $S\in{\mathrm {Syl}}_2(K)$. We see that $\mathbf{C}_K(x)=K\cap C$ and $P\leq K\cap C$. Thus $P\leq \mathbf{C}_K(x)\leq C$ and so $P$ is also a Sylow $2$-subgroup of $\mathbf{C}_K(x)$. Therefore $|C|_2=|\mathbf{C}_K(x)|_2$ and hence $|x^G|_2=|G:C|_2=|S:P|=|K:\mathbf{C}_K(x)|_2=|x^K|_2$. \end{proof} \begin{lem}\label{lem:central involutions} Let $G$ be a finite group and let $S\in{\mathrm {Syl}}_2(G)$. Suppose that $|x^G|_2=2^a\ge 2$ for all non-central real elements $x\in G$. Assume further that $G$ has a Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then every nontrivial real element of $G/\mathbf{O}_{2'}(G)$ of $2$-power order lies in the center of $G/\mathbf{O}_{2'}(G)$. \end{lem} \begin{proof} Let $N=\mathbf{O}_{2'}(G)\unlhd G$ and let $Nx$ be a real element of $G/N$ of order $k:=2^c\ge 2$. Then $Nx=Ny$ for some real element $y\in G$ by Lemma \ref{lem: real elements}(6). We see that $y^{k}\in N$ and so $(y^k)^m=1$ for some odd integer $m\ge 1$. As $(k,m)=1$, $1=uk+vm$ for some integers $u,v$. We have $Nx=Ny=Ny^{uk+vm}=(Ny^{uk})(Ny^{vm})=Ny^{vm}$ as $y^k\in N$. Clearly $z:=y^{vm}$ is a nontrivial real element of $G$ whose order divides $k=2^c$ and $Nx=Ny=Nz$. By Lemma \ref{lem:real 2-elements}, $z$ is a central involution of $G$ and thus $Nx$ is also a central involution of $G/N.$ \end{proof} In the next theorem, we show that if a finite group satisfies the hypothesis of Theorem C, then it is solvable. \begin{thm}\label{th:equal 2-parts-solvability} Let $G$ be a finite group. Suppose that $|x^G|_2=2^a$ for all non-central real elements $x\in G$. Assume further that $G$ has a Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then $G$ is solvable. \end{thm} \begin{proof} Let $G$ be a minimal counterexample to the theorem and let $S$ be a Sylow $2$-subgroup of $G$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then $G$ is non-solvable and thus $G$ has no normal Sylow $2$-subgroup. By Lemma \ref{lem:even order real}, $G$ has a nontrivial real element $z$ of odd order. Clearly, $z$ is not central and thus $|z^G|$ is always even. Therefore, $|z^G|_2=2^a\ge 2.$ It follows from Lemma \ref{lem:central involutions} that every nontrivial real element of $G/\mathbf{O}_{2'}(G)$ of $2$-power order lies in the center of $G/\mathbf{O}_{2'}(G)$. In particular, all involutions of $G/\mathbf{O}_{2'}(G)$ are in the center of $G/\mathbf{O}_{2'}(G)$. Now we can apply results in \cite{Griess}. 
Since $\mathbf{O}_{2'}(G/\mathbf{O}_{2'}(G))=1$, by the main theorem in \cite{Griess}, the last term of the derived series of $G/\mathbf{O}_{2'}(G)$, say $H/\mathbf{O}_{2'}(G)$, is isomorphic to a direct product $L_1\times L_2\times\cdots\times L_n$, where each $L_i$ is isomorphic to either $\SL_2(q)$ with $q\ge 5$ odd or $2\cdot {\mathrm {Alt}}_7$, the perfect double cover of ${\mathrm {Alt}}_7$. For each $i$, every real element of $L_i$ is also a real element of $G/\mathbf{O}_{2'}(G)$ as $L_i$ is a subgroup of $G/\mathbf{O}_{2'}(G)$. Moreover, every nontrivial real element of $L_i$ of $2$-power order must lie in the center of $G/\mathbf{O}_{2'}(G)$ and hence must be in $\mathbf{Z}(L_i).$ Thus to obtain a contradiction, we need to find a real element $x\in L_i$ of order $2^m\ge 4$. Notice that $|\mathbf{Z}(L_i)|=2$ for all $i\ge 1$. Assume first that $L_i\cong 2\cdot{\mathrm {Alt}}_7$ for some $i\ge 1.$ Using \cite{Atlas}, $L_i$ has a real element $x$ of order $4$. Assume next that $L_i\cong \SL_2(q)$ for some $q\ge 5$ odd. It is well known that the Sylow $2$-subgroup $T$ of $\SL_2(q)$ is a generalized quaternion group of order $2^{k+1}$ for some $k\ge 2.$ (See, for example, Theorem 2.8.3 in \cite{Gorenstein}). Now $T$ is generated by two elements $\alpha$ and $\beta$ such that $o(\beta)=2^k,o(\alpha)=4$, $\alpha^2=\beta^{2^{k-1}}$ and $\beta^\alpha=\beta^{-1}$. Thus $\beta$ is a real element of $\SL_2(q)$ of order $2^k\ge 4$. We can take $x=\beta$. The proof is now complete. \end{proof} We now prove the $2$-nilpotence part of Theorem C. \begin{thm}\label{th:equal 2-parts-2-nilpotent} Let $G$ be a finite group. Suppose that $|x^G|_2=2^a$ for all non-central real elements $x\in G$. Assume further that $G$ has a Sylow $2$-subgroup $S$ with $\Real(S)\subseteq \mathbf{Z}(S)$. Then $\mathbf{O}^{2'}(G)$ is $2$-nilpotent. \end{thm} \begin{proof} By Lemma \ref{lem:reduction}, we can assume that $G=\mathbf{O}^{2'}(G)$. Let $\overline{G}=G/\mathbf{O}_{2'}(G)$ and use the `bar' notation. Now $G$ is solvable by Theorem \ref{th:equal 2-parts-solvability}. Let $\overline{P}=\mathbf{O}_{2}(\overline{G})$. Since $\overline{G}$ is solvable, it possesses a Hall $2'$-subgroup, say $\overline{H}$. It follows from Lemma \ref{lem:central involutions} that every real element of $\overline{G}$ whose order is a power of $2$ lies in the center of $\overline{G}$. This implies that $\overline{H}$ centralizes all real elements of $\overline{P}$ of order at most $4$ and thus, by \cite[Theorem B]{IN2}, $\overline{H}$ centralizes $\overline{P}$. By the Hall-Higman Lemma 1.2.3, $\overline{H}\leq \mathbf{C}_{\overline{G}}(\overline{P})\leq \overline{P}$, which forces $\overline{H}=1$. This means that $\overline{G}=\overline{P}$ is a $2$-group and so $G$ is $2$-nilpotent as required. \end{proof} Finally, Theorem C follows by combining Theorems \ref{th:equal 2-parts-solvability} and \ref{th:equal 2-parts-2-nilpotent}. \end{document}
arXiv
Erdős–Bacon number A person's Erdős–Bacon number is the sum of one's Erdős number—which measures the "collaborative distance" in authoring academic papers between that person and Hungarian mathematician Paul Erdős—and one's Bacon number—which represents the number of links, through roles in films, by which the person is separated from American actor Kevin Bacon.[1][2] The lower the number, the closer a person is to Erdős and Bacon, which reflects a small-world phenomenon in academia and entertainment.[3] To have a defined Erdős–Bacon number, it is necessary to have both appeared in a film and co-authored an academic paper, although this in and of itself is not sufficient as one's co-authors must have a known chain leading to Paul Erdős, and one's film must have actors eventually leading to Kevin Bacon. Academic scientists Mathematician Daniel Kleitman has an Erdős–Bacon number of 3. He co-authored papers with Erdős and has a Bacon number of 2 via Minnie Driver in Good Will Hunting; Driver and Bacon appeared together in Sleepers.[4] Like Kleitman, mathematician Bruce Reznick has co-authored a paper with Erdős[5] and has a Bacon number of 2, via Roddy McDowall in the film Pretty Maids All in a Row, giving him an Erdős–Bacon number of 3 as well.[6] Physicist Nicholas Metropolis has an Erdős number of 2,[7] and also a Bacon number of 2,[8] giving him an Erdős–Bacon number of 4. Metropolis and Richard Feynman both worked on the Manhattan Project at Los Alamos Laboratory. Via Metropolis, Feynman has an Erdős number of 3 and, from having appeared in the film Anti-Clock alongside Tony Tang, Feynman also has a Bacon number of 3. Richard Feynman thus has an Erdős–Bacon number of 6.[7] Theoretical physicist Stephen Hawking has an Erdős–Bacon number of 6: his Bacon number of 2 (via his appearance alongside John Cleese in Monty Python Live (Mostly), who acted alongside Kevin Bacon in The Big Picture) is lower than his Erdős number of 4.[9] Similarly to Stephen Hawking, scientist Carl Sagan has an Erdős–Bacon number of 6, also from a Bacon number of 2 and an Erdős number of 4.[10] Mathematician Jordan Ellenberg has an Erdős number of 3[11] and a Bacon number of 2 due to a cameo appearance in the film Gifted, for which he was also the mathematical consultant.[12] Linguist Noam Chomsky has an Erdős number of 4;[13] he also co-starred with Danny Glover in the 2005 documentary The Peace!, giving him a Bacon number of 2[14] and a combined Erdős–Bacon number of 6. Actors Danica McKellar, who played Winnie Cooper in The Wonder Years, has an Erdős–Bacon number of 6. While an undergraduate at the University of California, Los Angeles, McKellar coauthored a mathematics paper[15] with Lincoln Chayes, who via his wife Jennifer Tour Chayes[16] has an Erdős number of 3, giving McKellar one of 4. Having worked with Margaret Easley, McKellar has a Bacon number of 2.[2] Israeli-American actress Natalie Portman has an Erdős–Bacon number of 7.[17] She collaborated (using her birth name, Natalie Hershlag) with Abigail A. Baird,[18] who has a collaboration path[19][20][21] leading to Joseph Gillis, who has an Erdős number of 1, giving Portman an Erdős number of 5.[22] Portman appeared in A Powerful Noise Live (2009) with Sarah Michelle Gellar, who appeared in The Air I Breathe (2007) with Bacon, giving Portman a Bacon number of 2.[23] British actor Colin Firth has an Erdős–Bacon number of 6.
Firth is credited as co-author of a neuroscience paper, "Political Orientations Are Correlated with Brain Structure in Young Adults",[24] after he suggested on BBC Radio 4 that such a study could be done.[25] Another author of that paper, Geraint Rees, has an Erdős number of 4,[26] which gives Firth an Erdős number of 5. Firth's Bacon number of 1 is due to his appearance in Where the Truth Lies.[27][28] Kristen Stewart has an Erdős–Bacon number of 7; she is credited as a co-author on an artificial intelligence paper that was written after a technique was used for her short film Come Swim, giving her an Erdős number of 5,[29][30] and she co-starred in Twilight with Michael Sheen, who co-starred with Bacon in Frost/Nixon, giving her a Bacon number of 2.[31] Albert M. Chan has an Erdős–Bacon number of 4. He co-authored a peer-reviewed paper on OFDM, giving him an Erdős number of 3.[32][33][34] Chan appeared alongside Kevin Bacon in Patriots Day, giving him a Bacon number of 1.[35] Others Elon Musk, who is neither a scientist nor an actor, has an Erdős–Bacon number of 6. In 2010 Musk had a cameo in the film Iron Man 2.[36] Since actor Mickey Rourke played a role both in Iron Man 2 and in Diner, in which Kevin Bacon also played a role, Musk has a Bacon number of 2.[37] In 2021 Musk coauthored a peer-reviewed scientific paper on COVID-19 together with Pardis Sabeti, among others.[38] Since Sabeti has an Erdős number of 3,[39] Musk has an Erdős number of 4[40] and consequently an Erdős–Bacon number of 6. Sergey Brin has an Erdős number of three through papers with Jeffrey Ullman and Ronald Graham,[41] and he has two cameos in the 2013 comedy The Internship,[42] leading to a Bacon number of two via Rose Byrne[43] and consequently an Erdős–Bacon number of 5. Bill Gates has an Erdős number of four[44] and in 1987 he participated in a short mockumentary Citizen Steve about Steven Spielberg, where he co-starred with Whoopi Goldberg, giving him a Bacon number of two[45] and consequently an Erdős–Bacon number of 6.
Table
Name | Erdős number | Bacon number | Erdős–Bacon number
Mayim Bialik | 5 | 2 | 7[46]
Jordan Ellenberg | 3[11] | 2[12] | 5
Richard Feynman | 3 | 3 | 6[7]
Colin Firth | 5[note 1][24] | 1[27] | 6
Stephen Hawking | 4 | 2[note 2] | 6[9]
Daniel Kleitman | 1 | 2 | 3[4]
Danica McKellar | 4[47][15][16][48] | 2[note 1] | 6
Nicholas Metropolis | 2[7] | 2[8] | 4
Elon Musk | 4 | 2[note 2] | 6
Natalie Portman | 5[18][19][20][21][22] | 2[note 1] | 7[17]
Bruce Reznick | 1 | 2 | 3
Carl Sagan | 4 | 2[note 2] | 6[10]
Kristen Stewart | 5[29] | 2[49][50] | 7
Notes: 1. See discussion above (Actors). 2. Includes role as self.
References 1. Singh, Simon (May 1, 2002). "And the winner tonight is". The Telegraph. Archived from the original on November 12, 2012. Retrieved September 26, 2013. 2. "There's not much separating her from Bacon, Erdos". USA Today. August 14, 2007. Archived from the original on November 4, 2012. 3. Collins, James J.; Chow, Carson C. (1998). "It's a small world". Nature. 393 (6684): 409–10. Bibcode:1998Natur.393..409C. doi:10.1038/30835. PMID 9623993. S2CID 6827605. 4. Grossman, Jerry (January 27, 1999). "The Erdös Number Project". Oakland University. Archived from the original on 1999-02-03. Retrieved 2021-03-03. 5. Erdős, P.; Hildebrand, A.; Odlyzko, A.; Pudaite, P.; Reznick, B. (1987). "The asymptotic behavior of a family of sequences". Pacific J. Math. no. 2 (126): 227–241. 6. Grossman, Jerry (December 6, 2018). "The Erdös Number Project". Oakland University. Archived from the original on 2020-03-11. Retrieved 2021-03-03. 7. "Richard Feynman".
Erdős Bacon Sabbath Project. Archived from the original on 2017-12-25. Retrieved 2015-12-26. 8. "The Oracle of Bacon". oracleofbacon.org. 9. "Stephen Hawking". Erdős Bacon Sabbath Project. Archived from the original on 2017-12-25. Retrieved 2018-11-10. 10. "Carl Sagan". Erdős Bacon Sabbath Project. Archived from the original on 2018-01-11. Retrieved 2018-11-10. 11. "MR: Search MSC database". mathscinet.ams.org. Retrieved 2022-02-06. MR Erdos Number = 3 Jordan S. Ellenberg coauthored with Christopher M. Skinner MR1844206 Christopher M. Skinner coauthored with Andrew M. Odlyzko MR1210537 Andrew M. Odlyzko coauthored with Paul Erdős1 MR0535395 12. "The Oracle of Bacon". oracleofbacon.org. Retrieved 2022-02-14. 13. "From Noam Chomsky to Paul Erdős in four papers". www.csauthors.net. 14. "The Oracle of Bacon". oracleofbacon.org. 15. Chayes, L; McKellar, D; Winn, B (1998). "Percolation and Gibbs states multiplicity for ferromagnetic Ashkin-Teller models on $\mathbb {Z} ^{2}$". Journal of Physics A: Mathematical and General. 31 (45): 9055–9063. Bibcode:1998JPhA...31.9055C. doi:10.1088/0305-4470/31/45/005. 16. Chayes, J. T.; Chayes, L.; Kotecký, R. (1995). "The analysis of the Widom-Rowlinson model by stochastic geometric methods". Communications in Mathematical Physics. 172 (3): 551. Bibcode:1995CMaPh.172..551C. doi:10.1007/BF02101808. S2CID 15051914. 17. "MICHAEL'S ERDŐS-BACON NUMBER | The Liquid Narrative Research Group". liquidnarrative.eae.utah.edu. 18. Baird, A; Kagan, J; Gaudette, T; Walz, KA; Hershlag, N; Boas, DA (2002). "Frontal Lobe Activation during Object Permanence: Data from Near-Infrared Spectroscopy". NeuroImage. 16 (4): 1120–5. doi:10.1006/nimg.2002.1170. PMID 12202098. S2CID 15630444. 19. Baird, Abigail A.; Colvin, Mary K.; Vanhorn, John D.; Inati, Souheil; Gazzaniga, Michael S. (2005). "Functional Connectivity: Integrating Behavioral, Diffusion Tensor Imaging, and Functional Magnetic Resonance Imaging Data Sets". Journal of Cognitive Neuroscience. 17 (4): 687–93. CiteSeerX 10.1.1.484.1868. doi:10.1162/0898929053467569. PMID 15829087. S2CID 4666737. 20. Victor, Jonathan D.; Maiese, Kenneth; Shapley, Robert; Sidtis, John; Gazzaniga, Michael S. (1989). "Acquired central dyschromatopsia: analysis of a case with preservation of color discrimination". Clinical Vision Sciences. 4: 183–96. 21. Azor, Ruth; Gillis, J.; Victor, J. D. (1982). "Combinatorial Applications of Hermite Polynomials". SIAM Journal on Mathematical Analysis. 13 (5): 879–90. doi:10.1137/0513062. 22. Erdos, P.; Gillis, J. (2009). "Note on the Transfinite Diameter". Journal of the London Mathematical Society. s1-12 (3): 185. doi:10.1112/jlms/s1-12.2.185. 23. "The Oracle of Bacon". oracleofbacon.org. Retrieved 2023-05-03. 24. Kanai, Ryota; Feilden, Tom; Firth, Colin; Rees, Geraint (2011). "Political Orientations Are Correlated with Brain Structure in Young Adults". Current Biology. 21 (8): 677–80. doi:10.1016/j.cub.2011.03.017. PMC 3092984. PMID 21474316. 25. "Colin Firth credited in brain research". BBC News. 2011-06-05. Retrieved 2021-03-13. 26. "From Geraint Rees 0001 to Paul Erdős in four papers". csauthors.net. Archived from the original on 2021-03-13. Retrieved 2021-03-13. 27. Where the Truth Lies at IMDb 28. "The Oracle of Bacon". oracleofbacon.org. Retrieved 2021-03-13. 29. Gershgorn, Dave. "Kristen Stewart (yes, that Kristen Stewart) just released a research paper on artificial intelligence". 30. "From Paul Erdős to Kristen Stewart in five papers". Archived from the original on 2018-03-14. 
Retrieved 2021-03-13. 31. "The Oracle of Bacon". oracleofbacon.org. Retrieved 2021-03-13. 32. Lee, Inkyu; Chan, Albert M.; Sundberg, Carl-Erik (2004). "Space-time bit-interleaved coded modulation for OFDM systems". IEEE Transactions on Signal Processing. 52 (3): 820–25. Bibcode:2004ITSP...52..820L. doi:10.1109/TSP.2003.822350. S2CID 16281296. 33. Duren, Peter; Khavinson, Dmitry; Shapiro, Harold S.; Sundberg, Carl-Erik (1994). "Invariant subspaces in Bergman spaces and the biharmonic equation". Michigan Mathematical Journal. 41 (2): 247–59. doi:10.1307/mmj/1029004992. 34. Erdős, Paul; Shapiro, Harold S. (1965). "Large and small subspaces of Hilbert space". Michigan Mathematical Journal. 12 (2): 169–78. doi:10.1307/mmj/1028999306. 35. Patriots Day at IMDb as "Computer Forensic Tech" 36. Tate, Ryan (2012-09-20). "10 Awkward Hollywood Cameos by Tech Founders". Wired. Archived from the original on 2017-12-01. Retrieved 2021-05-08. 37. "The Oracle of Bacon". oracleofbacon.org. Retrieved 2021-05-08. Elon Musk has a Bacon number of 2. Elon Musk was in Iron Man 2 with Mickey Rourke was in Diner with Kevin Bacon 38. Bartsch, Yannic C.; Fischinger, Stephanie; Siddiqui, Sameed M.; Chen, Zhilin; Yu, Jingyou; Gebre, Makda; Atyeo, Caroline; Gorman, Matthew J.; Zhu, Alex Lee; Kang, Jaewon; Burke, John S.; Slein, Matthew; Gluck, Matthew J.; Beger, Samuel; Hu, Yiyuan; Rhee, Justin; Petersen, Eric; Mormann, Benjamin; de St Aubin, Michael; Hasdianda, Mohammad A.; Jambaulikar, Guruprasad; Boyer, Edward W.; Sabeti, Pardis C.; Barouch, Dan H.; Julg, Boris D.; Musk, Elon R.; Menon, Anil S.; Lauffenburger, Douglas A.; Nilles, Eric J.; Alter, Galit (2021-02-15). "Discrete SARS-CoV-2 antibody titers track with functional humoral stability". Nature Communications. 12 (1): 1018. Bibcode:2021NatCo..12.1018B. doi:10.1038/s41467-021-21336-8. PMC 7884400. PMID 33589636. 39. "MR: Search MSC database". mathscinet.ams.org. Retrieved 2021-05-08. MR Erdos Number = 3 Pardis C. Sabeti coauthored with Michael Mitzenmacher MR3595146 Michael Mitzenmacher coauthored with Joel H. Spencer MR2056083 Joel H. Spencer coauthored with Paul Erdős1 MR0382007 40. "Jerry Grossman's Web Page > The Erdös Number Project > Some Famous People with Finite Erdös Numbers >". Oakland University. Retrieved 2021-05-08. Elon Musk entrepreneur 4 41. "From Sergey Brin to Paul Erdős in three papers". www.csauthors.net. 42. "An Ex-Googler's Take On "The Internship"". June 8, 2013. 43. "The Oracle of Bacon". oracleofbacon.org. 44. "From Bill Gates to Paul Erdős in four papers". www.csauthors.net. 45. "The Oracle of Bacon". oracleofbacon.org. 46. "Mayim Bialik". Erdős Bacon Sabbath Project. Archived from the original on 2018-01-15. Retrieved 2014-02-09. 47. "The Erdős Number Project, Erdos1". Archived from the original on 2006-12-07. Retrieved 2006-12-20. 48. Kotecký, R.; Preiss, D. (1986). "Cluster expansion for abstract polymer models". Communications in Mathematical Physics. 103 (3): 491–8. Bibcode:1986CMaPh.103..491K. doi:10.1007/BF01211762. S2CID 121879006. 49. Twilight at IMDb co-starred Kristen Stewart and Michael Sheen 50. Frost/Nixon at IMDb co-starred Michael Sheen and Kevin Bacon Kevin Bacon • Filmography Films directed • Losing Chase (1996) • Loverboy (2005) Family • Kyra Sedgwick (wife) • Sosie Bacon (daughter) • Edmund Bacon (father) • Michael Bacon (brother) Related articles • The Bacon Brothers • Six Degrees of Kevin Bacon • Erdős–Bacon number • SixDegrees.org
Wikipedia
What are sets? There's a piece of pedagogical practice in maths which I hate. Sets are ubiquitously useful in mathematics, so any student of mathematics will have to learn about them. The first definition given is usually something like the following, let's call it the proto-definition: A set is a well-defined collection of objects. So, for example, there is the set of all real numbers, the set of all polynomials with at least one root, the set of all sets of real numbers, and so on. If you have some property $P(x)$ which can be true or false of objects $x$, then there is a set $\{x : P(x)\}$ of all objects $x$ for which $P(x)$ is true. This picture of sets is good for a first approximation, but it has a serious flaw. The problem with the proto-definition is that it admits impossible objects, sets which cannot exist without a contradiction. There's several well-known paradoxical sets, but probably the best-known is the Russell set $R$ of all sets $x$ so that $x \not \in x$. This $R$ is a set according to the proto-definition—sets are perfectly good objects and can be elements of other sets, and asking whether $x \in x$ is a perfectly fine question to ask about a set $x$. The paradox arises when we ask whether $R \in R$. If yes, then by the definition of $R$, we conclude $R \not \in R$. If no, then again by the definition, $R \in R$. So no matter what we get a contradiction. It's depressingly common for a mathematics classroom to give this proto-definition, introduce the Russell set and show it's paradoxical, and then stop. I think this is terrible pedagogy. It's certainly good to point out that the proto-definition of sets has problems—because it does!—but it's bad to not say how to resolve the problems to get an adequate concept of set. In an asshole move, we destroy the floor beneath students' feet without building a replacement for them. There is a better picture of what sets are, but many maths students will go through a whole degree without ever being shown it. A full explanation is the domain of a dedicated set theory course. But the big picture, without getting into the gritty details, can be explained without excessive background. (For example, someone who has taken an American-style intro-to-proofs class is in a position to understand this picture.) This blog post is my attempt at such an explanation. The trouble with the proto-definition is that it is too permissive; it admits too many objects, and so paradoxical objects slip in. If we are to resolve the problem, we must have a narrower concept of set. That is, all our objects we admit as sets will be well-defined collections, but we will have to be more restrictive than with the proto-definition. Of course, we could be too restrictive. After all, we could "resolve" the problem by declaring that nothing is a set, and thereby avoid any paradoxical sets! We want to strike a balance: be permissive enough that we have all the sets we want to do mathematics, but not so permissive so as to let paradoxical objects in. The proto-definition is what one might call a top-down explanation of what a set is. It gives a broad definition that yields all sets in a single step. The better approach, known as the iterative conception of set or the cumulative hierarchy, is bottom-up: we start with some basic objects, and then inductively build upward from there stage by stage to include more and more sets. Our starting point will be non-set objects, call them urelements. These could include, for example, numbers. 
But I'm going to purposely be a bit vague here about what we want to take for urelements, because as we'll discuss later, this doesn't really matter. Let's call this starting point stage $0$. At stage $1$ we introduce our first sets. Namely, the sets introduced at stage $1$ are all sets whose elements are urelements. In other words, the sets in stage $1$ are the sets which are elements of $\mathcal P(U)$, the powerset of the set $U$ of urelements. Let's be a bit careful with that name, however. We called it a powerset, but is it a set? The only sets we've admitted so far are those whose elements are all urelements. But $\mathcal P(U)$ doesn't have urelements as elements—its elements are sets—so it's not (yet!) a set. The fix is to keep building upward. For a first attempt, let's try the following: For stage $2$, we add in the sets whose elements came from stage $1$—that is, add in the sets which are elements of $\mathcal P(\mathcal P(U))$. This includes, for example, $\mathcal P(U)$ as a set, so the powerset really is a set. This first try isn't quite what we want, however. The trouble is that it doesn't allow sets of mixed type. We have the stage $1$ sets, whose elements are urelements, and we have the stage $2$ sets, whose elements are stage $1$ sets. But we don't have sets, like $\{2,\mathbb N\}$, whose elements can be either urelements or stage $1$ sets. (These sets may be a bit weird, but if we're going to have a general concept worth a damn we're gonna have to include some weird sets.) The fix is easy though: the stage $2$ sets are the sets whose elements are either stage $1$ sets or urelements. Let's rephrase this to a simpler statement, one which is easier to generalize: the stage $2$ sets are the sets whose elements are objects of stage $<2$. In general, the inductive step goes like this. Having already constructed stages $0$ through $n$, stage $n+1$ gives us sets whose elements are objects of stage $<n+1$. This gives us all finite index stages, but we can keep going. We can have a stage after all the finite index stages, call it stage $\omega+1$, to consist of sets whose elements are objects of stage $n$ for some finite $n$. Then go again: stage $\omega+2$ gives sets whose elements are objects of stage $<\omega+2.$ And so on to define stage $\omega+n$ for all finite $n$. And then we can go past another limit stage to get stage $\omega+\omega+1$ consisting of sets whose elements are objects of stage $n$ or stage $\omega+n$ for some finite $n$. Having the basic idea in mind, let me return to the remark that it doesn't really matter what we take as the urelements. Suppose, for instance, that we had the real numbers among the urelements but not the complex numbers. Is this a problem? It turns out not to be a problem. We can build an isomorphic copy of $\mathbb C$ from pairs of real numbers, so by going up a few stages in the iterative construction we get a copy of $\mathbb C$. (Technical detail: to do this, we need to be able to code ordered pairs as (unordered) sets. There's multiple ways to do this. Perhaps the best known is Kuratowski's: $(x,y) = \{\{x\},\{x,y\}\}$. It is an exercise to check that this lets you distinguish the first versus second element in an ordered pair.) What if we didn't start out with the real numbers in the urelements? We can construct $\mathbb R$ as, for example, Dedekind cuts of rational numbers. And rational numbers can be constructed as equivalence classes of pairs of integers, which in turn can be built from pairs of natural numbers.
So if we have the natural numbers we can find a copy of $\mathbb R$ as a set of sets of … sets of natural numbers, in some finite stage of our construction. Indeed, it turns out we don't need any non-set objects: any mathematical object can be coded by a set, possibly in a very complicated manner. For example, here's one way to code natural numbers as sets. Let $0$ be coded by the empty set (which we added in at stage $1$). Then given a set coding $n$—let me abuse notation and denote that set simply as $n$—let $n+1$ be coded by the set $n \cup \{n\}$. An inductive argument shows that the set $n$ was added at stage $n+1$. (If something seems circular here, we'll come back to that issue shortly.) And because we have the successor operation $+1$ we can inductively define $+$, $\times$, and so on, giving an isomorphic copy of $\mathbb N$ where everything is a set. Call this isomorphic copy $\omega$, because that's its usual name. Note that $\omega$ doesn't show up as a set until stage $\omega+1$, giving us a first example of where we want to go beyond finite stages. Because we can code any mathematical object as a set, it suffices to have sets as our only objects and to completely dispense with urelements. This is the standard approach among set theorists, because the parsimony makes some arguments simpler. And the cost is nil, because once you know you have a copy of a mathematical object you can just use it and not worry about where it came from. Like how after showing you can construct real numbers as Dedekind cuts you can just work with real numbers and not worry about the implementation details. But if you prefer to include non-set objects, there's no harm. It's known that the two approaches are equivalent, proving the same theorems. With this simplified ontology, our starting stage $0$ is to start with nothing, and build up from there. Let's introduce some notation. Let $V_\alpha$ denote the set of all sets which appear by stage $\alpha$. (This is a set, because $V_\alpha$ appears by stage $\alpha+1$.) We can then give the following inductive definition, which amounts to a distillation of the above explanation: $V_0 = \emptyset$; $V_{\alpha+1} = \mathcal P(V_\alpha)$; and $V_\gamma = \bigcup_{\alpha < \gamma} V_\alpha$ if $\gamma$ is a limit stage. Finally, the sets are exactly those objects which occur at some stage $\alpha$ of this iterative construction. A couple minor comments. First, you can prove from this definition that $V_\alpha \subseteq V_\beta$ if $\alpha$ is an earlier stage than $\beta$. (The construction being iterative, it is naturally proved by induction.) That is, we get for free that each stage includes all elements from previous stages, and so we automatically jump the small hurdle we faced with urelements. Second, when doing cumulative constructions of transfinite length, it's usually convenient to collect everything so far at limit stages, and so only introduce new objects at successor stages. Here, if a set $x$ is in $V_\gamma$ for a limit stage $\gamma$, then by definition $x$ is in $V_\alpha$ for some earlier stage $\alpha < \gamma$. So the limit stages don't introduce any new sets, they just collect all the previous stages together. This was passed over in the above explanation with urelements, which is why we seemingly skipped stage $\omega$ and went straight to stage $\omega+1$. We have a copy of the natural numbers in $V_{\omega+1}$. And so most of the familiar objects of mathematics appear by $V_{\omega+\omega}$, the second limit stage. 
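If you want to play with these codings concretely, here is a minimal Python sketch that models hereditarily finite sets as frozensets and builds the first few pure stages $V_n$. It's only an illustration of the definitions above, and the helper names are ad hoc choices of mine.

```python
from itertools import chain, combinations

def von_neumann(n):
    """Code the natural number n as a set: 0 is the empty set, n+1 is n together with {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def kuratowski_pair(x, y):
    """Kuratowski's coding of the ordered pair (x, y) as {{x}, {x, y}}."""
    return frozenset([frozenset([x]), frozenset([x, y])])

def next_stage(stage):
    """V_{n+1} is the power set of V_n: all (frozen)sets whose elements come from V_n."""
    elems = list(stage)
    subsets = chain.from_iterable(combinations(elems, k) for k in range(len(elems) + 1))
    return frozenset(frozenset(s) for s in subsets)

V = [frozenset()]              # V_0 is empty
for _ in range(4):             # build V_1, ..., V_4
    V.append(next_stage(V[-1]))

print([len(v) for v in V])     # [0, 1, 2, 4, 16]: each stage is the power set of the previous one
print(von_neumann(2) in V[3], von_neumann(2) in V[2])   # the set coding 2 shows up at stage 3, not before
# Order matters for Kuratowski pairs: (0, 1) and (1, 0) get coded by different sets.
print(kuratowski_pair(von_neumann(0), von_neumann(1)) == kuratowski_pair(von_neumann(1), von_neumann(0)))
```

Running it prints the stage sizes 0, 1, 2, 4, 16, confirms that the set coding $2$ first appears at stage $3$ (matching the remark that $n$ is added at stage $n+1$), and shows that swapping the coordinates of a Kuratowski pair really does give a different set.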
So this refinement of the proto-definition gives us enough sets to do the mathematics we want to do. Why does this definition forbid impossible objects like the Russell set? Recall the would-be definition of the Russell set: $R = \{ x : x \not \in x\}$. In order for $R$ to be a set, it would have to occur at some stage $\alpha$. But we can see that this never happens. For example, the set $V_\alpha$ first appears as a set at stage $\alpha+1$, and $V_\alpha \not \in V_\alpha$. (Indeed, you can prove $x \not \in x$ for every set $x$ according to the iterative conception view of sets.) That is, the Russell 'set' would have to occur past stage $\alpha$ for every $\alpha$, and so never gets formed. Other paradoxes of size, such as the Burali–Forti paradox, get ruled out for the same reason: the paradoxical 'sets' they are based on don't show up by any stage $\alpha$, and so aren't actually sets. In particular, there is no set of all sets. Does this mean there are no paradoxical sets lurking in the cumulative hierarchy? We've seen that the known examples are avoided—and this includes more modern, more sophisticated examples, such as that furnished by Kunen's inconsistency theorem—but maybe there's a different, more subtle species hiding. All our knowledge is fallible, and we cannot 100% rule out the possibility of paradox. (We can't even 100% rule out the possibility that finite arithmetic has paradoxes!) But we can be very confident that we are safe. Explaining why is a big topic, so let me briefly give two reasons beyond the fact that this concept avoids the known issues. First, in the past century or so we have developed a highly detailed understanding of the cumulative hierarchy. If there were paradoxical sets lurking, it's rather probable that we would've found them by now. Second, the iterative conception of set doesn't furnish us with a single formal concept, but rather a whole hierarchy of stronger and stronger concepts. So if we did find a paradox, we could do damage control, see just how bad it is, and (hopefully!) determine that the vast majority of maths is untouched. This hierarchy of stronger and stronger conceptions of sets allowing more and more objects as sets can be measured by how many stages are in the construction. Stages are ordinals, the order-types of well-orders. Just like $\mathbb N$ being well-ordered allows us to do iterative constructions with the natural numbers as stages, we can do transfinite iterative constructions by allowing infinite ordinals as stages. The more stages allowed, the more sets you get. We've seen a sketch of why having stages below the ordinal $\omega+\omega$ is enough to construct most mathematical objects, but sometimes we need to go longer. For example, some work in category theory needs to go beyond the stage given by an inaccessible cardinal, a kind of really long ordinal. (In brief: $\kappa > \omega$ is inaccessible if (a) there is no cofinal function $f : \alpha \to \kappa$ for an ordinal $\alpha < \kappa$ and (b) if $\alpha < \kappa$ then the powerset of $\alpha$ is smaller in cardinality than $\kappa$.) So beyond its use as possible damage control, this hierarchy gives us a yardstick to measure just what is needed for such and such bit of mathematics. (An aside: 20th century and 21st century set theory has, in part, been concerned with the question of how far this hierarchy can go. And it's been discovered that seemingly dizzyingly high ordinals can have connections to small objects, such as what's true about reals and sets of reals.)
Let me close by briefly addressing the issue of axioms. After all, a common way lecturers handwave the Russell paradox away for students is to say that Axiomatic Set Theory™ fixes the problem, and yet this blog post has shied away from talking about axioms. That was an intentional move on my part: just saying "here's some axioms" doesn't fix any problem. If a concept has a problem, you want to describe a better version of the concept that avoids the problem. And then, if you're so inclined, you can write down axioms to describe this better version. The standard basic axioms to (partially!) describe this cumulative hierarchy picture of sets are ZFC, the Zermelo–Fraenkel axioms with Choice. For example, ZFC's axiom of infinity ensures there's at least $\omega$ many stages. (Technical detail: You have to formally phrase it more carefully than that, since how do you say what $\omega$ is without already having that it exists?) For another example, ZFC's powerset axiom ensures that you always have a next stage in the cumulative hierarchy. These axioms are far stronger than what's needed for most math. That's okay! Strong axioms make your life easier by not requiring you to carefully check whether you have enough power to carry out your desired constructions. So if you desire, you can simply not worry about what axioms you're using and just do your work. On the other hand, if you do like to think about questions of logical strength, there's a hierarchy of formal systems corresponding to how many stages you require, whether less or more than what ZFC requires. (Confession: things are subtler than just looking at number of stages, but this is a decent enough first approximation that I hope you forgive me for this slight mistruth.) So there's a picture for what sets are that avoids the problems with the proto-definition, while being permissive enough to let us do what we want with sets. If you have any comments on this post, please email me at kameryn.j.w [ at ] shsu ( dot ) edu.
CommonCrawl
Implicit surface In mathematics, an implicit surface is a surface in Euclidean space defined by an equation $F(x,y,z)=0.$ An implicit surface is the set of zeros of a function of three variables. Implicit means that the equation is not solved for x or y or z. The graph of a function is usually described by an equation $z=f(x,y)$ and is called an explicit representation. The third essential description of a surface is the parametric one: $(x(s,t),y(s,t),z(s,t))$, where the x-, y- and z-coordinates of surface points are represented by three functions $x(s,t)\,,y(s,t)\,,z(s,t)$ depending on common parameters $s,t$. Generally the change of representations is simple only when the explicit representation $z=f(x,y)$ is given: $z-f(x,y)=0$ (implicit), $(s,t,f(s,t))$ (parametric). Examples: 1. The plane $x+2y-3z+1=0.$ 2. The sphere $x^{2}+y^{2}+z^{2}-4=0.$ 3. The torus $(x^{2}+y^{2}+z^{2}+R^{2}-a^{2})^{2}-4R^{2}(x^{2}+y^{2})=0.$ 4. A surface of genus 2: $2y(y^{2}-3x^{2})(1-z^{2})+(x^{2}+y^{2})^{2}-(9z^{2}-1)(1-z^{2})=0$ (see diagram). 5. The surface of revolution $x^{2}+y^{2}-(\ln(z+3.2))^{2}-0.02=0$ (see diagram wineglass). For a plane, a sphere, and a torus there exist simple parametric representations. This is not true for the fourth example. The implicit function theorem describes conditions under which an equation $F(x,y,z)=0$ can be solved (at least implicitly) for x, y or z. But in general the solution may not be made explicit. This theorem is the key to the computation of essential geometric features of a surface: tangent planes, surface normals, curvatures (see below). But they have an essential drawback: their visualization is difficult. If $F(x,y,z)$ is polynomial in x, y and z, the surface is called algebraic. Example 5 is non-algebraic. Despite difficulty of visualization, implicit surfaces provide relatively simple techniques to generate theoretically (e.g. Steiner surface) and practically (see below) interesting surfaces. Formulas Throughout the following considerations the implicit surface is represented by an equation $F(x,y,z)=0$ where function $F$ meets the necessary conditions of differentiability. The partial derivatives of $F$ are $F_{x},F_{y},F_{z},F_{xx},\ldots $. Tangent plane and normal vector A surface point $(x_{0},y_{0},z_{0})$ is called regular if and only if the gradient of $F$ at $(x_{0},y_{0},z_{0})$ is not the zero vector $(0,0,0)$, meaning $(F_{x}(x_{0},y_{0},z_{0}),F_{y}(x_{0},y_{0},z_{0}),F_{z}(x_{0},y_{0},z_{0}))\neq (0,0,0)$. If the surface point $(x_{0},y_{0},z_{0})$ is not regular, it is called singular. The equation of the tangent plane at a regular point $(x_{0},y_{0},z_{0})$ is $F_{x}(x_{0},y_{0},z_{0})(x-x_{0})+F_{y}(x_{0},y_{0},z_{0})(y-y_{0})+F_{z}(x_{0},y_{0},z_{0})(z-z_{0})=0,$ and a normal vector is $\mathbf {n} (x_{0},y_{0},z_{0})=(F_{x}(x_{0},y_{0},z_{0}),F_{y}(x_{0},y_{0},z_{0}),F_{z}(x_{0},y_{0},z_{0}))^{T}.$ Normal curvature In order to keep the formula simple the arguments $(x_{0},y_{0},z_{0})$ are omitted: $\kappa _{n}={\frac {\mathbf {v} ^{\top }H_{F}\mathbf {v} }{\|\operatorname {grad} F\|}}$ is the normal curvature of the surface at a regular point for the unit tangent direction $\mathbf {v} $. $H_{F}$ is the Hessian matrix of $F$ (matrix of the second derivatives). The proof of this formula relies (as in the case of an implicit curve) on the implicit function theorem and the formula for the normal curvature of a parametric surface. 
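As a concrete illustration of these formulas (a small numerical sketch, not taken from the references below), the following Python code approximates the gradient and Hessian of the torus equation by finite differences and prints a unit normal vector and two normal curvatures at a point on the surface. The torus parameters $R=2$, $a=1$, the sample point, the tangent directions and the step sizes are arbitrary choices made only for illustration.

```python
import numpy as np

# Torus from the examples above: F(x,y,z) = (x^2+y^2+z^2+R^2-a^2)^2 - 4 R^2 (x^2+y^2) = 0.
R, a = 2.0, 1.0

def F(p):
    x, y, z = p
    return (x*x + y*y + z*z + R*R - a*a)**2 - 4*R*R*(x*x + y*y)

def grad(f, p, h=1e-5):
    """Central-difference approximation of (F_x, F_y, F_z) at p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(f, p, h=1e-4):
    """Finite-difference approximation of the Hessian matrix H_F at p."""
    H = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        H[:, i] = (grad(f, p + e, h) - grad(f, p - e, h)) / (2 * h)
    return H

p0 = np.array([R + a, 0.0, 0.0])      # a regular point on the torus (outer equator)
n = grad(F, p0)                        # normal vector (F_x, F_y, F_z)
print("unit normal:", n / np.linalg.norm(n))           # points along the x-axis here

H = hessian(F, p0)
for v in (np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])):   # unit tangent directions
    kappa = v @ H @ v / np.linalg.norm(n)              # normal curvature formula from above
    print("normal curvature along", v, "is", kappa)
# Expected values: about 1 = 1/a (tube circle) and about 1/3 = 1/(R + a) (outer equator circle).
```

The two printed curvatures match the circles of radius $a$ and $R+a$ cut out by the corresponding normal sections, which is a quick sanity check of the normal curvature formula.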
Applications of implicit surfaces As in the case of implicit curves it is an easy task to generate implicit surfaces with desired shapes by applying algebraic operations (addition, multiplication) on simple primitives. Equipotential surface of point charges The electrical potential of a point charge $q_{i}$ at point $\mathbf {p} _{i}=(x_{i},y_{i},z_{i})$ generates at point $\mathbf {p} =(x,y,z)$ the potential (omitting physical constants) $F_{i}(x,y,z)={\frac {q_{i}}{\|\mathbf {p} -\mathbf {p} _{i}\|}}.$ The equipotential surface for the potential value $c$ is the implicit surface $F_{i}(x,y,z)-c=0$ which is a sphere with center at point $\mathbf {p} _{i}$. The potential of $4$ point charges is represented by $F(x,y,z)={\frac {q_{1}}{\|\mathbf {p} -\mathbf {p} _{1}\|}}+{\frac {q_{2}}{\|\mathbf {p} -\mathbf {p} _{2}\|}}+{\frac {q_{3}}{\|\mathbf {p} -\mathbf {p} _{3}\|}}+{\frac {q_{4}}{\|\mathbf {p} -\mathbf {p} _{4}\|}}.$ For the picture the four charges equal 1 and are located at the points $(\pm 1,\pm 1,0)$. The displayed surface is the equipotential surface (implicit surface) $F(x,y,z)-2.8=0$. Constant distance product surface A Cassini oval can be defined as the point set for which the product of the distances to two given points is constant (in contrast, for an ellipse the sum is constant). In a similar way implicit surfaces can be defined by a constant distance product to several fixed points. In the diagram metamorphoses the upper left surface is generated by this rule: With ${\begin{aligned}F(x,y,z)={}&{\Big (}{\sqrt {(x-1)^{2}+y^{2}+z^{2}}}\cdot {\sqrt {(x+1)^{2}+y^{2}+z^{2}}}\\&\qquad \cdot {\sqrt {x^{2}+(y-1)^{2}+z^{2}}}\cdot {\sqrt {x^{2}+(y+1)^{2}+z^{2}}}{\Big )}\end{aligned}}$ the constant distance product surface $F(x,y,z)-1.1=0$ is displayed. Metamorphoses of implicit surfaces A further simple method to generate new implicit surfaces is called metamorphosis of implicit surfaces: For two implicit surfaces $F_{1}(x,y,z)=0,F_{2}(x,y,z)=0$ (in the diagram: a constant distance product surface and a torus) one defines new surfaces using the design parameter $\mu \in [0,1]$: $F(x,y,z)=\mu F_{1}(x,y,z)+(1-\mu )F_{2}(x,y,z)=0$ In the diagram the design parameter is successively $\mu =0,\,0.33,\,0.66,\,1$ . Smooth approximations of several implicit surfaces $\Pi $-surfaces [1] can be used to approximate any given smooth and bounded object in $R^{3}$ whose surface is defined by a single polynomial as a product of subsidiary polynomials. In other words, we can design any smooth object with a single algebraic surface. Let us denote the defining polynomials as $f_{i}\in \mathbb {R} [x_{1},\ldots ,x_{n}](i=1,\ldots ,k)$. Then, the approximating object is defined by the polynomial $F(x,y,z)=\prod _{i}f_{i}(x,y,z)-r$[1] where $r\in \mathbb {R} $ stands for the blending parameter that controls the approximating error. 
Analogously to the smooth approximation with implicit curves, the equation $F(x,y,z)=F_{1}(x,y,z)\cdot F_{2}(x,y,z)\cdot F_{3}(x,y,z)-r=0$ represents, for suitable values of the parameter $r$, smooth approximations of three intersecting tori with equations ${\begin{aligned}F_{1}=(x^{2}+y^{2}+z^{2}+R^{2}-a^{2})^{2}-4R^{2}(x^{2}+y^{2})=0,\\[3pt]F_{2}=(x^{2}+y^{2}+z^{2}+R^{2}-a^{2})^{2}-4R^{2}(x^{2}+z^{2})=0,\\[3pt]F_{3}=(x^{2}+y^{2}+z^{2}+R^{2}-a^{2})^{2}-4R^{2}(y^{2}+z^{2})=0.\end{aligned}}$ (In the diagram the parameters are $R=1,\,a=0.2,\,r=0.01.$) Visualization of implicit surfaces There are various algorithms for rendering implicit surfaces,[2] including the marching cubes algorithm.[3] Essentially there are two ideas for visualizing an implicit surface: the first generates a net of polygons which is then displayed (see surface triangulation), and the second relies on ray tracing, which determines intersection points of rays with the surface.[4] The intersection points can be approximated by sphere tracing, using a signed distance function to find the distance to the surface.[5] (A minimal sketch of this idea is given below, after the reference list.) See also • Implicit curve References 1. Adriano N. Raposo; Abel J.P. Gomes (2019). "Pi-surfaces: products of implicit surfaces towards constructive composition of 3D objects". WSCG 2019 27. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. arXiv:1906.06751. 2. Jules Bloomenthal; Chandrajit Bajaj; Brian Wyvill (15 August 1997). Introduction to Implicit Surfaces. Morgan Kaufmann. ISBN 978-1-55860-233-5. 3. Ian Stephenson (1 December 2004). Production Rendering: Design and Implementation. Springer Science & Business Media. ISBN 978-1-85233-821-3. 4. Eric Haines, Tomas Akenine-Moller: Ray Tracing Gems, Springer, 2019, ISBN 978-1-4842-4427-2 5. Hardy, Alexandre; Steeb, Willi-Hans (2008). Mathematical Tools in Computer Graphics with C# Implementations. World Scientific. ISBN 978-981-279-102-3. Further reading • Gomes, A., Voiculescu, I., Jorge, J., Wyvill, B., Galbraith, C.: Implicit Curves and Surfaces: Mathematics, Data Structures and Algorithms, 2009, Springer-Verlag London, ISBN 978-1-84882-405-8 • Thorpe: Elementary Topics in Differential Geometry, Springer-Verlag, New York, 1979, ISBN 0-387-90357-7 External links • Sultanow: Implizite Flächen • Hartmann: Geometry and Algorithms for COMPUTER AIDED DESIGN • GEOMVIEW • K3Dsurf: 3d surface generator • SURF: Visualisierung algebraischer Flächen Dimension Dimensional spaces • Vector space • Euclidean space • Affine space • Projective space • Free module • Manifold • Algebraic variety • Spacetime Other dimensions • Krull • Lebesgue covering • Inductive • Hausdorff • Minkowski • Fractal • Degrees of freedom Polytopes and shapes • Hyperplane • Hypersurface • Hypercube • Hyperrectangle • Demihypercube • Hypersphere • Cross-polytope • Simplex • Hyperpyramid Dimensions by number • Zero • One • Two • Three • Four • Five • Six • Seven • Eight • n-dimensions See also • Hyperspace • Codimension Category
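The sphere-tracing idea mentioned in the Visualization section can be sketched in a few lines. The following Python snippet is an illustrative sketch only (not code from the works cited above): it marches a single ray against the signed distance function of a sphere, and the scene, tolerance and step limit are arbitrary choices.

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """March along the ray, stepping by the current distance bound until the surface is reached."""
    norm = math.sqrt(sum(d*d for d in direction))
    direction = tuple(d / norm for d in direction)   # normalize the ray direction
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:          # close enough: report the hit point
            return p
        t += d               # safe step: the surface is at least d away in every direction
        if t > max_dist:
            break
    return None              # the ray missed the surface

hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere)
print(hit)   # approximately (0, 0, 2): the near side of the sphere centered at z = 3
```

The same loop works for any signed distance function, which is why sphere tracing is a popular way to render implicit surfaces defined by such functions.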
Wikipedia
$H^{\infty}$-calculus for a system of Laplace operators with mixed order boundary conditions
Matthias Geissert, Horst Heck and Christof Trunk (TU Darmstadt, FB Mathematik, Schlossgartenstr 7, D-64289 Darmstadt, Germany)
October 2013, 6(5): 1259-1275. doi: 10.3934/dcdss.2013.6.1259
Received January 2012 Revised February 2012 Published March 2013
In this paper we prove that the $L^p$ realisation of a system of Laplace operators subjected to mixed first and zero order boundary conditions admits a bounded $H^{\infty}$-calculus. Furthermore, we apply this result to the Magnetohydrodynamic equation with perfectly conducting wall condition.
Keywords: $H^\infty$-calculus, MHD system, Hodge boundary condition, Laplace operator.
Mathematics Subject Classification: Primary: 35K51; Secondary: 76W0.
Citation: Matthias Geissert, Horst Heck, Christof Trunk. $H^{\infty}$-calculus for a system of Laplace operators with mixed order boundary conditions. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5) : 1259-1275. doi: 10.3934/dcdss.2013.6.1259
CommonCrawl
Oberwolfach Research Institute for Mathematics
The Oberwolfach Research Institute for Mathematics (German: Mathematisches Forschungsinstitut Oberwolfach) is a center for mathematical research in Oberwolfach, Germany. It was founded by mathematician Wilhelm Süss in 1944. It organizes weekly workshops on diverse topics where mathematicians and scientists from all over the world come to do collaborative research. The Institute is a member of the Leibniz Association, funded mainly by the German Federal Ministry of Education and Research and by the state of Baden-Württemberg. It also receives substantial funding from the Friends of Oberwolfach foundation, from the Oberwolfach Foundation and from numerous donors.
History
The Oberwolfach Research Institute for Mathematics (MFO) was founded as the Reich Institute of Mathematics (German: Reichsinstitut für Mathematik) on 1 September 1944. It was one of several research institutes founded by the Nazis in order to further the German war effort, which at that time was clearly failing.[1] The location was selected to be remote so as not to be a target for Allied bombing. Originally it was housed in a building called the Lorenzenhof, a large Black Forest hunting lodge.
After the war, Süss, a member of the Nazi party, was suspended for two months in 1945 as part of the country's denazification efforts, but thereafter remained head of the institute. Though the institute lost its government funding, Süss was able to keep it going with other grants, and contributed to rebuilding mathematics in Germany following the fall of the Third Reich by hosting international mathematical conferences. Some of these were organised by Reinhold Baer, a mathematician who had been expelled from the University of Halle in 1933 for being Jewish, but who returned to Germany in 1956, to the University of Frankfurt. The institute regained government funding in the 1950s.[1]
After Süss's death in 1958, Hellmuth Kneser was briefly director before Theodor Schneider took over the role permanently in 1959. In that year, he and others formed the mathematical society Gesellschaft für Mathematische Forschung e. V. in order to run the MFO.[1]
• 1967: 10 October: Inauguration of the guest house of the MFO, a gift of the Volkswagen Foundation
• 1975: 13 June: Inauguration of the library and meetings building of the MFO, which replaced the old castle, also a gift of the Volkswagen Foundation
• 1989: 26 May: Inauguration of the extension of the guest building
• 1995: Establishment of the research programme "Research in Pairs"
• 2005: 1 January: The MFO becomes a member of the Leibniz Association
• 2007: Establishment of the post-doctoral programme "Oberwolfach Leibniz Fellows"
• 2007: 5 May: Inauguration of the library extension, a gift of the Klaus Tschira Stiftung and the Volkswagen Foundation
• 2005–2010: General restoration of the guest house and the library building
Statue
The iconic model of the Boy surface was installed in front of the Institute as a gift from Mercedes-Benz on 28 January 1991. The Boy surface is named after Werner Boy, who constructed the surface in his 1901 thesis, written under the direction of David Hilbert.
Directors • 1944–1958, Wilhelm Süss • 1958–1959, Hellmuth Kneser • 1959–1963, Theodor Schneider • 1963–1994, Martin Barner • 1994–2002, Matthias Kreck • 2002–2013, Gert-Martin Greuel • 2013–present Gerhard Huisken Oberwolfach Prize The Oberwolfach Prize is awarded approximately every three years for excellent achievements in changing fields of mathematics to young mathematicians not older than 35 years. It is financed by the Oberwolfach Foundation and awarded in cooperation with the institute. Prize winners • 1991 Peter Kronheimer • 1993 Jörg Brüdern and Jens Franke • 1996 Gero Friesecke and Stefan Sauter • 1998 Alice Guionnet • 2000 Luca Trevisan • 2003 Paul Biran • 2007 Ngô Bảo Châu • 2010 Nicola Gigli and László Székelyhidi • 2013 Hugo Duminil-Copin • 2016 Jacob Fox • 2019 Oscar Randal-Williams References 1. Jackson, Allyn (August 2000). "Oberwolfach,Yesterday and Today" (PDF). Notices of the AMS. 47 (7). External links Wikimedia Commons has media related to Mathematisches Forschungsinstitut Oberwolfach. • Home page of the institute • Web page about the Oberwolfach Prize 48°20′44″N 8°14′07″E The European Mathematical Society International member societies • European Consortium for Mathematics in Industry • European Society for Mathematical and Theoretical Biology National member societies • Austria • Belarus • Belgium • Belgian Mathematical Society • Belgian Statistical Society • Bosnia and Herzegovina • Bulgaria • Croatia • Cyprus • Czech Republic • Denmark • Estonia • Finland • France • Mathematical Society of France • Society of Applied & Industrial Mathematics • Société Francaise de Statistique • Georgia • Germany • German Mathematical Society • Association of Applied Mathematics and Mechanics • Greece • Hungary • Iceland • Ireland • Israel • Italy • Italian Mathematical Union • Società Italiana di Matematica Applicata e Industriale • The Italian Association of Mathematics applied to Economic and Social Sciences • Latvia • Lithuania • Luxembourg • Macedonia • Malta • Montenegro • Netherlands • Norway • Norwegian Mathematical Society • Norwegian Statistical Association • Poland • Portugal • Romania • Romanian Mathematical Society • Romanian Society of Mathematicians • Russia • Moscow Mathematical Society • St. 
Petersburg Mathematical Society • Ural Mathematical Society • Slovakia • Slovak Mathematical Society • Union of Slovak Mathematicians and Physicists • Slovenia • Spain • Catalan Society of Mathematics • Royal Spanish Mathematical Society • Spanish Society of Statistics and Operations Research • The Spanish Society of Applied Mathematics • Sweden • Swedish Mathematical Society • Swedish Society of Statisticians • Switzerland • Turkey • Ukraine • United Kingdom • Edinburgh Mathematical Society • Institute of Mathematics and its Applications • London Mathematical Society Academic Institutional Members • Abdus Salam International Centre for Theoretical Physics • Academy of Sciences of Moldova • Bernoulli Center • Centre de Recerca Matemàtica • Centre International de Rencontres Mathématiques • Centrum voor Wiskunde en Informatica • Emmy Noether Research Institute for Mathematics • Erwin Schrödinger International Institute for Mathematical Physics • European Institute for Statistics, Probability and Operations Research • Institut des Hautes Études Scientifiques • Institut Henri Poincaré • Institut Mittag-Leffler • Institute for Mathematical Research • International Centre for Mathematical Sciences • Isaac Newton Institute for Mathematical Sciences • Mathematisches Forschungsinstitut Oberwolfach • Mathematical Research Institute • Max Planck Institute for Mathematics in the Sciences • Research Institute of Mathematics of the Voronezh State University • Serbian Academy of Science and Arts • Mathematical Society of Serbia • Stefan Banach International Mathematical Center • Thomas Stieltjes Institute for Mathematics Institutional Members • Central European University • Faculty of Mathematics at the University of Barcelona • Cellule MathDoc Leibniz Association Research Museums and Collections • Deutsche Sammlung von Mikroorganismen und Zellkulturen • German Maritime Museum • German Mining Museum • Museum für Naturkunde • Museum Koenig • Römisch-Germanisches Zentralmuseum • Senckenberg Nature Research Society Other Research Institutions • Centre for Contemporary History • Centre for Tropical Marine Ecology • Dagstuhl • Ferdinand-Braun-Institut • Georg Eckert Institute for International Textbook Research • German Institute for Economic Research • German Institute of Global and Area Studies • German Research Institute for Public Administration • GESIS – Leibniz Institute for the Social Sciences • Ifo Institute for Economic Research • Innovations for High Performance Microelectronics • Institut für Kristallzüchtung • Kiel Institute for the World Economy • Leibniz Institute for Astrophysics Potsdam • Leibniz Institute for Baltic Sea Research • Leibniz Institute for Neurobiology • Leibniz Institute for Science and Mathematics Education at the University of Kiel • Leibniz Institute of Agricultural Development in Central and Eastern Europe • Leibniz-Institut für Festkörper- und Werkstoffforschung • Leibniz-Institut für Molekulare Pharmakologie • Mathematical Research Institute of Oberwolfach • German National Library of Economics • Research Center Borstel • RWI Essen • Socio-Economic Panel • WZB Berlin Social Science Center Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • CiNii
Wikipedia
The value of myocardial MIBI washout rate in risk stratification of coronary artery disease
Mohammed Omar Mohammed Othman ORCID: orcid.org/0000-0002-0974-04811, Hosna Mohammed Moustafa1, Mohammed Mahmoud Abd El-Ghany2 & Shaimaa Ahmed Abd El-Mon'em El-Rasad1
Although it is well established that MIBI does not redistribute within the myocardium as thallium does, it shows a reverse redistribution phenomenon that can be expressed by the rate of myocardial MIBI washout. The aim of this study was to calculate the global myocardial washout rate of MIBI (GWR) in patients diagnosed with coronary artery disease (CAD) of different risk stratifications.
This prospective study included 100 patients. All patients were stratified into low-, intermediate-, and high-risk groups according to clinical evaluation using the Framingham score, stress ECG results using Duke's score, and finally myocardial perfusion imaging prognostic findings. GWR was estimated in each of these groups.
The mean GWR was 9.5%, 13%, and 18% in patients clinically stratified as low, intermediate, and high risk, respectively, with a correlation coefficient of 0.4. In addition, the mean GWR was 9.7%, 15.4%, and 18.7% in patients stratified according to exercise ECG findings as low, intermediate, and high risk, respectively, with a correlation coefficient of 0.6. Combining all myocardial perfusion findings, the mean GWR was 7.9%, 15.1%, and 19.3% in patients with low-, intermediate-, and high-risk imaging findings, respectively, with a correlation coefficient of 0.71.
GWR is positively correlated with the risk stratification of CAD patients and can be used as an additional parameter to assess their risk.
CAD is a chronic condition affecting large numbers of people across the world. Hence, assessment of the risk and actual burden of disease is the key determinant for management, together with other interventions that improve the quality of a patient's life [1]. It is well noted that the mortality rate owing to CAD has decreased, a result of better detection of CAD patients, assessment of their risk, and better subsequent treatment [2]. Single-photon emission computed tomography (SPECT) myocardial perfusion scanning using Tc99m MIBI is considered one of the main non-invasive diagnostic tools providing valuable information in the assessment of CAD [3]. It is well known that MIBI, being lipophilic, enters the myocardial mitochondria passively and then, being a cation, is trapped inside by the negative membrane potential. With an impaired respiratory chain resulting from repeated ischemic changes, the mitochondrial membrane partially loses its negative charge, with subsequently enhanced MIBI washout [3].
The aim of this work is the calculation of GWR in CAD patients with different risk stratifications.
Study design and population
This prospective study was performed in the Nuclear Medicine Department of Kasr Alainy Hospital from March 2017 until December 2018. It included 100 patients [76 males and 24 females] with a mean age of 54.2 ± 8.9 years. Tc99m MIBI myocardial perfusion gated SPECT imaging was done for all patients.
Patients included were CAD patients with different risk stratifications who were referred for myocardial perfusion scanning. We excluded patients who underwent a pharmacological stress study, as Duke's criteria cannot be applied.
Clinical risk stratifications
All patients were stratified clinically, through the total points received according to the primary version of the Framingham risk score, into low-, intermediate-, and high-risk patients.
For female patients, a total of 19 points or less was considered low risk, 20 to 23 points intermediate risk, and above 23 points high risk. For male patients, a total of 12 points or less was considered low risk, 13 to 15 points intermediate risk, and above 15 points high risk [4].
Exercise stress study results
All patients were then separately stratified by the exercise stress results, through the total points received according to Duke's risk score, into low-, intermediate-, and high-risk patients. The Duke Treadmill Score (DTS) was calculated as follows:
$$ \mathrm{DTS} = \text{exercise time (minutes)} - (5 \times \text{ST deviation in mm}) - (4 \times \text{angina index}) $$
The exercise time was based on the Bruce protocol. ST deviation referred to the maximum ST change (elevation or depression) in any lead except lead aVR. The angina index was 0 points if no chest pain occurred, 1 point if non-limiting chest pain occurred, and 2 points if typical anginal pain occurred that limited exercise. Patients were categorized as low risk (score ≥ + 5), intermediate risk (score from − 10 to + 4), and high risk (score ≤ − 11) [5].
Myocardial perfusion scan (MPS)
A two-day protocol was used: rest on one day and exercise stress on the other. An additional delayed image was acquired following the rest study.
Imaging protocol used
Patients were instructed to fast for 4 to 6 h before the test. Cardiac medications, including beta blockers, theophylline derivatives, nitrates, and calcium channel blockers, were stopped 48 h prior to the study. Fatty foods (egg, milk, or chocolate) were given 15 min after Tc99m MIBI injection to facilitate liver and biliary system clearance. A dose of 15 ± 3 mCi (555 ± 111 MBq) of Tc99m MIBI was injected intravenously for each study. The rest study included two images: one at 60–90 min post-injection and a delayed image 4 h after the same injection. The stress study was done as usual at 30–45 min after injection.
A dual-head SPECT gamma camera was used with the two heads perpendicular to each other, fitted with high-resolution low-energy collimators. The photon peak was adjusted at 140 keV with a 20% window, a 1.3 zoom factor, and a 64 × 64 matrix. Patients were supine with the left arm elevated to decrease attenuation. Imaging was performed from the 45° right anterior oblique to the 135° left posterior oblique position. ECG-synchronized data were acquired with an R-wave trigger and 8 frames per cardiac cycle, producing 32 projections of 40 s each. Reconstruction was done with filtered back projection and a Butterworth filter. All raw data sets were corrected with the isotope decay factor and checked for patient motion by reviewing a rotating cine display.
Imaging interpretation
Perfusion defects in stress images were interpreted qualitatively in terms of their location, size, and severity. Location was interpreted according to the coronary vascular territories likely to be involved. Size was interpreted according to the number of segments within the affected wall: small when the number of affected segments was less than one third of the total wall segments, moderate when it was between one third and two thirds, and severe when it was more than two thirds. Severity was interpreted according to the color display (mild, moderate, and severe).
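To make the arithmetic concrete, here is a minimal Python sketch of the Duke Treadmill Score calculation and the risk categories used earlier in this section. It is illustrative only: the function names and the example values are mine, not taken from the authors' software.

def duke_treadmill_score(exercise_minutes, st_deviation_mm, angina_index):
    # DTS = exercise time (min, Bruce protocol) - 5 x ST deviation (mm) - 4 x angina index
    # angina index: 0 = none, 1 = non-limiting chest pain, 2 = exercise-limiting typical angina
    return exercise_minutes - 5.0 * st_deviation_mm - 4.0 * angina_index

def duke_risk_category(dts):
    # Cut-offs as stated above: low risk for DTS >= +5, high risk for DTS <= -11,
    # intermediate risk in between.
    if dts >= 5:
        return "low"
    if dts <= -11:
        return "high"
    return "intermediate"

# Example: 7 min on the Bruce protocol, 2 mm ST depression, non-limiting chest pain:
# DTS = 7 - 10 - 4 = -7, which falls in the intermediate-risk band.
dts = duke_treadmill_score(7.0, 2.0, 1)
print(dts, duke_risk_category(dts))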
The reversibility of the stress-induced defects was assessed in the rest images as complete (became normally perfused), partial (showed perfusion improvement but did not return to normal), or absent (fixed perfusion defects in both the stress and rest studies).
Semi-quantitative analysis used a two-dimensional polar map display showing the apex at the center and the base of the ventricle at the periphery. The map was divided into 17 segments representing the whole myocardium: the apex in the center, the four apical segments as the first ring, the six mid-cavity segments as the second ring, and the six basal segments as the outermost ring. Each segment was scored visually based on the tracer uptake in the rest and stress images as 0, normal; 1, mildly reduced; 2, moderately reduced; 3, severely reduced; and 4, absent tracer uptake. A summed rest score (SRS) and summed stress score (SSS) were obtained by adding the scores of the 17 segments of the rest and stress images, respectively. The summed difference score (SDS) was determined by subtracting the SRS from the SSS [6].
Gated data were used for quantitative results, including end-systolic volume, end-diastolic volume, and ejection fraction of the left ventricle. Stress-induced transient ischemic dilatation was also identified [6].
MPS risk stratification
All patients were classified into three groups according to the MPS findings as follows [7]:
High-risk group: patients with one or more of these findings: a stress-induced large perfusion defect (> 20% of total left ventricular volume), stress-induced multiple perfusion defects of moderate size, severe resting left ventricular dysfunction (LV EF < 35%), or SSS > 13.
Intermediate-risk group: patients with one or more of these findings: a stress-induced moderate (10–20%) perfusion defect, mild/moderate resting LV dysfunction (LV EF = 35–49%), or SSS 8–13.
Low-risk group: patients with all of these findings: a normal or small myocardial perfusion defect (< 10% of total left ventricular volume), normal left ventricular function (LV EF 50% or more), and SSS < 8.
Calculation of GWR
GWR was calculated for each patient using counts per pixel in the two-dimensional polar map display as follows:
$$ \mathrm{GWR} = \frac{\mathrm{Ce} - (\mathrm{Cd} \times \text{decay factor})}{\mathrm{Ce}} \times 100\% $$
Ce: total myocardial counts in the early image
Cd: total myocardial counts in the delayed image
Decay factor = 1/(1/2)^x = 2^x, where x = (time difference in hours)/6, reflecting the approximately 6-hour physical half-life of Tc-99m.
Data were statistically described in terms of mean ± standard deviation (± SD), median and range, or frequencies (number of cases) and percentages, when appropriate. Comparison of numerical variables between the study groups was done using the Kruskal-Wallis test with post hoc multiple 2-group comparisons. Correlation between variables was assessed using the Pearson moment correlation for linear relations of normally distributed variables and the Spearman rank correlation for non-normal variables or non-linear monotonic relations. Stepwise multivariate regression analysis was performed to examine the potential interactions among the entered covariates. The Student t test was used for comparison of paired data, and P values less than 0.05 were considered statistically significant. All statistical calculations were done using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp, Armonk, NY, USA) for Microsoft Windows.
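The GWR formula above, including the decay correction of the delayed counts, can likewise be written out in a few lines. This is a sketch under the assumptions stated above (time difference measured in hours, ~6-hour physical half-life of Tc-99m); the variable names and example counts are invented for illustration.

def global_washout_rate(counts_early, counts_delayed, hours_between):
    # GWR = (Ce - Cd x decay factor) / Ce x 100%
    # The delayed counts are corrected for radioactive decay back to the time of
    # the early image: decay factor = 1 / (1/2)**x = 2**x, with x = elapsed time / 6 h.
    x = hours_between / 6.0
    decay_factor = 2.0 ** x
    return (counts_early - counts_delayed * decay_factor) / counts_early * 100.0

# Example: 1.0e6 total counts in the early image and 0.6e6 counts in a delayed
# image acquired 3 h later: decay factor = 2**(3/6) ~ 1.41, so GWR ~ 15%.
print(global_washout_rate(1.0e6, 0.6e6, 3.0))

Here the decay factor corrects the delayed counts back to the time of the early acquisition, which is how the x = (time difference)/6 term is read in this sketch.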
Characteristics of the patients The study included 100 patients, [76 males and 24 females with mean age 54.25 ± 8.9 years, maximum age was 78 years and minimum age 35 years. Fifty-four patients were hypertensive, 41 patients were diabetic, 36 patients were smoker, and 39 patients were dyslipidemic. Descriptive data Within the 100 patients according to Framingham score, 15, 55, and 30 patients were considered as low-, intermediate-, and high-risk respectively. Within the 100 patients according to Duke's risk score, 37, 43, and 20 patients were considered as low-, intermediate-, and high-risk respectively. Scintigraphic myocardial perfusion findings within the 100 patients demonstrated 43 patients had perfusion abnormality in less than two segments (small perfusion defects), 29 patients in 2–8 segments (moderate perfusion defects), and 28 patients in more than 8 segments (large perfusion defects). Forty-three patients had SSS less than 8, 15 patients had SSS between 8 and 13, and 42 patients had SSS more than 13. Twelve patients with global myocardial had impaired motions. Sixty-three patients had EF more than or equal 50, 12 patients had EF between 49 and 35, while 25 patients had EF less than 35. Combining all myocardial perfusion findings of the 100 patients, 40, 30, and 30 patients were considered as low-, intermediate-, and high-risk respectively. According to Framingham scoring system, the mean GWR was 9.5% within 15 low-risk patients, 13% within 55 intermediate-risk patients, and 18% within 30 high-risk patients. According to Duke's risk score, the mean GWR was 9.7% within 37 low-risk patients, 15.4 within 43 intermediate-risk patients, and 18.7 within 20 high-risk patients. Combining all myocardial perfusion findings, the mean GWR was 7.9% within 46 patients with low-risk scintigraphic features, 15.1% within 30 patients with intermediate-risk scintigraphic features, and 19.3% within 24 patients with high-risk scintigraphic features. GWR was found to be intermediately positively correlated with the clinical risk and exercise ECG risk stratifications with correlation coefficient 0.4 and 0.6 respectively and strongly positively correlated with myocardial perfusion imaging poor prognostic features with correlation coefficient 0.71 (Figs. 1, 2, and 3). GWR correlation with Framingham's score GWR correlation with Duke's score GWR correlation with MPS stratification GWR was negatively correlated with the ejection fraction with correlation coefficient − 0.4 (Fig. 4) and strongly positively correlated with the number of affected segment and summed difference score with correlation coefficient 0.7 and 0.76 respectively (Figs. 5 and 6). GWR correlation with EF GWR correlation with affected segment number GWR correlation with SDS Case presentation is found in Figs. 7 and 8. Polar map in stress, early rest, and delayed rest of MPS of 65-year-old female patient presented with dyspnea, diabetic but not hypertensive classified clinically to be low risk. Her perfusion scan demonstrated mild small ischemia in the apex in the stress study with complete recovery in the rest phase. SSS = 3, SRS = 1, and SDS 2. EF was 62%. Her GWR was 8% Polar map in stress, early rest, and delayed rest of MPS of 62-year-old male patient presented with severe chest pain radiating to the back, diabetic, hypertensive, and smoker classified clinically to be high risk. His perfusion scan demonstrated large severe hypoperfusion in the stress study involving the apex as well as the apical and mid segments of the anterior wall. 
The latter demonstrated partial recovery in the rest phase. SSS = 18, SRS = 12, and SDS 6. EF was 32%. Her GWR was 23% CAD is a chronic condition affecting large numbers of people across the world accurate evaluation of CAD would greatly increase the survival rate of CAD patients [1]. The ultimate goal is identifying those patients at high risk of adverse outcomes who may benefit from revascularization procedures as well as identification of a low-risk population to avoid unnecessary invasive tests and procedure [8]. The ESC guidelines on the management of stable CAD define high event risk patients as those with annual mortality > 3% and low-risk patients with annual mortality < 1% [9]. We aimed in the present study to estimate MIBI GWR in CAD patients with different risk stratifications according to Framingham score, Duke's score, and myocardial perfusion findings so that GWR can be an additional prognostic parameter that aid in decision making in patient with CAD. Richter et al. was first to explain the accelerated MIBI washout in 1995 on 36 patients with CAD. They calculated the GWR after 120 min. In 35 of 114 segments, the score improved within 120 min so that 69.9% ± 22.5% to 74.5% ± 20.8% (P < 0.01). In 7 of 114 segments, the score deteriorated within 120 min so that 85.6% ± 9.9% to 80.1% ± 10.7% (P < 0.02). They concluded that some early defects might show changes in the late phase, such that some defects improved and other defects became worse. The major drawback in the Richter's study was using the planar image for quantitative assessment. This caused distortion and decreased accuracy of his results [10]. Khandaker et al. attempted to resolve the uncertain rating assigned to SPECT MPI in risk assessment of CAD in asymptomatic patients. Two hundred and sixty asymptomatic patients with mean age 67 ± 8 years and without known CAD were included in the study. Survival at 10 years was 79% in patients with normal or low-risk cardiac scan. They concluded that SPECT MPI could accurately detect and stratify asymptomatic patients. They concluded that mortality was 4.0% per year in patients with high-risk scans compared to 1.6% patients with normal scans [11]. In the present study, the analysis of myocardial perfusion SPECT in the study group included qualitative, semi-quantitative, and quantitative data. Qualitative findings showed 43 patients had perfusion abnormality in less than two segments (small perfusion defects), 29 patients in 2–8 segments (moderate perfusion defects), and 28 patients in more than 8 segments (large perfusion defects). Fifty patients showed normal wall motion, 38 patients with regional wall abnormality, and 12 patients with global myocardial impaired motions. America et al. studied the additive prognostic value of perfusion and functional data of quantitative gated SPECT in 453 consecutive patients followed for average 1.33 years. Their result included 236 patients had an abnormal study, of whom 27 patients experienced serious cardiac events and 47 patients experience non-serious cardiac events and the rest witnessed no specific cardiac event. They concluded that perfusion and functional parameters derived from quantitative gated SPECT imaging could adequately be used for cardiac risk assessment. EF 52% or SSS > 22 are at increased risk for subsequent hard events. Patients with an SSS > 14 are at increased risk for at least non-serious cardiac events [12]. 
In the present study, semi-quantitative and quantitative scintigraphic findings showed 38 patients with SDS less than 2, 28 patients with SDS from 2 to 4, and 34 patients with SDS more than 4 and 34 patients with SDS more than 4. Also, 63 patients had normal EF being more than or equal 50%, while 12 patients had EF from 35 to 49. Only 25 patients had low EF less than 35%. Combining all myocardial perfusion findings and semi-quantitative parameters showed that 46 patients considered as low risk, 30 patients were considered intermediate risk, and 24 patients were considered high risk. The GWR was positively correlated with correlation coefficient of 0.71. In addition, global washout was positively correlated with SDS with correlation coefficient of 0.76. Shiroodi et al. studied the ability of GWR to diagnose and evaluate heart failure severity and other left ventricular functional parameters. The study included 17 patients in different stages of dilated cardiomyopathy and decreased ejection fraction. The study demonstrated that GWR correlated with functional cardiac parameters using MPI in patients with idiopathic dilated cardiomyopathy. GWR correlated positively with EDV index and ESV index while it correlated negatively with EF [13]. These findings were close to our results regarding correlation of washout rate to myocardial perfusion findings. GWR correlated positively with ESV and EDV while it was negatively correlated with ejection fraction with correlation coefficient − 0.4. Omar et al. estimated washout rate in 50 patients referred for MPI. They studied the correlation between the percentages of reversibility and the percentage of washout rate for each vascular territory with significant linear correlation with correlation coefficient for LAD, LCx, and RCA vascular territories was 0.69, 0.66, and 0.77 respectively [14]. Also, Du B et al. in 2014 studied seven patients to reveal the concealed balanced ischemia diagnosed by angiography. They reported that significantly higher global washout rate was observed in the three vessels CAD group (21.1 ± 4.6%) than the control group (9.5 ± 4.9%) [15]. Unfortunately, in this study, we had a relatively small number of subjects to evaluate overall pattern of MIBI washout rate in different CAD patients. Also, we had no follow-up for the patients to study myocardial washout rate as an independent poor prognostic factor. GWR can be used as an additional parameter for risk stratification of CAD patients. Also, GWR is correlated with the poor prognostic finding in myocardial perfusion scan being most evident with SDS. All the datasets used and analyzed in this study are available with the corresponding author on reasonable request. MPI: Myocardial perfusion imaging GWR: Global myocardial washout rate LVEF: Left ventricular ejection fraction SDS: Summed difference score SPECT: Single-photon computed tomography Kannel W (1994) Incidence, prevalence, and mortality of cardiovascular diseases. In: Hurst's the heart Kannel WB (1987) Prevalence and clinical aspects of unrecognized myocardial infarction and sudden unexpected death. Circulation. 75:II4–II5 Cremer P, Hachamovitch R, Tamarappoo B (2014) Clinical decision making with myocardial perfusion imaging in patients with known or suspected coronary artery disease. Semin Nucl Med 44(4):320–329 D'Agostino RB, Vasan RS, Pencina MJ, Wolf PA, Cobain M, Massaro JM et al (2008) General cardiovascular risk profile for use in primary care. Circulation. 
117(6):743–753 Fihn SD, Gardin JM, Abrams J, Berra K, Blankenship JC, Douglas PS et al (2012) 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease. J Am Coll Cardiol 60(24):e44–e164 Czaja M, Wygoda Z, Duszańska A et al (2017) Interpreting myocardial perfusion scintigraphy using single-photon emission computed tomography. Part 1. Kardiochirurgia i Torakochirurgia Polska= Polish J Cardio-Thoracic Surg 14(3):192 Kwok JM, Miller TD, Hodge DO, Gibbons RJ (2002) Prognostic value of the Duke treadmill score in the elderly. J Am Coll Cardiol 39(9):1475–1481 Knuuti J, Wijns W, Saraste A et al (2020) 2019 ESC guidelines for the diagnosis and management of chronic coronary syndromes: the task force for the diagnosis and management of chronic coronary syndromes of the European Society of Cardiology (ESC). Eur Heart J 41(3):407–477 Montalescot G, Sechtem U, Achenbach S, Andreotti F, Arden C, Budaj A et al (2013) 2013 ESC guidelines on the management of stable coronary artery disease: the task force on the management of stable coronary artery disease of the European Society of Cardiology. Eur Heart J 34(38):2949–3003 Richter W-S, Cordes M, Calder D, Eichstaedt H, Felix R (1995) Washout and redistribution between immediate and two-hour myocardial images using technetium-99m sestamibi. Eur J Nucl Med 22(1):49–55 Khandaker MH, Miller TD, Chareonthaitawee P, Askew JW, Hodge DO, Gibbons RJ (2009) Stress single photon emission computed tomography for detection of coronary artery disease and risk stratification of asymptomatic patients at moderate risk. J Nucl Cardiol 16(4):516–523 America YG, Bax JJ, Boersma E, Stokkel M, van der Wall EE (2009) The additive prognostic value of perfusion and functional data assessed by quantitative gated SPECT in women. J Nucl Cardiol 16(1):10–19 Shiroodi MK, Shafiei B, Baharfard N, Gheidari ME, Nazari B, Pirayesh E et al (2012) Tc99m MIBI washout as a complementary factor in the evaluation of idiopathic dilated cardiomyopathy (IDCM) using myocardial perfusion imaging. Int J Cardiovasc Imaging 28(1):211–217 Omar M, Moustafa H (2014) Myocardial 99Tc-MIBI washout. Egyptian J Nucl Med 10(2):1 Du B, Li N, Li X et al (2014) Myocardial washout rate of resting 99m Tc-Sestamibi (MIBI) uptake to differentiate between normal perfusion and severe three-vessel coronary artery disease documented with invasive coronary angiography. Ann Nucl Med 28(3):285–292 Not applicable (no funding received for this study). Nuclear Medicine, El-Kasr Al-Ainy Cairo University, 20 El-Enshirah El-Kabeer street, El-Mohandeseen, Giza, Egypt Mohammed Omar Mohammed Othman, Hosna Mohammed Moustafa & Shaimaa Ahmed Abd El-Mon'em El-Rasad Cardiology, El-Kasr Al-Ainy Cairo University, 20 El-Enshirah El-Kabeer street, El-Mohandeseen, Giza, Egypt Mohammed Mahmoud Abd El-Ghany Mohammed Omar Mohammed Othman Hosna Mohammed Moustafa Shaimaa Ahmed Abd El-Mon'em El-Rasad M.O.M.O and H.M.M. put the idea of the study, editors of the manuscript, and participated in the study design. M.O.M.O and S.A.A.E. participated in the study design and performed the statistical analysis. M.O.M.O and M.M.A. contributed to the patients' collection and clinical assessment. The authors read and approved the final manuscript. Correspondence to Mohammed Omar Mohammed Othman. The study was approved by the research committee of Faculty of Medicine, Kasr Alainy Hospital, Cairo University, 2017. 
No reference number provided as the committee just say yes or no according to the system in our faculty of medicine at 2017 (date of starting of this research). All patients included in this study gave written informed consent to participate in this research. If the patients were disoriented about the setting at the time of the study, written informed consent for their participation was given by their legal guardian. All patients included in this research were fully conscious and gave written informed consent to publish the data contained within this study. Othman, M.O.M., Moustafa, H.M., El-Ghany, M.M.A. et al. The value of myocardial MIBI washout rate in risk stratification of coronary artery disease. Egypt J Radiol Nucl Med 52, 73 (2021). https://doi.org/10.1186/s43055-020-00382-0 Duke's score
CommonCrawl
Happer's Statement: CO₂ will be a major benefit to the Earth
Nov 20, 2019 TBS Staff
William Happer on #GlobalWarming: "CO2 will be a major benefit to the Earth…"
Some people claim that increased levels of atmospheric CO2 will cause catastrophic global warming, flooding from rising oceans, spreading tropical diseases, ocean acidification, and other horrors. But these frightening scenarios have almost no basis in genuine science. This Statement reviews facts that have persuaded me that more CO2 will be a major benefit to the Earth.
Numbers are very important for a sensible discussion of climate. So I have included a few key equations and simple derivations of important results for readers with a technical background. I hope that less technically minded readers will not be put off by the equations. Most of the discussion should be understandable to anyone with an interest in the science of climate. I have also included Internet references for those who would like to dig deeper.
"Numbers are very important for a sensible discussion of climate."—Happer
TheBestSchools.org's Interview of me, to which I will occasionally refer, included Fig. 1. This shows the estimated CO2 levels during the Phanerozoic eon that began about 550 million years ago with the Cambrian, the first geological period with abundant, well-preserved fossils.
Figure 1. The ratio, RCO2, of past atmospheric CO2 concentrations to average values (about 300 ppm) of the past few million years. This particular proxy record comes from analyzing the fraction of the rare stable isotope 13C to the dominant isotope 12C in carbonate sediments and paleosols. Other proxies give qualitatively similar results.[1]
CO2 concentrations have been much higher than now over most of life's history.—Happer
The important message of Fig. 1 is that CO2 concentrations have been much higher than present values over most of the history of life. Even though CO2 concentrations were measured in thousands of parts per million by volume (ppm) over most of the Phanerozoic, not the few hundred ppm of today, life flourished in the oceans and on the land. Average pH values in the ocean surface were as low as pH = 7.7, a bit lower than the pH = 8.1 today. But this was still far from acidic, pH < 7, because of the enormous natural alkalinity of seawater. The mean global temperature was sometimes higher and sometimes lower than today's. But the temperature did not correlate very well with CO2 levels. For example, there were ice ages in the Ordovician, some 450 million years ago, when the CO2 levels were several thousand ppm.[2]
Discussions of climate today almost always involve fossil fuels. Some people claim that fossil fuels are inherently evil. Quite the contrary, the use of fossil fuels to power modern society gives the average person a standard of living that only the wealthiest could enjoy a few centuries ago. But fossil fuels must be extracted responsibly, minimizing environmental damage from mining and drilling operations, and with due consideration of costs and benefits. Similarly, fossil fuels must be burned responsibly, deploying cost-effective technologies that minimize emissions of real pollutants such as fly ash, carbon monoxide, oxides of sulfur and nitrogen, heavy metals, volatile organic compounds, etc.
Extremists have conflated these genuine environmental concerns with the emission of CO2, which cannot be economically removed from exhaust gases. Calling CO2 a "pollutant" that must be eliminated, with even more zeal than real pollutants, is Orwellian Newspeak.[3] "Buying insurance" against potential climate disasters by forcibly curtailing the use of fossil fuels is like buying "protection" from the mafia. There is nothing to insure against, except the threats of an increasingly totalitarian coalition of politicians, government bureaucrats, crony capitalists, thuggish nongovernmental organizations like Greenpeace, etc. Fig. 1 summarizes the most important theme of this discussion. It is not true that releasing more CO2 into the atmosphere is a dangerous, unprecedented experiment. The Earth has already "experimented" with much higher CO2 levels than we have today or that can be produced by the combustion of all economically recoverable fossil fuels. The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.[4] Life on Earth does better with more CO2. CO2 levels are increasing There is no doubt that the concentrations of CO2 are increasing. For example, Fig. 2 shows CO2 concentrations measured at an altitude of about 3400 m the side of the volcano, Mauna Loa, on the island of Hawaii. Figure 2. Atmospheric fraction f of CO2 measured at Mauna Loa, Hawaii,[5] at 19° N latitude. Similar observations are available from a dozen other observatories, from the South Pole to Point Alert at 82° N latitude in the Canadian Arctic.[6] As can be seen from the month-by-month data of Fig. 2 (the red dashed lines) CO2 values decrease rapidly in the northern-hemisphere summer because photosynthesis by growing plants sucks CO2 from the air. CO2 values increase in the winter when photosynthesis diminishes but respiration of the biosphere continues. The average growth rate of atmospheric CO2 at this writing (2016), the slope of the black trend line, is about df/dt = 2 ppm per year. This corresponds to about half of the CO2 emissions from burning fossil fuels, cement manufacture, land-use changes, and other human causes.[7] The other half of the emissions is absorbed by the oceans and land. "Exhaled human breath typically consists of f=40,000 ppm to 50,000 ppm of CO2."—Happer Local values of CO2 can be very different from those of Fig. 2. For example, exhaled human breath typically consists of f = 40,000 ppm to 50,000 ppm of CO2, a fact that should make one wonder about the campaign to demonize CO2 as a "pollutant." Without strong ventilation, CO2 levels in rooms filled with lots of people commonly reach 2000 ppm with no apparent ill effects. On a calm summer day, CO2 concentrations in a corn field can drop to f = 200 ppm or less, because the growing corn sucks so much CO2 out of the air.[8] The US Navy tries to keep CO2 levels in submarines below f = 5000 ppm to avoid any measurable effect on sailors[9] and NASA sets similar limits for humans in spacecraft.[10] Americans "pollute" the air w/about 320K metric tons of CO2 daily—just breathing.—Happer As illustrated in Fig. 3, both humans and power plants exhale mostly nitrogen and about 1% argon. The remainder consists almost entirely of carbon dioxide, water vapor, and oxygen. Humans exhale about the same fraction of water vapor as a power plant, but less carbon dioxide and more oxygen. 
The large fraction of oxygen remaining in human breath is why mouth-to-mouth resuscitation works. The "smoke" from the stacks of the power plant, or from the girl's breath on a frosty day, is condensed water vapor. CO2 is completely transparent. Each human exhales about 1 kg of CO2 per day, so the 320 million people of the United States "pollute" the atmosphere with about 320,000 metric tons of CO2 per day. Talk about a "carbon footprint"! Figure 3. The main components of the exhaust gas of a modern power plant are similar to the components in human breath. Atmospheric transmission of radiation Around the year 1861, John Tyndall (1820–1893) discovered that gaseous molecules of H2O, CO2, and many other volatile chemicals are transparent to visible light, but can absorb invisible heat radiation, like that given off by a warm tea kettle or by the Earth's surface and atmosphere.[11] Today, we call these "greenhouse gases," and we know that the absorption is mostly due to oscillating electric dipole moments, induced by the vibrations and rotations of the molecules. The vibrations and rotations of the most abundant atmospheric gases, N2 and O2, produce no oscillating dipole moments, so N2 and O2 do not absorb thermal radiation and are not greenhouse gases. A dipole moment would not know which way to point in the highly symmetric N2 and O2 molecules. Fig. 4 shows how the different gases that compose the earth's atmosphere affect the transmission of visible light from the sun to the earth's surface, and thermal radiation from the surface to outer space. Although the atmospheric fraction of greenhouse gases, H2O (about 1%) and CO2 (about 0.04%) is small, they can have a big effect since they act much like dyes for liquids. A few drops of dye are sufficient to turn a whole mug of beer green on St. Paddy's day, and a tiny amount of CO2 and H2O is sufficient to substantially change the "color" of the atmosphere for an observer able to see infrared as well as visible radiation. Figure 4. Fractional absorption of radiation passing from the earth's surface to space, or vice-versa, versus wavelength.[12] N2, which makes up about 78% of the atmosphere, attenuates much like the bottom curve, labeled, "Rayleigh Scattering." All of the atmospheric gases are nearly transparent to sunlight, so on cloud-free days some 70% to 75% of sunlight can heat up the surface. The exact amount depends on the relative humidity, since water vapor absorbs some near-infrared sunlight. An atmosphere of pure N2 and O2 would allow most of the surface thermal radiation to escape to space. But the small fractions of the greenhouse gases, CO2 and H2O, permit only 15% to 30% (depending on relative humidity) of surface radiation to escape to space. The daytime surface is cooled more by rising, often humid, air than by thermal radiation. The smooth curves on the top panel are "Planck brightnesses" (not to scale) analogous to the curves of Fig. 8, but with energy per unit wavelength, λ, not energy per unit spatial frequency, ν = 1/λ, of the radiation. Commenting on greenhouse warming of the Earth by water vapor in his classic book, Heat: A Mode of Motion ,[13] Tyndall makes the eloquent (and correct) statement: Aqueous vapor is a blanket, more necessary to the vegetable life of England than clothing is to man. Remove for a single summer-night the aqueous vapor from the air which overspreads this country, and you would assuredly destroy every plant capable of being destroyed by a freezing temperature. 
The warmth of our fields and gardens would pour itself unrequited into space, and the sun would rise upon an island held fast in the iron grip of frost. "The most important greenhouse gas of Earth's atmosphere is water vapor."—Happer Tyndall correctly recognized in 1861 that the most important greenhouse gas of the Earth's atmosphere is water vapor. CO2 was a modest supporting actor, then as now. Radiative cooling of the Earth Conduction of heat from the interior of the Earth brings an energy flux of about Ii =0.08 W/m2 to the surface.[14] This is only about 0.02% of the mean thermal energy[15], Is = 340 W/m2, that would have to be uniformly reradiated if the Earth absorbed all the energy from the solar flux, on average, about F = 4 Is = 1360 W/m2. In contrast to Earth, Jupiter radiates almost twice as much energy as it receives from sunlight.[16] The "solar constant" F was first measured precisely by the American physicist Samuel Pierpont Langley (1834–1906) during expeditions to California's Mt. Whitney in the late 1800's. He determined that F = 2 cal/(cm2 min), which converts to a value only a few percent higher than today's official value, since 1 cal = 4.184 J. The solar constant is enough to vaporize about 2 mm of 20 C water per hour (the heat of vaporization is about 580 cal cm-3). Langley doubted that the energy output of the sun was exactly constant. He suspected that modest variations in F contributed to climate change. How much the variations in solar output contribute is still being debated today, as was discussed in the Interview. Summarizing his fund-raising arguments for a permanent, high-altitude solar observatory in the year 1903, Langley said: Now that great undertakings are the order of the day, let us hope that some way opens to reach the solution of a problem which so concerns the whole human race.[17] Raising funds for scientific research has always entailed various degrees of hyperbole! Without sunlight and only internal heat to keep warm, the Earth's absolute surface temperature T would be very cold indeed. A first estimate can be made with the celebrated Stefan-Boltzmann formula:[18] $$ J= εσT^4 \tag 1 $$ where J is the thermal radiation flux per unit of surface area, and the Stefan-Boltzmann constant (originally determined from experimental measurements) has the value σ = 5.67 × 10-8 W/(m2K4). The Slovenian experimental physicist, Jožef Stefan (1835–1893), discovered the proportionality of thermal radiation to the fourth power of the absolute temperature. Stefan's student, Ludwig Boltzmann (1844–1906), showed that the factor of T4 was required by thermodynamics and by Maxwell's equations for electromagnetic radiation. If we assume that the Earth's surface has maximum emissivity, ε = 1, and is only emitting Ji = 0.08 W/m2 of internal heat, Eq. (1) would imply a surface temperature of only Ti = (Ji/σ)¼ = 34 K above absolute zero, somewhat warmer than the 20 K boiling point of liquid hydrogen, but much colder than the 78 K boiling point of liquid nitrogen. If we use Eq. (1) in the same way to calculate how warm the surface would have to be to radiate the same thermal energy as the mean solar flux, Js = F/4 = 340 W/m2, we find Ts = 278 K or 5 C, a bit colder than the average temperature (287 K or 14 C) of the Earth's surface,[19] but "in the ball park." Figure 5. The temperature profile of the Earth's atmosphere.[20] This illustration is for mid-latitudes, like Princeton, NJ, at 40.4o N, where the tropopause is usually at an altitude of about 11 km. 
The tropopause is closer to 17 km near the equator, and as low as 9 km near the north and south poles. These estimates can be refined by taking into account the Earth's atmosphere. In the Interview we already discussed the representative temperature profile, Fig. 5. The famous "blue marble" photograph of the Earth,[21] reproduced in Fig. 6, is also very instructive. Much of the Earth is covered with clouds, which reflect about 30% of sunlight back into space, thereby preventing its absorption and conversion to heat. Rayleigh scattering (which gives the blue color of the daytime sky) also deflects shorter-wavelength sunlight back to space and prevents heating. Fig. 6 was taken close to midsummer for the southern hemisphere, as one might guess from the southern locations of the white cloud tops of the intertropical convergence zone (ITCZ) — the latitude of maximum thermal convection — and where the sun is nearly overhead at noon. The rising, warm air pulls in moist surface air from the north and south to form heavy clouds, with very high tops and abundant rain. The ITCZ completes one north-south migration every year, crossing the equator approximately at the times of the spring and fall equinoxes. Over the Indian Ocean, where the migration is particularly large, reaching from nearly the Tropic of Capricorn at 23o south latitude to a bit beyond the Tropic of Cancer at 23o north latitude, the ITCZ brings the southwest monsoon to India and the flooding of the Nile to Africa.[22] Figure 6. The Earth from space. A photograph taken by Astronaut/Geologist Harrison Schmitt on December 7, 1972, during the mission Apollo 17. The Apollo 17 crew were lucky that the timing of their launch allowed them to see Earth in nearly full sunlight. In some missions, the astronauts looked back on the nighttime Earth. Today, whole-Earth images analogous to Fig. 6 are continuously recorded by geostationary satellites, orbiting at the same angular velocity as the Earth, and therefore hovering over nearly the same spot on the equator at an altitude of about 35,800 km.[23] In addition to visible images, which can only be recorded in daytime, the geostationary satellites record images of the thermal radiation emitted both day and night. Fig. 7 shows radiation with wavelengths close to 10.7 µ in the "infrared window" of the absorption spectrum shown in Fig. 4, where there is little absorption from either the main greenhouse gas, H2O, or from less-important CO2. Darker tones in Fig. 7 indicate more intense radiation. The cold "white" cloud tops emit much less radiation than the surface, which is "visible" at cloud-free regions of the Earth. This is the opposite from Fig. 6, where maximum reflected sunlight is coming from the white cloud tops, and much less reflection from the land and ocean, where much of the solar radiation is absorbed and converted to heat. Figure 7. Radiation with wavelengths close to the 10.7 µ (1µ = 10-6m), as observed with a geostationary satellite over the western hemisphere of the Earth.[23] This is radiation in the infrared window of Fig. 4, where the surface can radiate directly to space from cloud-free regions. Clouds are one of the most potent factors controlling Earth' s surface temperature.—Happer As one can surmise from Fig. 6 and Fig. 7, clouds are one of the most potent factors that control the surface temperature of the earth. Their effects are comparable to those of the greenhouse gases, H2O and CO2, but it is much harder to model the effects of clouds. 
Clouds tend to cool the Earth by scattering visible and near-visible solar radiation back to space before the radiation can be absorbed and converted to heat. But clouds also prevent the warm surface from radiating directly to space. Instead, the radiation comes from the cloud tops that are normally cooler than the surface. Low-cloud tops are not much cooler than the surface, so low clouds are net coolers. In Fig. 7, a large area of low clouds can be seen off the coast of Chile. They are only slightly cooler than the surrounding waters of the Pacific Ocean in cloud-free areas. High cirrus clouds can warm the surface since they are cold and nearly opaque in the thermal infrared. They emit much less long-wave infrared radiation to space than would be emitted by the cloud-free surface. But the cirrus clouds can be nearly transparent for visible sunlight and do little to hinder solar heating of the surface. Richard Lindzen of MIT[24] has suggested that changes in the extent of cirrus clouds, in response to more or less heating of the surface, may act as a negative feedback mechanism, the "iris effect." The iris effect might account for the remarkable temperature stability of the Earth's surface, and explain the "faint young sun paradox" — the geological evidence for ice-free oceans in the very earliest history of the Earth, some four billion years ago when the Sun is calculated to have radiated about 30% less power than today, so that the Earth's surface should have been cold enough to be ice covered.[25] Fig. 8 shows the measured spectral distribution of the infrared radiation from which the satellite images of Fig. 7 were made. The horizontal scale is the spatial frequency ν of the light, the inverse of the wavelength λ used to label the horizontal scale of Fig. 4, that is, ν = 1/λ. The smooth, dashed lined on Fig. 8 are the theoretical blackbody brightness functions, discovered by Max Planck[26] when he invented quantum mechanics in 1900, $$ B = \frac {h_P c^2 ν^3}{e^x-1}. \tag 2 $$ Here, hP (= 6.63 × 10-34 J s) is Planck's constant; c (= 3 × 108 m/s) is the speed of light; x (= hPcν/(kBT)) is the ratio of the energy, hPcν, of a photon of spatial frequency ν to the characteristic thermal energy, (kBT); and Boltzmann's constant is kB = 1.38 × 10-23 J/K. The units of B are W/(m2 sr cm-1). Here, sr (= steradian) is the unit of solid angle. There are 4π steradians for all solid angles emanating from a point in 3-dimensional space.[27] Figure 8. Spectrally resolved, vertical upwelling thermal radiation I from the Earth, the jagged lines, as observed by a satellite.[28] The smooth, dashed lines are theoretical Planck brightnesses, B, for various temperatures. The vertical units are 1 c.g.s = 1 erg/(s cm2 sr cm-1) = 1 mW/(m2 sr cm-1). The Stefan-Boltzmann energy fluxes of Eq. (1) are simply the area under the Planck brightness curve, multiplied by a factor of π to account for upwelling radiation from all solid angles, not just vertically upward, $$ π∫_0^∞ dν B= σT^4. \tag 3 $$ Using Eq. (2) with Eq. (3) gives an exact formula, \( σ=2π^5 k_B^4/(15c^2 h_P^3), \) for the Stefan-Boltzmann coefficient, which Stefan and Boltzmann had been obliged to determine experimentally before the invention of quantum mechanics. Except at the South Pole, the data of Fig. 8 show that the observed thermal radiation from the Earth is less intense than Planck radiation from the surface would be without greenhouse gases. 
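Equations (1)–(3) are easy to check numerically. The short Python sketch below is my own illustration, not part of the original text: it evaluates the exact expression σ = 2π⁵kB⁴/(15c²hP³), integrates the Planck brightness of Eq. (2) over spatial frequency as in Eq. (3), and recovers the ≈ 278 K surface temperature needed to radiate the 340 W/m² mean solar flux quoted earlier.

import math

h_P = 6.626e-34   # Planck's constant, J s
c   = 2.998e8     # speed of light, m/s
k_B = 1.381e-23   # Boltzmann's constant, J/K

# Exact Stefan-Boltzmann constant, sigma = 2 pi^5 k_B^4 / (15 c^2 h_P^3)
sigma = 2.0 * math.pi**5 * k_B**4 / (15.0 * c**2 * h_P**3)
print(sigma)                            # ~5.67e-8 W/(m^2 K^4)

def planck_B(nu, T):
    # Eq. (2): Planck brightness at spatial frequency nu (in 1/m) and temperature T (K)
    x = h_P * c * nu / (k_B * T)
    return h_P * c**2 * nu**3 / (math.exp(x) - 1.0)

def total_flux(T, n=100000, nu_max=1.0e6):
    # Eq. (3): pi times the integral of B over nu, here by a simple midpoint rule
    d_nu = nu_max / n
    return math.pi * d_nu * sum(planck_B((i + 0.5) * d_nu, T) for i in range(n))

T = 278.0
print(total_flux(T), sigma * T**4)      # both ~340 W/m^2
print((340.0 / sigma) ** 0.25)          # ~278 K

The close agreement between the numerical integral and σT⁴ is just Eq. (3), and the last line reproduces the "in the ball park" 278 K estimate quoted earlier.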
Although the surface radiation is completely blocked in the bands of the greenhouse gases, as one would expect from Fig. 4, radiation from H2O and CO2 molecules at higher, colder altitudes can escape to space. At the "emission altitude," which depends on frequency ν, there are not enough greenhouse molecules left overhead to block the escape of radiation. The thermal emission cross section of CO2 molecules at band center is so large that the few molecules in the relatively warm upper stratosphere (see Fig. 5) produce the sharp spikes in the center of the bands of Fig. 8. The flat bottoms of the CO2 bands of Fig 8 are emission from the nearly isothermal lower stratosphere (see Fig. 5) which has a temperature close to 220 K over most of the Earth. To the left of the CO2 band on Fig. 8 is the radiation from rotating water molecules with their permanent electric dipole moments. The permanent dipole moment of the H2O molecule, which is bent in its equilibrium state, helps to make water vapor a particularly potent greenhouse gas. The dipole moment is also partially responsible for the "anomalous properties"[29] of water — its ability to dissolve salts, its high boiling temperature, etc. The CO2 band is due to bending vibrations, similar to those of a xylophone bar. Because of its high symmetry, the CO2 molecule, in its unbent, equilibrium state, does not have an electric dipole moment and has no "pure-rotational band" like that of H2O. The H2O band on the extreme right side of Fig. 8 is due to bending vibrations, analogous to those of CO2. It is hard for H2O molecules to reach cold, higher altitudes, since the molecules condense onto snowflakes or rain drops in clouds. So the H2O emissions to space come from the relatively warm and humid troposphere, and they are only moderately less intense than the Planck brightness of the surface. CO2 molecules radiate to space from the relatively dry and cold lower stratosphere. So for most latitudes, the CO2 band observed from space has much less intensity than the Planck brightness of the surface. With the exception of the absorption/emission band of the ozone molecule, O3, and stray resonances of H2O, the "atmospheric window," from about 800 cm-1 to 1200 cm-1 in Fig. 8, is very nearly a segment of the Planck brightness curve B at the surface temperature, Ts. From the window radiation we see that the surface of the Sahara is about Ts = 320 K = 47 C = 117 F, apparently a hot summer day. The nearby Mediterranean sea has a surface temperature of about Ts = 285 K = 12 C = 54 F, pretty chilly. The H2O and CO2 bands are about the same for the Sahara and the Mediterranean. The upper-atmospheric temperature profiles of nearby regions are much more similar than the surface temperatures. "Concentrations of H2O vapor can be quite different at different locations."—Happer Concentrations of H2O vapor can be quite different at different locations on Earth. A good example is the bottom panel of Fig. 8, the thermal radiation from the Antarctic ice sheet, where almost no H2O emission can be seen. There, most of the water vapor has been frozen onto the ice cap, at a temperature of around 190 K. Near both the north and south poles there is a dramatic wintertime inversion[30] of the normal temperature profile of Fig. 5. The ice surface becomes much colder than most of the troposphere and lower stratosphere. Cloud tops in the intertropical convergence zone (ITCZ) can reach the tropopause and can be almost as cold as the Antarctic ice sheet. 
The spectral distribution of cloud-top radiation from the ITCZ looks very similar to cloud-free radiation from the Antarctic ice, shown on the bottom panel of Fig. 8.

The Schwarzschild equation

The observed intensity I of upwelling radiation shown in Fig. 8 comes from the radiation emitted by the surface and by greenhouse gases in the atmosphere above the surface. The rate of change of the intensity with altitude is given by the Schwarzschild equation[31]

$$ \frac {∂I}{∂z}= κ(B-I). \tag 4 $$

The radiation intensity, I = I(ν,z), of frequency ν at the altitude z gets larger or smaller with increasing height, depending on whether I is smaller or larger than the local Planck brightness, B = B(ν,T) of Eq. (2), which changes with altitude z because of the changing temperature, T = T(z), sketched in Fig. 5. The local attenuation coefficient is

$$ κ= N(z) ∑_j f_j σ_j (ν,z). \tag 5 $$

The total molecular number density, N(z), decreases rapidly with altitude z. The fraction fj of the jth greenhouse gas is nearly independent of altitude z for CO2, a well-mixed greenhouse gas. The fractions depend strongly on altitude and latitude for H2O and O3, which are not well-mixed. The attenuation coefficient κ decreases with altitude, along with the molecular density N, although peak resonance cross sections σ can actually increase with altitude because of diminished pressure broadening.

The Schwarzschild equation (4) tells us that the atmosphere tries to make the local brightness, I(ν,z), equal to the local Planck brightness, B(ν,T). If the intensity I diminishes with altitude, as it normally does in the troposphere, the energy lost from the radiation goes into heating the air molecules. If the intensity I increases with altitude, as it normally does in the middle stratosphere, the growth in radiation energy comes from cooling the air molecules. Unlike "cavity radiation" to which the Stefan-Boltzmann formula, Eq. (1), applies, infrared radiation in the atmosphere is never even close to thermal equilibrium and cannot be described by a single temperature T. This would require that for all directions of propagation, n, the thermal-radiation intensity would be equal to the local Planck brightness, I(ν,z,n) = B(ν,T). In contrast, the collisional exchange of energy between translational, rotational, and vibrational states of the molecules is so fast (with each molecule making more than a billion collisions per second at one atmosphere of pressure) that the distribution of molecules over their energy states is very well described by a local temperature, T = T(z).

To solve the Schwarzschild equation for the intensity, I = I(ν,∞), at the "infinite" altitude of the satellite, we need to specify the value of the upwelling intensity, I = I(ν,zs), at the surface altitude, zs = 0. To good approximation this is equal to the Planck brightness at the surface altitude and temperature, Ts = T(zs).

$$ I(ν,z_s )=B(ν,T_s )=B_s. \tag 6 $$

Figure 9. The absorption cross section of a CO2 molecule at the surface pressure (1 atmosphere) and a temperature of 300 K. The parameters of the red, straight-line approximation are σ0 = 1.27 × 10⁻¹⁹ cm², λ0 = 0.0805 cm, ν2 = 669.2 cm⁻¹. The green "exact" cross sections came from the HITRAN database[32] for CO2.

Logarithmic forcing by CO2

Fig. 9 shows the radiation absorption cross section σj(ν,z) of Eq. (5) for a CO2 molecule. The exact, "line-by-line" cross section is the complicated green curve of Fig. 9, consisting of thousands of resonance absorption lines of vibrational-rotational transitions.
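Before coming to the simple triangular fit to this cross section, which is described next, it may help to see how Eqs. (4), (5), and (6) are used in practice. The Python sketch below is my own illustration, not code from the original essay: the exponential density profile, the fixed 6.5 K/km lapse rate, the 220 K isothermal layer above 11 km, and the constant absorption cross sections are all simplifying assumptions chosen only to show the mechanics of stepping the Schwarzschild equation upward from the surface boundary condition.

```python
import numpy as np

# Constants and the Planck brightness of Eq. (2), per unit spatial frequency (1/m)
hP, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_B(nu, T):
    return 2.0 * hP * c**2 * nu**3 / np.expm1(hP * c * nu / (kB * T))

# --- toy model atmosphere (assumed, for illustration only) -----------------
def temperature(z):
    """A troposphere with a 6.5 K/km lapse rate, isothermal at 220 K above 11 km."""
    return np.where(z < 11e3, 288.0 - 6.5e-3 * z, 220.0)

def number_density(z):
    """Exponential air density with an 8.5 km scale height, molecules per m^3."""
    return 2.55e25 * np.exp(-z / 8.5e3)

def upwelling_intensity(nu, f=400e-6, sigma=1.0e-27):
    """Integrate Eq. (4), dI/dz = kappa*(B - I), from the surface to 70 km.
    f     : fraction of a single, well-mixed greenhouse gas (like CO2)
    sigma : an assumed constant absorption cross section in m^2 (illustrative)."""
    z, dz = 0.0, 50.0                          # 50 m layers
    I = planck_B(nu, temperature(0.0))         # Eq. (6): I starts at the surface brightness
    while z < 70e3:
        kappa = number_density(z) * f * sigma  # Eq. (5) with a single gas
        B = planck_B(nu, temperature(z))
        # Across a thin layer with nearly constant kappa and B, Eq. (4) integrates to
        # I(z + dz) = B + (I(z) - B) * exp(-kappa * dz), which is the update used here.
        I = B + (I - B) * np.exp(-kappa * dz)
        z += dz
    return I

nu = 600e2   # 600 cm^-1, expressed in 1/m
for sig in (1e-29, 1e-27, 1e-25):              # nearly transparent, opaque, very opaque
    print(f"sigma = {sig:.0e} m^2 :  I = {upwelling_intensity(nu, sigma=sig):.3e}")
```

With the smallest cross section the satellite essentially sees the surface Planck brightness; with the larger ones it sees the much weaker emission of the cold upper troposphere and lower stratosphere, which is the qualitative behavior described in the text.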
But quite a good fit can be made with the triangular red approximation.[33] Since Fig. 9 is plotted on a logarithmic scale, the cross section corresponding to the red triangle can be written as

$$ σ(ν)= σ_0 e^{-λ_0|ν-ν_2|}. \tag 7 $$

Here ν2 = 667 cm⁻¹ is the resonant bending-mode frequency of a non-rotating CO2 molecule. The empirically determined exponential line-shape parameter is λ0 = 0.0805 cm, and the peak cross section is σ0 = 1.27 × 10⁻¹⁹ cm². The form of Eq. (7) is peculiar to the CO2 molecule, and that functional form is not a good approximation to the absorption cross section of the main greenhouse molecule, H2O, or the less important molecules, O3 and CH4. The approximation of Eq. (7) works as well for higher altitudes as for the surface. However, slightly different parameters σ0 and λ0 must be used at higher altitudes because the distribution of CO2 molecules over internal vibration-rotation states is temperature-dependent.

The probability that a photon emitted by the surface will escape to space without absorption by CO2 molecules is

$$ w = w(ν) = e^{-fnσ(ν)}. \tag 8 $$

In Eq. (8), σ(ν) is the altitude-averaged cross section of CO2 molecules. The column density of all air molecules is

$$ n=∫_0^∞ dz\;N(z) = 2.15 \times 10^{25} \, \mathrm{cm}^{-2}. \tag 9 $$

This is the number of molecules in a 1 cm² column of air, extending from the surface to outer space, that would have a mass of 1.03 kg and would give one normal atmosphere of pressure on the surface. CO2 molecules comprise a fraction f of all of the molecules.

The cross section of Fig. 9 falls off so rapidly with detuning, |ν - ν2|, of the photon frequency ν from the band-center frequency ν2, that to first approximation the surface escape probability is "binary," as illustrated in Fig. 4 by the panel labeled "Carbon Dioxide." This is a plot of 1 - w versus wavelength. For small detunings, |ν - ν2|, of the photon frequency, ν, the troposphere will be opaque, with w = 0. Then the blackbody radiation of the surface will be completely attenuated and, except for the spike at band center, the satellite will record the Planck brightness Bt of the tropopause, at the altitude zt = 11 km (at Princeton), and of the nearly isothermal lower stratosphere at a temperature of about Tt = T(zt) = 220 K. For large detuning, both the troposphere and stratosphere will be transparent, and a photon emitted from the surface will escape freely to space. Then the intensity observed by the satellite will be the Planck brightness Bs at the surface temperature Ts. So we expect to be able to approximate the brightness observed by the satellite as

$$ I=B_s w + B_t(1\,-\,w), \tag {10} $$

where in analogy to Eq. (6),

$$ B_t=B(ν,T_t ). \tag {11} $$

The simple approximation of Eq. (10) is plotted in Fig. 10. There is good, semi-quantitative agreement with the measurements of Fig. 8. The main difference is the absence of resonance-line structure in the simple theory of Fig. 10, and also the absence of bands from the greenhouse gases H2O and O3. The dashed line in Fig. 10 is the intensity for a doubling of CO2 concentrations from the present value of 400 ppm to 800 ppm with no change in the temperature profile of Fig. 5. Doubling the CO2 concentration makes little difference, and simply leads to a slight broadening of the width, Δν, of the band.

To understand the band broadening quantitatively, we define the two "band edge frequencies" as those frequencies ν∓ for which the escape probability of Eq. (8) is w = e⁻¹ = 0.37, that is
$$ fnσ(ν_∓ )=fnσ_0 e^{-λ_0 |ν_∓-ν_2 |}=1. \tag {12} $$

Taking the natural logarithm of both sides of Eq. (12) and recalling that ln(1) = 0 and ln(xy) = ln(x) + ln(y), we find that the band-edge frequencies are

$$ ν_∓ = ν_2 ∓ \frac{1}{λ_0}\mathrm{ln}⁡(f⁄f_0 ) \mathrm{, where}\;f_0 = 1⁄nσ_0 = 0.4\,\mathrm{ppm}. \tag {13} $$

For example, at the CO2 concentration of the year 2015, f = 400 ppm, the band-edge frequencies are ν₋ = 582 cm⁻¹ and ν₊ = 756 cm⁻¹, the projections of the black dots of Fig. 10 on the horizontal axes. The width of the CO2 band is

$$ ∆ν = ν_+ - ν_- = \frac{2}{λ_0}\mathrm{ln}⁡(f⁄f_0 ). \tag {14} $$

Doubling the CO2 concentration from f to 2f will increase the band width by

$$ ∆(∆ν) = \frac{2}{λ_0}\mathrm{ln}⁡(2) = 17.2 \, \mathrm{cm}^{-1}. \tag {15} $$

The radiation intensity to space will be changed by approximately

$$ ∆J=-∆B\;∆(∆ν)π=-∆B×54.1 \, \mathrm{cm}^{-1}. \tag {16} $$

The factor of π in Eq. (16) is to account for upwelling from all solid angles, not just vertically upward. The effective difference in surface and tropospheric brightness is

$$ ∆B = \frac{1}{2}[B_s(ν_+) + B_s(ν_-) \, - \, B_t(ν_+ ) \, - \, B_t(ν_-)]. \tag {17} $$

From inspection of Fig. 8, we see that the difference of surface and tropospheric brightness is about ΔB = 75 mW/(m² sr cm⁻¹) over the Mediterranean, and according to Eq. (16), doubling the CO2 concentration would decrease the radiation to space by

$$ ∆J=-4.04\, \mathrm{W}/ \mathrm{m}^2. \tag {18} $$

As one can see from Fig. 8, ΔB varies substantially with latitude and so will the change in radiation to space ΔJ of Eq. (16). At the south pole, with its temperature inversion, and with ΔB = -30 mW/(m² sr cm⁻¹), doubling the CO2 concentration will increase, not decrease, the radiation to space.

Using the Stefan-Boltzmann equation (1), and accounting for the loss of surface radiation in the CO2 band by assuming an effective emissivity, ε = 0.8, we see that a temperature rise,

$$ ∆T=\frac{∆J}{4εσT^3}=0.92\, \mathrm{K}, \tag {19} $$

would compensate for the slight loss of radiation to space, Eq. (18), from doubling the CO2 concentration. Of course, the large variation of ΔB with latitude and the effects of H2O and O3 shown in Fig. 8 need to be taken into account. But the message of the discussion above is that simple, feedback-free estimates give a climate sensitivity S — the warming from a doubling of the CO2 fraction f — of about S = 1 K.

Most climate models do not focus on the thermal radiation to space, which we have discussed above, but on the "radiative forcing" of the change of radiation transport at, or just above, the tropopause.[34] This is because heating and cooling of the stratosphere and troposphere are nearly independent. Surface and tropospheric warming should be similar, with 10% to 20% more tropospheric warming than surface warming because of the release of latent heat into the troposphere from ascending air. The basic physics of radiation to space and radiative forcing at the tropopause are similar. Both will be proportional to ln(f/f0), though with different constants of proportionality and slightly different values of f0.
Figure 10. The simple theoretical estimate Eq. (10) of the thermal emission from the earth for comparison with the actual measurements of Fig. 8. The solid lines are the nadir intensities that a satellite would observe for today's CO2 fraction, f = 400 ppm, and the filled black circles are the left and right band edges of Eq. (13). The dashed line, the nadir intensity for twice today's CO2 concentration, f = 800 ppm, differs only at the edges of the band. Radiative transfer is very insensitive to f. The vertical units are mW/(m² sr cm⁻¹).

Radiation, which we have discussed above, is an important part of the energy transfer budget of the earth, but not the only part. More solar energy is absorbed in the tropics, near the equator, where the sun beats down nearly vertically at noon, than at the poles where the noontime sun is low on the horizon, even at midsummer, and where there is no sunlight at all in the winter. As a result, more visible and near infrared solar radiation ("short-wave radiation" or SWR) is absorbed in the tropics than is radiated back to space as thermal radiation ("long-wave radiation" or LWR). The opposite situation prevails near the poles, where thermal radiation releases more energy to space than is received by sunlight. Energy is conserved because the excess solar energy from the tropics is carried to the poles by warm air currents, and to a lesser extent, by warm ocean currents. The basic physics is sketched in Fig. 11.[35]

Figure 11. Most sunlight is absorbed in the tropics, and some of the heat energy is carried by air currents to the polar regions to be released back into space as thermal radiation.

Along with energy, angular momentum — imparted to the air from the rotating Earth's surface near the equator — is transported to higher northern and southern latitudes, where it is reabsorbed by the Earth's surface. The Hadley circulation near the equator is largely driven by buoyant forces on warm, solar-heated air, but for mid latitudes the "Coriolis force" due to the rotation of the earth leads to transport of energy and angular momentum through slanted "baroclinic eddies." Among other consequences of the conservation of angular momentum are the easterly trade winds near the equator and the westerly winds at mid latitudes.

Predictions about what more CO2 will do to the Earth's climate are based on numerical modeling of the fluid flows in the atmosphere and oceans. The state of the atmosphere is determined by many numerical quantities. One of the most important is the wind velocity v of air "parcels" located at each position r above the Earth's surface. We assume that the parcels are small enough that all of the air in a single parcel has nearly the same velocity v, temperature T, and pressure p. The volume of the atmosphere is very large. If we agree to consider only the atmosphere within a spherical shell of radial thickness ΔR = 100 km, surrounding the earth with its radius R = 6371 km, the volume of the atmosphere would be V = 4πR²ΔR = 5.1 × 10¹⁰ km³. If we were to be satisfied with a uniform "1 km grid size," we would need to store about 255 billion numbers to characterize the state of the dry atmosphere. For each grid point we need three velocity projections, vx, vy, and vz — say to the east, to the north, and vertically up — and for dry air, two thermodynamic quantities, for example, the pressure p and the mass density ρ.
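The "255 billion numbers" bookkeeping is easy to verify. The lines below are my own arithmetic check on the figures just quoted (a shell volume of 4πR²ΔR, a uniform 1 km grid, and five stored quantities per grid point); the 8 bytes per value at the end is an added assumption about storage.

```python
import math

R_earth  = 6371.0    # km, radius of the Earth
shell    = 100.0     # km, radial thickness of the model atmosphere
grid     = 1.0       # km, uniform grid spacing
per_cell = 5         # vx, vy, vz, pressure, and density for dry air

volume  = 4.0 * math.pi * R_earth**2 * shell   # spherical-shell volume
cells   = volume / grid**3                     # one cell per km^3
numbers = per_cell * cells

print(f"shell volume  = {volume:.2e} km^3")    # ~5.10e10 km^3
print(f"grid cells    = {cells:.2e}")          # ~5.10e10
print(f"stored values = {numbers:.2e}")        # ~2.55e11, i.e. about 255 billion

# At 8 bytes per double-precision value, one snapshot of the dry atmosphere
# on this grid is roughly 2 terabytes.
print(f"storage       = {numbers * 8 / 1e12:.1f} TB")
```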
Unless the atmospheric properties were the same throughout the 1 km³ "grid-point" volume, this huge collection of numbers would still not be a very realistic representation of the atmosphere's state. In practice, one would use larger grid volumes in the upper atmosphere, where there is less spatial variation of winds, temperature, etc.

To model the time evolution of the wind velocity and other parameters describing the atmospheric state, we need differential equations analogous to the Schwarzschild equation (4) for the change of radiation with altitude. The rate of change of the velocity v with time t is given by the celebrated Navier-Stokes equation,

$$ \frac{∂ \mathbf v}{∂t}+ \mathbf v \cdot \nabla \mathbf v = \mathbf g\,-\, \frac{1}{ρ} \nabla p\,-\, 2 \Omega × \mathbf v \,-\, \mathbf \Omega \,×\, ( \mathbf \Omega × \mathbf r)+ν∇^2 \mathbf v. \tag {20} $$

Here, g is the acceleration of gravity (9.8 m/s² "straight down" at Princeton), ρ is the mass density (1.3 kg/m³ at sea level), p is the air pressure, ∇ is the spatial gradient operator, which gives the vector rate of change with distance, Ω is the vector rotation rate of Earth (2π radian/day around the south-north axis), and ν is the kinematic viscosity of air (1.6 × 10⁻⁵ m²/s for 25 C air at sea level).

The Navier-Stokes equation (20) is the compressible-fluid version of Newton's second law, a = F/m, that is, the acceleration, a = dv/dt — or the time rate of change of velocity v of a particle of mass m — is the ratio of the force F acting on the particle to the mass. The acceleration of gravity g characterizes the gravitational attraction of the fluid parcel by the Earth, and the term proportional to -∇p describes the buoyant force from the slightly higher pressures p at the bottom of the fluid parcel than at the top. The pressure-gradient forces and the gravity forces acting on fluid parcels nearly cancel in most situations, a fact first pointed out by Archimedes in 212 BC.[36] Since the Navier-Stokes equation is normally used for a coordinate system that rotates with the earth, it includes a velocity-dependent Coriolis acceleration, the term with only one factor of Ω in Eq. (20). The Coriolis acceleration is very important and is responsible for the northeasterly trade winds, the westerly mid-latitude winds, and the easterly polar winds shown in Fig. 11. The rotation also causes a centripetal acceleration, the term with two factors of Ω in Eq. (20), which is small and often lumped with the acceleration of gravity g. The frictional forces of one fluid layer sliding across another are given by the last term, proportional to the kinematic viscosity coefficient ν. Except very near the Earth's surface, the frictional accelerations are negligible compared to the other terms.

The conservation of fluid mass is described by an equation analogous to the Navier-Stokes equation,

$$ \frac{∂ρ}{∂t}\,+\,\nabla \cdot ρ \, \mathbf v = 0. \tag {21} $$

Equations (20) and (21) give us four of the five equations needed to model the dry atmosphere, with the five independent variables vx, vy, vz, p, and ρ. To get a fifth equation, we must turn to the thermodynamics of the air parcels. It is often convenient to use the entropy per unit mass, s, as a thermodynamic variable. Then a complete set of equations for calculating the evolution of dry air could include the fifth equation,

$$ \frac{∂s}{∂t}\,+\, \nabla \cdot s \, \mathbf v= \frac{\dot q}{T}. \tag {22} $$
By far the biggest contributors to the heating rate \(\dot q\) (in units of W/kg) on the right of the equation are radiative heating by sunlight, and heating or cooling by absorption or emission of thermal radiation. Conduction of heat from neighboring parcels and viscous heating are much less important.

Eq. (20) gives the rate of change of velocity v, in terms of the pressure p and the mass density ρ. The rate of change of the mass density ρ is given by Eq. (21), but there is no corresponding equation for the rate of change of the pressure p. However, the two independent thermodynamic variables, entropy density s and mass density ρ, are sufficient to define the pressure p, so Eqs. (20), (21), and (22) are a complete set for determining changes in the state of the atmosphere. For example, to within an additive constant for s, the thermodynamic variables s, p, and ρ are related by the "equation of state,"[37]

$$ s= \frac {fk_B}{2m} \mathrm{ln}⁡(p⁄ρ^γ ). \tag {23} $$

Here, the mean mass of an air molecule is m = 4.8 × 10⁻²⁶ kg, while the number of thermal degrees of freedom for dry atmospheric air is very nearly f = 5, that is, 3 for translation plus 2 for rotation. The vibrational degrees of freedom of N2 and O2 are nearly "frozen out" at the relatively low temperatures of the Earth's atmosphere. The ratio of heat capacity at constant pressure to heat capacity at constant volume is γ = 1 + 2/f, or very nearly, γ = 1.4.

"Climate model builders have a hard job."—Happer

Including water vapor, clouds, and precipitation further complicates the modeling considerations outlined above. Climate model builders have a hard job.

Equilibrium Climate Sensitivity

If increasing CO2 causes very large warming, harm can indeed be done. But most studies suggest that warmings of up to 2 K will be good for the planet,[38] extending growing seasons, cutting winter heating bills, etc. We will denote temperature differences in Kelvin (K) since they are exactly the same as differences in Celsius (C). A temperature change of 1 K = 1 C is equal to a change of 1.8 Fahrenheit (F).

The great Swedish chemist Svante Arrhenius (1859–1927) seems to have been the first to make a quantitative estimate of the warming from CO2. In 1896, on page 265 of his pioneering paper, "On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground,"[39] Arrhenius states that decreasing f, the fraction of CO2 in the air, by a factor of 0.67 = 2/3 would cause the surface temperature to fall by ΔT = -3.5 K, and that increasing f by a factor of 1.5 = 3/2 would cause the temperature to increase by ΔT = +3.4 K. Summarizing his estimates, Arrhenius stated:

Thus, if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase very nearly in arithmetic progression.

The mathematical expression of this statement most often used is that the surface-temperature increase, ΔT = T2 - T1, due to increasing the fraction of CO2 from f1 with temperature T1 to f2 with temperature T2 should be given by the equation,

$$ ΔT = S \; \mathrm{log}_2 (⁡{f_2} ⁄ {f_1}). \tag {24} $$

Here, log2(x) denotes the base-2 logarithm of x. For example, log2(1) = log2(2⁰) = 0, or log2(4) = log2(2²) = 2. The doubling sensitivity S is how much the Earth's average surface temperature will increase if the atmospheric concentration of CO2 doubles. S is the most important single parameter in the debate over climate change.
The logarithmic dependence of Eq. (24) comes from the peculiar dependence of the CO2 cross section on frequency, shown in Fig. 9, which leads to a band width proportional to ln(f/f0), as shown in Eq. (14). If a 50% increase of CO2 were to increase the temperature by 3.4 K, as in Arrhenius's original estimate mentioned above, the doubling sensitivity would be S = 3.4 K/log2(1.5) = 5.8 K. Ten years later, on page 53 of his popular book, Worlds in the Making: The Evolution of the Universe,[40] Arrhenius again states the logarithmic law of warming, with a slightly smaller climate sensitivity, S = 4 K:

If the quantity of carbon dioxide in the air should sink to one half its present percentage, the temperature would fall by 4 K; a diminution to one-quarter would reduce the temperature by 8 K. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the Earth's surface by 4 K and if the carbon dioxide were increased fourfold, the temperature would rise by 8 K.

Convection of the atmosphere, water vapor, and clouds all interact in a complicated way with the change of CO2 to give the numerical value of the doubling sensitivity S of Eq. (24). Remarkably, Arrhenius somehow guessed the logarithmic dependence on CO2 concentration f before Planck's discovery of how thermal radiation really works.

The most recent report of the Intergovernmental Panel on Climate Change (IPCC) states that[41] equilibrium climate sensitivity is likely in the range 1.5 K to 4.5 K (high confidence). As the Roman poet Horace remarked: Parturient montes, nascetur ridiculus mus:[42] Mountains will go into labor, a ridiculous mouse will be born.

"Could it be that the climate establishment does not want to work itself out of a job?"—Happer

More than a century after Arrhenius, and after the expenditure of many tens of billions of dollars on climate science, the official value of S still differs little from the guess that Arrhenius made in 1912: S = 4 K. Could it be that the climate establishment does not want to work itself out of a job?

An equivalent form of Eq. (24) is

$$ \frac{f_2}{f_1}\;=\, 2^{∆T⁄S}. \tag {25} $$

For a constant rate of increase, R = df/dt, of the CO2 concentration from f1 at the present time t1 to f2 at a later time t2, we can write:

$$ f_2\,=\, f_1 \, + \, R∆t \;\mathrm{, where} \; ∆t\, = \, t_2\,-\,t_1. \tag {26} $$

Substituting Eq. (26) into Eq. (25) and solving for Δt, we find

$$ ∆t \, = \, \frac{f_1}{R} (2^{∆T⁄S} \, – \, 1). \tag {27} $$

From Eq. (27) we find that for a current CO2 concentration of f1 = 400 ppm and at the current rate of increase, R = 2 ppm/year, the time to raise the temperature by ΔT = 2 K is

$$ ∆t \, = \, 200(2^{2K⁄S} \, - \, 1) \, \mathrm{years}. \tag {28} $$

Fig. 12 shows the dependence of equilibrium temperature rise ΔT versus CO2 concentration, f. The solutions of Eq. (24) are given for various possible doubling sensitivities, S. The concentrations, f, required for an equilibrium temperature rise of ΔT = 2 K are indicated by the points on the curves and are labeled by the time Δt, in years, needed to reach them at the present rate of increase, R = 2 ppm/year, of atmospheric CO2.

Figure 12. Warming from CO2 from Eq. (24) for various sensitivities, S. We have used Eq. (27) to calculate the corresponding time, Δt (in years), needed to increase the temperature by 2 K.
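The times attached to the points in Fig. 12 follow directly from Eq. (27). The small Python sketch below is an added illustration, using the same f1 = 400 ppm and R = 2 ppm/year quoted in the text; the particular list of sensitivities S is an arbitrary choice.

```python
def years_to_warming(S, dT=2.0, f1=400.0, R=2.0):
    """Eq. (27): the time in years for CO2, growing at a constant R ppm/year
    from f1 ppm, to produce an equilibrium warming dT (K) when the doubling
    sensitivity is S (K)."""
    return (f1 / R) * (2.0 ** (dT / S) - 1.0)

for S in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"S = {S:3.1f} K  ->  {years_to_warming(S):6.0f} years to reach a 2 K warming")
```

For S = 1 K this gives the 600 years mentioned in the next paragraph; for the IPCC's "most likely" S = 3 K it gives a bit over a century.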
Observations indicate that the doubling sensitivity is close to the feedback-free value of S = 1 K, for which 600 years would be needed at the present growth rate, R = 2 ppm/year. The warming ΔT of (24) is a value averaged over the entire surface of the Earth and over an entire year. It is a very small number compared to the temperature differences between day and night, or between winter and summer at most locations on the Earth. The warming from CO2 is expected to be greater at night than during the day, and greater near the poles than near the equator. Because some time is needed to warm the oceans, the warmings in some finite time — the "transient climate sensitivities" — are a bit smaller than the equilibrium climate sensitivities, S, especially for short time intervals. Overestimate of S Despite predictions—the Earth's surface has warmed very little over the past 20 years—Happer Contrary to the predictions of most climate models, there has been very little warming of the Earth's surface over the last two decades. An example is shown in Fig. 13, due to John Christy.[43] Figure 13. A comparison of lower atmospheric temperatures, measured with balloons and satellites, with climate model predictions. The climate models, on which economic models and government policies are predicated, predict much more warming than has been observed. The discrepancy between models and observations is also summarized by Fyfe, Gillett, and Zwiers, as shown in Fig. 14.[44] Figure 14. A comparison of the surface warming predicted by climate models with observed warming. As one can see from Fig. 14, the warming observed over the period 1993–2012 has been about half the predicted value, while the observed warming during the period 1998–2012 has been about one fifth of the model predictions. And the discrepancy may well be worse than indicated by Fyfe, et al., who used surface temperature records that are plagued with systematic errors, like urban heat island effects,[45] that give an erroneous warming trend to the Earth's surface temperature. The satellite data of Fig. 7 do not have these systematic errors. Fig 14 shows data from a network of surface stations, but Fig. 13 shows the temperature change of the lower atmosphere, from the surface to 50,000 ft. The release of latent heat, as water vapor of rising air condenses to liquid water droplets and ice, should cause 10% to 20% more warming of the lower atmosphere than of the surface. At this writing, more than 50 mechanisms have been proposed to explain the discrepancy of Fig. 14. These range from aerosol cooling to heat absorption by the ocean. Some of the more popular excuses for the discrepancy have been summarized by Fyfe, et al. But the most straightforward explanation for the discrepancy between observations and models is that the doubling sensitivity, which most models assume to be close to the "most likely" IPCC value, S = 3 K, is much too large. If one assumes negligible feedback, where other properties of the atmosphere change little in response to additions of CO2, the doubling efficiency can be estimated to be about S = 1 K, for example, as we discussed in connection with Eq. (19). The much larger doubling sensitivities claimed by the IPCC, which look increasingly dubious with each passing year, are due to "positive feedbacks." A favorite positive feedback is the assumption that water vapor will be lofted to higher, colder altitudes by the addition of more CO2, thereby increasing the effective opacity of the vapor. 
Changes in cloudiness can also provide either positive feedback, which increases S, or negative feedback, which decreases S. The simplest interpretation of the discrepancy of Fig. 13 and Fig. 14 is that the net feedback is small and possibly even negative. Recent work by Harde indicates a doubling sensitivity of S = 0.6 K.[46]

Benefits of CO2

"More CO2 in the atmosphere will be good for life on planet earth."—Happer

More CO2 in the atmosphere will be good for life on planet earth. Few realize that the world has been in a CO2 famine for millions of years — a long time for us, but a passing moment in geological history. Over the past 550 million years since the Cambrian, when abundant fossils first appeared in the sedimentary record, CO2 levels have averaged many thousands of parts per million (ppm), not today's few hundred ppm, which is not that far above the minimum level, around 150 ppm, when many plants die of CO2 starvation.[47] An example of how plants respond to low and high levels of CO2 is shown in Fig. 15 from the review by Gerhart and Ward.[48]

Figure 15. The response of seedlings of velvetleaf (Abutilon theophrasti), a C3 plant, to various CO2 levels. Velvetleaf can barely survive at the CO2 level of 150 ppm, which is approached at glacial maxima, when much of the CO2 has been absorbed by the cool oceans.

All green plants grow faster with more atmospheric CO2. It is found that the growth rate is approximately proportional to the square root of the CO2 concentration, so the increase in CO2 concentrations from about 300 ppm to 400 ppm over the past century should have increased growth rates by a factor of about √(4/3) = 1.15, or 15%. Most crop yields have increased by much more than 15% over the past century. Better crop varieties, better use of fertilizer, better water management, etc., have all contributed. But the fact remains that a substantial part of the increase is due to more atmospheric CO2. A particularly dramatic example of the response of green plants to increases of atmospheric CO2 is shown in Fig. 16.[49]

Figure 16. Dr. Sherwood Idso with Eldarica pine trees grown in various amounts of CO2 in experiments done about 10 years ago when the ambient concentration of CO2 was 385 ppm.

"We owe our existence to green plants that convert CO2 & H2O to carbohydrates…"—Happer

We owe our existence to green plants that convert CO2 molecules and water molecules, H2O, to carbohydrates with the aid of sunlight. Land plants get the carbon they need from the CO2 in the air. Other essential nutrients — water, nitrogen, phosphorus, potassium, etc. — come from the soil. Just as plants grow better in fertilized, well-watered soils, they grow better in air with several times higher CO2 concentrations than present values. The current low CO2 levels have exposed a design flaw, made several billion years ago by Nature when she first evolved the enzyme, Ribulose-1,5-bisphosphate carboxylase/oxygenase, or "RuBisCO" for short. RuBisCO is the most abundant protein in the world. Using the energetic molecules, adenosine triphosphate, or ATP, produced by the primary step of photosynthesis, RuBisCO converts CO2 to simple carbohydrate molecules that are subsequently elaborated into sugar, starch, amino acids and all the other molecules on which life depends. A sketch of RuBisCO was given in Fig. 14 of the Interview. The "C" in the nickname RuBisCO, which stands for "carboxylase" in the full word, reminds us of the CO2 molecule that RuBisCO was designed to target.
At current low levels of atmospheric CO2, much of the available CO2 is used up in full sunlight, and this spells trouble for the plant. The last letter "O" in the nickname RuBisCO, which stands for "oxygenase" in the full name, reminds us that an alternate enzyme target is the oxygen molecule, O2. If RuBisCO, primed with chemical energy from ATP, cannot find a CO2 molecule, it will grab an O2 molecule instead and use its chemical energy to produce toxic byproducts like hydrogen peroxide instead of useful carbohydrates. This "photooxidation" is a serious problem. At current low CO2 levels it leads to a reduction of photosynthetic efficiency by about 25% in C3 plants, which include wheat, rice, soybeans, cotton, and many other important crops. In these plants, the first molecule synthesized from CO2 has three carbons, and they are said to have the C3 photosynthetic pathway. The low CO2 levels of the past tens of millions of years have driven the development of C4 plants (corn and sugar cane, for example) that cope with oxygen poisoning of RuBisCO by protecting it in special "bundle sheaths" from which oxygen is nearly excluded. CO2 molecules are ferried into the bundle sheaths by molecules with four carbons, which give the C4 pathway its name. A sketch of the C3 and C4 photosynthetic pathways is given in Fig. 15 of the Interview. The extra biochemical energy for the more elaborate C4 photosynthetic pathway comes at a cost, but one that is worth paying in times of unusually low CO2 concentrations, like today. Thousands of experiments leave no doubt that all plants, both the great majority with the old-fashioned C3 path and those with the new-fangled C4 path, grow better with more CO2 in the atmosphere.[50]

But the nutritional value of additional CO2 is only part of its benefit to plants. Of equal or greater importance, more CO2 in the atmosphere makes plants more drought-resistant. Plant leaves are perforated by stomata, little holes in the gas-tight surface skin that allow CO2 molecules to diffuse from the outside atmosphere into the moist interior of the leaf where they are photosynthesized into carbohydrates. A leaf in full sunlight can easily reach a temperature of 30 C, where the concentration of water molecules, H2O, in the moist interior air of the leaf is about 42,000 ppm, more than one hundred times greater than the 400 ppm concentration of CO2 in fresh air outside the leaf. And CO2 molecules, being much heavier than H2O molecules, diffuse more slowly in air. So, depending on the relative humidity of the outside air, as many as 100 H2O molecules can diffuse out of the leaf for every CO2 molecule that diffuses in, to be captured by photosynthesis. This is the reason that most land plants need at least 100 grams of water to produce one gram of carbohydrate. In the course of evolution, land plants have developed finely tuned feedback mechanisms that allow them to grow leaves with more stomata in air that is poor in CO2, like today, or with fewer stomata for air that is richer in CO2, as has been the case over most of the geological history of land plants.[51] If the amount of CO2 doubles in the atmosphere, plants reduce the number of stomata in newly grown leaves by about a factor of two. With half as many stomata to leak water vapor, plants need about half as much water.
Satellite observations like those of Fig. 17 from R.J. Donohue, et al.,[52] have shown a very pronounced "greening" of the Earth as plants have responded to the modest increase of CO2 from about 340 ppm to 400 ppm during the satellite era. More greening and greater agricultural yields can be expected as CO2 concentrations increase further.

Figure 17. The analysis of satellite observations by Dr. Randall J. Donohue and co-workers[53] shows a clear greening of the earth from the modest increase of CO2 concentrations from about 340 ppm to 400 ppm from the year 1982 to 2010. The greening is most pronounced in arid areas where increased CO2 levels diminish the water requirement of plants.

More bogeymen

"The Earth stubbornly refuses to warm nearly as much as demanded by computer models."—Happer

Fig. 13 and Fig. 14 show that the earth has stubbornly refused to warm nearly as much as demanded by computer models. To cope with this threat to full employment, the climate establishment has invented a host of bogeymen, other supposed threats from more CO2. It is almost comical to list them. For example, it has recently been claimed that beer supplies are threatened.[54]

The climate-alarm establishment has largely dropped the term "global warming" and replaced it by the much more flexible phrase "climate change." The unspoken and assiduously promoted assumption is that the Earth's climate would never change, were it not for mankind. But the Earth's climate has always changed and it always will. The evidence for climate change on all time scales is overwhelming. But past changes were not driven by CO2, and CO2 will have little effect on future change.

"Extreme weather is not increasing."—Happer

One of the bogeymen is that more CO2 will lead to, and already has led to, more extreme weather, including tornadoes, hurricanes, droughts, floods, blizzards, or snowless winters. But as you can see from Fig. 18, the world has continued to produce extreme events at the same rate it always has, both long before and after there was much increase of CO2 in the atmosphere. In short, extreme weather is not increasing.

Figure 18. Extreme weather is not increasing. The yearly number of strong tornadoes is shown from 1954 to 2014, the yearly number of hurricanes from 1850 to 2015, the snow cover from 1967 to 2015, and the drought and flood indices from 1900 to 2012.[55]

We also hear that more CO2 will cause rising sea levels to flood coastal cities, large parts of Florida, tropical island paradises, etc. The facts, from the IPCC's Fifth Assessment Report (2013), are shown in Fig. 19.[56] A representative sea level rise of about 2 mm/year would give about 20 cm or 8 in of sea level rise over a century. For comparison, at Coney Island, Brooklyn, NY, the sea level at high tide is typically 4 feet higher than that at low tide.

Figure 19. IPCC data on sea level. Since the year 1880, the sea level has been rising at an average rate of about 1.7 mm/year, but up to a factor of 2 faster or slower in shorter time intervals. GMSL means "global mean sea level."

Yet another bogeyman is "ocean acidification." The ocean is mildly alkaline with a representative pH of 8, compared to a pH of 7 for neutral water (neither acid nor alkaline) at a temperature of 25 C. Fig. 20 shows the pH of ocean surface water in contact with the atmosphere with various concentrations of CO2.[57]

Figure 20. pH of ocean surface water at a temperature of 25 C versus the concentration of CO2 in the atmosphere. An ocean alkalinity of 2.3 mM was assumed, and a boron concentration of 0.42 mM.
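A curve like the one in Fig. 20 can be generated from textbook carbonate chemistry. The Python sketch below is my own rough illustration, not the calculation behind the figure: the equilibrium constants are round numbers of the right order of magnitude for 25 C surface seawater, and only the carbonate, borate, and water contributions to the stated 2.3 mM alkalinity are included. With more careful constants the curve lands close to the values quoted in the text.

```python
import numpy as np

# Rough equilibrium constants for surface seawater near 25 C (assumed,
# order-of-magnitude values only).
K0 = 2.8e-2    # CO2 solubility (Henry's law), mol/(kg atm)
K1 = 1.4e-6    # first dissociation constant of carbonic acid
K2 = 1.1e-9    # second dissociation constant
KB = 2.5e-9    # boric acid dissociation constant
Kw = 6.0e-14   # ion product of water in seawater

TA = 2.3e-3    # total alkalinity, mol/kg (the 2.3 mM of the caption)
BT = 0.42e-3   # total boron, mol/kg (the 0.42 mM of the caption)

def alkalinity(h, pco2_atm):
    """Bicarbonate + 2*carbonate + borate + hydroxide - H+, for a given [H+] = h."""
    co2  = K0 * pco2_atm          # dissolved CO2 in equilibrium with the air
    hco3 = K1 * co2 / h
    co3  = K2 * hco3 / h
    boh4 = KB * BT / (KB + h)
    return hco3 + 2.0 * co3 + boh4 + Kw / h - h

def ocean_pH(co2_ppm):
    """Solve alkalinity(h) = TA for h by bisection in log space; return -log10(h)."""
    lo, hi = 1e-11, 1e-5                      # bracket between pH 11 and pH 5
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if alkalinity(mid, co2_ppm * 1e-6) > TA:
            lo = mid                          # still too alkaline: need more H+
        else:
            hi = mid
    return float(-np.log10(np.sqrt(lo * hi)))

for ppm in (200, 400, 800, 1600):
    print(f"{ppm:5d} ppm CO2  ->  surface pH ~ {ocean_pH(ppm):.2f}")
# With these rough constants the pH comes out near 8 at 400 ppm and drops by
# roughly a quarter of a unit for each doubling of CO2.
```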
If there were no CO2 in the atmosphere, the ocean pH would be about 11.3, close to that of household ammonia and much too caustic for most life. Boric acid, the second most abundant weak acid after CO2, lowers this caustic pH only slightly. It is CO2 that gets the ocean pH down to values hospitable to life. Doubling the atmospheric CO2 level from the present value of 400 ppm to 800 ppm would reduce the pH from 8.2 to 7.9, a change comparable to the normal variations of pH with position and time in the oceans today, shown by the measurements in Fig. 21.[58] In biologically productive areas, photosynthesizing organisms remove so much CO2 during the day that the pH can increase by 0.2 to 0.3 units, with similar decreases at night when respiring organisms return CO2 to the water.

Figure 21. The natural, day-by-day variations in pH of biologically productive areas of the oceans are larger than those that would be caused by doubling CO2 concentrations. Doubling would take a century or more.

Droughts, floods, heat waves, cold snaps, hurricanes, tornadoes, blizzards, and other weather- and climate-related events will complicate our life on Earth, no matter how many laws governments pass to "stop climate change." But if we understand these phenomena, and are able to predict them, they will be much less damaging to human society. So I strongly support high-quality research on climate and related fields like oceanography, geology, solar physics, etc. Especially important are good measurement programs like the various satellite measurements of atmospheric temperature[59] or the Argo[60] system of floating buoys that is revolutionizing our understanding of ocean currents, temperature, salinity, and other important properties.

"Too much 'climate research' money is pouring into very questionable efforts…"—Happer

But too much "climate research" money is pouring into very questionable efforts, like mitigation of the made-up horrors mentioned above. It reminds me of Gresham's Law: "Bad money drives out good."[61] The torrent of money showered on scientists willing to provide rationales, however shoddy, for the war on fossil fuels, and cockamamie mitigation schemes for non-existent problems, has left insufficient funding for honest climate science.

The Philosophical Transactions of the Royal Society of London — once edited by John Tyndall, who discovered greenhouse gases — is one of the most prestigious scientific journals in the world. Every issue of this journal used to include an "Advertisement," which contained the statement:[62]

It is an established rule of the Society, to which they will always adhere, never to give their opinion, as a Body, upon any subject, either of Nature or Art, that comes before them.

Alas, recent leaders of the Royal Society have ignored this precept. Explaining the politically correct view on climate, former President of the Royal Society, Lord Robert May, told BBC reporter Roger Harrabin:[63]

I am the President of the Royal Society, and I am telling you the debate on climate change is over.

These are the tactics of a religious cult or a pseudoscience like Lysenkoism.[64] It is not traditional science, where everything is supposed to be open to question.

"The Earth is in no danger from increasing levels of CO2."—Happer

The Earth is in no danger from increasing levels of CO2. More CO2 will be a major benefit to the biosphere and to humanity. Some of the reasons are: As shown in Fig.
1, much higher CO2 levels than today's prevailed over most last 550 million years of higher life forms on Earth. Geological history shows that the biosphere does better with more CO2. As shown in Fig. 13 and Fig. 14, observations over the past two decades show that the warming predicted by climate models has been greatly exaggerated. The temperature increase for doubling CO2 levels appears to be close to the feedback-free doubling sensitivity of S =1 K, and much less than the "most likely" value S = 3 K promoted by the IPCC and assumed in most climate models. As shown in Fig. 12, if CO2 emissions continue at levels comparable to those today, centuries will be needed for the added CO2 to warm the Earth's surface by 2 K, generally considered to be a safe and even beneficial amount. Over the past tens of millions of years, the Earth has been in a CO2 famine with respect to the optimal levels for plants, the levels that have prevailed over most of the geological history of land plants. There was probably CO2 starvation of some plants during the coldest periods of recent ice ages. As shown in Fig. 15–17, more atmospheric CO2 will substantially increase plant growth rates and drought resistance. There is no reason to limit the use of fossil fuels because they release CO2 to the atmosphere. However, fossil fuels do need to be mined, transported, and burned with cost-effective controls of real environmental problems — for example, fly ash, oxides of sulfur and nitrogen, volatile organic compounds, groundwater contamination, etc. Sometime in the future, perhaps by the year 2050 when most of the original climate crusaders will have passed away, historians will write learned papers on how it was possible for a seemingly enlightened civilization of the early 21st century to demonize CO2, much as the most "Godly" members of society executed unfortunate "witches" in earlier centuries. The global warming crusade has been driven by many forces: political imperatives, huge amounts of research funds for scientists willing to support politicians, crony capitalists getting rich from "saving the planet," the puzzling need by so many people to feel a sense of guilt, anxieties about overpopulation of the world, etc. But genuine science has not been one of the drivers. Widespread scientific illiteracy — alas, even in the scientific community — has facilitated this latest episode of human folly. I hope very much that this Focused Civil Dialogue contributes to increased scientific literacy. My son, James W. Happer, read several drafts of this manuscript and made many helpful suggestions on how to improve it. 1. R.A. Berner and Z. Kothavala, "GEOCARB III: A revised model of atmospheric CO2 over the Phanerozoic time," American Journal of Science , 2001, 301: 182–204. 2. P.C. Quinton and K.G. MacLeod, "Oxygen isotopes from conodont apatite of the midcontinent US: Implications for Late Ordovician climate evolution," Palaeogeography, Palaeoclimatology, Palaeoecology , 2014, 404: 57–66. 3. George Orwell, "The Principles of Newspeak," Appendix to 1984 (London: Secker & Warburg, 1948). 4. Ecclesiastes 1:9. 5. National Oceanic and Atmospheric Administration (NOAA), Earth System Research Laboratory. 6. For NOAA sites, see links to observatories. 7. "Is the airborne fraction of anthropogenic carbon dioxide increasing?," Science Daily , December 31, 2009. 8. H.W. Chapman, L.S. Gleason, and W.E. Loomis, "The Carbon Dioxide Content of Field Air," Plant Physiology , 1954, 29: 500–503. 9. R. 
Carey, et al., "An overview into submarine CO2 scrubber development," Ocean Engineering , 1983, 10: 227–233. 10. John T. James and Ariel Macatangay, 'Carbon Dioxide — Our Common 'Enemy,'" [PDF] (NASA Report, 2009). 11. J.R. Fleming, Historical Perspectives on Climate Change (Oxford University Press, 1998). 12. "Atmospheric transmission" (Wikipedia). 13. John Tyndall, Heat: A Mode of Motion , fifth ed. (London: Longmans, Green, 1875). (First ed.: Heat Considered as a Mode of Motion [1863].) The quoted passage occurs on p. 359 of the fifth edition. 14. "Heat and Convection in the Earth." 15. "Solar Energy Reaching the Earth's Surface." 16. "The Outer Planets." 17. S.P. Langley, "The 'Solar Constant' and Related Problems," Astrophysical Journal , 1903, 17: 89–99. 18. For an interesting history of the Stefan-Boltzmann formula, see John Crepeau, "A Brief History of the T4 Radiation Law." [PDF] 19. Fraser Cain, "Earth Surface Temperature" (Universe Today, 2009). 20. Chris Faesi, "Unifying Planetary Atmospheres" (Astrobites, 2013). 21. Al Reinert, "The Blue Marble Shot: Our First Complete Photograph of Earth" ( The Atlantic , 2011). This iconic photograph was taken from Apollo 17 on December 7, 1972, by astronaut Dr. Harrison ("Jack") Schmitt. Schmitt commented in an e-mail to William Happer on 9/23/2013: "at about 4 am ET [on 7 December, 1972] … at the first opportunity we were set up to coast to the moon. As you can see it is not a truly full Earth. We were still too close for that. Also, our trajectory to a 23 North latitude on the moon required that we view the southern hemisphere of the Earth more than the northern." 22. Tim Vasquez, "The intertropical convergence zone" (Weatherwise, 2009). 23. "Geostationary Satellite Images" (University of Wisconsin, Space Science Engineering Center [SSEC]). 24. R. Rondanelli and R.S. Lindzen, "Can thin cirrus clouds in the tropics provide a solution to the faint young Sun paradox?," [PDF] Journal of Geophysical Research , 2010, 115: DO2108. 25. "The Faint Young Sun Paradox: An Unsolved Mystery" (Future and Cosmos, 2014). 26. For an instructive further discussion of the Planck brightness and related phenomena, see this set of lecture slides by Professor Simon Carn entitled "Thermal Emission of EM Radiation" [PDF] (Michigan Technological University, 2014). 27. "Solid angle" (Wolfram MathWorld). 28. "Weather Satellites" (What-When-How). 29. "States of matter: Water and hydrogen bonding" (Chem1: General Chemistry Virtual Textbook). 30. "Temperature inversion in the Arctic" (CREATE Arctic Science, 2013). 31. For an instructive further discussion of the Schwarzschild equation for radiation and related phenomena, see this set of lecture slides by Professor Simon Carn entitled "Atmospheric Emission" [PDF] (Michigan Technological University). 32. The HITRAN database. 33. D.J. Wilson and J. Gea-Banacloche, "Simple model to estimate the contribution of atmospheric CO2 to the Earth' s greenhouse effect," American Journal of Physics , 2012, 80: 306–315. 34. "Anthropogenic and Natural Radiative Forcing" [PDF] (IPCC WGI Fifth Assessment Report, 2012). 35. "The Role of the Tropics in the General Circulation," Chapter 3.7 of Introduction to Tropical Meteorology , 2nd ed. (GOES-R, 2011). 36. "What is Archimedes' Principle?"(EDinformatics, 1999). 37. "Entropy Changes in an Ideal Gas," (MIT, 2006). 38. See, e.g., R.S.J. Tol, "The Economic Effects of Climate Change," Journal of Economic Perspectives , 2009, 23(2): 29–51. 
# MATH 222: SECOND SEMESTER CALCULUS

Spring 2011

## Math 222 - 2nd Semester Calculus

Lecture notes version 1.7 (Spring 2011)

This is a self-contained set of lecture notes for Math 222. The notes were written by Sigurd Angenent, starting from an extensive collection of notes and problems compiled by Joel Robbin. Some problems were contributed by A. Miller.

The LaTeX files, as well as the Xfig and Octave files which were used to produce these notes, are available at the following web site:

www.math.wisc.edu/angenent/Free-Lecture-Notes

They are meant to be freely available for non-commercial use, in the sense that "free software" is free. More precisely:

Copyright (c) 2006 Sigurd B. Angenent. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

## Contents

Chapter 1: Methods of Integration
1. The indefinite integral
2. You can always check the answer
3. About "$+C$"
4. Standard Integrals
5. Method of substitution
6. The double angle trick
7. Integration by Parts
8. Reduction Formulas
9. Partial Fraction Expansion
10. PROBLEMS

Chapter 2: Taylor's Formula and Infinite Series
11. Taylor Polynomials
12. Examples
13. Some special Taylor polynomials
14. The Remainder Term
15. Lagrange's Formula for the Remainder Term
16. The limit as $x \rightarrow 0$, keeping $n$ fixed
17. The limit $n \rightarrow \infty$, keeping $x$ fixed
18. Convergence of Taylor Series
19. Leibniz' formulas for $\ln 2$ and $\pi / 4$
20. Proof of Lagrange's formula
21. Proof of Theorem 16.8
22. PROBLEMS

Chapter 3: Complex Numbers and the Complex Exponential
23. Complex numbers
24. Argument and Absolute Value
25. Geometry of Arithmetic
26. Applications in Trigonometry
27. Calculus of complex valued functions
28. The Complex Exponential Function
29. Complex solutions of polynomial equations
30. Other handy things you can do with complex numbers
31. PROBLEMS

Chapter 4: Differential Equations
32. What is a DiffEq?
33. First Order Separable Equations
34. First Order Linear Equations
35. Dynamical Systems and Determinism
36. Higher order equations
37. Constant Coefficient Linear Homogeneous Equations
38. Inhomogeneous Linear Equations
39. Variation of Constants
40. Applications of Second Order Linear Equations
41. PROBLEMS

Chapter 5: Vectors
42. Introduction to vectors
43. Parametric equations for lines and planes
44. Vector Bases
45. Dot Product
46. Cross Product
47. A few applications of the cross product
48. Notation
49. PROBLEMS

Chapter 6: Vector Functions and Parametrized Curves
50. Parametric Curves
51. Examples of parametrized curves
53. Higher derivatives and product rules
54. Interpretation of $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ as the velocity vector
55. Acceleration and Force
56. Tangents and the unit tangent vector
57. Sketching a parametric curve
58. Length of a curve
59. The arclength function
60. Graphs in Cartesian and in Polar Coordinates
61. PROBLEMS

GNU Free Documentation License
1. APPLICABILITY AND DEFINITIONS
2. VERBATIM COPYING
3. COPYING IN QUANTITY
4. MODIFICATIONS
5. COMBINING DOCUMENTS
6. COLLECTIONS OF DOCUMENTS
7. AGGREGATION WITH INDEPENDENT WORKS
8. TRANSLATION
9. TERMINATION
10. FUTURE REVISIONS OF THIS LICENSE
11. RELICENSING

## Chapter 1: Methods of Integration

## The indefinite integral

We recall some facts about integration from first semester calculus.

1.1. Definition. A function $y=F(x)$ is called an antiderivative of another function $y=f(x)$ if $F^{\prime}(x)=f(x)$ for all $x$.

1.2. Example. $F_{1}(x)=x^{2}$ is an antiderivative of $f(x)=2 x$. $F_{2}(x)=x^{2}+2004$ is also an antiderivative of $f(x)=2 x$. $G(t)=\frac{1}{2} \sin (2 t+1)$ is an antiderivative of $g(t)=\cos (2 t+1)$.

The Fundamental Theorem of Calculus states that if a function $y=f(x)$ is continuous on an interval $a \leq x \leq b$, then there always exists an antiderivative $F(x)$ of $f$, and one has
$$ \int_{a}^{b} f(x) \mathrm{d} x=F(b)-F(a) . $$
The best way of computing an integral is often to find an antiderivative $F$ of the given function $f$, and then to use the Fundamental Theorem (1). How you go about finding an antiderivative $F$ for some given function $f$ is the subject of this chapter.

The following notation is commonly used for antiderivatives:
$$ F(x)=\int f(x) \mathrm{d} x . $$
The integral which appears here does not have the integration bounds $a$ and $b$. It is called an indefinite integral, as opposed to the integral in (1) which is called a definite integral. It's important to distinguish between the two kinds of integrals. Here is a list of differences:

| Indefinite integral | Definite integral |
| :--- | :--- |
| By definition $\int f(x) \mathrm{d} x$ is any function of $x$ whose derivative is $f(x)$. | $\int_{a}^{b} f(x) \mathrm{d} x$ was defined in terms of Riemann sums and can be interpreted as "area under the graph of $y=f(x)$", at least when $f(x)>0$. |
| $x$ is not a dummy variable: for example, $\int 2 x \mathrm{~d} x=x^{2}+C$ and $\int 2 t \mathrm{~d} t=t^{2}+C$ are functions of different variables, so they are not equal. | $x$ is a dummy variable: for example, $\int_{0}^{1} 2 x \mathrm{~d} x=1$ and $\int_{0}^{1} 2 t \mathrm{~d} t=1$, so $\int_{0}^{1} 2 x \mathrm{~d} x=\int_{0}^{1} 2 t \mathrm{~d} t$. |

## You can always check the answer

Suppose you want to find an antiderivative of a given function $f(x)$ and after a long and messy computation which you don't really trust you get an "answer", $F(x)$. You can then throw away the dubious computation and differentiate the $F(x)$ you had found. If $F^{\prime}(x)$ turns out to be equal to $f(x)$, then your $F(x)$ is indeed an antiderivative and your computation isn't important anymore.

2.1. Example. Suppose we want to find $\int \ln x \mathrm{~d} x$. My cousin Bruce says it might be $F(x)=x \ln x-x$. Let's see if he's right:
$$ \frac{\mathrm{d}}{\mathrm{d} x}(x \ln x-x)=x \cdot \frac{1}{x}+1 \cdot \ln x-1=\ln x . $$
Who knows how Bruce thought of this ${ }^{1}$, but he's right! We now know that $\int \ln x \mathrm{~d} x=x \ln x-x+C$.

${ }^{1}$ He integrated by parts.
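The same check works just as well at rejecting a wrong guess; here is one extra illustration along the same lines. Had the proposed answer been $F(x)=x \ln x$, differentiating it gives
$$ \frac{\mathrm{d}}{\mathrm{d} x}(x \ln x)=1 \cdot \ln x+x \cdot \frac{1}{x}=\ln x+1 \neq \ln x, $$
so that guess would be refuted on the spot. One differentiation always settles the matter either way.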
## About "$+C$"

Let $f(x)$ be a function defined on some interval $a \leq x \leq b$. If $F(x)$ is an antiderivative of $f(x)$ on this interval, then for any constant $C$ the function $\tilde{F}(x)=F(x)+C$ will also be an antiderivative of $f(x)$. So one given function $f(x)$ has many different antiderivatives, obtained by adding different constants to one given antiderivative.

3.1. Theorem. If $F_{1}(x)$ and $F_{2}(x)$ are antiderivatives of the same function $f(x)$ on some interval $a \leq x \leq b$, then there is a constant $C$ such that $F_{1}(x)=F_{2}(x)+C$.

Proof. Consider the difference $G(x)=F_{1}(x)-F_{2}(x)$. Then $G^{\prime}(x)=F_{1}^{\prime}(x)-F_{2}^{\prime}(x)=f(x)-f(x)=0$, so that $G(x)$ must be constant. Hence $F_{1}(x)-F_{2}(x)=C$ for some constant.

It follows that there is some ambiguity in the notation $\int f(x) \mathrm{d} x$. Two functions $F_{1}(x)$ and $F_{2}(x)$ can both equal $\int f(x) \mathrm{d} x$ without equaling each other. When this happens, they ($F_{1}$ and $F_{2}$) differ by a constant. This can sometimes lead to confusing situations, e.g. you can check that
$$ \begin{aligned} & \int 2 \sin x \cos x \mathrm{~d} x=\sin ^{2} x \\ & \int 2 \sin x \cos x \mathrm{~d} x=-\cos ^{2} x \end{aligned} $$
are both correct. (Just differentiate the two functions $\sin ^{2} x$ and $-\cos ^{2} x$!) These two answers look different until you realize that because of the trig identity $\sin ^{2} x+\cos ^{2} x=1$ they really only differ by a constant: $\sin ^{2} x=-\cos ^{2} x+1$.

To avoid this kind of confusion we will from now on never forget to include the "arbitrary constant $+C$" in our answer when we compute an antiderivative.

## Standard Integrals

Here is a list of the standard derivatives and hence the standard integrals everyone should know.
$$ \begin{aligned} \int f(x) \mathrm{d} x & =F(x)+C & \\ \int x^{n} \mathrm{~d} x & =\frac{x^{n+1}}{n+1}+C & \text { for all } n \neq-1 \\ \int \frac{1}{x} \mathrm{~d} x & =\ln |x|+C & \\ \int \sin x \mathrm{~d} x & =-\cos x+C & \\ \int \cos x \mathrm{~d} x & =\sin x+C & \\ \int \tan x \mathrm{~d} x & =-\ln \cos x+C & \\ \int \frac{1}{1+x^{2}} \mathrm{~d} x & =\arctan x+C & \\ \int \frac{1}{\sqrt{1-x^{2}}} \mathrm{~d} x & =\arcsin x+C & \left(=\frac{\pi}{2}-\arccos x+C\right) \\ \int \frac{\mathrm{d} x}{\cos x} & =\frac{1}{2} \ln \frac{1+\sin x}{1-\sin x}+C & \text { for }-\frac{\pi}{2}<x<\frac{\pi}{2} . \end{aligned} $$
All of these integrals are familiar from first semester calculus (like Math 221), except for the last one. You can check the last one by differentiation (using $\ln \frac{a}{b}=\ln a-\ln b$ simplifies things a bit).

## Method of substitution

The chain rule says that
$$ \frac{\mathrm{d} F(G(x))}{\mathrm{d} x}=F^{\prime}(G(x)) \cdot G^{\prime}(x), $$
so that
$$ \int F^{\prime}(G(x)) \cdot G^{\prime}(x) \mathrm{d} x=F(G(x))+C . $$

5.1. Example. Consider the function $f(x)=2 x \sin \left(x^{2}+3\right)$. It does not appear in the list of standard integrals we know by heart. But we do notice ${ }^{2}$ that $2 x=\frac{\mathrm{d}}{\mathrm{d} x}\left(x^{2}+3\right)$. So let's call $G(x)=x^{2}+3$, and $F(u)=-\cos u$, then
$$ F(G(x))=-\cos \left(x^{2}+3\right) $$
and
$$ \frac{\mathrm{d} F(G(x))}{\mathrm{d} x}=\underbrace{\sin \left(x^{2}+3\right)}_{F^{\prime}(G(x))} \cdot \underbrace{2 x}_{G^{\prime}(x)}=f(x), $$
so that
$$ \int 2 x \sin \left(x^{2}+3\right) \mathrm{d} x=-\cos \left(x^{2}+3\right)+C . $$

${ }^{2}$ You will start noticing things like this after doing several examples.
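One more quick instance of the same pattern, added here for practice: in $\int \cos x\, e^{\sin x} \mathrm{~d} x$ the factor $\cos x$ is the derivative of $\sin x$, so with $G(x)=\sin x$ and $F(u)=e^{u}$ the formula above gives
$$ \int \cos x\, e^{\sin x} \mathrm{~d} x=e^{\sin x}+C, $$
which you can confirm, as always, by differentiating the right hand side.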
The most transparent way of computing an integral by substitution is by introducing new variables. Thus to do the integral
$$ \int f(G(x)) G^{\prime}(x) \mathrm{d} x $$
where $f(u)=F^{\prime}(u)$, we introduce the substitution $u=G(x)$, and agree to write $\mathrm{d} u=\mathrm{d} G(x)=G^{\prime}(x) \mathrm{d} x$. Then we get
$$ \int f(G(x)) G^{\prime}(x) \mathrm{d} x=\int f(u) \mathrm{d} u=F(u)+C . $$
At the end of the integration we must remember that $u$ really stands for $G(x)$, so that
$$ \int f(G(x)) G^{\prime}(x) \mathrm{d} x=F(u)+C=F(G(x))+C . $$
For definite integrals this implies
$$ \int_{a}^{b} f(G(x)) G^{\prime}(x) \mathrm{d} x=F(G(b))-F(G(a)), $$
which you can also write as
$$ \int_{a}^{b} f(G(x)) G^{\prime}(x) \mathrm{d} x=\int_{G(a)}^{G(b)} f(u) \mathrm{d} u . $$

5.2. Example. [Substitution in a definite integral.] As an example we compute
$$ \int_{0}^{1} \frac{x}{1+x^{2}} \mathrm{~d} x $$
using the substitution $u=G(x)=1+x^{2}$. Since $\mathrm{d} u=2 x \mathrm{~d} x$, the associated indefinite integral is
$$ \int \underbrace{\frac{1}{1+x^{2}}}_{\frac{1}{u}} \underbrace{x \mathrm{~d} x}_{\frac{1}{2} \mathrm{~d} u}=\frac{1}{2} \int \frac{1}{u} \mathrm{~d} u . $$
To find the definite integral you must compute the new integration bounds $G(0)$ and $G(1)$ (see equation (3)). If $x$ runs between $x=0$ and $x=1$, then $u=G(x)=1+x^{2}$ runs between $u=1+0^{2}=1$ and $u=1+1^{2}=2$, so the definite integral we must compute is
$$ \int_{0}^{1} \frac{x}{1+x^{2}} \mathrm{~d} x=\frac{1}{2} \int_{1}^{2} \frac{1}{u} \mathrm{~d} u $$
which is in our list of memorable integrals. So we find
$$ \int_{0}^{1} \frac{x}{1+x^{2}} \mathrm{~d} x=\frac{1}{2} \int_{1}^{2} \frac{1}{u} \mathrm{~d} u=\frac{1}{2}[\ln u]_{1}^{2}=\frac{1}{2} \ln 2 . $$

## The double angle trick

If an integral contains $\sin ^{2} x$ or $\cos ^{2} x$, then you can remove the squares by using the double angle formulas from trigonometry. Recall that
$$ \cos ^{2} \alpha-\sin ^{2} \alpha=\cos 2 \alpha \quad \text { and } \quad \cos ^{2} \alpha+\sin ^{2} \alpha=1 . $$
Adding these two equations gives
$$ \cos ^{2} \alpha=\frac{1}{2}(\cos 2 \alpha+1) $$
while subtracting them gives
$$ \sin ^{2} \alpha=\frac{1}{2}(1-\cos 2 \alpha) . $$

6.1. Example. The following integral shows up in many contexts, so it is worth knowing:
$$ \begin{aligned} \int \cos ^{2} x \mathrm{~d} x & =\frac{1}{2} \int(1+\cos 2 x) \mathrm{d} x \\ & =\frac{1}{2}\left\{x+\frac{1}{2} \sin 2 x\right\}+C \\ & =\frac{x}{2}+\frac{1}{4} \sin 2 x+C . \end{aligned} $$
Since $\sin 2 x=2 \sin x \cos x$ this result can also be written as
$$ \int \cos ^{2} x \mathrm{~d} x=\frac{x}{2}+\frac{1}{2} \sin x \cos x+C . $$
If you don't want to memorize the double angle formulas, then you can use "Complex Exponentials" to do these and many similar integrals. However, you will have to wait until we are in $\S 28$ where this is explained.
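As a small companion to Example 6.1, the second double angle formula handles $\sin ^{2} x$ in exactly the same way:
$$ \int \sin ^{2} x \mathrm{~d} x=\frac{1}{2} \int(1-\cos 2 x) \mathrm{d} x=\frac{x}{2}-\frac{1}{4} \sin 2 x+C=\frac{x}{2}-\frac{1}{2} \sin x \cos x+C . $$
Adding this to the result of Example 6.1 gives $x+C$, which is consistent with $\sin ^{2} x+\cos ^{2} x=1$.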
## Integration by Parts

The product rule states
$$ \frac{\mathrm{d}}{\mathrm{d} x}(F(x) G(x))=\frac{\mathrm{d} F(x)}{\mathrm{d} x} G(x)+F(x) \frac{\mathrm{d} G(x)}{\mathrm{d} x} $$
and therefore, after rearranging terms,
$$ F(x) \frac{\mathrm{d} G(x)}{\mathrm{d} x}=\frac{\mathrm{d}}{\mathrm{d} x}(F(x) G(x))-\frac{\mathrm{d} F(x)}{\mathrm{d} x} G(x) . $$
This implies the formula for integration by parts
$$ \int F(x) \frac{\mathrm{d} G(x)}{\mathrm{d} x} \mathrm{~d} x=F(x) G(x)-\int \frac{\mathrm{d} F(x)}{\mathrm{d} x} G(x) \mathrm{d} x . $$

7.1. Example - Integrating by parts once.
$$ \int \underbrace{x}_{F(x)} \underbrace{e^{x}}_{G^{\prime}(x)} \mathrm{d} x=\underbrace{x}_{F(x)} \underbrace{e^{x}}_{G(x)}-\int \underbrace{e^{x}}_{G(x)} \underbrace{1}_{F^{\prime}(x)} \mathrm{d} x=x e^{x}-e^{x}+C . $$
Observe that in this example $e^{x}$ was easy to integrate, while the factor $x$ becomes an easier function when you differentiate it. This is the usual state of affairs when integration by parts works: differentiating one of the factors ($F(x)$) should simplify the integral, while integrating the other ($G^{\prime}(x)$) should not complicate things (too much).

Another example: $\sin x=\frac{\mathrm{d}}{\mathrm{d} x}(-\cos x)$ so
$$ \int x \sin x \mathrm{~d} x=x(-\cos x)-\int(-\cos x) \cdot 1 \mathrm{~d} x=-x \cos x+\sin x+C . $$

7.2. Example - Repeated Integration by Parts. Sometimes one integration by parts is not enough: since $e^{2 x}=\frac{\mathrm{d}}{\mathrm{d} x}\left(\frac{1}{2} e^{2 x}\right)$ one has
$$ \begin{aligned} \int \underbrace{x^{2}}_{F(x)} \underbrace{e^{2 x}}_{G^{\prime}(x)} \mathrm{d} x & =x^{2} \frac{e^{2 x}}{2}-\int \frac{e^{2 x}}{2} 2 x \mathrm{~d} x \\ & =x^{2} \frac{e^{2 x}}{2}-\left\{\frac{e^{2 x}}{4} 2 x-\int \frac{e^{2 x}}{4} 2 \mathrm{~d} x\right\} \\ & =x^{2} \frac{e^{2 x}}{2}-\left\{\frac{e^{2 x}}{4} 2 x-\frac{e^{2 x}}{8} 2+C\right\} \\ & =\frac{1}{2} x^{2} e^{2 x}-\frac{1}{2} x e^{2 x}+\frac{1}{4} e^{2 x}-C \end{aligned} $$
(Be careful with all the minus signs that appear when you integrate by parts.)

The same procedure will work whenever you have to integrate
$$ \int P(x) e^{a x} \mathrm{~d} x $$
where $P(x)$ is a polynomial, and $a$ is a constant. Each time you integrate by parts, you get this
$$ \begin{aligned} \int P(x) e^{a x} \mathrm{~d} x & =P(x) \frac{e^{a x}}{a}-\int \frac{e^{a x}}{a} P^{\prime}(x) \mathrm{d} x \\ & =\frac{1}{a} P(x) e^{a x}-\frac{1}{a} \int P^{\prime}(x) e^{a x} \mathrm{~d} x . \end{aligned} $$
You have replaced the integral $\int P(x) e^{a x} \mathrm{~d} x$ with the integral $\int P^{\prime}(x) e^{a x} \mathrm{~d} x$. This is the same kind of integral, but it is a little easier since the degree of the derivative $P^{\prime}(x)$ is less than the degree of $P(x)$.

7.3. Example - My cousin Bruce's computation. Sometimes the factor $G^{\prime}(x)$ is "invisible". Here is how you can get the antiderivative of $\ln x$ by integrating by parts:
$$ \begin{aligned} \int \ln x \mathrm{~d} x & =\int \underbrace{\ln x}_{F(x)} \cdot \underbrace{1}_{G^{\prime}(x)} \mathrm{d} x \\ & =\ln x \cdot x-\int \frac{1}{x} \cdot x \mathrm{~d} x \\ & =x \ln x-\int 1 \mathrm{~d} x \\ & =x \ln x-x+C . \end{aligned} $$
You can do $\int P(x) \ln x \mathrm{~d} x$ in the same way if $P(x)$ is a polynomial.

## Reduction Formulas

Consider the integral
$$ I_{n}=\int x^{n} e^{a x} \mathrm{~d} x . $$
Integration by parts gives you
$$ \begin{aligned} I_{n} & =x^{n} \frac{1}{a} e^{a x}-\int n x^{n-1} \frac{1}{a} e^{a x} \mathrm{~d} x \\ & =\frac{1}{a} x^{n} e^{a x}-\frac{n}{a} \int x^{n-1} e^{a x} \mathrm{~d} x . \end{aligned} $$
We haven't computed the integral, and in fact the integral that we still have to do is of the same kind as the one we started with (integral of $x^{n-1} e^{a x}$ instead of $x^{n} e^{a x}$). What we have derived is the following reduction formula
$$ I_{n}=\frac{1}{a} x^{n} e^{a x}-\frac{n}{a} I_{n-1} $$
which holds for all $n$. For $n=0$ the reduction formula says
$$ I_{0}=\frac{1}{a} e^{a x}, \text { i.e. } \int e^{a x} \mathrm{~d} x=\frac{1}{a} e^{a x}+C .
$$ When $n \neq 0$ the reduction formula tells us that we have to compute $I_{n-1}$ if we want to find $I_{n}$. The point of a reduction formula is that the same formula also applies to $I_{n-1}$, and $I_{n-2}$, etc., so that after repeated application of the formula we end up with $I_{0}$, i.e., an integral we know. 8.1. Example. To compute $\int x^{3} e^{a x} \mathrm{~d} x$ we use the reduction formula three times: $$ \begin{aligned} I_{3} & =\frac{1}{a} x^{3} e^{a x}-\frac{3}{a} I_{2} \\ & =\frac{1}{a} x^{3} e^{a x}-\frac{3}{a}\left\{\frac{1}{a} x^{2} e^{a x}-\frac{2}{a} I_{1}\right\} \\ & =\frac{1}{a} x^{3} e^{a x}-\frac{3}{a}\left\{\frac{1}{a} x^{2} e^{a x}-\frac{2}{a}\left(\frac{1}{a} x e^{a x}-\frac{1}{a} I_{0}\right)\right\} \end{aligned} $$ Insert the known integral $I_{0}=\frac{1}{a} e^{a x}+C$ and simplify the other terms and you get $$ \int x^{3} e^{a x} \mathrm{~d} x=\frac{1}{a} x^{3} e^{a x}-\frac{3}{a^{2}} x^{2} e^{a x}+\frac{6}{a^{3}} x e^{a x}-\frac{6}{a^{4}} e^{a x}+C . $$ ### Reduction formula requiring two partial integrations. Consider $$ S_{n}=\int x^{n} \sin x \mathrm{~d} x $$ Then for $n \geq 2$ one has $$ \begin{aligned} S_{n} & =-x^{n} \cos x+n \int x^{n-1} \cos x \mathrm{~d} x \\ & =-x^{n} \cos x+n x^{n-1} \sin x-n(n-1) \int x^{n-2} \sin x \mathrm{~d} x . \end{aligned} $$ Thus we find the reduction formula $$ S_{n}=-x^{n} \cos x+n x^{n-1} \sin x-n(n-1) S_{n-2} . $$ Each time you use this reduction, the exponent $n$ drops by 2 , so in the end you get either $S_{1}$ or $S_{0}$, depending on whether you started with an odd or even $n$. 8.3. A reduction formula where you have to solve for $I_{n}$. We try to compute $$ I_{n}=\int(\sin x)^{n} \mathrm{~d} x $$ by a reduction formula. Integrating by parts twice we get $$ \begin{aligned} I_{n} & =\int(\sin x)^{n-1} \sin x \mathrm{~d} x \\ & =-(\sin x)^{n-1} \cos x-\int(-\cos x)(n-1)(\sin x)^{n-2} \cos x \mathrm{~d} x \\ & =-(\sin x)^{n-1} \cos x+(n-1) \int(\sin x)^{n-2} \cos ^{2} x \mathrm{~d} x . \end{aligned} $$ We now use $\cos ^{2} x=1-\sin ^{2} x$, which gives $$ \begin{aligned} I_{n} & =-(\sin x)^{n-1} \cos x+(n-1) \int\left\{\sin ^{n-2} x-\sin ^{n} x\right\} \mathrm{d} x \\ & =-(\sin x)^{n-1} \cos x+(n-1) I_{n-2}-(n-1) I_{n} . \end{aligned} $$ You can think of this as an equation for $I_{n}$, which, when you solve it tells you $$ n I_{n}=-(\sin x)^{n-1} \cos x+(n-1) I_{n-2} $$ and thus implies $$ I_{n}=-\frac{1}{n} \sin ^{n-1} x \cos x+\frac{n-1}{n} I_{n-2} . $$ Since we know the integrals $$ I_{0}=\int(\sin x)^{0} \mathrm{~d} x=\int \mathrm{d} x=x+C \text { and } I_{1}=\int \sin x \mathrm{~d} x=-\cos x+C $$ the reduction formula $(\mathcal{S})$ allows us to calculate $I_{n}$ for any $n \geq 0$. 8.4. A reduction formula which will be handy later. In the next section you will see how the integral of any "rational function" can be transformed into integrals of easier functions, the hardest of which turns out to be $$ I_{n}=\int \frac{\mathrm{d} x}{\left(1+x^{2}\right)^{n}} $$ When $n=1$ this is a standard integral, namely $$ I_{1}=\int \frac{\mathrm{d} x}{1+x^{2}}=\arctan x+C . $$ When $n>1$ integration by parts gives you a reduction formula. 
Here's the computation: $$ \begin{aligned} I_{n} & =\int\left(1+x^{2}\right)^{-n} \mathrm{~d} x \\ & =\frac{x}{\left(1+x^{2}\right)^{n}}-\int x(-n)\left(1+x^{2}\right)^{-n-1} 2 x \mathrm{~d} x \\ & =\frac{x}{\left(1+x^{2}\right)^{n}}+2 n \int \frac{x^{2}}{\left(1+x^{2}\right)^{n+1}} \mathrm{~d} x \end{aligned} $$ Apply $$ \frac{x^{2}}{\left(1+x^{2}\right)^{n+1}}=\frac{\left(1+x^{2}\right)-1}{\left(1+x^{2}\right)^{n+1}}=\frac{1}{\left(1+x^{2}\right)^{n}}-\frac{1}{\left(1+x^{2}\right)^{n+1}} $$ to get $$ \int \frac{x^{2}}{\left(1+x^{2}\right)^{n+1}} \mathrm{~d} x=\int\left\{\frac{1}{\left(1+x^{2}\right)^{n}}-\frac{1}{\left(1+x^{2}\right)^{n+1}}\right\} \mathrm{d} x=I_{n}-I_{n+1} . $$ Our integration by parts therefore told us that $$ I_{n}=\frac{x}{\left(1+x^{2}\right)^{n}}+2 n\left(I_{n}-I_{n+1}\right) $$ which you can solve for $I_{n+1}$. You find the reduction formula $$ I_{n+1}=\frac{1}{2 n} \frac{x}{\left(1+x^{2}\right)^{n}}+\frac{2 n-1}{2 n} I_{n} . $$ As an example of how you can use it, we start with $I_{1}=\arctan x+C$, and conclude that $$ \begin{aligned} \int \frac{\mathrm{d} x}{\left(1+x^{2}\right)^{2}} & =I_{2}=I_{1+1} \\ & =\frac{1}{2 \cdot 1} \frac{x}{\left(1+x^{2}\right)^{1}}+\frac{2 \cdot 1-1}{2 \cdot 1} I_{1} \\ & =\frac{1}{2} \frac{x}{1+x^{2}}+\frac{1}{2} \arctan x+C . \end{aligned} $$ Apply the reduction formula again, now with $n=2$, and you get $$ \begin{aligned} \int \frac{\mathrm{d} x}{\left(1+x^{2}\right)^{3}} & =I_{3}=I_{2+1} \\ & =\frac{1}{2 \cdot 2} \frac{x}{\left(1+x^{2}\right)^{2}}+\frac{2 \cdot 2-1}{2 \cdot 2} I_{2} \\ & =\frac{1}{4} \frac{x}{\left(1+x^{2}\right)^{2}}+\frac{3}{4}\left\{\frac{1}{2} \frac{x}{1+x^{2}}+\frac{1}{2} \arctan x\right\} \\ & =\frac{1}{4} \frac{x}{\left(1+x^{2}\right)^{2}}+\frac{3}{8} \frac{x}{1+x^{2}}+\frac{3}{8} \arctan x+C . \end{aligned} $$ ## Partial Fraction Expansion A rational function is one which is a ratio of polynomials, $$ f(x)=\frac{P(x)}{Q(x)}=\frac{p_{n} x^{n}+p_{n-1} x^{n-1}+\cdots+p_{1} x+p_{0}}{q_{d} x^{d}+q_{d-1} x^{d-1}+\cdots+q_{1} x+q_{0}} . $$ Such rational functions can always be integrated, and the trick which allows you to do this is called a partial fraction expansion. The whole procedure consists of several steps which are explained in this section. The procedure itself has nothing to do with integration: it's just a way of rewriting rational functions. It is in fact useful in other situations, such as finding Taylor series (see Part 178 of these notes) and computing "inverse Laplace transforms" (see MATH 319.) 9.1. Reduce to a proper rational function. A proper rational function is a rational function $P(x) / Q(x)$ where the degree of $P(x)$ is strictly less than the degree of $Q(x)$. the method of partial fractions only applies to proper rational functions. Fortunately there's an additional trick for dealing with rational functions that are not proper. If $P / Q$ isn't proper, i.e. if $\operatorname{degree}(P) \geq \operatorname{degree}(Q)$, then you divide $P$ by $Q$, with result $$ \frac{P(x)}{Q(x)}=S(x)+\frac{R(x)}{Q(x)} $$ where $S(x)$ is the quotient, and $R(x)$ is the remainder after division. In practice you would do a long division to find $S(x)$ and $R(x)$. 9.2. Example. Consider the rational function $$ f(x)=\frac{x^{3}-2 x+2}{x^{2}-1} . $$ Here the numerator has degree 3 which is more than the degree of the denominator (which is 2). To apply the method of partial fractions we must first do a division with remainder. 
One has so that $$ f(x)=\frac{x^{3}-2 x+2}{x^{2}-1}=x+\frac{-x+2}{x^{2}-1} $$ When we integrate we get $$ \begin{aligned} \int \frac{x^{3}-2 x+2}{x^{2}-1} \mathrm{~d} x & =\int\left\{x+\frac{-x+2}{x^{2}-1}\right\} \mathrm{d} x \\ & =\frac{x^{2}}{2}+\int \frac{-x+2}{x^{2}-1} \mathrm{~d} x . \end{aligned} $$ The rational function which still have to integrate, namely $\frac{-x+2}{x^{2}-1}$, is proper, i.e. its numerator has lower degree than its denominator. 9.3. Partial Fraction Expansion: The Easy Case. To compute the partial fraction expansion of a proper rational function $P(x) / Q(x)$ you must factor the denominator $Q(x)$. Factoring the denominator is a problem as difficult as finding all of its roots; in Math 222 we shall only do problems where the denominator is already factored into linear and quadratic factors, or where this factorization is easy to find. In the easiest partial fractions problems, all the roots of $Q(x)$ are real numbers and distinct, so the denominator is factored into distinct linear factors, say $$ \frac{P(x)}{Q(x)}=\frac{P(x)}{\left(x-a_{1}\right)\left(x-a_{2}\right) \cdots\left(x-a_{n}\right)} . $$ To integrate this function we find constants $A_{1}, A_{2}, \ldots, A_{n}$ so that $$ \frac{P(x)}{Q(x)}=\frac{A_{1}}{x-a_{1}}+\frac{A_{2}}{x-a_{2}}+\cdots+\frac{A_{n}}{x-a_{n}} . $$ Then the integral is $$ \int \frac{P(x)}{Q(x)} d x=A_{1} \ln \left|x-a_{1}\right|+A_{2} \ln \left|x-a_{2}\right|+\cdots+A_{n} \ln \left|x-a_{n}\right|+C . $$ One way to find the coefficients $A_{i}$ in (\#) is called the method of equating coefficients. In this method we multiply both sides of (\#) with $Q(x)=(x-$ $\left.a_{1}\right) \cdots\left(x-a_{n}\right)$. The result is a polynomial of degree $n$ on both sides. Equating the coefficients of these polynomial gives a system of $n$ linear equations for $A_{1}, \ldots$, $A_{n}$. You get the $A_{i}$ by solving that system of equations. Another much faster way to find the coefficients $A_{i}$ is the Heaviside trick . Multiply equation (\#) by $x-a_{i}$ and then plug in ${ }^{4} x=a_{i}$. On the right you are left with $A_{i}$ so $$ A_{i}=\left.\frac{P(x)\left(x-a_{i}\right)}{Q(x)}\right|_{x=a_{i}}=\frac{P\left(a_{i}\right)}{\left(a_{i}-a_{1}\right) \cdots\left(a_{i}-a_{i-1}\right)\left(a_{i}-a_{i+1}\right) \cdots\left(a_{i}-a_{n}\right)} . $$ 3 Named after Oliver Heaviside, a physicist and electrical engineer in the late 19th and early 20 ieth century. ${ }^{4}$ More properly, you should take the limit $x \rightarrow a_{i}$. The problem here is that equation (\#) has $x-a_{i}$ in the denominator, so that it does not hold for $x=a_{i}$. Therefore you cannot set $x$ equal to $a_{i}$ in any equation derived from (\#), but you can take the limit $x \rightarrow a_{i}$, which in practice is just as good. 9.4. Previous Example continued. To integrate $\frac{-x+2}{x^{2}-1}$ we factor the denominator, $$ x^{2}-1=(x-1)(x+1) . $$ The partial fraction expansion of $\frac{-x+2}{x^{2}-1}$ then is $$ \frac{-x+2}{x^{2}-1}=\frac{-x+2}{(x-1)(x+1)}=\frac{A}{x-1}+\frac{B}{x+1} . $$ Multiply with $(x-1)(x+1)$ to get $$ -x+2=A(x+1)+B(x-1)=(A+B) x+(A-B) . $$ The functions of $x$ on the left and right are equal only if the coefficient of $x$ and the constant term are equal. In other words we must have $$ A+B=-1 \text { and } A-B=2 . $$ These are two linear equations for two unknowns $A$ and $B$, which we now proceed to solve. Adding both equations gives $2 A=1$, so that $A=\frac{1}{2}$; from the first equation one then finds $B=-1-A=-\frac{3}{2}$. 
So $$ \frac{-x+2}{x^{2}-1}=\frac{1 / 2}{x-1}-\frac{3 / 2}{x+1} . $$ Instead, we could also use the Heaviside trick: multiply ( $\dagger)$ with $x-1$ to get $$ \frac{-x+2}{x+1}=A+B \frac{x-1}{x+1} $$ Take the limit $x \rightarrow 1$ and you find $$ \frac{-1+2}{1+1}=A \text {, i.e. } A=\frac{1}{2} . $$ Similarly, after multiplying ( $\dagger)$ with $x+1$ one gets $$ \frac{-x+2}{x-1}=A \frac{x+1}{x-1}+B, $$ and letting $x \rightarrow-1$ you find $$ B=\frac{-(-1)+2}{(-1)-1}=-\frac{3}{2}, $$ as before. Either way, the integral is now easily found, namely, $$ \begin{aligned} \int \frac{x^{3}-2 x+1}{x^{2}-1} \mathrm{~d} x & =\frac{x^{2}}{2}+x+\int \frac{-x+2}{x^{2}-1} \mathrm{~d} x \\ & =\frac{x^{2}}{2}+x+\int\left\{\frac{1 / 2}{x-1}-\frac{3 / 2}{x+1}\right\} \mathrm{d} x \\ & =\frac{x^{2}}{2}+x+\frac{1}{2} \ln |x-1|-\frac{3}{2} \ln |x+1|+C . \end{aligned} $$ 9.5. Partial Fraction Expansion: The General Case. Buckle up. When the denominator $Q(x)$ contains repeated factors or quadratic factors (or both) the partial fraction decomposition is more complicated. In the most general case the denominator $Q(x)$ can be factored in the form $$ Q(x)=\left(x-a_{1}\right)^{k_{1}} \cdots\left(x-a_{n}\right)^{k_{n}}\left(x^{2}+b_{1} x+c_{1}\right)^{\ell_{1}} \cdots\left(x^{2}+b_{m} x+c_{m}\right)^{\ell_{m}} $$ Here we assume that the factors $x-a_{1}, \ldots, x-a_{n}$ are all different, and we also assume that the factors $x^{2}+b_{1} x+c_{1}, \ldots, x^{2}+b_{m} x+c_{m}$ are all different. It is a theorem from advanced algebra that you can always write the rational function $P(x) / Q(x)$ as a sum of terms like this $$ \frac{P(x)}{Q(x)}=\cdots+\frac{A}{\left(x-a_{i}\right)^{k}}+\cdots+\frac{B x+C}{\left(x^{2}+b_{j} x+c_{j}\right)^{\ell}}+\cdots $$ How did this sum come about? For each linear factor $(x-a)^{k}$ in the denominator (4) you get terms $$ \frac{A_{1}}{x-a}+\frac{A_{2}}{(x-a)^{2}}+\cdots+\frac{A_{k}}{(x-a)^{k}} $$ in the decomposition. There are as many terms as the exponent of the linear factor that generated them. For each quadratic factor $\left(x^{2}+b x+c\right)^{\ell}$ you get terms $$ \frac{B_{1} x+C_{1}}{x^{2}+b x+c}+\frac{B_{2} x+C_{2}}{\left(x^{2}+b x+c\right)^{2}}+\cdots+\frac{B_{m} x+C_{m}}{\left(x^{2}+b x+c\right)^{\ell}} . $$ Again, there are as many terms as the exponent $\ell$ with which the quadratic factor appears in the denominator (4). In general, you find the constants $A_{\ldots}, B_{\ldots}$ and $C_{\ldots}$ by the method of equating coefficients. 9.6. Example. To do the integral $$ \int \frac{x^{2}+3}{x^{2}(x+1)\left(x^{2}+1\right)^{2}} d x $$ apply the method of equating coefficients to the form $$ \frac{x^{2}+3}{x^{2}(x+1)\left(x^{2}+1\right)^{2}}=\frac{A_{1}}{x}+\frac{A_{2}}{x^{2}}+\frac{A_{3}}{x+1}+\frac{B_{1} x+C_{1}}{x^{2}+1}+\frac{B_{2} x+C_{2}}{\left(x^{2}+1\right)^{2}} . $$ Solving this last problem will require solving a system of seven linear equations in the seven unknowns $A_{1}, A_{2}, A_{3}, B_{1}, C_{1}, B_{2}, C_{2}$. A computer program like Maple can do this easily, but it is a lot of work to do it by hand. In general, the method of equating coefficients requires solving $n$ linear equations in $n$ unknowns where $n$ is the degree of the denominator $Q(x)$. See Problem 104 for a worked example where the coefficients are found. 1 Unfortunately, in the presence of quadratic factors or repeated lin? ear factors the Heaviside trick does not give the whole answer; you must use the method of equating coefficients. 
Once you have found the partial fraction decomposition $(\mathcal{E} X)$ you still have to integrate the terms which appeared. The first three terms are of the form $\int A(x-a)^{-p} d x$ and they are easy to integrate: $$ \int \frac{A d x}{x-a}=A \ln |x-a|+C $$ and $$ \int \frac{A d x}{(x-a)^{p}}=\frac{A}{(1-p)(x-a)^{p-1}}+C $$ if $p>1$. The next, fourth term in $(\varepsilon X)$ can be written as $$ \begin{aligned} \int \frac{B_{1} x+C_{1}}{x^{2}+1} \mathrm{~d} x & =B_{1} \int \frac{x}{x^{2}+1} \mathrm{~d} x+C_{1} \int \frac{\mathrm{d} x}{x^{2}+1} \\ & =\frac{B_{1}}{2} \ln \left(x^{2}+1\right)+C_{1} \arctan x+C_{\text {integration const. }} \end{aligned} $$ While these integrals are already not very simple, the integrals $$ \int \frac{B x+C}{\left(x^{2}+b x+c\right)^{p}} \mathrm{~d} x \quad \text { with } p>1 $$ which can appear are particularly unpleasant. If you really must compute one of these, then complete the square in the denominator so that the integral takes the form $$ \int \frac{A x+B}{\left((x+b)^{2}+a^{2}\right)^{p}} d x . $$ After the change of variables $u=x+b$ and factoring out constants you have to do the integrals $$ \int \frac{d u}{\left(u^{2}+a^{2}\right)^{p}} \quad \text { and } \quad \int \frac{u d u}{\left(u^{2}+a^{2}\right)^{p}} . $$ Use the reduction formula we found in example 8.4 to compute this integral. An alternative approach is to use complex numbers (which are on the menu for this semester.) If you allow complex numbers then the quadratic factors $x^{2}+b x+c$ can be factored, and your partial fraction expansion only contains terms of the form $A /(x-a)^{p}$, although $A$ and $a$ can now be complex numbers. The integrals are then easy, but the answer has complex numbers in it, and rewriting the answer in terms of real numbers again can be quite involved. ## PROBLEMS ## DEFINITE VERSUS INDEFINITE INTEGRALS 1. Compute the following three integrals: $$ A=\int x^{-2} d x, \quad B=\int t^{-2} d t, \quad C=\int x^{-2} d t . $$ 2. One of the following three integrals is not the same as the other two: $$ A=\int_{1}^{4} x^{-2} d x, \quad B=\int_{1}^{4} t^{-2} d t, \quad C=\int_{1}^{4} x^{-2} d t . $$ Which one? The following integrals are straightforward provided you know the list of standard antiderivatives. They can be done without using substitution or any other tricks, and you learned them in first semester calculus. 3. $\begin{aligned} & \int\left\{6 x^{5}-2 x^{-4}-7 x\right. \\ &\left.+3 / x-5+4 e^{x}+7^{x}\right\} d x\end{aligned}$ 4. $\int\left(x / a+a / x+x^{a}+a^{x}+a x\right) d x$ 5. $\int\left\{\sqrt{x}-\sqrt[3]{x^{4}}+\frac{7}{\sqrt[3]{x^{2}}}-6 e^{x}+1\right\} d x$ 6. $\int\left\{2^{x}+\left(\frac{1}{2}\right)^{x}\right\} d x$ 7. $\int_{-3}^{0}\left(5 y^{4}-6 y^{2}+14\right) d y$ 8. $\int_{1}^{3}\left(\frac{1}{t^{2}}-\frac{1}{t^{4}}\right) d t$ 9. $\int_{1}^{2} \frac{t^{6}-t^{2}}{t^{4}} d t$ 10. $\int_{1}^{2} \frac{x^{2}+1}{\sqrt{x}} d x$ 11. $\int_{0}^{2}\left(x^{3}-1\right)^{2} d x$ 12. $\int_{1}^{2}(x+1 / x)^{2} d x$ 13. $\int_{3}^{3} \sqrt{x^{5}+2} d x$ 14. $\int_{1}^{-1}(x-1)(3 x+2) d x$ 15. $\int_{1}^{4}(\sqrt{t}-2 / \sqrt{t}) d t$ 16. $\int_{1}^{8}\left(\sqrt[3]{r}+\frac{1}{\sqrt[3]{r}}\right) d r$ 17. $\int_{-1}^{0}(x+1)^{3} d x$ 18. $\int_{1}^{e} \frac{x^{2}+x+1}{x} d x$ 19. $\int_{4}^{9}\left(\sqrt{x}+\frac{1}{\sqrt{x}}\right)^{2} d x$ 20. $\int_{0}^{1}\left(\sqrt[4]{x^{5}}+\sqrt[5]{x^{4}}\right) d x$ 21. $\int_{1}^{8} \frac{x-1}{\sqrt[3]{x^{2}}} d x$ 22. $\int_{\pi / 4}^{\pi / 3} \sin t d t$ 23. 
$\int_{0}^{\pi / 2}(\cos \theta+2 \sin \theta) d \theta$ 24. $\int_{0}^{\pi / 2}(\cos \theta+\sin 2 \theta) d \theta$ 25. $\int_{2 \pi / 3}^{\pi} \frac{\tan x}{\cos x} d x$ 26. $\int_{\pi / 3}^{\pi / 2} \frac{\cot x}{\sin x} d x$ 27. $\int_{1}^{\sqrt{3}} \frac{6}{1+x^{2}} d x$ 28. $\int_{0}^{0.5} \frac{d x}{\sqrt{1-x^{2}}}$ 29. $\int_{4}^{8}(1 / x) d x$ 30. $\int_{\ln 3}^{\ln 6} 8 e^{x} d x$ 31. $\int_{8}^{9} 2^{t} d t$ 32. $\int_{-e^{2}}^{-e} \frac{3}{x} d x$ 33. $\int_{-2}^{3}\left|x^{2}-1\right| d x$ 34. $\int_{-1}^{2}\left|x-x^{2}\right| d x$ 35. $\int_{-1}^{2}(x-2|x|) d x$ 36. $\int_{0}^{2}\left(x^{2}-|x-1|\right) d x$ 37. $\int_{0}^{2} f(x) d x$ where $f(x)= \begin{cases}x^{4} & \text { if } 0 \leq x<1 \\ x^{5}, & \text { if } 1 \leq x \leq 2\end{cases}$ 38. $\int_{-\pi}^{\pi} f(x) d x$ where $f(x)= \begin{cases}x, & \text { if }-\pi \leq x \leq 0 \\ \sin x, & \text { if } 0<x \leq \pi\end{cases}$ 39. Compute $$ I=\int_{0}^{2} 2 x\left(1+x^{2}\right)^{3} d x $$ in two different ways: (i) Expand $\left(1+x^{2}\right)^{3}$, multiply with $2 x$, and integrate each term. (ii) Use the substitution $u=1+x^{2}$. 40. Compute $$ I_{n}=\int 2 x\left(1+x^{2}\right)^{n} d x $$ 41. If $f^{\prime}(x)=x-1 / x^{2}$ and $f(1)=1 / 2$ find $f(x)$. 42. Consider $\int_{0}^{2}|x-1| d x$. Let $f(x)=|x-1|$ so that $$ f(x)= \begin{cases}x-1 & \text { if } x \geq 1 \\ 1-x & \text { if } x<1\end{cases} $$ Define $$ F(x)=\left\{\begin{array}{cl} \frac{x^{2}}{2}-x & \text { if } x \geq 1 \\ x-\frac{x^{2}}{2} & \text { if } x<1 \end{array}\right. $$ Then since $F$ is an antiderivative of $f$ we have by the Fundamental Theorem of Calculus: $$ \int_{0}^{2}|x-1| d x=\int_{0}^{2} f(x) d x=F(2)-F(0)=\left(\frac{2^{2}}{2}-2\right)-\left(0-\frac{0^{2}}{2}\right)=0 $$ But this integral cannot be zero, $f(x)$ is positive except at one point. How can this be? ## BASIC SUBSTITUTIONS Use a substitution to evaluate the following integrals. 43. $\int_{1}^{2} \frac{u \mathrm{~d} u}{1+u^{2}}$ 50. $\int_{1}^{2} \frac{\ln 2 x}{x} \mathrm{~d} x$ 44. $\int_{1}^{2} \frac{x \mathrm{~d} x}{1+x^{2}}$ 51. $\int \frac{\ln \left(2 x^{2}\right)}{x} d x$ 45. $\int_{\pi / 4}^{\pi / 3} \sin ^{2} \theta \cos \theta d \theta$ 52. $\int_{\xi=0}^{\sqrt{ } 2} \xi\left(1+2 \xi^{2}\right)^{10} \mathrm{~d} \xi$ 46. $\int_{2}^{3} \frac{1}{r \ln r}, \mathrm{~d} r$ 53. $\int_{2}^{3} \sin \rho(\cos 2 \rho)^{4} \mathrm{~d} \rho$ 47. $\int \frac{\sin 2 x}{1+\cos ^{2} x} d x$ 54. $\int \alpha e^{-\alpha^{2}} \mathrm{~d} \alpha$ 48. $\int \frac{\sin 2 x}{1+\sin x} d x$ 55. $\int \frac{e^{\frac{1}{t}}}{t^{2}} \mathrm{~d} t$ 49. $\int_{0}^{1} z \sqrt{1-z^{2}} \mathrm{~d} z$ 56. Group problem. The inverse sine function is the inverse function to the (restricted) sine function, i.e. when $-\pi / 2 \leq \theta \leq \pi / 2$ we have $$ \theta=\arcsin (y) \Longleftrightarrow y=\sin \theta . $$ The inverse sine function is sometimes called Arc Sine function and denoted $\theta=$ $\arcsin (y)$. We avoid the notation $\sin ^{-1}(x)$ which is used by some as it is ambiguous (it could stand for either $\arcsin x$ or for $(\sin x)^{-1}=1 /(\sin x)$ ). (i) If $y=\sin \theta$, express $\sin \theta, \cos \theta$, and $\tan \theta$ in terms of $y$ when $0 \leq \theta<\pi / 2$. (ii) If $y=\sin \theta$, express $\sin \theta, \cos \theta$, and $\tan \theta$ in terms of $y$ when $\pi / 2<\theta \leq \pi$. (iii) If $y=\sin \theta$, express $\sin \theta, \cos \theta$, and $\tan \theta$ in terms of $y$ when $-\pi / 2<\theta<0$. 
(iv) Evaluate $\int \frac{\mathrm{d} y}{\sqrt{1-y^{2}}}$ using the substitution $y=\sin \theta$, but give the final answer in terms of $y$. 57. Group problem. Express in simplest form: (i) $\cos \left(\arcsin ^{-1}(x)\right)$; (ii) $\tan \left\{\arcsin \frac{\ln \frac{1}{4}}{\ln 16}\right\}$; (iii) $\sin (2 \arctan a)$ 58. Group problem. Draw the graph of $y=f(x)=\arcsin (\sin (x))$, for $-2 \pi \leq x \leq+2 \pi$. Make sure you get the same answer as your graphing calculator. 59. Use the change of variables formula to evaluate $\int_{1 / 2}^{\sqrt{3} / 2} \frac{\mathrm{d} x}{\sqrt{1-x^{2}}}$ first using the substitution $x=\sin u$ and then using the substitution $x=\cos u$. 60. The inverse tangent function is the inverse function to the (restricted) tangent function, i.e. for $\pi / 2<\theta<\pi / 2$ we have $$ \theta=\arctan (w) \Longleftrightarrow w=\tan \theta . $$ The inverse tangent function is sometimes called Arc Tangent function and denoted $\theta=\arctan (y)$. We avoid the notation $\tan ^{-1}(x)$ which is used by some as it is ambiguous (it could stand for either $\arctan x$ or for $\left.(\tan x)^{-1}=1 /(\tan x)\right)$. (i) If $w=\tan \theta$, express $\sin \theta$ and $\cos \theta$ in terms of $w$ when $$ \begin{array}{lll} \text { (a) } 0 \leq \theta<\pi / 2 & \text { (b) } \pi / 2<\theta \leq \pi & \text { (c) }-\pi / 2<\theta<0 \end{array} $$ (ii) Evaluate $\int \frac{\mathrm{d} w}{1+w^{2}}$ using the substitution $w=\tan \theta$, but give the final answer in terms of $w$. 61. Use the substitution $x=\tan (\theta)$ to find the following integrals. Give the final answer in terms of $x$. (a) $\int \sqrt{1+x^{2}} \mathrm{~d} x$ (b) $\int \frac{1}{\left(1+x^{2}\right)^{2}} \mathrm{~d} x$ (c) $\int \frac{\mathrm{d} x}{\sqrt{1+x^{2}}}$ Evaluate these integrals: 62. $\int \frac{\mathrm{d} x}{\sqrt{1-x^{2}}}$ 65. $\int \frac{x \mathrm{~d} x}{\sqrt{1-4 x^{4}}}$ 68. $\int_{0}^{\sqrt{3} / 2} \frac{\mathrm{d} x}{\sqrt{1-x^{2}}}$ 63. $\int \frac{\mathrm{d} x}{\sqrt{4-x^{2}}}$ 66. $\int_{-1 / 2}^{1 / 2} \frac{d x}{\sqrt{4-x^{2}}}$ 69. $\int \frac{\mathrm{d} x}{x^{2}+1}$, 64. $\int \frac{\mathrm{d} x}{\sqrt{2 x-x^{2}}}$ 67. $\int_{-1}^{1} \frac{\mathrm{d} x}{\sqrt{4-x^{2}}}$ 70. $\int \frac{\mathrm{d} x}{x^{2}+a^{2}}$, 71. $\int \frac{\mathrm{d} x}{7+3 x^{2}}$, 72. $\int \frac{\mathrm{d} x}{3 x^{2}+6 x+6}$ 74. $\int_{a}^{a \sqrt{3}} \frac{\mathrm{d} x}{x^{2}+a^{2}}$. 73. $\int_{1}^{\sqrt{3}} \frac{\mathrm{d} x}{x^{2}+1}$ ## INTEGRATION BY PARTS AND REDUCTION FORMULAE 75. Evaluate $\int x^{n} \ln x \mathrm{~d} x$ where $n \neq-1$. 76. Evaluate $\int e^{a x} \sin b x \mathrm{~d} x$ where $a^{2}+b^{2} \neq 0$. [Hint: Integrate by parts twice.] 77. Evaluate $\int e^{a x} \cos b x \mathrm{~d} x$ where $a^{2}+b^{2} \neq 0$. 78. Prove the formula $$ \int x^{n} e^{x} \mathrm{~d} x=x^{n} e^{x}-n \int x^{n-1} e^{x} \mathrm{~d} x $$ and use it to evaluate $\int x^{2} e^{x} \mathrm{~d} x$. 79. Prove the formula $$ \int \sin ^{n} x \mathrm{~d} x=-\frac{1}{n} \cos x \sin ^{n-1} x+\frac{n-1}{n} \int \sin ^{n-2} x \mathrm{~d} x, \quad n \neq 0 $$ 80. Evaluate $\int \sin ^{2} x \mathrm{~d} x$. Show that the answer is the same as the answer you get using the half angle formula. 81. Evaluate $\int_{0}^{\pi} \sin ^{14} x \mathrm{~d} x$ 82. Prove the formula $$ \int \cos ^{n} x \mathrm{~d} x=\frac{1}{n} \sin x \cos ^{n-1} x+\frac{n-1}{n} \int \cos ^{n-2} x \mathrm{~d} x, \quad n \neq 0 $$ and use it to evaluate $\int_{0}^{\pi / 4} \cos ^{4} x \mathrm{~d} x$. 83. 
Prove the formula $$ \int x^{m}(\ln x)^{n} \mathrm{~d} x=\frac{x^{m+1}(\ln x)^{n}}{m+1}-\frac{n}{m+1} \int x^{m}(\ln x)^{n-1} \mathrm{~d} x, \quad m \neq-1, $$ and use it to evaluate the following integrals: 84. $\int \ln x \mathrm{~d} x$ 85. $\int(\ln x)^{2} \mathrm{~d} x$ 86. $\int x^{3}(\ln x)^{2} \mathrm{~d} x$ 87. Evaluate $\int x^{-1} \ln x \mathrm{~d} x$ by another method. [Hint: the solution is short!] 88. For an integer $n>1$ derive the formula $$ \int \tan ^{n} x \mathrm{~d} x=\frac{1}{n-1} \tan ^{n-1} x-\int \tan ^{n-2} x \mathrm{~d} x $$ Using this, find $\int_{0}^{\pi / 4} \tan ^{5} x \mathrm{~d} x$ by doing just one explicit integration. Use the reduction formula from example 8.4 to compute these integrals: 89. $\int \frac{\mathrm{d} x}{\left(1+x^{2}\right)^{3}}$ 90. $\int \frac{\mathrm{d} x}{\left(1+x^{2}\right)^{4}}$ 91. $\int \frac{x \mathrm{~d} x}{\left(1+x^{2}\right)^{4}}$ [Hint: $\int x /\left(1+x^{2}\right)^{n} \mathrm{~d} x$ is easy.] 92. $\int \frac{1+x}{\left(1+x^{2}\right)^{2}} \mathrm{~d} x$ 93. Group problem. The reduction formula from example 8.4 is valid for all $n \neq 0$. In particular, $n$ does not have to be an integer, and it does not have to be positive. Find a relation between $\int \sqrt{1+x^{2}} \mathrm{~d} x$ and $\int \frac{\mathrm{d} x}{\sqrt{1+x^{2}}}$ by setting $n=-\frac{1}{2}$. 94. Apply integration by parts to $$ \int \frac{1}{x} d x $$ Let $u=\frac{1}{x}$ and $d v=d x$. This gives us, $d u=\frac{-1}{x^{2}} d x$ and $v=x$. Simplifying $$ \int \frac{1}{x} d x=\left(\frac{1}{x}\right)(x)-\int x \frac{-1}{x^{2}} d x $$ $$ \int \frac{1}{x} d x=1+\int \frac{1}{x} d x $$ and subtracting the integral from both sides gives us $0=1$. How can this be? ## INTEGRATION OF RATIONAL FUNCTIONS Express each of the following rational functions as a polynomial plus a proper rational function. (See $\S 9.1$ for definitions.) 95. $\frac{x^{3}}{x^{3}-4}$, 97. $\frac{x^{3}-x^{2}-x-5}{x^{3}-4}$. 96. $\frac{x^{3}+2 x}{x^{3}-4}$, 98. $\frac{x^{3}-1}{x^{2}-1}$. ## COMPLETING THE SQUARE Write $a x^{2}+b x+c$ in the form $a(x+p)^{2}+q$, i.e. find $p$ and $q$ in terms of $a, b$, and $c$ (this procedure, which you might remember from high school algebra, is called "completing the square."). Then evaluate the integrals 99. $\int \frac{\mathrm{d} x}{x^{2}+6 x+8}$, 102. Use the method of equating coeffi- cients to find numbers $A, B, C$ such that 100. $\int \frac{\mathrm{d} x}{x^{2}+6 x+10}$, $\frac{x^{2}+3}{x(x+1)(x-1)}=\frac{A}{x}+\frac{B}{x+1}+\frac{C}{x-1}$ 101. $\int \frac{\mathrm{d} x}{5 x^{2}+20 x+25}$. and then evaluate the integral $\int \frac{x^{2}+3}{x(x+1)(x-1)} \mathrm{d} x$. 103. Do the previous problem using the Heaviside trick. 104. Find the integral $\int \frac{x^{2}+3}{x^{2}(x-1)} \mathrm{d} x$. Evaluate the following integrals: 105. $\int_{-5}^{-2} \frac{x^{4}-1}{x^{2}+1} d x$ 106. $\int \frac{x^{3} \mathrm{~d} x}{x^{4}+1}$ 107. $\int \frac{x^{5} \mathrm{~d} x}{x^{2}-1}$ 108. $\int \frac{x^{5} \mathrm{~d} x}{x^{4}-1}$ 109. $\int \frac{x^{3}}{x^{2}-1} d x$ 110. $\int \frac{e^{3 x} \mathrm{~d} x}{e^{4 x}-1}$ 111. $\int \frac{e^{x} \mathrm{~d} x}{\sqrt{1+e^{2 x}}}$ 112. $\int \frac{e^{x} \mathrm{~d} x}{e^{2 x}+2 e^{x}+2}$ 113. $\int \frac{\mathrm{d} x}{1+e^{x}}$ 114. $\int \frac{\mathrm{d} x}{x\left(x^{2}+1\right)}$ 115. $\int \frac{\mathrm{d} x}{x\left(x^{2}+1\right)^{2}}$ 116. $\int \frac{\mathrm{d} x}{x^{2}(x-1)}$ 117. $\int \frac{1}{(x-1)(x-2)(x-3)} d x$ 118. $\int \frac{x^{2}+1}{(x-1)(x-2)(x-3)} d x$ 119. 
$\int \frac{x^{3}+1}{(x-1)(x-2)(x-3)} \mathrm{d} x$ 120. Group problem. (a) Compute $\int_{1}^{2} \frac{\mathrm{d} x}{x(x-h)}$ where $h$ is a positive number. (b) What happens to your answer to (a) when $h \rightarrow 0^{+}$? (c) Compute $\int_{1}^{2} \frac{\mathrm{d} x}{x^{2}}$. ## MISCELLANEOUS AND MIXED INTEGRALS 121. Find the area of the region bounded by the curves $$ x=1, \quad x=2, \quad y=\frac{2}{x^{2}-4 x+5}, \quad y=\frac{x^{2}-8 x+7}{x^{2}-8 x+16} . $$ 122. Let $\mathcal{P}$ be the piece of the parabola $y=x^{2}$ on which $0 \leq x \leq 1$. (i) Find the area between $\mathcal{P}$, the $x$-axis and the line $x=1$. (ii) Find the length of $\mathcal{P}$. 123. Let $a$ be a positive constant and $$ F(x)=\int_{0}^{x} \sin (a \theta) \cos (\theta) \mathrm{d} \theta . $$ [Hint: use a trig identity for $\sin A \cos B$, or wait until we have covered complex exponentials and then come back to do this problem.] (i) Find $F(x)$ if $a \neq 1$. (ii) Find $F(x)$ if $a=1$. (Don't divide by zero.) Evaluate the following integrals: 124. $\int_{0}^{a} x \sin x \mathrm{~d} x$ 126. $\int_{3}^{4} \frac{x \mathrm{~d} x}{\sqrt{x^{2}-1}}$ 125. $\int_{0}^{a} x^{2} \cos x \mathrm{~d} x$ 126. $\int_{1 / 4}^{1 / 3} \frac{x \mathrm{~d} x}{\sqrt{1-x^{2}}}$ 127. $\int_{3}^{4} \frac{\mathrm{d} x}{x \sqrt{x^{2}-1}}$ 128. $\int \frac{x}{(x-1)^{3}} d x$ 129. $\int \frac{x \mathrm{~d} x}{x^{2}+2 x+17}$ 130. $\int \frac{4}{(x-1)^{3}(x+1)} d x$ 131. $\int \frac{x^{4}}{x^{2}-36} \mathrm{~d} x$ 132. $\int \frac{1}{\sqrt{1-2 x-x^{2}}} \mathrm{~d} x$ 133. $\int \frac{x^{4}}{36-x^{2}} \mathrm{~d} x$ 134. $\int_{1}^{e} x \ln x \mathrm{~d} x$ 135. $\int \frac{\left(x^{2}+1\right) \mathrm{d} x}{x^{4}-x^{2}}$ 136. $\int 2 x \ln (x+1) d x$ 137. $\int \frac{\left(x^{2}+3\right) d x}{x^{4}-2 x^{2}}$ 138. $\int_{e^{2}}^{e^{3}} x^{2} \ln x \mathrm{~d} x$ 139. $\int e^{x}(x+\cos (x)) \mathrm{d} x$ 140. $\int_{1}^{e} x(\ln x)^{3} \mathrm{~d} x$ 141. $\int\left(e^{x}+\ln (x)\right) \mathrm{d} x$ 142. $\int \arctan (\sqrt{x}) \mathrm{d} x$ 143. $\int \frac{3 x^{2}+2 x-2}{x^{3}-1} \mathrm{~d} x$ 144. $\int x(\cos x)^{2} \mathrm{~d} x$ 145. $\int_{0}^{\pi} \sqrt{1+\cos (6 w)} \mathrm{d} w$ 146. $\int \frac{x^{4}}{x^{4}-16} \mathrm{~d} x$ 147. $\int \frac{1}{1+\sin (x)} \mathrm{d} x$ 148. Find $$ \int \frac{\mathrm{d} x}{x(x-1)(x-2)(x-3)} $$ and $$ \int \frac{\left(x^{3}+1\right) \mathrm{d} x}{x(x-1)(x-2)(x-3)} $$ 150. Find $$ \int \frac{\mathrm{d} x}{x^{3}+x^{2}+x+1} $$ 151. Group problem. You don't always have to find the antiderivative to find a definite integral. This problem gives you two examples of how you can avoid finding the antiderivative. (i) To find $$ I=\int_{0}^{\pi / 2} \frac{\sin x \mathrm{~d} x}{\sin x+\cos x} $$ you use the substitution $u=\pi / 2-x$. The new integral you get must of course be equal to the integral $I$ you started with, so if you add the old and new integrals you get $2 I$. If you actually do this you will see that the sum of the old and new integrals is very easy to compute. (ii) Use the same trick to find $\int_{0}^{\pi / 2} \sin ^{2} x \mathrm{~d} x$ 152. Group problem. Graph the equation $x^{\frac{2}{3}}+y^{\frac{2}{3}}=a^{\frac{2}{3}}$. Compute the area bounded by this curve. 153. Group problem. The Bow-Tie Graph. Graph the equation $y^{2}=x^{4}-x^{6}$. Compute the area bounded by this curve. 153. Group problem. The FAn-TAILed Fish. Graph the equation $$ y^{2}=x^{2}\left(\frac{1-x}{1+x}\right) . $$ Find the area enclosed by the loop. (Hint: Rationalize the denominator of the integrand.) 155. 
Find the area of the region bounded by the curves $$ x=2, \quad y=0, \quad y=x \ln \frac{x}{2} $$ 156. Find the volume of the solid of revolution obtained by rotating around the $x$-axis the region bounded by the lines $x=5, x=10, y=0$, and the curve $$ y=\frac{x}{\sqrt{x^{2}+25}} \text {. } $$ 157. How to find the integral of $f(x)=\frac{1}{\cos x}$ (i) Verify the answer given in the table in the lecture notes. (ii) Note that $$ \frac{1}{\cos x}=\frac{\cos x}{\cos ^{2} x}=\frac{\cos x}{1-\sin ^{2} x}, $$ and apply the substitution $s=\sin x$ followed by a partial fraction decomposition to compute $\int \frac{\mathrm{d} x}{\cos x}$. ## RATIONALIZING SUBSTITUTIONS Recall that a rational function is the ratio of two polynomials. 158. Prove that the family of rational functions is closed under taking sums, products, quotients (except do not divide by the zero polynomial), and compositions. To integrate rational functions of $x$ and $\sqrt{1+x^{2}}$ one may do a trigonometric substitution, e.g., $x=\tan (\theta)$ and $1+\tan ^{2}(\theta)=\sec ^{2}(\theta)$. This turns the problem into a trig integral. Or one could use $1+\sinh ^{2}(t)=\cosh ^{2}(t)$ and convert the problem into a rational function of $e^{t}$. Another technique which works is to use the parameterization of the hyperbola by rational functions: $$ x=\frac{1}{2}\left(t-\frac{1}{t}\right) \quad y=\frac{1}{2}\left(t+\frac{1}{t}\right) $$ 159. Show that $y^{2}-x^{2}=1$ and hence $y=\sqrt{1+x^{2}}$. Use this to rationalize the integrals, i.e, make them into an integral of a rational function of $t$. You do not need to integrate the rational function. 160. $\int \sqrt{1+x^{2}} \mathrm{~d} x$ 162. $\int \frac{\mathrm{d} s}{\sqrt{s^{2}+2 s+3}}$ 161. $\int \frac{x^{4}}{\sqrt{1+x^{2}}} \mathrm{~d} x$ 163. Show that $t=x+y=x+\sqrt{1+x^{2}}$. Hence if $$ \int g(x) d x=\int f(t) d t=F(t)+C $$ then $$ \int g(x) d x=F\left(x+\sqrt{1-x^{2}}\right)+C . $$ 164. Note that $x=\sqrt{y^{2}-1}$. Show that $t$ is a function of $y$. Express these integrals as integrals of rational functions of $t$. 165. $\int \frac{\mathrm{d} y}{\left(y^{2}-1\right)^{1 / 2}}$ 167. $\int \frac{s^{4}}{\left(s^{2}-36\right)^{3 / 2}} \mathrm{~d} s$ 166. $\int \frac{y^{4}}{\left(y^{2}-1\right)^{1 / 2}} \mathrm{~d} y$ 168. $\int \frac{\mathrm{d} s}{\left(s^{2}+2 s\right)^{1 / 2}}$ 169. Note that $1=\left(\frac{x}{y}\right)^{2}+\left(\frac{1}{y}\right)^{2}$. What substitution would rationalize integrands which have $\sqrt{1-z^{2}}$ in them? Show how to write $t$ as a function of $z$. Express these integrals as integrals of rational functions of $t$. 170. $\int \sqrt{1-z^{2}} \mathrm{~d} z$ 173. $\int \frac{s^{4}}{\left(36-s^{2}\right)^{3 / 2}} \mathrm{~d} s$ 171. $\int \frac{\mathrm{d} z}{\sqrt{1-z^{2}}}$ 172. $\int \frac{z^{2}}{\sqrt{1-z^{2}}} \mathrm{~d} z$ 174. $\int \frac{\mathrm{d} s}{(s+5) \sqrt{s^{2}+5 s}}$ Integrating a rational function of $\sin$ and $\cos$ $$ \int r(\sin (\theta), \cos (\theta)) d \theta $$ Examples of such integrals are: $$ \int \frac{(\cos \theta)^{2}-(\cos \theta)(\sin \theta)+1}{(\cos \theta)^{2}+(\sin \theta)^{3}+(\cos \theta)+1} d \theta $$ or $$ \int \frac{(\sin \theta)^{3}(\cos \theta)+(\cos \theta)+(\sin \theta)+1}{(\cos \theta)^{2}(\sin \theta)^{3}-(\cos \theta)} d \theta $$ The goal of the following problems is to show that such integrals can be rationalized, not to integrat the rational function. 175. Substitute $z=\sin (\theta)$ and express $\int r(\sin (\theta), \cos (\theta)) d \theta$ as a rational function of $z$ and $\sqrt{1-z^{2}}$. 
176. Express it as rational function of $t$. 177. Express $t$ as a function of $\theta$. $$ \text { Is } \frac{\pi}{2} \text { a rational number? } $$ 178. Consider the integral: $$ \int \sqrt{1-x^{2}} d x $$ Substitute $u=1-x^{2}$ so $$ \begin{aligned} & u=1-x^{2} \\ & x=\sqrt{1-u}=(1-u)^{\frac{1}{2}} \\ & d x=\left(\frac{1}{2}\right)(1-u)^{-\frac{1}{2}}(-1) d u \end{aligned} $$ Hence $$ \int \sqrt{1-x^{2}} d x=\int \sqrt{u}\left(\frac{1}{2}\right)(1-u)^{-\frac{1}{2}}(-1) d u $$ Take the definite integral from $x=-1$ to $x=1$ and note that $u=0$ when $x=-1$ and $u=0$ also when $x=1$, so $$ \int_{-1}^{1} \sqrt{1-x^{2}} d x=\int_{0}^{0} \sqrt{u}\left(\frac{1}{2}\right)(1-u)^{-\frac{1}{2}}(-1) d u=0 $$ The last being zero since $\int_{0}^{0}$ anything is always 0 . But the integral on the left is equal to half the area of the unit disk, hence $\frac{\pi}{2}$. Therefor $\frac{\pi}{2}=0$ which is a rational number. How can this be? # Chapter 2: Taylor's Formula and Infinite Series All continuous functions which vanish at $x=a$<br>are approximately equal at $x=a$,<br>but some are more approximately equal than others. ## Taylor Polynomials Suppose you need to do some computation with a complicated function $y=f(x)$, and suppose that the only values of $x$ you care about are close to some constant $x=a$. Since polynomials are simpler than most other functions, you could then look for a polynomial $y=P(x)$ which somehow "matches" your function $y=f(x)$ for values of $x$ close to $a$. And you could then replace your function $f$ with the polynomial $P$, hoping that the error you make isn't too big. Which polynomial you will choose depends on when you think a polynomial "matches" a function. In this chapter we will say that a polynomial $P$ of degree $n$ matches a function $f$ at $x=a$ if $P$ has the same value and the same derivatives of order $1,2, \ldots, n$ at $x=a$ as the function $f$. The polynomial which matches a given function at some point $x=a$ is the Taylor polynomial of $f$. It is given by the following formula. 11.1. Definition. The Taylor polynomial of a function $y=f(x)$ of degree $n$ at a point $a$ is the polynomial $$ T_{n}^{a} f(x)=f(a)+f^{\prime}(a)(x-a)+\frac{f^{\prime \prime}(a)}{2 !}(x-a)^{2}+\cdots+\frac{f^{(n)}(a)}{n !}(x-a)^{n} . $$ (Recall that $n !=1 \cdot 2 \cdot 3 \cdots n$, and by definition $0 !=1$. 11.2. Theorem. The Taylor polynomial has the following property: it is the only polynomial $P(x)$ of degree $n$ whose value and whose derivatives of orders $1,2, \ldots$, and $n$ are the same as those of $f$, i.e. it's the only polynomial of degree $n$ for which $$ P(a)=f(a), \quad P^{\prime}(a)=f^{\prime}(a), \quad P^{\prime \prime}(a)=f^{\prime \prime}(a), \quad \ldots, \quad P^{(n)}(a)=f^{(n)}(a) $$ holds. Proof. We do the case $a=0$, for simplicity. Let $n$ be given, consider a polynomial $P(x)$ of degree $n$, say, $$ P(x)=a_{0}+a_{1} x+a_{2} x^{2}+a_{3} x^{3}+\cdots+a_{n} x^{n}, $$ and let's see what its derivatives look like. 
They are: $$ \begin{aligned} & P(x)=a_{0}+a_{1} x+a_{2} x^{2}+a_{3} x^{3}+a_{4} x^{4}+\cdots \\ & P^{\prime}(x)=a_{1}+2 a_{2} x+3 a_{3} x^{2}+4 a_{4} x^{3}+\cdots \\ & P^{(2)}(x)=1 \cdot 2 a_{2}+2 \cdot 3 a_{3} x+3 \cdot 4 a_{4} x^{2}+\cdots \\ & P^{(3)}(x)=1 \cdot 2 \cdot 3 a_{3}+2 \cdot 3 \cdot 4 a_{4} x+\cdots \\ & \begin{aligned} P^{(4)}(x)= & 1 \cdot 2 \cdot 3 \cdot 4 a_{4} \quad+\cdots \end{aligned} \end{aligned} $$ When you set $x=0$ all the terms which have a positive power of $x$ vanish, and you are left with the first entry on each line, i.e. $$ P(0)=a_{0}, \quad P^{\prime}(0)=a_{1}, \quad P^{(2)}(0)=2 a_{2}, \quad P^{(3)}(0)=2 \cdot 3 a_{3}, \text { etc. } $$ and in general $$ P^{(k)}(0)=k ! a_{k} \text { for } 0 \leq k \leq n . $$ For $k \geq n+1$ the derivatives $p^{(k)}(x)$ all vanish of course, since $P(x)$ is a polynomial of degree $n$. Therefore, if we want $P$ to have the same values and derivatives at $x=0$ of orders $1, \ldots, n$ as the function $f$, then we must have $k ! a_{k}=P^{(k)}(0)=f^{(k)}(0)$ for all $k \leq n$. Thus $$ a_{k}=\frac{f^{(k)}(0)}{k !} \quad \text { for } 0 \leq k \leq n $$ ## Examples Note that the zeroth order Taylor polynomial is just a constant, $$ T_{0}^{a} f(x)=f(a), $$ while the first order Taylor polynomial is $$ T_{1}^{a} f(x)=f(a)+f^{\prime}(a)(x-a) . $$ This is exactly the linear approximation of $f(x)$ for $x$ close to $a$ which was derived in 1st semester calculus. The Taylor polynomial generalizes this first order approximation by providing "higher order approximations" to $f$. Figure 1. The Taylor polynomials of degree $\mathbf{0}, \mathbf{1}$ and 2 of $f(x)=e^{x}$ at $a=0$. The zeroth order Taylor polynomial has the right value at $x=0$ but it doesn't know whether or not the function $f$ is increasing at $x=0$. The first order Taylor polynomial has the right slope at $x=0$, but it doesn't see if the graph of $f$ is curved up or down at $x=0$. The second order Taylor polynomial also has the right curvature at $x=0$. Most of the time we will take $a=0$ in which case we write $T_{n} f(x)$ instead of $T_{n}^{a} f(x)$, and we get a slightly simpler formula $$ T_{n} f(x)=f(0)+f^{\prime}(0) x+\frac{f^{\prime \prime}(0)}{2 !} x^{2}+\cdots+\frac{f^{(n)}(0)}{n !} x^{n} . $$ You will see below that for many functions $f(x)$ the Taylor polynomials $T_{n} f(x)$ give better and better approximations as you add more terms (i.e. as you increase $n$ ). For this reason the limit when $n \rightarrow \infty$ is often considered, which leads to the infinite sum $$ T_{\infty} f(x)=f(0)+f^{\prime}(0) x+\frac{f^{\prime \prime}(0)}{2 !} x^{2}+\frac{f^{\prime \prime \prime}(0)}{3 !} x^{3}+\cdots $$ At this point we will not try to make sense of the "sum of infinitely many numbers". 12.1. Example: Compute the Taylor polynomials of degree 0,1 and 2 of $f(x)=e^{x}$ at $a=0$, and plot them. One has $$ f(x)=e^{x} \Longrightarrow f^{\prime}(x)=e^{x} \Longrightarrow f^{\prime \prime}(x)=e^{x}, $$ so that $$ f(0)=1, \quad f^{\prime}(0)=1, \quad f^{\prime \prime}(0)=1 . $$ Therefore the first three Taylor polynomials of $e^{x}$ at $a=0$ are Figure 2. The top edge of the shaded region is the graph of $y=e^{x}$. The graphs are of the functions $y=1+x+C x^{2}$ for various values of $C$. These graphs all are tangent at $x=0$, but one of the parabolas matches the graph of $y=e^{x}$ better than any of the others. $$ \begin{aligned} & T_{0} f(x)=1 \\ & T_{1} f(x)=1+x \\ & T_{2} f(x)=1+x+\frac{1}{2} x^{2} . \end{aligned} $$ The graphs are found in Figure 2. 
As you can see from the graphs, the Taylor polynomial of degree 0, i.e. $T_{0} f(x)=1$, captures the fact that $e^{x}$, by virtue of its continuity, does not change very much if $x$ stays close to $x=0$.

The Taylor polynomial of degree 1, i.e. $T_{1} f(x)=1+x$ corresponds to the tangent line to the graph of $f(x)=e^{x}$, and so it also captures the fact that the function $f(x)$ is increasing near $x=0$. Clearly $T_{1} f(x)$ is a better approximation to $e^{x}$ than $T_{0} f(x)$.

The graphs of both $y=T_{0} f(x)$ and $y=T_{1} f(x)$ are straight lines, while the graph of $y=e^{x}$ is curved (in fact, convex). The second order Taylor polynomial captures this convexity. In fact, the graph of $y=T_{2} f(x)$ is a parabola, and since it has the same first and second derivative at $x=0$, its curvature is the same as the curvature of the graph of $y=e^{x}$ at $x=0$. So it seems that $y=T_{2} f(x)=1+x+x^{2} / 2$ is an approximation to $y=e^{x}$ which beats both $T_{0} f(x)$ and $T_{1} f(x)$.

12.2. Example: Find the Taylor polynomials of $f(x)=\sin x$. When you start computing the derivatives of $\sin x$ you find
$$
f(x)=\sin x, \quad f^{\prime}(x)=\cos x, \quad f^{\prime \prime}(x)=-\sin x, \quad f^{(3)}(x)=-\cos x,
$$
and thus
$$
f^{(4)}(x)=\sin x
$$
So after four derivatives you're back to where you started, and the sequence of derivatives of $\sin x$ cycles through the pattern
$$
\sin x, \cos x,-\sin x,-\cos x, \sin x, \cos x,-\sin x,-\cos x, \sin x, \ldots
$$
on and on. At $x=0$ you then get the following values for the derivatives $f^{(j)}(0)$,
$$
\begin{array}{c|c|c|c|c|c|c|c|c|c}
j & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \cdots \\
\hline f^{(j)}(0) & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & \cdots
\end{array}
$$
This gives the following Taylor polynomials
$$
\begin{aligned}
& T_{0} f(x)=0 \\
& T_{1} f(x)=x \\
& T_{2} f(x)=x \\
& T_{3} f(x)=x-\frac{x^{3}}{3 !} \\
& T_{4} f(x)=x-\frac{x^{3}}{3 !} \\
& T_{5} f(x)=x-\frac{x^{3}}{3 !}+\frac{x^{5}}{5 !}
\end{aligned}
$$
Note that since $f^{(2)}(0)=0$ the Taylor polynomials $T_{1} f(x)$ and $T_{2} f(x)$ are the same! The second order Taylor polynomial in this example is really only a polynomial of degree 1. In general the Taylor polynomial $T_{n} f(x)$ of any function is a polynomial of degree at most $n$, and this example shows that the degree can sometimes be strictly less.

Figure 3. Taylor polynomials of $f(x)=\sin x$

12.3. Example: Compute the Taylor polynomials of degree two and three of $f(x)=1+x+x^{2}+x^{3}$ at $a=3$.

Solution: Remember that our notation for the $n^{\text {th }}$ degree Taylor polynomial of a function $f$ at $a$ is $T_{n}^{a} f(x)$, and that it is defined in Definition 11.1. We have
$$
f^{\prime}(x)=1+2 x+3 x^{2}, \quad f^{\prime \prime}(x)=2+6 x, \quad f^{\prime \prime \prime}(x)=6
$$
Therefore $f(3)=40, f^{\prime}(3)=34, f^{\prime \prime}(3)=20, f^{\prime \prime \prime}(3)=6$, and thus
$$
T_{2}^{3} f(x)=40+34(x-3)+\frac{20}{2 !}(x-3)^{2}=40+34(x-3)+10(x-3)^{2} .
$$
Why don't we expand the answer? You could do this (i.e. replace $(x-3)^{2}$ by $x^{2}-6 x+9$ throughout and sort the powers of $x$ ), but as we will see in this chapter, the Taylor polynomial $T_{n}^{a} f(x)$ is used as an approximation for $f(x)$ when $x$ is close to $a$. In this example $T_{2}^{3} f(x)$ is to be used when $x$ is close to 3; a short numerical check of this is included below.
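Here is a minimal numerical check of Example 12.3 (an added illustration, not part of the original text). It compares $f(x)=1+x+x^{2}+x^{3}$ with the second degree Taylor polynomial computed above, for a few values of $x$ near 3:

```python
# f and its second degree Taylor polynomial at a = 3 (coefficients from Example 12.3).
def f(x):
    return 1 + x + x**2 + x**3

def T2_at_3(x):
    return 40 + 34*(x - 3) + 10*(x - 3)**2

for x in (3.5, 3.1, 3.01):
    print(f"x={x}:  f(x)={f(x):.6f}   T2(x)={T2_at_3(x):.6f}   "
          f"difference={f(x) - T2_at_3(x):.6e}")
# The difference is exactly (x-3)^3, the single degree three term that T2 omits.
```

The closer $x$ is to 3, the smaller the neglected term $(x-3)^{3}$ becomes.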
If $x-3$ is a small number then the successive powers $x-3,(x-3)^{2},(x-3)^{3}, \ldots$ decrease rapidly, and so the terms in (8) are arranged in decreasing order. We can also compute the third degree Taylor polynomial. It is $$ \begin{aligned} T_{3}^{3} f(x) & =40+34(x-3)+\frac{20}{2 !}(x-3)^{2}+\frac{6}{3 !}(x-3)^{3} \\ & =40+34(x-3)+10(x-3)^{2}+(x-3)^{3} . \end{aligned} $$ If you expand this (this takes a little work) you find that $$ 40+34(x-3)+10(x-3)^{2}+(x-3)^{3}=1+x+x^{2}+x^{3} . $$ So the third degree Taylor polynomial is the function $f$ itself! Why is this so? Because of Theorem 11.2! Both sides in the above equation are third degree polynomials, and their derivatives of order $0,1,2$ and 3 are the same at $x=3$, so they must be the same polynomial. ## Some special Taylor polynomials Here is a list of functions whose Taylor polynomials are sufficiently regular that you can write a formula for the $n$th term. $$ \begin{aligned} T_{n} e^{x} & =1+x+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots+\frac{x^{n}}{n !} \\ T_{2 n+1}\{\sin x\} & =x-\frac{x^{3}}{3 !}+\frac{x^{5}}{5 !}-\frac{x^{7}}{7 !}+\cdots+(-1)^{n} \frac{x^{2 n+1}}{(2 n+1) !} \\ T_{2 n}\{\cos x\} & =1-\frac{x^{2}}{2 !}+\frac{x^{4}}{4 !}-\frac{x^{6}}{6 !}+\cdots+(-1)^{n} \frac{x^{2 n}}{(2 n) !} \\ T_{n}\left\{\frac{1}{1-x}\right\} & =1+x+x^{2}+x^{3}+x^{4}+\cdots+x^{n} \quad \text { (Geometric Series) } \\ T_{n}\{\ln (1+x)\} & =x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\cdots+(-1)^{n+1} \frac{x^{n}}{n} \end{aligned} $$ All of these Taylor polynomials can be computed directly from the definition, by repeatedly differentiating $f(x)$. Another function whose Taylor polynomial you should know is $f(x)=(1+x)^{a}$, where $a$ is a constant. You can compute $T_{n} f(x)$ directly from the definition, and when you do this you find $$ \text { (9) } \begin{aligned} & T_{n}\left\{(1+x)^{a}\right\}=1+a x+\frac{a(a-1)}{1 \cdot 2} x^{2}+\frac{a(a-1)(a-2)}{1 \cdot 2 \cdot 3} x^{3} \\ &+\cdots+\frac{a(a-1) \cdots(a-n+1)}{1 \cdot 2 \cdots n} x^{n} . \end{aligned} $$ This formula is called Newton's binomial formula. The coefficient of $x^{n}$ is called a binomial coefficient, and it is written $$ \left(\begin{array}{l} a \\ n \end{array}\right)=\frac{a(a-1) \cdots(a-n+1)}{n !} . $$ When $a$ is an integer $\left(\begin{array}{l}a \\ n\end{array}\right)$ is also called " $a$ choose $n . "$ Note that you already knew special cases of the binomial formula: when $a$ is a positive integer the binomial coefficients are just the numbers in Pascal's triangle. When $a=-1$ the binomial formula is the Geometric series. ## The Remainder Term The Taylor polynomial $T_{n} f(x)$ is almost never exactly equal to $f(x)$, but often it is a good approximation, especially if $x$ is small. To see how good the approximation is we define the "error term" or, "remainder term". 14.1. Definition. If $f$ is an $n$ times differentiable function on some interval containing a, then $$ R_{n}^{a} f(x)=f(x)-T_{n}^{a} f(x) $$ is called the $n^{\text {th }}$ order remainder (or error) term in the Taylor polynomial of $f$. If $a=0$, as will be the case in most examples we do, then we write $$ R_{n} f(x)=f(x)-T_{n} f(x) . $$ 14.2. Example. If $f(x)=\sin x$ then we have found that $T_{3} f(x)=x-\frac{1}{6} x^{3}$, so that $$ R_{3}\{\sin x\}=\sin x-x+\frac{1}{6} x^{3} . 
$$ This is a completely correct formula for the remainder term, but it's rather useless: there's nothing about this expression that suggests that $x-\frac{1}{6} x^{3}$ is a much better approximation to $\sin x$ than, say, $x+\frac{1}{6} x^{3}$. The usual situation is that there is no simple formula for the remainder term. 14.3. An unusual example, in which there is a simple formula for $R_{n} f(x)$. Consider $f(x)=1-x+3 x^{2}-15 x^{3}$. Then you find $$ T_{2} f(x)=1-x+3 x^{2}, \text { so that } R_{2} f(x)=f(x)-T_{2} f(x)=-15 x^{3} . $$ The moral of this example is this: Given a polynomial $f(x)$ you find its $n^{\text {th }}$ degree Taylor polynomial by taking all terms of degree $\leq n$ in $f(x)$; the remainder $R_{n} f(x)$ then consists of the remaining terms. 14.4. Another unusual, but important example where you can compute $R_{n} f(x)$. Consider the function $$ f(x)=\frac{1}{1-x} . $$ Then repeated differentiation gives $$ f^{\prime}(x)=\frac{1}{(1-x)^{2}}, \quad f^{(2)}(x)=\frac{1 \cdot 2}{(1-x)^{3}}, \quad f^{(3)}(x)=\frac{1 \cdot 2 \cdot 3}{(1-x)^{4}}, \quad \ldots $$ and thus $$ f^{(n)}(x)=\frac{1 \cdot 2 \cdot 3 \cdots n}{(1-x)^{n+1}} $$ Consequently, $$ f^{(n)}(0)=n ! \Longrightarrow \frac{1}{n !} f^{(n)}(0)=1 $$ and you see that the Taylor polynomials of this function are really simple, namely $$ T_{n} f(x)=1+x+x^{2}+x^{3}+x^{4}+\cdots+x^{n} . $$ But this sum should be really familiar: it is just the Geometric Sum (each term is $x$ times the previous term). Its sum is given by ${ }^{5}$ $$ T_{n} f(x)=1+x+x^{2}+x^{3}+x^{4}+\cdots+x^{n}=\frac{1-x^{n+1}}{1-x}, $$ which we can rewrite as $$ T_{n} f(x)=\frac{1}{1-x}-\frac{x^{n+1}}{1-x}=f(x)-\frac{x^{n+1}}{1-x} . $$ The remainder term therefore is $$ R_{n} f(x)=f(x)-T_{n} f(x)=\frac{x^{n+1}}{1-x} . $$ ${ }^{5}$ Multiply both sides with $1-x$ to verify this, in case you had forgotten the formula! ## Lagrange's Formula for the Remainder Term 15.1. Theorem. Let $f$ be an $n+1$ times differentiable function on some interval I containing $x=0$. Then for every $x$ in the interval $I$ there is a $\xi$ between 0 and $x$ such that $$ R_{n} f(x)=\frac{f^{(n+1)}(\xi)}{(n+1) !} x^{n+1} . $$ ( $\xi$ between 0 and $x$ means either $0<\xi<x$ or $x<\xi<0$, depending on the sign of $x$.) This theorem (including the proof) is similar to the Mean Value Theorem. The proof is a bit involved, and I've put it at the end of this chapter. There are calculus textbooks which, after presenting this remainder formula, give a whole bunch of problems which ask you to find $\xi$ for given $f$ and $x$. Such problems completely miss the point of Lagrange's formula. The point is that even though you usually can't compute the mystery point $\xi$ precisely, Lagrange's formula for the remainder term allows you to estimate it. Here is the most common way to estimate the remainder: 15.2. Estimate of remainder term. If $f$ is an $n+1$ times differentiable function on an interval containing $x=0$, and if you have a constant $M$ such that $$ \left|f^{(n+1)}(t)\right| \leq M \text { for all } t \text { between } 0 \text { and } x, $$ then $$ \left|R_{n} f(x)\right| \leq \frac{M|x|^{n+1}}{(n+1) !} $$ Proof. We don't know what $\xi$ is in Lagrange's formula, but it doesn't matter, for wherever it is, it must lie between 0 and $x$ so that our assumption $(\dagger)$ implies $\left|f^{(n+1)}(\xi)\right| \leq$ $M$. Put that in Lagrange's formula and you get the stated inequality. 15.3. How to compute $e$ in a few decimal places. Consider $f(x)=e^{x}$. 
We computed the Taylor polynomials before. If you set $x=1$, then you get $e=f(1)=T_{n} f(1)+R_{n} f(1)$, and thus, taking $n=8$,
$$
e=1+\frac{1}{1 !}+\frac{1}{2 !}+\frac{1}{3 !}+\frac{1}{4 !}+\frac{1}{5 !}+\frac{1}{6 !}+\frac{1}{7 !}+\frac{1}{8 !}+R_{8}(1) .
$$
By Lagrange's formula there is a $\xi$ between 0 and 1 such that
$$
R_{8}(1)=\frac{f^{(9)}(\xi)}{9 !} 1^{9}=\frac{e^{\xi}}{9 !} .
$$
(remember: $f(x)=e^{x}$, so all its derivatives are also $e^{x}$.) We don't really know where $\xi$ is, but since it lies between 0 and 1 we know that $1<e^{\xi}<e$. So the remainder term $R_{8}(1)$ is positive and no more than $e / 9!$. Estimating $e<3$, we find
$$
\frac{1}{9 !}<R_{8}(1)<\frac{3}{9 !}
$$
Thus we see that
$$
1+\frac{1}{1 !}+\frac{1}{2 !}+\frac{1}{3 !}+\cdots+\frac{1}{7 !}+\frac{1}{8 !}+\frac{1}{9 !}<e<1+\frac{1}{1 !}+\frac{1}{2 !}+\frac{1}{3 !}+\cdots+\frac{1}{7 !}+\frac{1}{8 !}+\frac{3}{9 !}
$$
or, in decimals,
$$
2.718281 \ldots<e<2.718287 \ldots
$$

15.4. Error in the approximation $\sin x \approx x$. In many calculations involving $\sin x$ for small values of $x$ one makes the simplifying approximation $\sin x \approx x$, justified by the known limit
$$
\lim _{x \rightarrow 0} \frac{\sin x}{x}=1
$$
Question: How big is the error in this approximation?

To answer this question, we use Lagrange's formula for the remainder term again. Let $f(x)=\sin x$. Then the first degree Taylor polynomial of $f$ is
$$
T_{1} f(x)=x .
$$
The approximation $\sin x \approx x$ is therefore exactly what you get if you approximate $f(x)=\sin x$ by its first degree Taylor polynomial. Lagrange tells us that
$$
f(x)=T_{1} f(x)+R_{1} f(x), \text { i.e. } \sin x=x+R_{1} f(x),
$$
where, since $f^{\prime \prime}(x)=-\sin x$,
$$
R_{1} f(x)=\frac{f^{\prime \prime}(\xi)}{2 !} x^{2}=-\frac{1}{2} \sin \xi \cdot x^{2}
$$
for some $\xi$ between 0 and $x$. As always with Lagrange's remainder term, we don't know where $\xi$ is precisely, so we have to estimate the remainder term. The easiest way to do this (but not the best: see below) is to say that no matter what $\xi$ is, $\sin \xi$ will always be between $-1$ and 1. Hence the remainder term is bounded by
$$
\left|R_{1} f(x)\right| \leq \frac{1}{2} x^{2}
$$
and we find that
$$
x-\frac{1}{2} x^{2} \leq \sin x \leq x+\frac{1}{2} x^{2} .
$$
Question: How small must we choose $x$ to be sure that the approximation $\sin x \approx x$ isn't off by more than $1 \%$?

If we want the error to be less than $1 \%$ of the estimate, then we should require $\frac{1}{2} x^{2}$ to be less than $1 \%$ of $|x|$, i.e.
$$
\frac{1}{2} x^{2}<0.01 \cdot|x| \Leftrightarrow|x|<0.02
$$
So we have shown that, if you choose $|x|<0.02$, then the error you make in approximating $\sin x$ by just $x$ is no more than $1 \%$.

A final comment about this example: the estimate for the error we got here can be improved quite a bit in two different ways:

(1) You could notice that one has $|\sin x| \leq|x|$ for all $x$, so if $\xi$ is between 0 and $x$, then $|\sin \xi| \leq|\xi| \leq|x|$, which gives you the estimate
$$
\left|R_{1} f(x)\right| \leq \frac{1}{2}|x|^{3} \quad \text { instead of } \frac{1}{2} x^{2} \text { as in the estimate above. }
$$
(2) For this particular function the two Taylor polynomials $T_{1} f(x)$ and $T_{2} f(x)$ are the same (because $f^{\prime \prime}(0)=0$).
So $T_{2} f(x)=x$, and we can write
$$
\sin x=f(x)=x+R_{2} f(x) .
$$
In other words, the error in the approximation $\sin x \approx x$ is also given by the second order remainder term, which according to Lagrange is given by
$$
R_{2} f(x)=\frac{-\cos \xi}{3 !} x^{3} \quad \stackrel{|\cos \xi| \leq 1}{\Longrightarrow}\left|R_{2} f(x)\right| \leq \frac{1}{6}|x|^{3},
$$
which is the best estimate for the error in $\sin x \approx x$ we have so far.

## The limit as $x \rightarrow 0$, keeping $n$ fixed

16.1. Little-oh. Lagrange's formula for the remainder term lets us write a function $y=f(x)$, which is defined on some interval containing $x=0$, in the following way
$$
\text { (11) } \quad f(x)=f(0)+f^{\prime}(0) x+\frac{f^{(2)}(0)}{2 !} x^{2}+\cdots+\frac{f^{(n)}(0)}{n !} x^{n}+\frac{f^{(n+1)}(\xi)}{(n+1) !} x^{n+1}
$$
The last term contains the $\xi$ from Lagrange's theorem, which depends on $x$, and of which you only know that it lies between 0 and $x$. For many purposes it is not necessary to know the last term in this much detail - often it is enough to know that "in some sense" the last term is the smallest term, in particular, as $x \rightarrow 0$ it is much smaller than $x$, or $x^{2}$, or $\ldots$, or $x^{n}$:

16.2. Theorem. If the $n+1$ st derivative $f^{(n+1)}(x)$ is continuous at $x=0$ then the remainder term $R_{n} f(x)=f^{(n+1)}(\xi) x^{n+1} /(n+1) !$ satisfies
$$
\lim _{x \rightarrow 0} \frac{R_{n} f(x)}{x^{k}}=0
$$
for any $k=0,1,2, \ldots, n$.

Proof. Since $\xi$ lies between 0 and $x$, one has $\lim _{x \rightarrow 0} f^{(n+1)}(\xi)=f^{(n+1)}(0)$, and therefore
$$
\lim _{x \rightarrow 0} \frac{R_{n} f(x)}{x^{k}}=\lim _{x \rightarrow 0} \frac{f^{(n+1)}(\xi)}{(n+1) !} \cdot \frac{x^{n+1}}{x^{k}}=\lim _{x \rightarrow 0} \frac{f^{(n+1)}(\xi)}{(n+1) !} \cdot x^{n+1-k}=\frac{f^{(n+1)}(0)}{(n+1) !} \cdot 0=0 .
$$
So we can rephrase (11) by saying
$$
f(x)=f(0)+f^{\prime}(0) x+\frac{f^{(2)}(0)}{2 !} x^{2}+\cdots+\frac{f^{(n)}(0)}{n !} x^{n}+\text { remainder }
$$
where the remainder is much smaller than $x^{n}, x^{n-1}, \ldots, x^{2}, x$ or 1. In order to express the condition that some function is "much smaller than $x^{n}$," at least for very small $x$, Landau introduced the following notation which many people find useful.

16.3. Definition. "$o\left(x^{n}\right)$" is an abbreviation for any function $h(x)$ which satisfies
$$
\lim _{x \rightarrow 0} \frac{h(x)}{x^{n}}=0 .
$$
So you can rewrite (11) as
$$
f(x)=f(0)+f^{\prime}(0) x+\frac{f^{(2)}(0)}{2 !} x^{2}+\cdots+\frac{f^{(n)}(0)}{n !} x^{n}+o\left(x^{n}\right) .
$$
The nice thing about Landau's little-oh is that you can compute with it, as long as you obey the following (at first sight rather strange) rules which will be proved in class
$$
\begin{aligned}
x^{n} \cdot o\left(x^{m}\right) & =o\left(x^{n+m}\right) & & \\
o\left(x^{n}\right) \cdot o\left(x^{m}\right) & =o\left(x^{n+m}\right) & & \\
x^{m} & =o\left(x^{n}\right) & & \text { if } n<m \\
o\left(x^{n}\right)+o\left(x^{m}\right) & =o\left(x^{n}\right) & & \text { if } n<m \\
o\left(C x^{n}\right) & =o\left(x^{n}\right) & & \text { for any constant } C
\end{aligned}
$$

Figure 4. How the powers stack up. All graphs of $y=x^{n}(n>1)$ are tangent to the $x$-axis at the origin. But the larger the exponent $n$ the "flatter" the graph of $y=x^{n}$ is.

16.4. Example: prove one of these little-oh rules. Let's do the first one, i.e. let's show that $x^{n} \cdot o\left(x^{m}\right)$ is $o\left(x^{n+m}\right)$ as $x \rightarrow 0$.
Remember, if someone writes $x^{n} \cdot o\left(x^{m}\right)$, then the $o\left(x^{m}\right)$ is an abbreviation for some function $h(x)$ which satisfies $\lim _{x \rightarrow 0} h(x) / x^{m}=0$. So the $x^{n} \cdot o\left(x^{m}\right)$ we are given here really is an abbreviation for $x^{n} h(x)$. We then have $$ \lim _{x \rightarrow 0} \frac{x^{n} h(x)}{x^{n+m}}=\lim _{x \rightarrow 0} \frac{h(x)}{x^{m}}=0, \text { since } h(x)=o\left(x^{m}\right) . $$ 16.5. Can you see that $x^{3}=o\left(x^{2}\right)$ by looking at the graphs of these functions? A picture is of course never a proof, but have a look at figure 4 which shows you the graphs of $y=x, x^{2}, x^{3}, x^{4}, x^{5}$ and $x^{10}$. As you see, when $x$ approaches 0 , the graphs of higher powers of $x$ approach the $x$-axis (much?) faster than do the graphs of lower powers. You should also have a look at figure 5 which exhibits the graphs of $y=x^{2}$, as well as several linear functions $y=C x$ (with $C=1, \frac{1}{2}, \frac{1}{5}$ and $\frac{1}{10}$.) For each of these linear functions one has $x^{2}<C x$ if $x$ is small enough; how small is actually small enough depends on $C$. The smaller the constant $C$, the closer you have to keep $x$ to 0 to be sure that $x^{2}$ is smaller than $C x$. Nevertheless, no matter how small $C$ is, the parabola will eventually always reach the region below the line $y=C x$. 16.6. Example: Little-oh arithmetic is a little funny. Both $x^{2}$ and $x^{3}$ are functions which are $o(x)$, i.e. $$ x^{2}=o(x) \quad \text { and } \quad x^{3}=o(x) $$ Nevertheless $x^{2} \neq x^{3}$. So in working with little-oh we are giving up on the principle that says that two things which both equal a third object must themselves be equal; in other words, $a=b$ and $b=c$ implies $a=c$, but not when you're using little-ohs! You can also put it like this: just because two quantities both are much smaller than $x$, they don't have to be equal. In particular, $$ \text { you can never cancel little-ohs!!! } $$ Figure 5. $x^{2}$ is smaller than any multiple of $x$, if $x$ is small enough. Compare the quadratic function $y=x^{2}$ with a linear function $y=C x$. Their graphs are a parabola and a straight line. Parts of the parabola may lie above the line, but as $x \searrow 0$ the parabola will always duck underneath the line. In other words, the following is pretty wrong $$ o\left(x^{2}\right)-o\left(x^{2}\right)=0 . $$ Why? The two $o\left(x^{2}\right)$ 's both refer to functions $h(x)$ which satisfy $\lim _{x \rightarrow 0} h(x) / x^{2}=0$, but there are many such functions, and the two $o\left(x^{2}\right)$ 's could be abbreviations for different functions $h(x)$. Contrast this with the following computation, which at first sight looks wrong even though it is actually right: $$ o\left(x^{2}\right)-o\left(x^{2}\right)=o\left(x^{2}\right) . $$ In words: if you subtract two quantities both of which are negligible compared to $x^{2}$ for small $x$ then the result will also be negligible compared to $x^{2}$ for small $x$. 16.7. Computations with Taylor polynomials. The following theorem is very useful because it lets you compute Taylor polynomials of a function without differentiating it. 16.8. Theorem. If $f(x)$ and $g(x)$ are $n+1$ times differentiable functions then $$ T_{n} f(x)=T_{n} g(x) \Longleftrightarrow f(x)=g(x)+o\left(x^{n}\right) . $$ In other words, if two functions have the same $n$th degree Taylor polynomial, then their difference is much smaller than $x^{n}$, at least, if $x$ is small. 
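Theorem 16.8 is easy to test numerically. As a sketch (added here for illustration; any Python 3 will do), take $f(x)=\sin x$ and $g(x)=x-x^{3} / 3 !$, which have the same third degree Taylor polynomial; the quotient $(f(x)-g(x)) / x^{3}$ should then tend to 0 as $x \rightarrow 0$:

```python
import math

# sin x and x - x^3/6 share the same third degree Taylor polynomial,
# so by Theorem 16.8 their difference should be o(x^3) as x -> 0.
def h(x):
    return math.sin(x) - (x - x**3 / 6)

for x in (0.5, 0.1, 0.01, 0.001):
    print(f"x={x:>6}:  h(x)/x^3 = {h(x) / x**3:.3e}")
```

The printed quotients shrink like $x^{2}$ (the next term of the sine series is $x^{5} / 5 !$), which is consistent with the difference being $o\left(x^{3}\right)$.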
In principle the definition of $T_{n} f(x)$ lets you compute as many terms of the Taylor polynomial as you want, but in many (most) examples the computations quickly get out of hand. To see what can happen go through the following example:

16.9. How NOT to compute the Taylor polynomial of degree 12 of $f(x)=1 /\left(1+x^{2}\right)$. Diligently computing derivatives one by one you find
$$
\begin{aligned}
f(x) & =\frac{1}{1+x^{2}} & & \text { so } f(0)=1 \\
f^{\prime}(x) & =\frac{-2 x}{\left(1+x^{2}\right)^{2}} & & \text { so } f^{\prime}(0)=0 \\
f^{\prime \prime}(x) & =\frac{6 x^{2}-2}{\left(1+x^{2}\right)^{3}} & & \text { so } f^{\prime \prime}(0)=-2 \\
f^{(3)}(x) & =24 \frac{x-x^{3}}{\left(1+x^{2}\right)^{4}} & & \text { so } f^{(3)}(0)=0 \\
f^{(4)}(x) & =24 \frac{1-10 x^{2}+5 x^{4}}{\left(1+x^{2}\right)^{5}} & & \text { so } f^{(4)}(0)=24=4 ! \\
f^{(5)}(x) & =240 \frac{-3 x+10 x^{3}-3 x^{5}}{\left(1+x^{2}\right)^{6}} & & \text { so } f^{(5)}(0)=0 \\
f^{(6)}(x) & =-720 \frac{-1+21 x^{2}-35 x^{4}+7 x^{6}}{\left(1+x^{2}\right)^{7}} & & \text { so } f^{(6)}(0)=720=6 !
\end{aligned}
$$
I'm getting tired of differentiating - can you find $f^{(12)}(x)$? After a lot of work we give up at the sixth derivative, and all we have found is
$$
T_{6}\left\{\frac{1}{1+x^{2}}\right\}=1-x^{2}+x^{4}-x^{6} .
$$
By the way,
$$
f^{(12)}(x)=479001600 \frac{1-78 x^{2}+715 x^{4}-1716 x^{6}+1287 x^{8}-286 x^{10}+13 x^{12}}{\left(1+x^{2}\right)^{13}}
$$
and $479001600=12 !$

16.10. The right approach to finding the Taylor polynomial of any degree of $f(x)=1 /\left(1+x^{2}\right)$. Start with the Geometric Series: if $g(t)=1 /(1-t)$ then
$$
g(t)=1+t+t^{2}+t^{3}+t^{4}+\cdots+t^{n}+o\left(t^{n}\right) .
$$
Now substitute $t=-x^{2}$ in this formula,
$$
g\left(-x^{2}\right)=1-x^{2}+x^{4}-x^{6}+\cdots+(-1)^{n} x^{2 n}+o\left(\left(-x^{2}\right)^{n}\right)
$$
Since $o\left(\left(-x^{2}\right)^{n}\right)=o\left(x^{2 n}\right)$ and
$$
g\left(-x^{2}\right)=\frac{1}{1-\left(-x^{2}\right)}=\frac{1}{1+x^{2}},
$$
we have found
$$
\frac{1}{1+x^{2}}=1-x^{2}+x^{4}-x^{6}+\cdots+(-1)^{n} x^{2 n}+o\left(x^{2 n}\right)
$$
By Theorem 16.8 this implies
$$
T_{2 n}\left\{\frac{1}{1+x^{2}}\right\}=1-x^{2}+x^{4}-x^{6}+\cdots+(-1)^{n} x^{2 n} .
$$

16.11. Example of multiplication of Taylor series. Finding the Taylor series of $e^{2 x} /(1+x)$ directly from the definition is another recipe for headaches. Instead, you should exploit your knowledge of the Taylor series of both factors $e^{2 x}$ and $1 /(1+x)$:
$$
\begin{aligned}
e^{2 x} & =1+2 x+\frac{2^{2} x^{2}}{2 !}+\frac{2^{3} x^{3}}{3 !}+\frac{2^{4} x^{4}}{4 !}+o\left(x^{4}\right) \\
& =1+2 x+2 x^{2}+\frac{4}{3} x^{3}+\frac{2}{3} x^{4}+o\left(x^{4}\right) \\
\frac{1}{1+x} & =1-x+x^{2}-x^{3}+x^{4}+o\left(x^{4}\right) .
\end{aligned}
$$
Then multiply these two
$$
\begin{aligned}
& e^{2 x} \cdot \frac{1}{1+x}=\left(1+2 x+2 x^{2}+\frac{4}{3} x^{3}+\frac{2}{3} x^{4}+o\left(x^{4}\right)\right) \cdot\left(1-x+x^{2}-x^{3}+x^{4}+o\left(x^{4}\right)\right) \\
& =1-x+x^{2}-x^{3}+x^{4}+o\left(x^{4}\right) \\
& +2 x-2 x^{2}+2 x^{3}-2 x^{4}+o\left(x^{4}\right) \\
& +2 x^{2}-2 x^{3}+2 x^{4}+o\left(x^{4}\right) \\
& +\frac{4}{3} x^{3}-\frac{4}{3} x^{4}+o\left(x^{4}\right) \\
& +\frac{2}{3} x^{4}+o\left(x^{4}\right) \\
& =1+x+x^{2}+\frac{1}{3} x^{3}+\frac{1}{3} x^{4}+o\left(x^{4}\right) \quad(x \rightarrow 0)
\end{aligned}
$$

16.12. Taylor's formula and Fibonacci numbers.
The Fibonacci numbers are defined as follows: the first two are $f_{0}=1$ and $f_{1}=1$, and the others are defined by the equation $$ f_{n}=f_{n-1}+f_{n-2} $$ So $$ \begin{aligned} & f_{2}=f_{1}+f_{0}=1+1=2, \\ & f_{3}=f_{2}+f_{1}=2+1=3, \\ & f_{4}=f_{3}+f_{2}=3+2=5, \end{aligned} $$ etc. The equation (Fib) lets you compute the whole sequence of numbers, one by one, when you are given only the first few numbers of the sequence ( $f_{0}$ and $f_{1}$ in this case). Such an equation for the elements of a sequence is called a recursion relation. Now consider the function $$ f(x)=\frac{1}{1-x-x^{2}} . $$ Let $$ T_{\infty} f(x)=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\cdots $$ be its Taylor series. Due to Lagrange's remainder theorem you have, for any $n$, $$ \frac{1}{1-x-x^{2}}=c_{0}+c_{1} x+c_{2} x^{2}+c_{3} x^{3}+\cdots+c_{n} x^{n}+o\left(x^{n}\right) \quad(x \rightarrow 0) . $$ Multiply both sides with $1-x-x^{2}$ and you get $$ \begin{aligned} & 1=\left(1-x-x^{2}\right) \cdot\left(c_{0}+c_{1} x+c_{2} x^{2}+\cdots+c_{n}+o\left(x^{n}\right)\right) \quad(x \rightarrow 0) \\ & =c_{0}+c_{1} x+c_{2} x^{2}+\cdots+c_{n} x^{n}+o\left(x^{n}\right) \\ & -c_{0} x-c_{1} x^{2}-\cdots-c_{n-1} x^{n}+o\left(x^{n}\right) \\ & -c_{0} x^{2}-\cdots-c_{n-2} x^{n}-o\left(x^{n}\right) \quad(x \rightarrow 0) \\ & =c_{0}+\left(c_{1}-c_{0}\right) x+\left(c_{2}-c_{1}-c_{0}\right) x^{2}+\left(c_{3}-c_{2}-c_{1}\right) x^{3}+\cdots \\ & \cdots+\left(c_{n}-c_{n-1}-c_{n-2}\right) x^{n}+o\left(x^{n}\right) \quad(x \rightarrow 0) \end{aligned} $$ Compare the coefficients of powers $x^{k}$ on both sides for $k=0,1, \ldots, n$ and you find $$ c_{0}=1, \quad c_{1}-c_{0}=0 \Longrightarrow c_{1}=c_{0}=1, \quad c_{2}-c_{1}-c_{0}=0 \Longrightarrow c_{2}=c_{1}+c_{0}=2 $$ and in general $$ c_{n}-c_{n-1}-c_{n-2}=0 \Longrightarrow c_{n}=c_{n-1}+c_{n-2} $$ Therefore the coefficients of the Taylor series $T_{\infty} f(x)$ are exactly the Fibonacci numbers: $$ c_{n}=f_{n} \text { for } n=0,1,2,3, \ldots $$ Since it is much easier to compute the Fibonacci numbers one by one than it is to compute the derivatives of $f(x)=1 /\left(1-x-x^{2}\right)$, this is a better way to compute the Taylor series of $f(x)$ than just directly from the definition. 16.13. More about the Fibonacci numbers. In this example you'll see a trick that lets you compute the Taylor series of any rational function. You already know the trick: find the partial fraction decomposition of the given rational function. Ignoring the case that you have quadratic expressions in the denominator, this lets you represent your rational function as a sum of terms of the form $$ \frac{A}{(x-a)^{p}} \text {. } $$ These are easy to differentiate any number of times, and thus they allow you to write their Taylor series. Let's apply this to the function $f(x)=1 /\left(1-x-x^{2}\right)$ from the example 16.12. First we factor the denominator. $$ 1-x-x^{2}=0 \Longleftrightarrow x^{2}+x-1=0 \Longleftrightarrow x=\frac{-1 \pm \sqrt{5}}{2} $$ The number $$ \phi=\frac{1+\sqrt{5}}{2} \approx 1.61803398874989 \ldots $$ is called the Golden Ratio. It satisfies ${ }^{6}$ $$ \phi+\frac{1}{\phi}=\sqrt{5} $$ The roots of our polynomial $x^{2}+x-1$ are therefore $$ x_{-}=\frac{-1-\sqrt{5}}{2}=-\phi, \quad x_{+}=\frac{-1+\sqrt{5}}{2}=\frac{1}{\phi} . $$ and we can factor $1-x-x^{2}$ as follows $$ 1-x-x^{2}=-\left(x^{2}+x-1\right)=-\left(x-x_{-}\right)\left(x-x_{+}\right)=-\left(x-\frac{1}{\phi}\right)(x+\phi) . 
$$

${ }^{6}$ To prove this, use $\frac{1}{\phi}=\frac{2}{1+\sqrt{5}}=\frac{2}{1+\sqrt{5}} \frac{1-\sqrt{5}}{1-\sqrt{5}}=\frac{-1+\sqrt{5}}{2}$.

So $f(x)$ can be written as
$$
f(x)=\frac{1}{1-x-x^{2}}=\frac{-1}{\left(x-\frac{1}{\phi}\right)(x+\phi)}=\frac{A}{x-\frac{1}{\phi}}+\frac{B}{x+\phi}
$$
The Heaviside trick will tell you what $A$ and $B$ are, namely,
$$
A=\frac{-1}{\frac{1}{\phi}+\phi}=\frac{-1}{\sqrt{5}}, \quad B=\frac{1}{\frac{1}{\phi}+\phi}=\frac{1}{\sqrt{5}}
$$
The $n$th derivative of $f(x)$ is
$$
f^{(n)}(x)=\frac{A(-1)^{n} n !}{\left(x-\frac{1}{\phi}\right)^{n+1}}+\frac{B(-1)^{n} n !}{(x+\phi)^{n+1}}
$$
Setting $x=0$ and dividing by $n !$ finally gives you the coefficient of $x^{n}$ in the Taylor series of $f(x)$. The result is the following formula for the $n$th Fibonacci number
$$
c_{n}=\frac{f^{(n)}(0)}{n !}=\frac{1}{n !} \frac{A(-1)^{n} n !}{\left(-\frac{1}{\phi}\right)^{n+1}}+\frac{1}{n !} \frac{B(-1)^{n} n !}{(\phi)^{n+1}}=-A \phi^{n+1}+(-1)^{n} B\left(\frac{1}{\phi}\right)^{n+1}
$$
Using the values for $A$ and $B$ you find
$$
f_{n}=c_{n}=\frac{1}{\sqrt{5}}\left\{\phi^{n+1}+\frac{(-1)^{n}}{\phi^{n+1}}\right\}
$$

16.14. Differentiating Taylor polynomials. If
$$
T_{n} f(x)=a_{0}+a_{1} x+a_{2} x^{2}+\cdots+a_{n} x^{n}
$$
is the Taylor polynomial of a function $y=f(x)$, then what is the Taylor polynomial of its derivative $f^{\prime}(x)$?

16.15. Theorem. The Taylor polynomial of degree $n-1$ of $f^{\prime}(x)$ is given by
$$
T_{n-1}\left\{f^{\prime}(x)\right\}=a_{1}+2 a_{2} x+\cdots+n a_{n} x^{n-1} .
$$
In other words, "the Taylor polynomial of the derivative is the derivative of the Taylor polynomial."

Proof. Let $g(x)=f^{\prime}(x)$. Then $g^{(k)}(0)=f^{(k+1)}(0)$, so that
$$
\begin{aligned}
T_{n-1} g(x) & =g(0)+g^{\prime}(0) x+g^{(2)}(0) \frac{x^{2}}{2 !}+\cdots+g^{(n-1)}(0) \frac{x^{n-1}}{(n-1) !} \\
& =f^{\prime}(0)+f^{(2)}(0) x+f^{(3)}(0) \frac{x^{2}}{2 !}+\cdots+f^{(n)}(0) \frac{x^{n-1}}{(n-1) !}
\end{aligned}
$$
On the other hand, if $T_{n} f(x)=a_{0}+a_{1} x+\cdots+a_{n} x^{n}$, then $a_{k}=f^{(k)}(0) / k!$, so that
$$
k a_{k}=\frac{k}{k !} f^{(k)}(0)=\frac{f^{(k)}(0)}{(k-1) !} .
$$
In other words,
$$
1 \cdot a_{1}=f^{\prime}(0), 2 a_{2}=f^{(2)}(0), 3 a_{3}=\frac{f^{(3)}(0)}{2 !}, \text { etc. }
$$
So, continuing from the computation of $T_{n-1} g(x)$ above, you find that
$$
T_{n-1}\left\{f^{\prime}(x)\right\}=T_{n-1} g(x)=a_{1}+2 a_{2} x+\cdots+n a_{n} x^{n-1}
$$
as claimed.

16.16. Example. We compute the Taylor polynomial of $f(x)=1 /(1-x)^{2}$ by noting that
$$
f(x)=F^{\prime}(x), \text { where } F(x)=\frac{1}{1-x} .
$$
Since
$$
T_{n+1} F(x)=1+x+x^{2}+x^{3}+\cdots+x^{n+1},
$$
theorem 16.15 implies that
$$
T_{n}\left\{\frac{1}{(1-x)^{2}}\right\}=1+2 x+3 x^{2}+4 x^{3}+\cdots+(n+1) x^{n}
$$

16.17. Example: Taylor polynomials of $\arctan x$. Let $f(x)=\arctan x$. Then we know that
$$
f^{\prime}(x)=\frac{1}{1+x^{2}} .
$$
By substitution of $t=-x^{2}$ in the Taylor polynomial of $1 /(1-t)$ we had found
$$
T_{2 n}\left\{f^{\prime}(x)\right\}=T_{2 n}\left\{\frac{1}{1+x^{2}}\right\}=1-x^{2}+x^{4}-x^{6}+\cdots+(-1)^{n} x^{2 n} .
$$
This Taylor polynomial must be the derivative of $T_{2 n+1} f(x)$, so we have
$$
T_{2 n+1}\{\arctan x\}=x-\frac{x^{3}}{3}+\frac{x^{5}}{5}-\cdots+(-1)^{n} \frac{x^{2 n+1}}{2 n+1} .
$$

## The limit $n \rightarrow \infty$, keeping $x$ fixed

17.1. Sequences and their limits. We shall call a sequence any ordered sequence of numbers $a_{1}, a_{2}, a_{3}, \ldots$: for each positive integer $n$ we have to specify a number $a_{n}$.

### Examples of sequences.
| definition | first few numbers in the sequence |
| :--- | :--- |
| $a_{n}=n$ | $1,2,3,4, \ldots$ |
| $b_{n}=0$ | $0,0,0,0, \ldots$ |
| $c_{n}=\frac{1}{n}$ | $\frac{1}{1}, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots$ |
| $d_{n}=\left(-\frac{1}{3}\right)^{n}$ | $-\frac{1}{3}, \frac{1}{9},-\frac{1}{27}, \frac{1}{81}, \ldots$ |
| $E_{n}=1+\frac{1}{1 !}+\frac{1}{2 !}+\frac{1}{3 !}+\cdots+\frac{1}{n !}$ | $1,2,2 \frac{1}{2}, 2 \frac{2}{3}, 2 \frac{17}{24}, 2 \frac{43}{60}, \ldots$ |
| $S_{n}=T_{2 n+1}\{\sin x\}=x-\frac{x^{3}}{3 !}+\cdots+(-1)^{n} \frac{x^{2 n+1}}{(2 n+1) !}$ | $x, x-\frac{x^{3}}{3 !}, x-\frac{x^{3}}{3 !}+\frac{x^{5}}{5 !}, \ldots$ |

The last two sequences are derived from the Taylor polynomials of $e^{x}$ (at $x=1$) and $\sin x$ (at any $x$). The last example $S_{n}$ really is a sequence of functions, i.e. for every choice of $x$ you get a different sequence.

17.3. Definition. A sequence of numbers $\left(a_{n}\right)_{n=1}^{\infty}$ converges to a limit $L$, if for every $\epsilon>0$ there is a number $N_{\epsilon}$ such that for all $n>N_{\epsilon}$ one has
$$
\left|a_{n}-L\right|<\epsilon .
$$
One writes
$$
\lim _{n \rightarrow \infty} a_{n}=L
$$

17.4. Example: $\lim _{n \rightarrow \infty} \frac{1}{n}=0$. The sequence $c_{n}=1 / n$ converges to 0 . To prove this let $\epsilon>0$ be given. We have to find an $N_{\epsilon}$ such that
$$
\left|c_{n}\right|<\epsilon \text { for all } n>N_{\epsilon} .
$$
The $c_{n}$ are all positive, so $\left|c_{n}\right|=c_{n}$, and hence
$$
\left|c_{n}\right|<\epsilon \Longleftrightarrow \frac{1}{n}<\epsilon \Longleftrightarrow n>\frac{1}{\epsilon}
$$
which prompts us to choose $N_{\epsilon}=1 / \epsilon$. The calculation we just did shows that if $n>\frac{1}{\epsilon}=N_{\epsilon}$, then $\left|c_{n}\right|<\epsilon$. That means that $\lim _{n \rightarrow \infty} c_{n}=0$.

17.5. Example: $\lim _{n \rightarrow \infty} a^{n}=0$ if $|a|<1$. As in the previous example one can show that $\lim _{n \rightarrow \infty} 2^{-n}=0$, and more generally, that for any constant $a$ with $-1<a<1$ one has
$$
\lim _{n \rightarrow \infty} a^{n}=0 .
$$
Indeed,
$$
\left|a^{n}\right|=|a|^{n}=e^{n \ln |a|}<\epsilon
$$
holds if and only if
$$
n \ln |a|<\ln \epsilon .
$$
Since $|a|<1$ we have $\ln |a|<0$ so that dividing by $\ln |a|$ reverses the inequality, with result
$$
\left|a^{n}\right|<\epsilon \Longleftrightarrow n>\frac{\ln \epsilon}{\ln |a|}
$$
The choice $N_{\epsilon}=(\ln \epsilon) /(\ln |a|)$ therefore guarantees that $\left|a^{n}\right|<\epsilon$ whenever $n>N_{\epsilon}$.

One can show that the operation of taking limits of sequences obeys the same rules as taking limits of functions.

17.6. Theorem. If
$$
\lim _{n \rightarrow \infty} a_{n}=A \text { and } \lim _{n \rightarrow \infty} b_{n}=B
$$
then one has
$$
\begin{aligned}
\lim _{n \rightarrow \infty} a_{n} \pm b_{n} & =A \pm B \\
\lim _{n \rightarrow \infty} a_{n} b_{n} & =A B \\
\lim _{n \rightarrow \infty} \frac{a_{n}}{b_{n}} & =\frac{A}{B} \quad \text { (assuming } B \neq 0 \text {)} .
\end{aligned}
$$
The so-called "sandwich theorem" for ordinary limits also applies to limits of sequences. Namely, one has

17.7. "Sandwich theorem". If $a_{n}$ is a sequence which satisfies $b_{n}<a_{n}<c_{n}$ for all $n$, and if $\lim _{n \rightarrow \infty} b_{n}=\lim _{n \rightarrow \infty} c_{n}=0$, then $\lim _{n \rightarrow \infty} a_{n}=0$.

Finally, one can show this:

17.8. Theorem.
If $f(x)$ is a function which is continuous at $x=A$, and $a_{n}$ is a sequence which converges to $A$, then $$ \lim _{n \rightarrow \infty} f\left(a_{n}\right)=f\left(\lim _{n \rightarrow \infty} a_{n}\right)=f(A) . $$ 17.9. Example. Since $\lim _{n \rightarrow \infty} 1 / n=0$ and since $f(x)=\cos x$ is continuous at $x=0$ we have $$ \lim _{n \rightarrow \infty} \cos \frac{1}{n}=\cos 0=1 $$ 17.10. Example. You can compute the limit of any rational function of $n$ by dividing numerator and denominator by the highest occurring power of $n$. Here is an example: $$ \lim _{n \rightarrow \infty} \frac{2 n^{2}-1}{n^{2}+3 n}=\lim _{n \rightarrow \infty} \frac{2-\left(\frac{1}{n}\right)^{2}}{1+3 \cdot \frac{1}{n}}=\frac{2-0^{2}}{1+3 \cdot 0^{2}}=2 $$ 17.11. Example. [Application of the Sandwich theorem. ] We show that $\lim _{n \rightarrow \infty} \frac{1}{\sqrt{n^{2}+1}}=$ 0 in two different ways. Method 1: Since $\sqrt{n^{2}+1}>\sqrt{n^{2}}=n$ we have $$ 0<\frac{1}{\sqrt{n^{2}+1}}<\frac{1}{n} $$ The sequences " 0 " and $\frac{1}{n}$ both go to zero, so the Sandwich theorem implies that $1 / \sqrt{n^{2}+1}$ also goes to zero. Method 2: Divide numerator and denominator both by $n$ to get $$ a_{n}=\frac{1 / n}{\sqrt{1+(1 / n)^{2}}}=f\left(\frac{1}{n}\right), \quad \text { where } f(x)=\frac{x}{\sqrt{1+x^{2}}} . $$ Since $f(x)$ is continuous at $x=0$, and since $\frac{1}{n} \rightarrow 0$ as $n \rightarrow \infty$, we conclude that $a_{n}$ converges to 0 . 17.12. Example: $\lim _{n \rightarrow \infty} \frac{x^{n}}{n !}=0$ for any real number $x$. If $|x| \leq 1$ then this is easy, for we would have $\left|x^{n}\right| \leq 1$ for all $n \geq 0$ and thus $$ \left|\frac{x^{n}}{n !}\right| \leq \frac{1}{n !}=\frac{1}{1 \cdot \underbrace{2 \cdot 3 \cdots(n-1) \cdot n}_{n-1 \text { factors }}} \leq \frac{1}{1 \cdot \underbrace{2 \cdot 2 \cdots 2 \cdot 2}_{n-1 \text { factors }}}=\frac{1}{2^{n-1}} $$ which shows that $\lim _{n \rightarrow \infty} \frac{x^{n}}{n !}=0$, by the Sandwich Theorem. For arbitrary $x$ you first choose an integer $N \geq 2 x$. Then for all $n \geq N$ one has $$ \begin{array}{rlr} \frac{x^{n}}{n !} & \leq \frac{|x| \cdot|x| \cdots|x| \cdot|x|}{1 \cdot 2 \cdot 3 \cdots n} \\ & \leq \frac{N \cdot N \cdot N \cdots N \cdot N}{1 \cdot 2 \cdot 3 \cdots n}\left(\frac{1}{2}\right)^{n} & \text { use }|x| \leq \frac{N}{2} \end{array} $$ Split fraction into two parts, one containing the first $N$ factors from both numerator and denominator, the other the remaining factors: $$ \underbrace{\frac{N}{1} \cdot \frac{N}{2} \cdot \frac{N}{3} \cdots \frac{N}{N}}_{=N^{N} / N !} \cdot \frac{N}{N+1} \cdots \frac{N}{n}=\frac{N^{N}}{N !} \cdot \underbrace{\frac{N}{N+1}}_{<1} \cdot \underbrace{\frac{N}{N+2}}_{<1} \cdots \underbrace{\frac{N}{n}}_{<1} \leq \frac{N^{N}}{N !} $$ Hence we have $$ \left|\frac{x^{n}}{n !}\right| \leq \frac{N^{N}}{N !}\left(\frac{1}{2}\right)^{n} $$ if $2|x| \leq N$ and $n \geq N$. Here everything is independent of $n$, except for the last factor $\left(\frac{1}{2}\right)^{n}$ which causes the whole thing to converge to zero as $n \rightarrow \infty$. ## Convergence of Taylor Series 18.1. Definition. Let $y=f(x)$ be some function defined on an interval $a<x<b$ containing 0 . 
We say the Taylor series $T_{\infty} f(x)$ converges to $f(x)$ for a given $x$ if $$ \lim _{n \rightarrow \infty} T_{n} f(x)=f(x) $$ The most common notations which express this condition are $$ f(x)=\sum_{k=0}^{\infty} f^{(k)}(0) \frac{x^{k}}{k !} $$ or $$ f(x)=f(0)+f^{\prime}(0) x+f^{\prime \prime}(0) \frac{x^{2}}{2 !}+f^{(3)}(0) \frac{x^{3}}{3 !}+\cdots $$ In both cases convergence justifies the idea that you can add infinitely many terms, as suggested by both notations. There is no easy and general criterion which you could apply to a given function $f(x)$ that would tell you if its Taylor series converges for any particular $x$ (except $x=0$ - what does the Taylor series look like when you set $x=0$ ?). On the other hand, it turns out that for many functions the Taylor series does converge to $f(x)$ for all $x$ in some interval $-\rho<x<\rho$. In this section we will check this for two examples: the "geometric series" and the exponential function. Before we do the examples I want to make this point about how we're going to prove that the Taylor series converges: Instead of taking the limit of the $T_{n} f(x)$ as $n \rightarrow \infty$, you are usually better off looking at the remainder term. Since $T_{n} f(x)=f(x)-R_{n} f(x)$ you have $$ \lim _{n \rightarrow \infty} T_{n} f(x)=f(x) \Longleftrightarrow \lim _{n \rightarrow \infty} R_{n} f(x)=0 $$ So: to check that the Taylor series of $f(x)$ converges to $f(x)$ we must show that the remainder term $R_{n} f(x)$ goes to zero as $n \rightarrow \infty$. 18.2. Example: The Geometric series converges for $-1<x<1$. If $f(x)=$ $1 /(1-x)$ then by the formula for the Geometric Sum you have $$ \begin{aligned} f(x) & =\frac{1}{1-x} \\ & =\frac{1-x^{n+1}+x^{n+1}}{1-x} \\ & =1+x+x^{2}+\cdots+x^{n}+\frac{x^{n+1}}{1-x} \\ & =T_{n} f(x)+\frac{x^{n+1}}{1-x} . \end{aligned} $$ We are not dividing by zero since $|x|<1$ so that $1-x \neq 0$. The remainder term is $$ R_{n} f(x)=\frac{x^{n+1}}{1-x} $$ Since $|x|<1$ we have $$ \lim _{n \rightarrow \infty}\left|R_{n} f(x)\right|=\lim _{n \rightarrow \infty} \frac{|x|^{n+1}}{|1-x|}=\frac{\lim _{n \rightarrow \infty}|x|^{n+1}}{|1-x|}=\frac{0}{|1-x|}=0 . $$ Thus we have shown that the series converges for all $-1<x<1$, i.e. $$ \frac{1}{1-x}=\lim _{n \rightarrow \infty}\left\{1+x+x^{2}+\cdots+x^{n}\right\}=1+x+x^{2}+x^{3}+\cdots $$ 18.3. Convergence of the exponential Taylor series. Let $f(x)=e^{x}$. It turns out the Taylor series of $e^{x}$ converges to $e^{x}$ for every value of $x$. Here's why: we had found that $$ T_{n} e^{x}=1+x+\frac{x^{2}}{2 !}+\cdots+\frac{x^{n}}{n !}, $$ and by Lagrange's formula the remainder is given by $$ R_{n} e^{x}=e^{\xi} \frac{x^{n+1}}{(n+1) !}, $$ where $\xi$ is some number between 0 and $x$. If $x>0$ then $0<\xi<x$ so that $e^{\xi} \leq e^{x}$; if $x<0$ then $x<\xi<0$ implies that $e^{\xi}<e^{0}=1$. Either way one has $e^{\xi} \leq e^{|x|}$, and thus $$ \left|R_{n} e^{x}\right| \leq e^{|x|} \frac{|x|^{n+1}}{(n+1) !} . $$ We have shown before that $\lim _{n \rightarrow \infty} x^{n+1} /(n+1) !=0$, so the Sandwich theorem again implies that $\lim _{n \rightarrow \infty}\left|R_{n} e^{x}\right|=0$. Conclusion: $$ e^{x}=\lim _{n \rightarrow \infty}\left\{1+x+\frac{x^{2}}{2 !}+\cdots+\frac{x^{n}}{n !}\right\}=1+x+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\frac{x^{4}}{4 !}+\cdots $$ Do Taylor series always converge? And if the series of some function $y=f(x)$ converges, must it then converge to $f(x)$ ? 
Although the Taylor series of most functions we run into converge to the function itself, the following example shows that it doesn't have to be so.

18.4. The day that all Chemistry stood still. The rate at which a chemical reaction " $\mathrm{A} \rightarrow \mathrm{B}$ " proceeds depends among other things on the temperature at which the reaction is taking place. This dependence is described by the Arrhenius law which states that the rate at which a reaction takes place is proportional to
$$
f(T)=e^{-\frac{\Delta E}{k T}}
$$
where $\Delta E$ is the amount of energy involved in each reaction, $k$ is Boltzmann's constant, and $T$ is the temperature in degrees Kelvin. If you ignore the constants $\Delta E$ and $k$ (i.e. if you set them equal to one by choosing the right units) then the reaction rate is proportional to
$$
f(T)=e^{-1 / T} .
$$
If you have to deal with reactions at low temperatures you might be inclined to replace this function with its Taylor series at $T=0$, or at least the first non-zero term in this series. If you were to do this you'd be in for a surprise. To see what happens, let's look at the following function,
$$
f(x)= \begin{cases}e^{-1 / x} & x>0 \\ 0 & x \leq 0\end{cases}
$$
This function goes to zero very quickly as $x \rightarrow 0$. In fact one has
$$
\lim _{x \searrow 0} \frac{f(x)}{x^{n}}=\lim _{x \searrow 0} \frac{e^{-1 / x}}{x^{n}}=\lim _{t \rightarrow \infty} t^{n} e^{-t}=0 . \quad(\text { set } t=1 / x)
$$
This implies
$$
f(x)=o\left(x^{n}\right) \quad(x \rightarrow 0)
$$
for any $n=1,2,3 \ldots$ As $x \rightarrow 0$, this function vanishes faster than any power of $x$.

Figure 6. An innocent looking function with an unexpected Taylor series. See example 18.4 which shows that even when a Taylor series of some function $f$ converges you can't be sure that it converges to $f$ - it could converge to a different function.

If you try to compute the Taylor series of $f$ you need its derivatives at $x=0$ of all orders. These can be computed (not easily), and the result turns out to be that all derivatives of $f$ vanish at $x=0$,
$$
f(0)=f^{\prime}(0)=f^{\prime \prime}(0)=f^{(3)}(0)=\cdots=0 .
$$
The Taylor series of $f$ is therefore
$$
T_{\infty} f(x)=0+0 \cdot x+0 \cdot \frac{x^{2}}{2 !}+0 \cdot \frac{x^{3}}{3 !}+\cdots=0 .
$$
Clearly this series converges (all terms are zero, after all), but instead of converging to the function $f(x)$ we started with, it converges to the function $g(x)=0$.

What does this mean for the chemical reaction rates and Arrhenius' law? We wanted to "simplify" the Arrhenius law by computing the Taylor series of $f(T)$ at $T=0$, but we have just seen that all terms in this series are zero. Therefore replacing the Arrhenius reaction rate by its Taylor series at $T=0$ has the effect of setting all reaction rates equal to zero.

## Leibniz' formulas for $\ln 2$ and $\pi / 4$

Leibniz showed that
$$
\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\cdots=\ln 2
$$
and
$$
\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\cdots=\frac{\pi}{4}
$$
Both formulas arise by setting $x=1$ in the Taylor series for
$$
\begin{aligned}
& \ln (1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\cdots \\
& \arctan x=x-\frac{x^{3}}{3}+\frac{x^{5}}{5}-\frac{x^{7}}{7}+\cdots
\end{aligned}
$$
This is only justified if you show that the series actually converge, which we'll do here, at least for the first of these two formulas. The proof of the second is similar. The following is not Leibniz' original proof.
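Before going through the proof, a quick numerical sanity check may be reassuring. The Python sketch below (an added illustration, not part of the argument) sums the first $n$ terms of both series:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - ...   (should approach ln 2)
# and of          1 - 1/3 + 1/5 - ...   (should approach pi/4).
def alt_harmonic(n):
    return sum((-1)**k / (k + 1) for k in range(n))

def leibniz(n):
    return sum((-1)**k / (2*k + 1) for k in range(n))

for n in (10, 100, 1000):
    print(f"n={n:5}:  sum1={alt_harmonic(n):.6f} (ln 2 = {math.log(2):.6f})   "
          f"sum2={leibniz(n):.6f} (pi/4 = {math.pi/4:.6f})")
```

The convergence is quite slow: the error after $n$ terms is roughly $1 / n$, which matches the estimate $\int_{0}^{1} x^{n+1} /(1+x) \mathrm{d} x \leq 1 /(n+2)$ obtained in the proof below.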
You begin with the geometric sum
$$
1-x+x^{2}-x^{3}+\cdots+(-1)^{n} x^{n}=\frac{1}{1+x}+\frac{(-1)^{n} x^{n+1}}{1+x}
$$
Then you integrate both sides from $x=0$ to $x=1$ and get
$$
\begin{aligned}
\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\cdots+(-1)^{n} \frac{1}{n+1} & =\int_{0}^{1} \frac{\mathrm{d} x}{1+x}+(-1)^{n} \int_{0}^{1} \frac{x^{n+1} \mathrm{~d} x}{1+x} \\
& =\ln 2+(-1)^{n} \int_{0}^{1} \frac{x^{n+1} \mathrm{~d} x}{1+x}
\end{aligned}
$$
(Use $\int_{0}^{1} x^{k} \mathrm{~d} x=\frac{1}{k+1}$.) Instead of computing the last integral you estimate it by saying
$$
0 \leq \frac{x^{n+1}}{1+x} \leq x^{n+1} \Longrightarrow 0 \leq \int_{0}^{1} \frac{x^{n+1} \mathrm{~d} x}{1+x} \leq \int_{0}^{1} x^{n+1} \mathrm{~d} x=\frac{1}{n+2}
$$
Hence
$$
\lim _{n \rightarrow \infty}(-1)^{n} \int_{0}^{1} \frac{x^{n+1} \mathrm{~d} x}{1+x}=0
$$
and we get
$$
\begin{aligned}
\lim _{n \rightarrow \infty} \frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\cdots+(-1)^{n} \frac{1}{n+1} & =\ln 2+\lim _{n \rightarrow \infty}(-1)^{n} \int_{0}^{1} \frac{x^{n+1} \mathrm{~d} x}{1+x} \\
& =\ln 2 .
\end{aligned}
$$
Euler proved that
$$
\frac{\pi^{2}}{6}=1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^{2}}+\cdots
$$

## Proof of Lagrange's formula

For simplicity assume $x>0$. Consider the function
$$
\text { (14) } \quad g(t)=f(0)+f^{\prime}(0) t+\frac{f^{\prime \prime}(0)}{2} t^{2}+\cdots+\frac{f^{(n)}(0)}{n !} t^{n}+K t^{n+1}-f(t),
$$
where
$$
K \stackrel{\text { def }}{=}-\frac{f(0)+f^{\prime}(0) x+\cdots+\frac{f^{(n)}(0)}{n !} x^{n}-f(x)}{x^{n+1}}
$$
We have chosen this particular $K$ to be sure that
$$
g(x)=0 .
$$
Just by computing the derivatives you also find that
$$
g(0)=g^{\prime}(0)=g^{\prime \prime}(0)=\cdots=g^{(n)}(0)=0,
$$
while
$$
\text { (15) } \quad g^{(n+1)}(t)=(n+1) ! K-f^{(n+1)}(t) .
$$
We now apply Rolle's Theorem $n+1$ times:

- since $g(t)$ vanishes at $t=0$ and at $t=x$ there exists an $x_{1}$ with $0<x_{1}<x$ such that $g^{\prime}\left(x_{1}\right)=0$
- since $g^{\prime}(t)$ vanishes at $t=0$ and at $t=x_{1}$ there exists an $x_{2}$ with $0<x_{2}<x_{1}$ such that $g^{\prime \prime}\left(x_{2}\right)=0$
- since $g^{\prime \prime}(t)$ vanishes at $t=0$ and at $t=x_{2}$ there exists an $x_{3}$ with $0<x_{3}<x_{2}$ such that $g^{(3)}\left(x_{3}\right)=0$
- and so on; finally, since $g^{(n)}(t)$ vanishes at $t=0$ and at $t=x_{n}$ there exists an $x_{n+1}$ with $0<x_{n+1}<x_{n}$ such that $g^{(n+1)}\left(x_{n+1}\right)=0$.

We now set $\xi=x_{n+1}$, and observe that we have shown that $g^{(n+1)}(\xi)=0$, so by (15) we get
$$
K=\frac{f^{(n+1)}(\xi)}{(n+1) !}
$$
Apply that to (14) and you finally get
$$
f(x)=f(0)+f^{\prime}(0) x+\cdots+\frac{f^{(n)}(0)}{n !} x^{n}+\frac{f^{(n+1)}(\xi)}{(n+1) !} x^{n+1} .
$$

## Proof of Theorem 16.8

21.1. Lemma. If $h(x)$ is a $k$ times differentiable function on some interval containing 0, and if for some integer $k \leq n$ one has $h(0)=h^{\prime}(0)=\cdots=h^{(k-1)}(0)=0$, then
$$
\lim _{x \rightarrow 0} \frac{h(x)}{x^{k}}=\frac{h^{(k)}(0)}{k !}
$$
Proof. Just apply l'Hopital's rule $k$ times. You get
$$
\begin{aligned}
& \lim _{x \rightarrow 0} \frac{h(x)}{x^{k}} \stackrel{=\frac{0}{0}}{=} \lim _{x \rightarrow 0} \frac{h^{\prime}(x)}{k x^{k-1}} \stackrel{=\frac{0}{0}}{=} \lim _{x \rightarrow 0} \frac{h^{(2)}(x)}{k(k-1) x^{k-2}} \stackrel{=\frac{0}{0}}{=} \cdots \\
& \cdots=\lim _{x \rightarrow 0} \frac{h^{(k-1)}(x)}{k(k-1) \cdots 2 x^{1}} \stackrel{=\frac{0}{0}}{=} \frac{h^{(k)}(0)}{k(k-1) \cdots 2 \cdot 1}
\end{aligned}
$$
First define the function $h(x)=f(x)-g(x)$. If $f(x)$ and $g(x)$ are $n$ times differentiable, then so is $h(x)$.
The condition $T_{n} f(x)=T_{n} g(x)$ means that
$$
f(0)=g(0), \quad f^{\prime}(0)=g^{\prime}(0), \quad \ldots, \quad f^{(n)}(0)=g^{(n)}(0),
$$
which says, in terms of $h(x)$,
$$
h(0)=h^{\prime}(0)=h^{\prime \prime}(0)=\cdots=h^{(n)}(0)=0, \qquad(\dagger)
$$
i.e.
$$
T_{n} h(x)=0 .
$$
We now prove the first part of the theorem: suppose $f(x)$ and $g(x)$ have the same $n$th degree Taylor polynomial. Then we have just argued that $T_{n} h(x)=0$, and Lemma 21.1 (with $k=n$) says that $\lim _{x \rightarrow 0} h(x) / x^{n}=0$, as claimed.

To conclude we show the converse also holds. So suppose that $\lim _{x \rightarrow 0} h(x) / x^{n}=0$. We'll show that $(\dagger)$ follows. If $(\dagger)$ were not true then there would be a smallest integer $k \leq n$ such that
$$
h(0)=h^{\prime}(0)=h^{\prime \prime}(0)=\cdots=h^{(k-1)}(0)=0, \text { but } h^{(k)}(0) \neq 0 .
$$
This runs into the following contradiction with Lemma 21.1
$$
0 \neq \frac{h^{(k)}(0)}{k !}=\lim _{x \rightarrow 0} \frac{h(x)}{x^{k}}=\lim _{x \rightarrow 0} \frac{h(x)}{x^{n}} \cdot \frac{x^{n}}{x^{k}}=0 \cdot \underbrace{\lim _{x \rightarrow 0} x^{n-k}}_{(*)}=0 .
$$
Here the limit $(*)$ exists because $n \geq k$.

## PROBLEMS

## TAYLOR'S FORMULA

179. Find a second order polynomial (i.e. a quadratic function) $Q(x)$ such that $Q(7)=43$, $Q^{\prime}(7)=19$, $Q^{\prime \prime}(7)=11$.

180. Find a second order polynomial $p(x)$ such that $p(2)=3$, $p^{\prime}(2)=8$, and $p^{\prime \prime}(2)=-1$.

181. A third order polynomial $P(x)$ satisfies $P(0)=1, P^{\prime}(0)=-3, P^{\prime \prime}(0)=-8, P^{\prime \prime \prime}(0)=24$. Find $P(x)$.

182. Let $f(x)=\sqrt{x+25}$. Find the polynomial $P(x)$ of degree three such that $P^{(k)}(0)=f^{(k)}(0)$ for $k=0,1,2,3$.

183. Let $f(x)=1+x-x^{2}-x^{3}$. Compute and graph $T_{0} f(x), T_{1} f(x), T_{2} f(x)$, $T_{3} f(x)$, and $T_{4} f(x)$, as well as $f(x)$ itself (so, for each of these functions find where they are positive or negative, where they are increasing/decreasing, and find the inflection points on their graph.)

184. Find $T_{3} \sin x$ and $T_{5} \sin x$. Graph $T_{3} \sin x$ and $T_{5} \sin x$ as well as $y=\sin x$ in one picture. (As before, find where these functions are positive or negative, where they are increasing/decreasing, and find the inflection points on their graph. This problem can and should be done without a graphing calculator.)

Compute $T_{0}^{a} f(x)$, $T_{1}^{a} f(x)$ and $T_{2}^{a} f(x)$ for the following functions.

185. $f(x)=x^{3}, a=0$; then for $a=1$ and $a=2$.

186. $f(x)=\frac{1}{x}, a=1$. Also do $a=2$.

187. $f(x)=\sqrt{x}, a=1$.

188. $f(x)=\ln x, a=1$. Also $a=e^{2}$.

189. $f(x)=\ln \sqrt{x}, a=1$.

190. $f(x)=\sin (2 x), a=0$, also $a=\pi / 4$.

191. $f(x)=\cos (x), a=\pi$.

192. $f(x)=(x-1)^{2}, a=0$, and also $a=1$.

193. $f(x)=\frac{1}{e^{x}}, a=0$.

194. Find the $n$th degree Taylor polynomial $T_{n}^{a} f(x)$ of the following functions $f(x)$

| $n$ | $a$ | $f(x)$ |
| :---: | :---: | :---: |
| 2 | 0 | $1+x-x^{3}$ |
| 3 | 0 | $1+x-x^{3}$ |
| 25 | 0 | $1+x-x^{3}$ |
| 25 | 2 | $1+x-x^{3}$ |
| 2 | 1 | $1+x-x^{3}$ |
| 1 | 1 | $x^{2}$ |
| 2 | 1 | $x^{2}$ |
| 5 | 1 | $1 / x$ |
| 5 | 0 | $1 /(1+x)$ |
| 3 | 0 | $1 /\left(1-3 x+2 x^{2}\right)$ |

For which of these combinations $(n, a, f(x))$ is $T_{n}^{a} f(x)$ the same as $f(x)$ ?

Compute the Taylor series $T_{\infty} f(t)$ for the following functions ( $\alpha$ is a constant). Give a formula for the coefficient of $t^{n}$ in $T_{\infty} f(t)$. (Be smart.
Remember properties of the logarithm, definitions of the hyperbolic functions, partial fraction decomposition.)

195. $e^{t}$

196. $e^{\alpha t}$

197. $\sin (3 t)$

198. $\sinh t$

199. $\cosh t$

200. $\frac{1}{1+2 t}$

201. $\frac{3}{(2-t)^{2}}$

202. $\ln (1+t)$

203. $\ln (2+2 t)$

204. $\ln \sqrt{1+t}$

205. $\ln (1+2 t)$

206. $\ln \sqrt{\frac{1+t}{1-t}}$

207. $\frac{1}{1-t^{2}} \quad$ [hint: PFD!]

208. $\frac{t}{1-t^{2}}$

209. $\sin t+\cos t$

210. $2 \sin t \cos t$

211. $\tan t$ (3 terms only)

212. $1+t^{2}-\frac{2}{3} t^{4}$

213. $(1+t)^{5}$

214. $\sqrt[3]{1+t}$

215. $f(x)=\frac{x^{4}}{1+4 x^{2}}$, what is $f^{(10)}(0)$ ?

216. Group problem. Compute the Taylor series of the following two functions
$$
f(x)=\sin a \cos x+\cos a \sin x
$$
and
$$
g(x)=\sin (a+x)
$$
where $a$ is a constant.

217. Group problem. Compute the Taylor series of the following two functions
$$
h(x)=\cos a \cos x-\sin a \sin x
$$
and
$$
k(x)=\cos (a+x)
$$
where $a$ is a constant.

218. Group problem. The following questions ask you to rediscover Newton's Binomial Formula, which is just the Taylor series for $(1+x)^{n}$. Newton's formula generalizes the formulas for $(a+b)^{2},(a+b)^{3}$, etc. that you get using Pascal's triangle. It allows non-integer exponents, which are allowed to be either positive or negative. Reread section 13 before doing this problem.
(a) Find the Taylor series of $f(x)=\sqrt{1+x}\left(=(1+x)^{1 / 2}\right)$
(b) Find the coefficient of $x^{4}$ in the Taylor series of $f(x)=(1+x)^{\pi}$ (don't do the arithmetic!)
(c) Let $p$ be any real number. Compute the terms of degree $0,1,2$ and 3 of the Taylor series of
$$
f(x)=(1+x)^{p}
$$
(d) Compute the Taylor polynomial of degree $n$ of $f(x)=(1+x)^{p}$.
(e) Write the result of (d) for the exponents $p=2,3$ and also for $p=-1,-2,-3$ and finally for $p=\frac{1}{2}$. The Binomial Theorem states that this series converges when $|x|<1$.

## LAGRANGE'S FORMULA FOR THE REMAINDER

219. Find the fourth degree Taylor polynomial $T_{4}\{\cos x\}$ for the function $f(x)=\cos x$ and estimate the error $\left|\cos x-T_{4}\{\cos x\}\right|$ for $|x|<1$.

220. Find the 4th degree Taylor polynomial $T_{4}\{\sin x\}$ for the function $f(x)=\sin x$. Estimate the error $\left|\sin x-T_{4}\{\sin x\}\right|$ for $|x|<1$.

221. (Computing the cube root of 9) The cube root of $8=2 \times 2 \times 2$ is easy, and 9 is only one more than 8. So you could try to compute $\sqrt[3]{9}$ by viewing it as $\sqrt[3]{8+1}$.
(a) Let $f(x)=\sqrt[3]{8+x}$. Find $T_{2} f(x)$, and estimate the error $\left|\sqrt[3]{9}-T_{2} f(1)\right|$.
(b) Repeat part (a) for " $n=3$ ", i.e. compute $T_{3} f(x)$ and estimate $\left|\sqrt[3]{9}-T_{3} f(1)\right|$.

222. Follow the method of problem 221 to compute $\sqrt{10}$ :
(a) Use Taylor's formula with $f(x)=\sqrt{9+x}, n=1$, to calculate $\sqrt{10}$ approximately. Show that the error is less than $1 / 216$.
(b) Repeat with $n=2$. Show that the error is less than 0.0003.

223. Find the eighth degree Taylor polynomial $T_{8} f(x)$ about the point 0 for the function $f(x)=\cos x$ and estimate the error $\left|\cos x-T_{8} f(x)\right|$ for $|x|<1$. Now find the ninth degree Taylor polynomial, and estimate $\left|\cos x-T_{9} f(x)\right|$ for $|x| \leq 1$.

## LITTLE-OH AND MANIPULATING TAYLOR POLYNOMIALS

Are the following statements True or False? In mathematics this means that you should either show that the statement always holds or else give at least one counterexample, thereby showing that the statement is not always true.

224.
$\left(1+x^{2}\right)^{2}-1=o(x)$ ?

225. $\left(1+x^{2}\right)^{2}-1=o\left(x^{2}\right)$ ?

226. $\sqrt{1+x}-\sqrt{1-x}=o(x)$ ?

227. $o(x)+o(x)=o(x)$ ?

228. $o(x)-o(x)=o(x)$ ?

229. $o(x) \cdot o(x)=o(x)$ ?

230. $o\left(x^{2}\right)+o(x)=o\left(x^{2}\right)$ ?

231. $o\left(x^{2}\right)-o\left(x^{2}\right)=o\left(x^{3}\right)$ ?

232. $o(2 x)=o(x)$ ?

233. $o(x)+o\left(x^{2}\right)=o(x)$ ?

234. $o(x)+o\left(x^{2}\right)=o\left(x^{2}\right)$ ?

235. $1-\cos x=o(x)$ ?

236. Define
$$
f(x)= \begin{cases}e^{-1 / x^{2}} & x \neq 0 \\ 0 & x=0\end{cases}
$$
This function goes to zero very quickly as $x \rightarrow 0$ but is 0 only at 0. Prove that $f(x)=o\left(x^{n}\right)$ for every $n$.

237. For which value(s) of $k$ is $\sqrt{1+x^{2}}=1+o\left(x^{k}\right)$ (as $x \rightarrow 0$)?
For which value(s) of $k$ is $\sqrt[3]{1+x^{2}}=1+o\left(x^{k}\right)$ (as $x \rightarrow 0$)?
For which value(s) of $k$ is $1-\cos x^{2}=o\left(x^{k}\right)$ (as $x \rightarrow 0$)?

238. Group problem. Let $g_{n}$ be the coefficient of $x^{n}$ in the Taylor series of the function
$$
g(x)=\frac{1}{2-3 x+x^{2}}
$$
(a) Compute $g_{0}$ and $g_{1}$ directly from the definition of the Taylor series.
(b) Show that the recursion relation $2 g_{n}=3 g_{n-1}-g_{n-2}$ holds for all $n \geq 2$.
(c) Compute $g_{2}, g_{3}, g_{4}, g_{5}$.
(d) Using a partial fraction decomposition of $g(x)$ find a formula for $g^{(n)}(0)$, and hence for $g_{n}$.

239. Answer the same questions as in the previous problem, for the functions
$$
h(x)=\frac{x}{2-3 x+x^{2}}
$$
and
$$
k(x)=\frac{2-x}{2-3 x+x^{2}} .
$$

240. Let $h_{n}$ be the coefficient of $x^{n}$ in the Taylor series of
$$
h(x)=\frac{1+x}{2-5 x+2 x^{2}} .
$$
(a) Find a recursion relation for the $h_{n}$.
(b) Compute $h_{0}, h_{1}, \ldots, h_{8}$.
(c) Derive a formula for $h_{n}$ valid for all $n$, by using a partial fraction expansion.
(d) Is $h_{2009}$ more or less than a million? A billion?

Find the Taylor series for the following functions, by substituting, adding, multiplying, applying long division and/or differentiating known series for $\frac{1}{1+x}, e^{x}, \sin x, \cos x$ and $\ln x$.

241. $e^{a t}$

242. $e^{1+t}$

243. $e^{-t^{2}}$

244. $\frac{1+t}{1-t}$

245. $\frac{1}{1+2 t}$

246.
$$
f(x)=\left\{\begin{array}{cl}
\frac{\sin (x)}{x} & \text { if } x \neq 0 \\
1 & \text { if } x=0
\end{array}\right.
$$

247. $\frac{\ln (1+x)}{x}$

248. $\frac{e^{t}}{1-t}$

249. $\frac{1}{\sqrt{1-t}}$

250. $\frac{1}{\sqrt{1-t^{2}}}$ (recommendation: use the answer to problem 249)

251. $\arcsin t$ (use problem 249 again)

252. Compute $T_{4}\left[e^{-t} \cos t\right]$ (See example 16.11.)

253. $T_{4}\left[e^{-t} \sin 2 t\right]$

254. $\frac{1}{2-t-t^{2}}$

255. $\sqrt[3]{1+2 t+t^{2}}$

256. $\ln \left(1-t^{2}\right)$

257. $\sin t \cos t$

## LIMITS OF SEQUENCES

Compute the following limits:

258. $\lim _{n \rightarrow \infty} \frac{n}{2 n-3}$

259. $\lim _{n \rightarrow \infty} \frac{n^{2}}{2 n-3}$

260. $\lim _{n \rightarrow \infty} \frac{n^{2}}{2 n^{2}+n-3}$

261. $\lim _{n \rightarrow \infty} \frac{2^{n}+1}{1-2^{n}}$

262. $\lim _{n \rightarrow \infty} \frac{2^{n}+1}{1-3^{n}}$

263. $\lim _{n \rightarrow \infty} \frac{e^{n}+1}{1-2^{n}}$

264. $\lim _{n \rightarrow \infty} \frac{n^{2}}{(1.01)^{n}}$

265. $\lim _{n \rightarrow \infty} \frac{1000^{n}}{n !}$

266. $\lim _{n \rightarrow \infty} \frac{n !+1}{(n+1) !}$

267. Group problem. Compute $\lim _{n \rightarrow \infty} \frac{(n !)^{2}}{(2 n) !}$ [Hint: write out all the factors in numerator and denominator.]

268. Group problem.
Let $f_{n}$ be the $n$th Fibonacci number. Compute $$ \lim _{n \rightarrow \infty} \frac{f_{n}}{f_{n-1}} $$ 269. Prove that the Taylor series for $f(x)=\cos x$ converges to $f(x)$ for all real numbers $x$ (by showing that the remainder term goes to zero as $n \rightarrow \infty)$. 270. Prove that the Taylor series for $g(x)=\sin (2 x)$ converges to $g(x)$ for all real numbers $x$. 271. Prove that the Taylor series for $h(x)=\cosh (x)$ converges to $h(x)$ for all real numbers $x$. 272. Prove that the Taylor series for $k(x)=e^{2 x+3}$ converges to $k(x)$ for all real numbers $x$. 273. Prove that the Taylor series for $\ell(x)=$ $\cos \left(x-\frac{\pi}{7}\right)$ converges to $\ell(x)$ for all real numbers $x$. 274. Group problem. If the Taylor series of a function $y=f(x)$ converges for all $x$, does it have to converge to $f(x)$, or could it converge to some other function? 275. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{1-x}$ converge to $f(x)$ ? 276. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{1-x^{2}}$ converge to $f(x)$ ? (hint: a substitution may help.) 277. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{1+x^{2}}$ converge to $f(x)$ ? 278. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{3+2 x}$ converge to $f(x)$ ? 279. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{2-5 x}$ converge to $f(x)$ ? 279. Group problem. For which real numbers $x$ does the Taylor series of $f(x)=\frac{1}{2-x-x^{2}}$ converge to $f(x)$ ? (hint: use PFD and the Geometric Series to find the remainder term.) 280. Show that the Taylor series for $f(x)=$ $\ln (1+x)$ converges when $-1<x<1$ by integrating the Geometric Series $$ \begin{gathered} \frac{1}{1+t}=1-t+t^{2}-t^{3}+\cdots \\ +(-1)^{n} t^{n}+(-1)^{n+1} \frac{t^{n+1}}{1+t} \\ \text { from } t=0 \text { to } t=x . \quad(\text { See } \S 19 .) \end{gathered} $$ 282. Show that the Taylor series for $f(x)=$ $e^{-x^{2}}$ converges for all real numbers $x$. (Set $t=-x^{2}$ in the Taylor series with remainder for $e^{t}$.) 283. Show that the Taylor series for $f(x)=$ $\sin \left(x^{4}\right)$ converges for all real numbers $x$. (Set $t=x^{4}$ in the Taylor series with remainder for $\sin t$.) 284. Show that the Taylor series for $f(x)=$ $1 /\left(1+x^{3}\right)$ converges whenever $-1<x<$ 1 (Use the Geometric Series.) 285. For which $x$ does the Taylor series of $f(x)=2 /\left(1+4 x^{2}\right)$ converge? (Again, use the Geometric Series.) 286. Group problem. The error function from statistics is defined by $$ \operatorname{erf}(x)=\frac{1}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2} / 2} \mathrm{~d} t $$ (a) Find the Taylor series of the error function from the Taylor series of $f(r)=e^{r}$ (set $r=-t^{2} / 2$ and integrate). (b) Estimate the error term and show that the Taylor series of the error function converges for all real $x$. 287. Group problem. Prove Leibniz' formula for $\frac{\pi}{4}$ by mimicking the proof in section 19. Specifically, find a formula for the remainder in : $$ \frac{1}{1+t^{2}}=1-t^{2}+\cdots+(-1)^{n} t^{2 n}+R_{2 n}(t) $$ and integrate this from $t=0$ to $t=1$. ## APPROXIMATING INTEGRALS 288. (a) Compute $T_{2}\{\sin t\}$ and give an upper bound for $R_{2}\{\sin t\}$ for $0 \leq t \leq 0.5$ (b) Use part (a) to approximate $\int_{0}^{0.5} \sin \left(x^{2}\right) d x$, and give an upper bound for the error in your approximation. 289. (a) Find the second degree Taylor polynomial for the function $e^{t}$. 
(b) Use it to give an estimate for the integral
$$
\int_{0}^{1} e^{x^{2}} d x
$$
(c) Suppose instead we used the 5th degree Taylor polynomial $p(t)$ for $e^{t}$ to give an estimate for the integral:
$$
\int_{0}^{1} e^{x^{2}} d x
$$
Give an upper bound for the error:
$$
\left|\int_{0}^{1} e^{x^{2}} d x-\int_{0}^{1} p\left(x^{2}\right) d x\right|
$$
Note: You need not find $p(t)$ or the integral $\int_{0}^{1} p\left(x^{2}\right) d x$.
290. Approximate $\int_{0}^{0.1} \arctan x d x$ and estimate the error in your approximation by analyzing $T_{2} f(t)$ and $R_{2} f(t)$ where $f(t)=\arctan t$.
291. Approximate $\int_{0}^{0.1} x^{2} e^{-x^{2}} d x$ and estimate the error in your approximation by analyzing $T_{3} f(t)$ and $R_{3} f(t)$ where $f(t)=t e^{-t}$.
292. Estimate $\int_{0}^{0.5} \sqrt{1+x^{4}} d x$ with an error of less than $10^{-4}$.
293. Estimate $\int_{0}^{0.1} \arctan x d x$ with an error of less than 0.001.

## Chapter 3: Complex Numbers and the Complex Exponential

## Complex numbers

The equation $x^{2}+1=0$ has no solutions, because for any real number $x$ the square $x^{2}$ is nonnegative, and so $x^{2}+1$ can never be less than 1. In spite of this it turns out to be very useful to assume that there is a number $i$ for which one has
$$
i^{2}=-1 .
$$
Any complex number is then an expression of the form $a+b i$, where $a$ and $b$ are old-fashioned real numbers. The number $a$ is called the real part of $a+b i$, and $b$ is called its imaginary part. Traditionally the letters $z$ and $w$ are used to stand for complex numbers.

Since any complex number is specified by two real numbers one can visualize them by plotting a point with coordinates $(a, b)$ in the plane for a complex number $a+b i$. The plane in which one plots these complex numbers is called the Complex plane, or Argand plane.

Figure 7. A complex number.

You can add, multiply and divide complex numbers. Here's how.

To add (subtract) $z=a+b i$ and $w=c+d i$:
$$
\begin{aligned}
& z+w=(a+b i)+(c+d i)=(a+c)+(b+d) i, \\
& z-w=(a+b i)-(c+d i)=(a-c)+(b-d) i .
\end{aligned}
$$
To multiply $z$ and $w$ proceed as follows:
$$
\begin{aligned}
z w & =(a+b i)(c+d i) \\
& =a(c+d i)+b i(c+d i) \\
& =a c+a d i+b c i+b d i^{2} \\
& =(a c-b d)+(a d+b c) i
\end{aligned}
$$
where we have used the defining property $i^{2}=-1$ to get rid of $i^{2}$.

To divide two complex numbers one always uses the following trick:
$$
\begin{aligned}
\frac{a+b i}{c+d i} & =\frac{a+b i}{c+d i} \cdot \frac{c-d i}{c-d i} \\
& =\frac{(a+b i)(c-d i)}{(c+d i)(c-d i)}
\end{aligned}
$$
Now
$$
(c+d i)(c-d i)=c^{2}-(d i)^{2}=c^{2}-d^{2} i^{2}=c^{2}+d^{2},
$$
so
$$
\begin{aligned}
\frac{a+b i}{c+d i} & =\frac{(a c+b d)+(b c-a d) i}{c^{2}+d^{2}} \\
& =\frac{a c+b d}{c^{2}+d^{2}}+\frac{b c-a d}{c^{2}+d^{2}} i
\end{aligned}
$$
Obviously you do not want to memorize this formula: instead you remember the trick, i.e. to divide $c+d i$ into $a+b i$ you multiply numerator and denominator with $c-d i$.

For any complex number $w=c+d i$ the number $c-d i$ is called its complex conjugate. Notation:
$$
w=c+d i, \quad \bar{w}=c-d i .
$$
A frequently used property of the complex conjugate is the following formula
$$
w \bar{w}=(c+d i)(c-d i)=c^{2}-(d i)^{2}=c^{2}+d^{2} .
$$
The following notation is used for the real and imaginary parts of a complex number $z$. If $z=a+b i$ then
$$
a=\text { the Real Part of } z=\mathfrak{R e}(z), \quad b=\text { the Imaginary Part of } z=\mathfrak{I m}(z) .
$$
Note that both $\mathfrak{R e} z$ and $\mathfrak{I m} z$ are real numbers.
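The arithmetic rules above are easy to check on a machine. The short Python sketch below is my own illustration (it is not part of these notes); it compares the multiplication and division formulas with Python's built-in complex arithmetic, in which `1j` plays the role of $i$, for one arbitrarily chosen pair of numbers.

```python
# A quick numerical check of the multiplication and division rules above,
# using Python's built-in complex numbers (1j plays the role of i).
a, b = 1.0, 2.0      # z = a + bi
c, d = 3.0, -4.0     # w = c + di
z, w = complex(a, b), complex(c, d)

# (a+bi)(c+di) = (ac - bd) + (ad + bc)i
prod = complex(a*c - b*d, a*d + b*c)
assert abs(z*w - prod) < 1e-12

# (a+bi)/(c+di) = (ac+bd)/(c^2+d^2) + (bc-ad)/(c^2+d^2) i
quot = complex((a*c + b*d)/(c**2 + d**2), (b*c - a*d)/(c**2 + d**2))
assert abs(z/w - quot) < 1e-12

# w times its complex conjugate is the real number c^2 + d^2
assert abs(w * w.conjugate() - (c**2 + d**2)) < 1e-12

print(z*w, z/w, z.real, z.imag)
```

The `assert` lines fail if any of the formulas had been copied incorrectly; changing `a, b, c, d` lets you test other examples.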
A common mistake is to say that $\mathfrak{I m} z=b i$. The " $i$ " should not be there.

## Argument and Absolute Value

For any given complex number $z=a+b i$ one defines the absolute value or modulus to be
$$
|z|=\sqrt{a^{2}+b^{2}},
$$
so $|z|$ is the distance from the origin to the point $z$ in the complex plane (see figure 7). The angle $\theta$ is called the argument of the complex number $z$. Notation:
$$
\arg z=\theta .
$$
The argument is defined in an ambiguous way: it is only defined up to a multiple of $2 \pi$. E.g. the argument of $-1$ could be $\pi$, or $-\pi$, or $3 \pi$, etc. In general one says $\arg (-1)=\pi+2 k \pi$, where $k$ may be any integer.

From trigonometry one sees that for any complex number $z=a+b i$ one has
$$
a=|z| \cos \theta, \text { and } b=|z| \sin \theta,
$$
so that
$$
z=|z| \cos \theta+i|z| \sin \theta=|z|(\cos \theta+i \sin \theta)
$$
and
$$
\tan \theta=\frac{\sin \theta}{\cos \theta}=\frac{b}{a} .
$$

24.1. Example: Find argument and absolute value of $z=2+i$. Solution: $|z|=\sqrt{2^{2}+1^{2}}=\sqrt{5}$. $z$ lies in the first quadrant so its argument $\theta$ is an angle between 0 and $\pi / 2$. From $\tan \theta=\frac{1}{2}$ we then conclude $\arg (2+i)=\theta=\arctan \frac{1}{2}$.

## Geometry of Arithmetic

Since we can picture complex numbers as points in the complex plane, we can also try to visualize the arithmetic operations "addition" and "multiplication." To add $z$ and $w$ one forms the parallelogram with the origin, $z$ and $w$ as vertices. The fourth vertex then is $z+w$. See figure 8.

Figure 8. Addition of $z=a+b i$ and $w=c+d i$

Figure 9. Multiplication of $a+b i$ by $i$.

To understand multiplication we first look at multiplication with $i$. If $z=a+b i$ then
$$
i z=i(a+b i)=i a+b i^{2}=a i-b=-b+a i .
$$
Thus, to form $i z$ from the complex number $z$ one rotates $z$ counterclockwise by 90 degrees. See figure 9.

If $a$ is any real number, then multiplication of $w=c+d i$ by $a$ gives
$$
a w=a c+a d i,
$$
so $a w$ points in the same direction, but is $a$ times as far away from the origin. If $a<0$ then $a w$ points in the opposite direction. See figure 10.

Figure 10. Multiplication of a real and a complex number

Next, to multiply $z=a+b i$ and $w=c+d i$ we write the product as
$$
z w=(a+b i) w=a w+b i w .
$$
Figure 11 shows $a+b i$ on the right. On the left, the complex number $w$ was first drawn, then $a w$ was drawn. Subsequently $i w$ and $b i w$ were constructed, and finally $z w=a w+b i w$ was drawn by adding $a w$ and $b i w$.

Figure 11. Multiplication of two complex numbers

One sees from figure 11 that since $i w$ is perpendicular to $w$, the line segment from 0 to $b i w$ is perpendicular to the segment from 0 to $a w$. Therefore the larger shaded triangle on the left is a right triangle. The length of the adjacent side is $a|w|$, and the length of the opposite side is $b|w|$. The ratio of these two lengths is $a: b$, which is the same as for the shaded right triangle on the right, so we conclude that these two triangles are similar. The triangle on the left is $|w|$ times as large as the triangle on the right. The two angles marked $\theta$ are equal. Since $|z w|$ is the length of the hypotenuse of the shaded triangle on the left, it is $|w|$ times the hypotenuse of the triangle on the right, i.e. $|z w|=|w| \cdot|z|$.
The argument of $z w$ is the angle $\theta+\varphi$; since $\theta=\arg z$ and $\varphi=\arg w$ we get the following two formulas
$$
\begin{aligned}
|z w| & =|z| \cdot|w| \\
\arg (z w) & =\arg z+\arg w,
\end{aligned}
$$
in other words,
$$
\begin{gathered}
\text { when you multiply complex numbers, their lengths get multiplied } \\
\text { and their arguments get added. }
\end{gathered}
$$

## Applications in Trigonometry

26.1. Unit length complex numbers. For any $\theta$ the number $z=\cos \theta+i \sin \theta$ has length 1: it lies on the unit circle. Its argument is $\arg z=\theta$. Conversely, any complex number on the unit circle is of the form $\cos \phi+i \sin \phi$, where $\phi$ is its argument.

26.2. The Addition Formulas for Sine \& Cosine. For any two angles $\theta$ and $\phi$ one can multiply $z=\cos \theta+i \sin \theta$ and $w=\cos \phi+i \sin \phi$. The product $z w$ is a complex number of absolute value $|z w|=|z| \cdot|w|=1 \cdot 1=1$, and with argument $\arg (z w)=\arg z+\arg w=\theta+\phi$. So $z w$ lies on the unit circle and must be $\cos (\theta+\phi)+i \sin (\theta+\phi)$. Thus we have
$$
(\cos \theta+i \sin \theta)(\cos \phi+i \sin \phi)=\cos (\theta+\phi)+i \sin (\theta+\phi) .
$$
By multiplying out the Left Hand Side we get
$$
\begin{aligned}
(\cos \theta+i \sin \theta)(\cos \phi+i \sin \phi)= & \cos \theta \cos \phi-\sin \theta \sin \phi \\
& +i(\sin \theta \cos \phi+\cos \theta \sin \phi) .
\end{aligned}
$$
Compare the Right Hand Sides of (21) and (22), and you get the addition formulas for Sine and Cosine:
$$
\begin{aligned}
\cos (\theta+\phi) & =\cos \theta \cos \phi-\sin \theta \sin \phi \\
\sin (\theta+\phi) & =\sin \theta \cos \phi+\cos \theta \sin \phi
\end{aligned}
$$

26.3. De Moivre's formula. For any complex number $z$ the argument of its square $z^{2}$ is $\arg \left(z^{2}\right)=\arg (z \cdot z)=\arg z+\arg z=2 \arg z$. The argument of its cube is $\arg z^{3}=\arg \left(z \cdot z^{2}\right)=\arg (z)+\arg z^{2}=\arg z+2 \arg z=3 \arg z$. Continuing like this one finds that
$$
\arg z^{n}=n \arg z
$$
for any integer $n$. Applying this to $z=\cos \theta+i \sin \theta$ you find that $z^{n}$ is a number with absolute value $\left|z^{n}\right|=|z|^{n}=1^{n}=1$, and argument $n \arg z=n \theta$. Hence $z^{n}=\cos n \theta+i \sin n \theta$. So we have found
$$
(\cos \theta+i \sin \theta)^{n}=\cos n \theta+i \sin n \theta .
$$
This is de Moivre's formula.

For instance, for $n=2$ this tells us that
$$
\cos 2 \theta+i \sin 2 \theta=(\cos \theta+i \sin \theta)^{2}=\cos ^{2} \theta-\sin ^{2} \theta+2 i \cos \theta \sin \theta .
$$
Comparing real and imaginary parts on left and right hand sides this gives you the double angle formulas $\cos 2 \theta=\cos ^{2} \theta-\sin ^{2} \theta$ and $\sin 2 \theta=2 \sin \theta \cos \theta$.

For $n=3$ you get, using the Binomial Theorem, or Pascal's triangle,
$$
\begin{aligned}
(\cos \theta+i \sin \theta)^{3} & =\cos ^{3} \theta+3 i \cos ^{2} \theta \sin \theta+3 i^{2} \cos \theta \sin ^{2} \theta+i^{3} \sin ^{3} \theta \\
& =\cos ^{3} \theta-3 \cos \theta \sin ^{2} \theta+i\left(3 \cos ^{2} \theta \sin \theta-\sin ^{3} \theta\right)
\end{aligned}
$$
so that
$$
\cos 3 \theta=\cos ^{3} \theta-3 \cos \theta \sin ^{2} \theta
$$
and
$$
\sin 3 \theta=3 \cos ^{2} \theta \sin \theta-\sin ^{3} \theta .
$$
In this way it is fairly easy to write down similar formulas for $\sin 4 \theta, \sin 5 \theta$, etc...
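A computer algebra system can grind out these multiple-angle formulas mechanically. The sketch below is an illustration of mine (it is not part of these notes) and assumes the SymPy library: it expands $(\cos \theta+i \sin \theta)^{n}$ and reads off $\cos n\theta$ and $\sin n\theta$ as the real and imaginary parts.

```python
# Multiple-angle formulas from de Moivre's formula, using SymPy.
import math
import sympy as sp

theta = sp.symbols('theta', real=True)

def multiple_angle(n):
    # Expand (cos(theta) + i*sin(theta))**n; since theta is real, the real
    # part of the result is cos(n*theta) and the imaginary part is sin(n*theta).
    z = sp.expand((sp.cos(theta) + sp.I*sp.sin(theta))**n)
    return sp.re(z), sp.im(z)

cos3, sin3 = multiple_angle(3)
print(cos3)   # equals cos(theta)**3 - 3*cos(theta)*sin(theta)**2
print(sin3)   # equals 3*cos(theta)**2*sin(theta) - sin(theta)**3

# Numerical spot check at theta = 0.7:
c3 = float(cos3.subs(theta, 0.7))
s3 = float(sin3.subs(theta, 0.7))
assert abs(c3 - math.cos(3*0.7)) < 1e-12
assert abs(s3 - math.sin(3*0.7)) < 1e-12
```

Calling `multiple_angle(4)` or `multiple_angle(5)` produces the formulas for $\cos 4\theta$, $\sin 4\theta$, $\cos 5\theta$, $\sin 5\theta$ mentioned above (and asked for in problem 298).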
## Calculus of complex valued functions

A complex valued function on some interval $I=(a, b) \subseteq \mathbb{R}$ is a function $f: I \rightarrow \mathbb{C}$. Such a function can be written in terms of its real and imaginary parts,
$$
f(x)=u(x)+i v(x)
$$
in which $u, v: I \rightarrow \mathbb{R}$ are two real valued functions.

One defines limits of complex valued functions in terms of limits of their real and imaginary parts. Thus we say that
$$
\lim _{x \rightarrow x_{0}} f(x)=L
$$
if $f(x)=u(x)+i v(x)$, $L=A+i B$, and both
$$
\lim _{x \rightarrow x_{0}} u(x)=A \text { and } \lim _{x \rightarrow x_{0}} v(x)=B
$$
hold. From this definition one can prove that the usual limit theorems also apply to complex valued functions.

27.1. Theorem. If $\lim _{x \rightarrow x_{0}} f(x)=L$ and $\lim _{x \rightarrow x_{0}} g(x)=M$, then one has
$$
\begin{aligned}
\lim _{x \rightarrow x_{0}} f(x) \pm g(x) & =L \pm M, \\
\lim _{x \rightarrow x_{0}} f(x) g(x) & =L M, \\
\lim _{x \rightarrow x_{0}} \frac{f(x)}{g(x)} & =\frac{L}{M}, \text { provided } M \neq 0 .
\end{aligned}
$$
The derivative of a complex valued function $f(x)=u(x)+i v(x)$ is defined by simply differentiating its real and imaginary parts:
$$
f^{\prime}(x)=u^{\prime}(x)+i v^{\prime}(x)
$$
Again, one finds that the sum, product and quotient rules also hold for complex valued functions.

27.2. Theorem. If $f, g: I \rightarrow \mathbb{C}$ are complex valued functions which are differentiable at some $x_{0} \in I$, then the functions $f \pm g, f g$ and $f / g$ are differentiable (assuming $g\left(x_{0}\right) \neq 0$ in the case of the quotient). One has
$$
\begin{aligned}
(f \pm g)^{\prime}\left(x_{0}\right) & =f^{\prime}\left(x_{0}\right) \pm g^{\prime}\left(x_{0}\right) \\
(f g)^{\prime}\left(x_{0}\right) & =f^{\prime}\left(x_{0}\right) g\left(x_{0}\right)+f\left(x_{0}\right) g^{\prime}\left(x_{0}\right) \\
\left(\frac{f}{g}\right)^{\prime}\left(x_{0}\right) & =\frac{f^{\prime}\left(x_{0}\right) g\left(x_{0}\right)-f\left(x_{0}\right) g^{\prime}\left(x_{0}\right)}{g\left(x_{0}\right)^{2}}
\end{aligned}
$$
Note that the chain rule does not appear in this list! See problem 324 for more about the chain rule.

## The Complex Exponential Function

We finally give a definition of $e^{a+b i}$. First we consider the case $a=0$:

Figure 12. Euler's definition of $e^{i \theta}$

28.1. Definition. For any real number $t$ we set
$$
e^{i t}=\cos t+i \sin t .
$$
See Figure 12.

28.2. Example. $e^{\pi i}=\cos \pi+i \sin \pi=-1$. This leads to Euler's famous formula
$$
e^{\pi i}+1=0,
$$
which combines the five most basic quantities in mathematics: $e, \pi, i, 1$, and 0 .

## Reasons why the definition 28.1 seems a good definition.

Reason 1. We haven't defined $e^{i t}$ before and we can do anything we like.

Reason 2. Substitute $i t$ in the Taylor series for $e^{x}$ :
$$
\begin{aligned}
e^{i t}= & 1+i t+\frac{(i t)^{2}}{2 !}+\frac{(i t)^{3}}{3 !}+\frac{(i t)^{4}}{4 !}+\cdots \\
= & 1+i t-\frac{t^{2}}{2 !}-i \frac{t^{3}}{3 !}+\frac{t^{4}}{4 !}+i \frac{t^{5}}{5 !}-\cdots \\
= & 1-t^{2} / 2 !+t^{4} / 4 !-\cdots \\
& \quad+i\left(t-t^{3} / 3 !+t^{5} / 5 !-\cdots\right) \\
= & \cos t+i \sin t .
\end{aligned}
$$
This is not a proof, because before we had only proved the convergence of the Taylor series for $e^{x}$ if $x$ was a real number, and here we have pretended that the series is also good if you substitute $x=i t$.

Reason 3. As a function of $t$ the definition 28.1 gives us the correct derivative. Namely, using the chain rule (i.e.
pretending it still applies for complex functions) we would get
$$
\frac{d e^{i t}}{d t}=i e^{i t} .
$$
Indeed, this is correct. To see this proceed from our definition 28.1:
$$
\begin{aligned}
\frac{d e^{i t}}{d t} & =\frac{d(\cos t+i \sin t)}{d t} \\
& =\frac{d \cos t}{d t}+i \frac{d \sin t}{d t} \\
& =-\sin t+i \cos t \\
& =i(\cos t+i \sin t)
\end{aligned}
$$
Reason 4. The formula $e^{x} \cdot e^{y}=e^{x+y}$ still holds. Rather, we have $e^{i t+i s}=e^{i t} e^{i s}$. To check this replace the exponentials by their definition:
$$
e^{i t} e^{i s}=(\cos t+i \sin t)(\cos s+i \sin s)=\cos (t+s)+i \sin (t+s)=e^{i(t+s)} .
$$
Requiring $e^{x} \cdot e^{y}=e^{x+y}$ to be true for all complex numbers helps us decide what $e^{a+b i}$ should be for arbitrary complex numbers $a+b i$.

28.3. Definition. For any complex number $a+b i$ we set
$$
e^{a+b i}=e^{a} \cdot e^{i b}=e^{a}(\cos b+i \sin b) .
$$
One verifies as above in "reason 3" that this gives us the right behaviour under differentiation. Thus, for any complex number $r=a+b i$ the function
$$
y(t)=e^{r t}=e^{a t}(\cos b t+i \sin b t)
$$
satisfies
$$
y^{\prime}(t)=\frac{d e^{r t}}{d t}=r e^{r t}
$$

## Complex solutions of polynomial equations

29.1. Quadratic equations. The well-known quadratic formula tells you that the equation
$$
a x^{2}+b x+c=0
$$
has two solutions, given by
$$
x_{ \pm}=\frac{-b \pm \sqrt{D}}{2 a}, \quad D=b^{2}-4 a c .
$$
If the coefficients $a, b, c$ are real numbers and if the discriminant $D$ is positive, then this formula does indeed give two real solutions $x_{+}$and $x_{-}$. However, if $D<0$, then there are no real solutions, but there are two complex solutions, namely
$$
x_{ \pm}=\frac{-b}{2 a} \pm i \frac{\sqrt{-D}}{2 a}
$$
29.2. Example: solve $x^{2}+2 x+5=0$. Solution: Use the quadratic formula, or complete the square:
$$
\begin{aligned}
& x^{2}+2 x+5=0 \\
\Longleftrightarrow & x^{2}+2 x+1=-4 \\
\Longleftrightarrow & (x+1)^{2}=-4 \\
\Longleftrightarrow & x+1= \pm 2 i \\
\Longleftrightarrow & x=-1 \pm 2 i .
\end{aligned}
$$
So, if you allow complex solutions then every quadratic equation has two solutions, unless the two solutions coincide (the case $D=0$, in which there is only one solution).

Figure 13. The sixth roots of 1. There are six of them, and they are arranged in a regular hexagon.

29.3. Complex roots of a number. For any given complex number $w$ there is a method of finding all complex solutions of the equation
$$
z^{n}=w
$$
if $n=2,3,4, \cdots$ is a given integer. To find these solutions you write $w$ in polar form, i.e. you find $r>0$ and $\theta$ such that $w=r e^{i \theta}$. Then
$$
z=r^{1 / n} e^{i \theta / n}
$$
is a solution to (29). But it isn't the only solution, because the angle $\theta$ for which $w=r e^{i \theta}$ isn't unique - it is only determined up to a multiple of $2 \pi$. Thus if we have found one angle $\theta$ for which $w=r e^{i \theta}$, then we can also write
$$
w=r e^{i(\theta+2 k \pi)}, \quad k=0, \pm 1, \pm 2, \cdots
$$
The $n^{\text {th }}$ roots of $w$ are then
$$
z_{k}=r^{1 / n} e^{i\left(\frac{\theta}{n}+2 \frac{k}{n} \pi\right)}
$$
Here $k$ can be any integer, so it looks as if there are infinitely many solutions. However, if you increase $k$ by $n$, then the exponent above increases by $2 \pi i$, and hence $z_{k}$ does not change. In a formula:
$$
z_{n}=z_{0}, \quad z_{n+1}=z_{1}, \quad z_{n+2}=z_{2}, \quad \ldots \quad z_{k+n}=z_{k}
$$
So if you take $k=0,1,2, \cdots, n-1$ then you have found all the solutions.
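This recipe takes only a few lines of code. The Python sketch below is my own illustration (not part of these notes), using the standard `cmath` module: it computes the numbers $z_{k}=r^{1 / n} e^{i(\theta / n+2 k \pi / n)}$ for $k=0,1, \ldots, n-1$ and checks that each one really solves $z^{n}=w$.

```python
import cmath

def nth_roots(w, n):
    """Return the n solutions z of z**n = w, following the recipe above."""
    r, theta = cmath.polar(w)   # polar form: w = r * e^{i*theta}
    return [r**(1/n) * cmath.exp(1j*(theta/n + 2*k*cmath.pi/n))
            for k in range(n)]

# Example: the sixth roots of 1 (compare with the worked example below).
roots = nth_roots(1, 6)
for z in roots:
    print(round(z.real, 3), round(z.imag, 3))

# Each root satisfies z**6 = 1 up to rounding error:
assert all(abs(z**6 - 1) < 1e-9 for z in roots)
```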
The solutions $z_{k}$ always form a regular polygon with $n$ sides.

29.4. Example: find all sixth roots of $w=1$. We are to solve $z^{6}=1$. First write 1 in polar form,
$$
1=1 \cdot e^{0 i}=1 \cdot e^{2 k \pi i}, \quad(k=0, \pm 1, \pm 2, \ldots) .
$$
Then we take the $6^{\text {th }}$ root and find
$$
z_{k}=1^{1 / 6} e^{2 k \pi i / 6}=e^{k \pi i / 3}, \quad(k=0, \pm 1, \pm 2, \ldots) .
$$
The six roots are
$$
\begin{array}{lll}
z_{0}=1 & z_{1}=e^{\pi i / 3}=\frac{1}{2}+\frac{i}{2} \sqrt{3} & z_{2}=e^{2 \pi i / 3}=-\frac{1}{2}+\frac{i}{2} \sqrt{3} \\
z_{3}=-1 & z_{4}=e^{4 \pi i / 3}=-\frac{1}{2}-\frac{i}{2} \sqrt{3} & z_{5}=e^{5 \pi i / 3}=\frac{1}{2}-\frac{i}{2} \sqrt{3}
\end{array}
$$

## Other handy things you can do with complex numbers

30.1. Partial fractions. Consider the partial fraction decomposition
$$
\frac{x^{2}+3 x-4}{(x-2)\left(x^{2}+4\right)}=\frac{A}{x-2}+\frac{B x+C}{x^{2}+4}
$$
The coefficient $A$ is easy to find: multiply with $x-2$ and set $x=2$ (or rather, take the limit $x \rightarrow 2$) to get
$$
A=\frac{2^{2}+3 \cdot 2-4}{2^{2}+4}=\cdots .
$$
Before we had no similar way of finding $B$ and $C$ quickly, but now we can apply the same trick: multiply with $x^{2}+4$,
$$
\frac{x^{2}+3 x-4}{(x-2)}=B x+C+\left(x^{2}+4\right) \frac{A}{x-2},
$$
and substitute $x=2 i$. This makes $x^{2}+4=0$, with result
$$
\frac{(2 i)^{2}+3 \cdot 2 i-4}{(2 i-2)}=2 i B+C .
$$
Simplify the complex number on the left:
$$
\begin{aligned}
\frac{(2 i)^{2}+3 \cdot 2 i-4}{(2 i-2)} & =\frac{-4+6 i-4}{-2+2 i} \\
& =\frac{-8+6 i}{-2+2 i} \\
& =\frac{(-8+6 i)(-2-2 i)}{(-2)^{2}+2^{2}} \\
& =\frac{28+4 i}{8} \\
& =\frac{7}{2}+\frac{i}{2}
\end{aligned}
$$
So we get $2 i B+C=\frac{7}{2}+\frac{i}{2}$; since $B$ and $C$ are real numbers this implies
$$
B=\frac{1}{4}, \quad C=\frac{7}{2} .
$$

30.2. Certain trigonometric and exponential integrals. You can compute
$$
I=\int e^{3 x} \cos 2 x \mathrm{~d} x
$$
by integrating by parts twice. You can also use that $\cos 2 x$ is the real part of $e^{2 i x}$. Instead of computing the real integral $I$, we look at the following related complex integral
$$
J=\int e^{3 x} e^{2 i x} \mathrm{~d} x
$$
which we get from $I$ by replacing $\cos 2 x$ with $e^{2 i x}$. Since $e^{2 i x}=\cos 2 x+i \sin 2 x$ we have
$$
J=\int e^{3 x}(\cos 2 x+i \sin 2 x) \mathrm{d} x=\int e^{3 x} \cos 2 x \mathrm{~d} x+i \int e^{3 x} \sin 2 x \mathrm{~d} x,
$$
i.e., $J=I+$ something imaginary.

The point of all this is that $J$ is easier to compute than $I$ :
$$
J=\int e^{3 x} e^{2 i x} \mathrm{~d} x=\int e^{3 x+2 i x} \mathrm{~d} x=\int e^{(3+2 i) x} \mathrm{~d} x=\frac{e^{(3+2 i) x}}{3+2 i}+C
$$
where we have used that
$$
\int e^{a x} \mathrm{~d} x=\frac{1}{a} e^{a x}+C
$$
holds even if $a$ is a complex number such as $a=3+2 i$.

To find $I$ you have to compute the real part of $J$, which you do as follows:
$$
\begin{aligned}
\frac{e^{(3+2 i) x}}{3+2 i} & =e^{3 x} \frac{\cos 2 x+i \sin 2 x}{3+2 i} \\
& =e^{3 x} \frac{(\cos 2 x+i \sin 2 x)(3-2 i)}{(3+2 i)(3-2 i)} \\
& =e^{3 x} \frac{3 \cos 2 x+2 \sin 2 x+i(\cdots)}{13}
\end{aligned}
$$
so
$$
\int e^{3 x} \cos 2 x \mathrm{~d} x=e^{3 x}\left(\frac{3}{13} \cos 2 x+\frac{2}{13} \sin 2 x\right)+C .
$$

30.3. Complex amplitudes. A harmonic oscillation is given by
$$
y(t)=A \cos (\omega t-\phi),
$$
where $A$ is the amplitude, $\omega$ is the frequency, and $\phi$ is the phase of the oscillation. If you add two harmonic oscillations with the same frequency $\omega$, then you get another harmonic oscillation with frequency $\omega$.
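Before the derivation that follows, this claim can be checked numerically. The Python sketch below is an illustration of mine (not part of these notes) and anticipates the recipe derived in the next paragraphs: it adds the two "complex amplitudes" $A e^{-i \phi}$ and $B e^{-i \theta}$ and compares $C \cos (\omega t-\psi)$ with the direct sum of the two cosines at a few sample times.

```python
# Numerical check: A*cos(w*t - phi) + B*cos(w*t - theta) equals C*cos(w*t - psi),
# where C*e^{-i*psi} = A*e^{-i*phi} + B*e^{-i*theta}  (the recipe derived below).
import cmath, math

A, phi = 2.0, 0.3
B, theta = 1.5, 1.1
omega = 5.0

amp = A*cmath.exp(-1j*phi) + B*cmath.exp(-1j*theta)   # C * e^{-i*psi}
C, minus_psi = cmath.polar(amp)                       # polar() gives (C, -psi)
psi = -minus_psi

for t in (0.0, 0.25, 0.7, 1.3):
    direct = A*math.cos(omega*t - phi) + B*math.cos(omega*t - theta)
    single = C*math.cos(omega*t - psi)
    assert abs(direct - single) < 1e-12

print(C, psi)
```

The particular values of $A, B, \phi, \theta, \omega$ above are arbitrary choices for the test; any others work as well.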
You can prove this using the addition formulas for cosines, but there's another way using complex exponentials. It goes like this. Let $y(t)=A \cos (\omega t-\phi)$ and $z(t)=B \cos (\omega t-\theta)$ be the two harmonic oscillations we wish to add. They are the real parts of $$ \begin{aligned} & Y(t)=A\{\cos (\omega t-\phi)+i \sin (\omega t-\phi)\}=A e^{i \omega t-i \phi}=A e^{-i \phi} e^{i \omega t} \\ & Z(t)=B\{\cos (\omega t-\theta)+i \sin (\omega t-\theta)\}=B e^{i \omega t-i \theta}=B e^{-i \theta} e^{i \omega t} \end{aligned} $$ Therefore $y(t)+z(t)$ is the real part of $Y(t)+Z(t)$, i.e. $$ y(t)+z(t)=\mathfrak{R e}(Y(t))+\mathfrak{R e}(Z(t))=\mathfrak{R e}(Y(t)+Z(t)) . $$ The quantity $Y(t)+Z(t)$ is easy to compute: $$ Y(t)+Z(t)=A e^{-i \phi} e^{i \omega t}+B e^{-i \theta} e^{i \omega t}=\left(A e^{-i \phi}+B e^{-i \theta}\right) e^{i \omega t} . $$ If you now do the complex addition $$ A e^{-i \phi}+B e^{-i \theta}=C e^{-i \psi}, $$ i.e. you add the numbers on the right, and compute the absolute value $C$ and argument $-\psi$ of the sum, then we see that $Y(t)+Z(t)=C e^{i(\omega t-\psi)}$. Since we were looking for the real part of $Y(t)+Z(t)$, we get $$ y(t)+z(t)=A \cos (\omega t-\phi)+B \cos (\omega t-\theta)=C \cos (\omega t-\psi) . $$ The complex numbers $A e^{-i \phi}, B e^{-i \theta}$ and $C e^{-i \psi}$ are called the complex amplitudes for the harmonic oscillations $y(t), z(t)$ and $y(t)+z(t)$. The recipe for adding harmonic oscillations can therefore be summarized as follows: Add the complex amplitudes. ## PROBLEMS ## COMPUTING AND DRAWING COMPLEX NUMBERS 294. Compute the following complex numbers by hand. Draw all numbers in the complex (or "Argand") plane (use graph paper or quad paper if necessary). Compute absolute value and argument of all numbers involved. $$ \begin{aligned} & i^{2} ; i^{3} ; i^{4} ; 1 / i ; \\ & (1+2 i)(2-i) ; \end{aligned} $$ $$ \begin{aligned} & (1+i)(1+2 i)(1+3 i) ; \\ & \left(\frac{1}{2} \sqrt{2}+\frac{i}{2} \sqrt{2}\right)^{2} ;\left(\frac{1}{2}+\frac{i}{2} \sqrt{3}\right)^{3} ; \\ & \frac{1}{1+i} ; 5 /(2-i) ; \end{aligned} $$ 295. Simplify your answer. - For $z=2+3 i$ find: (1) $z^{2}$ (2) $\bar{z}$ (3) $|z|$ (4) $\frac{1}{z}$ - For $z=2 e^{3 i}$ find: (1) $\arg (z)$ (2) $|z|$ (3) $z^{2}$ (4) $\frac{1}{z}$ - For $z=-\pi e^{\frac{\pi}{2} i}$ find: (1) $|z|$ (2) $\arg (z)$ 296. Plot the following four points in the complex plane. Be sure and label them. $$ R=\frac{P=\sqrt{2}}{1+2 i} e^{\frac{5 \pi}{4} i} \quad Z=\frac{1}{1+2 i}^{Q=1+2 i} $$ 297. [Deriving the addition formula for $\tan (\theta+\phi)]$ Group problem. Let $\theta, \phi \in\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ be two angles. (a) What are the arguments of $$ z=1+i \tan \theta \text { and } w=1+i \tan \phi ? $$ (Draw both $z$ and $w$.) (b) Compute $z w$. (c) What is the argument of $z w$ ? (d) Compute $\tan (\arg z w)$. 298. Find formulas for $\cos 4 \theta, \sin 4 \theta, \cos 5 \theta$ and $\sin 6 \theta$ in terms of $\cos \theta$ and $\sin \theta$, by using de Moivre's formula. 299. In the following picture draw $2 w, \frac{3}{4} w$, $i w,-2 i w,(2+i) w$ and $(2-i) w$. (Try to make a nice drawing, use a ruler.) Make a new copy of the picture, and draw $\bar{w},-\bar{w}$ and $-w$. Make yet another copy of the drawing. Draw $1 / w, 1 / \bar{w}$, and $-1 / w$. For this drawing you need to know where the unit circle is in your drawing: Draw a circle centered at the origin with radius of your choice, and let this be the unit circle. 
[Depending on which circle you draw you will get a different answer!] 300. Verify directly from the definition of addition and multiplication of complex numbers that $$ \begin{aligned} & \text { (a) } z+w=w+z \\ & \text { (b) } z w=w z \\ & \text { (c) } z(v+w)=z v+z w \end{aligned} $$ holds for all complex numbers $v, w$, and $z$. 301. True or False? (In mathematics this means that you should either give a proof that the statement is always true, or else give a counterexample, thereby showing that the statement is not always true.) For any complex numbers $z$ and $w$ one has (a) $\mathfrak{R e}(z)+\mathfrak{R e}(w)=\mathfrak{R e}(z+w)$ (b) $\overline{z+w}=\bar{z}+\bar{w}$ (c) $\mathfrak{I m}(z)+\mathfrak{I m}(w)=\mathfrak{I m}(z+w)$ (d) $\overline{z w}=(\bar{z})(\bar{w})$ (e) $\mathfrak{R e}(z) \mathfrak{R e}(w)=\mathfrak{R e}(z w)$ (f) $\overline{z / w}=(\bar{z}) /(\bar{w})$ (g) $\mathfrak{R e}(i z)=\mathfrak{I m}(z)$ (h) $\mathfrak{R e}(i z)=i \mathfrak{R e}(z)$ (i) $\mathfrak{R e}(i z)=i \mathfrak{I m}(z)$ (j) $\mathfrak{I m}(i z)=\mathfrak{R e}(z)$ (k) $\mathfrak{R e}(\bar{z})=\mathfrak{R e}(z)$ 302. Group problem. The imaginary part of a complex number is known to be twice its real part. The absolute value of this number is 4 . Which number is this? 303. The real part of a complex number is known to be half the absolute value of that number. The imaginary part of the number is 1 . Which number is it? ## THE COMPLEX EXPONENTIAL 304. Compute and draw the following numbers in the complex plane $$ \begin{aligned} & e^{\pi i / 3} ; e^{\pi i / 2} ; \sqrt{2} e^{3 \pi i / 4} ; e^{17 \pi i / 4} . \\ & e^{\pi i}+1 ; e^{i \ln 2} . \end{aligned} $$ $$ \begin{aligned} & \frac{1}{e^{\pi i / 4}} ; \frac{e^{-\pi i}}{e^{\pi i / 4}} ; \frac{e^{2-\pi i / 2}}{e^{\pi i / 4}} \\ & e^{2009 \pi i} ; e^{2009 \pi i / 2} \\ & -8 e^{4 \pi i / 3} ; 12 e^{\pi i}+3 e^{-\pi i} \end{aligned} $$ 305. Compute the absolute value and argument of $e^{(\ln 2)(1+i)}$. 306. Group problem. Suppose $z$ can be any complex number. (a) Is it true that $e^{z}$ is always a positive number? (b) Is it true that $e^{z} \neq 0$ ? 307. Group problem. Verify directly from the definition that $$ e^{-i t}=\frac{1}{e^{i t}} $$ holds for all real values of $t$. 308. Show that $$ \cos t=\frac{e^{i t}+e^{-i t}}{2}, \quad \sin t=\frac{e^{i t}-e^{-i t}}{2 i} $$ 309. Show that $$ \cosh x=\cos i x, \quad \sinh x=\frac{1}{i} \sin i x . $$ 310. The general solution of a second order linear differential equation contains expressions of the form $A e^{i \beta t}+B e^{-i \beta t}$. These can be rewritten as $C_{1} \cos \beta t+$ $C_{2} \sin \beta t$. If $A e^{i \beta t}+B e^{-i \beta t}=2 \cos \beta t+$ $3 \sin \beta t$, then what are $A$ and $B$ ? 311. Group problem. (a) Show that you can write a "cosine-wave" with amplitude $A$ and phase $\phi$ as follows $$ A \cos (t-\phi)=\mathfrak{R e}\left(z e^{i t}\right), $$ where the "complex amplitude" is given by $z=A e^{-i \phi}$. (See $\left.\S 30.3\right)$. (b) Show that a "sine-wave" with amplitude $A$ and phase $\phi$ as follows $$ A \sin (t-\phi)=\mathfrak{R e}\left(z e^{i t}\right), $$ where the "complex amplitude" is given by $z=-i A e^{-i \phi}$. 312. Find $A$ and $\phi$ where $A \cos (t-\phi)=$ $2 \cos (t)+2 \cos \left(t-\frac{2}{3} \pi\right)$. 313. Find $A$ and $\phi$ where $A \cos (t-\phi)=$ $12 \cos \left(t-\frac{1}{6} \pi\right)+12 \sin \left(t-\frac{1}{3} \pi\right)$. 314. Find $A$ and $\phi$ where $A \cos (t-\phi)=$ $12 \cos (t-\pi / 6)+12 \cos (t-\pi / 3)$. 315. 
Find $A$ and $\phi$ such that $A \cos (t-\phi)=$ $\cos \left(t-\frac{1}{6} \pi\right)+\sqrt{3} \cos \left(t-\frac{2}{3} \pi\right)$. REAL AND COMPLEX SOLUTIONS OF ALGEBRAIC EQUATIONS 316. Find and draw all real and complex solutions of (a) $z^{2}+6 z+10=0$ (b) $z^{3}+8=0$ (c) $z^{3}-125=0$ (d) $2 z^{2}+4 z+4=0$ (e) $z^{4}+2 z^{2}-3=0$ (f) $3 z^{6}=z^{3}+2$ (g) $z^{5}-32=0$ (h) $z^{5}-16 z=0$ (i) $z^{4}+z^{2}-12=0$ ## CALCULUS OF COMPLEX VALUED FUNCTIONS 317. Compute the derivatives of the following functions $$ \begin{aligned} & f(x)=\frac{1}{x+i} \\ & g(x)=\log x+i \arctan x \\ & h(x)=e^{i x^{2}} \end{aligned} $$ Try to simplify your answers. 318. (a) Compute $$ \int(\cos 2 x)^{4} d x $$ by using $\cos \theta=\frac{1}{2}\left(e^{i \theta}+e^{-i \theta}\right)$ and expanding the fourth power. (b) Assuming $a \in \mathbb{R}$, compute $$ \int e^{-2 x}(\sin a x)^{2} d x $$ (same trick: write sin $a x$ in terms of complex exponentials; make sure your final answer has no complex numbers.) 319. Use $\cos \alpha=\left(e^{i \alpha}+e^{-i \alpha}\right) / 2$, etc. to evaluate these indefinite integrals: (a) $\int \cos ^{2} x \mathrm{~d} x$ (b) $\int \cos ^{4} x \mathrm{~d} x$, (c) $\int \cos ^{2} x \sin x \mathrm{~d} x$, (d) $\int \sin ^{3} x \mathrm{~d} x$, (e) $\int \cos ^{2} x \sin ^{2} x \mathrm{~d} x$, (f) $\int \sin ^{6} x \mathrm{~d} x$ (g) $\int \sin (3 x) \cos (5 x) \mathrm{d} x$ (h) $\int \sin ^{2}(2 x) \cos (3 x) \mathrm{d} x$ (i) $\int_{0}^{\pi / 4} \sin (3 x) \cos (x) \mathrm{d} x$ (j) $\int_{0}^{\pi / 3} \sin ^{3}(x) \cos ^{2}(x) d x$ (k) $\int_{0}^{\pi / 2} \sin ^{2}(x) \cos ^{2}(x) \mathrm{d} x$ (1) $\int_{0}^{\pi / 3} \sin (x) \cos ^{2}(x) d x$ 320. Group problem. Compute the following integrals when $m \neq n$ are distinct integers. (a) $\int_{0}^{2 \pi} \sin (m x) \cos (n x) \mathrm{d} x$ (b) $\int_{0}^{2 \pi} \sin (n x) \cos (n x) \mathrm{d} x$ (c) $\int_{0}^{2 \pi} \cos (m x) \cos (n x) \mathrm{d} x$ (d) $\int_{0}^{\pi} \cos (m x) \cos (n x) \mathrm{d} x$ (e) $\int_{0}^{2 \pi} \sin (m x) \sin (n x) \mathrm{d} x$ $$ \text { (f) } \int_{0}^{\pi} \sin (m x) \sin (n x) \mathrm{d} x $$ These integrals are basic to the theory of Fourier series, which occurs in many applications, especially in the study of wave motion (light, sound, economic cycles, clocks, oceans, etc.). They say that different frequency waves are "independent". 321. Show that $\cos x+\sin x=C \cos (x+\beta)$ for suitable constants $C$ and $\beta$ and use this to evaluate the following integrals. (a) $\int \frac{\mathrm{d} x}{\cos x+\sin x}$ (b) $\int \frac{\mathrm{d} x}{(\cos x+\sin x)^{2}}$ (c) $\int \frac{\mathrm{d} x}{A \cos x+B \sin x}$ where $A$ and $B$ are any constants. 322. Group problem. Compute the integrals $$ \int_{0}^{\pi / 2} \sin ^{2} k x \sin ^{2} l x \mathrm{~d} x $$ where $k$ and $l$ are positive integers. 323. Group problem. Show that for any integers $k, l, m$ $$ \int_{0}^{\pi} \sin k x \sin l x \sin m x \mathrm{~d} x=0 $$ if and only if $k+l+m$ is even. 324. Group problem. (i) Prove the following version of the CHAIN RULE: If $f: I \rightarrow \mathbb{C}$ is a differentiable complex valued function, and $g: J \rightarrow I$ is a differentiable real valued function, then $h=f \circ g: J \rightarrow \mathbb{C}$ is a differentiable function, and one has $$ h^{\prime}(x)=f^{\prime}(g(x)) g^{\prime}(x) . $$ (ii) Let $n \geq 0$ be a nonnegative integer. 
Prove that if $f: I \rightarrow \mathbb{C}$ is a differentiable function, then $g(x)=f(x)^{n}$ is also differentiable, and one has
$$
g^{\prime}(x)=n f(x)^{n-1} f^{\prime}(x) .
$$
Note that the chain rule from part (i) does not apply! Why?

## COMPLEX ROOTS OF REAL POLYNOMIALS

325. For $a$ and $b$ complex numbers show that (a) $\overline{a+b}=\bar{a}+\bar{b}$ (b) $\overline{a \cdot b}=\bar{a} \cdot \bar{b}$ (c) $a$ is real iff $\bar{a}=a$
326. For $p(x)=a_{0}+a_{1} x+a_{2} x^{2}+\cdots+a_{n} x^{n}$ a polynomial and $z$ a complex number, show that
$$
\overline{p(z)}=\bar{a}_{0}+\bar{a}_{1} \bar{z}+\bar{a}_{2}(\bar{z})^{2}+\cdots+\bar{a}_{n}(\bar{z})^{n}
$$
327. For $p$ a real polynomial, i.e., the coefficients $a_{k}$ of $p$ are real numbers, if $z$ is a complex root of $p$, i.e., $p(z)=0$, show $\bar{z}$ is also a root of $p$. Hence the complex roots of $p$ occur in conjugate pairs.
328. Using the quadratic formula show directly that the roots of a real quadratic are either both real or a complex conjugate pair.
329. Show that $2+3 i$ and its conjugate $2-3 i$ are the roots of a real polynomial.
330. Show that for every complex number $a$ there is a real quadratic whose roots are $a$ and $\bar{a}$.
331. The Fundamental theorem of Algebra states that every complex polynomial of degree $n$ can be completely factored as a constant multiple of
$$
\left(x-\alpha_{1}\right)\left(x-\alpha_{2}\right) \cdots\left(x-\alpha_{n}\right)
$$
(The $\alpha_{i}$ may not be distinct.) It was proved by Gauss. Proofs of it are given in courses on Complex Analysis. Use the Fundamental Theorem of Algebra to show that every real polynomial can be factored into a product of real polynomials, each of degree 1 or 2.

## Chapter 4: Differential Equations

## What is a DiffEq?

A differential equation is an equation involving an unknown function and its derivatives. The order of the differential equation is the order of the highest derivative which appears. A linear differential equation is one of the form
$$
y^{(n)}+a_{1}(x) y^{(n-1)}+\cdots+a_{n-1}(x) y^{\prime}+a_{n}(x) y=k(x)
$$
where the coefficients $a_{1}(x), \ldots, a_{n}(x)$ and the right hand side $k(x)$ are given functions of $x$ and $y$ is the unknown function. Here
$$
y^{(k)}=\frac{\mathrm{d}^{k} y}{\mathrm{~d} x^{k}}
$$
denotes the $k$ th derivative of $y$ so this equation has order $n$.

We shall mainly study the case $n=1$ where the equation has the form
$$
y^{\prime}+a(x) y=k(x)
$$
and the case $n=2$ with constant coefficients where the equation has the form
$$
y^{\prime \prime}+a y^{\prime}+b y=k(x) .
$$
When the right hand side $k(x)$ is zero the equation is called homogeneous linear and otherwise it is called inhomogeneous linear (or nonhomogeneous linear by some people).

For a homogeneous linear equation the sum of two solutions is a solution and a constant multiple of a solution is a solution. This property of linear equations is called the principle of superposition.

## First Order Separable Equations

A separable differential equation is a diffeq of the form
$$
y^{\prime}(x)=F(x) G(y(x)), \quad \text { or } \quad \frac{\mathrm{d} y}{\mathrm{~d} x}=F(x) G(y) .
$$
To solve this equation divide by $G(y(x))$ to get
$$
\frac{1}{G(y(x))} \frac{\mathrm{d} y}{\mathrm{~d} x}=F(x)
$$
Next find a function $H(y)$ whose derivative with respect to $y$ is
$$
H^{\prime}(y)=\frac{1}{G(y)} \quad\left(\text { solution: } H(y)=\int \frac{d y}{G(y)} .\right)
$$
Then the chain rule implies that (31) can be written as
$$
\frac{\mathrm{d} H(y(x))}{\mathrm{d} x}=F(x). 
$$ In words: $H(y(x))$ is an antiderivative of $F(x)$, which means we can find $H(y(x))$ by integrating $F(x)$ : $$ H(y(x))=\int F(x) d x+C $$ Once you've found the integral of $F(x)$ this gives you $y(x)$ in implicit form: the equation (33) gives you $y(x)$ as an implicit function of $x$. To get $y(x)$ itself you must solve the equation (33) for $y(x)$. A quick way of organizing the calculation goes like this: To solve $\frac{\mathrm{d} y}{\mathrm{~d} x}=F(x) G(y)$ you first separate the variables, $$ \frac{\mathrm{d} y}{G(y)}=F(x) \mathrm{d} x $$ and then integrate, $$ \int \frac{\mathrm{d} y}{G(y)}=\int F(x) \mathrm{d} x . $$ The result is an implicit equation for the solution $y$ with one undetermined integration constant. Determining the constant. The solution you get from the above procedure contains an arbitrary constant $C$. If the value of the solution is specified at some given $x_{0}$, i.e. if $y\left(x_{0}\right)$ is known then you can express $C$ in terms of $y\left(x_{0}\right)$ by using (33). A snag: You have to divide by $G(y)$ which is problematic when $G(y)=0$. This has as consequence that in addition to the solutions you found with the above procedure, there are at least a few more solutions: the zeroes of $G(y)$ (see Example 33.2 below). In addition to the zeroes of $G(y)$ there sometimes can be more solutions, as we will see in Example 35.2 on "Leaky Bucket Dating." 33.1. Example. We solve $$ \frac{\mathrm{d} z}{\mathrm{~d} t}=\left(1+z^{2}\right) \cos t $$ Separate variables and integrate $$ \int \frac{\mathrm{d} z}{1+z^{2}}=\int \cos t \mathrm{~d} t, $$ to get $$ \arctan z=\sin t+C \text {. } $$ Finally solve for $z$ and you find the general solution $$ z(t)=\tan (\sin (t)+C) . $$ 33.2. Example: The snag in action. If you apply the method to $y^{\prime}(x)=K y$ with $K$ a constant, you get $y(x)=e^{K(x+C)}$. No matter how you choose $C$ you never get the function $y(x)=0$, even though $y(x)=0$ satisfies the equation. This is because here $G(y)=K y$, and $G(y)$ vanishes for $y=0$. ## First Order Linear Equations There are two systematic methods which solve a first order linear inhomogeneous equation $$ \frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) y=k(x) $$ You can multiply the equation with an "integrating factor", or you do a substitution $y(x)=c(x) y_{0}(x)$, where $y_{0}$ is a solution of the homogeneous equation (that's the equation you get by setting $k(x) \equiv 0)$. 34.1. The Integrating Factor. Let $$ A(x)=\int a(x) \mathrm{d} x, \quad m(x)=e^{A(x)} . $$ Multiply the equation $(\ddagger)$ by the "integrating factor" $m(x)$ to get $$ m(x) \frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) m(x) y=m(x) k(x) . $$ By the chain rule the integrating factor satisfies $$ \frac{\mathrm{d} m(x)}{\mathrm{d} x}=A^{\prime}(x) m(x)=a(x) m(x) . $$ Therefore one has $$ \frac{\mathrm{d} m(x) y}{\mathrm{~d} x}=m(x) \frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) m(x) y=m(x)\left\{\frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) y\right\}=m(x) k(x) . $$ Integrating and then dividing by the integrating factor gives the solution $$ y=\frac{1}{m(x)}\left(\int m(x) k(x) \mathrm{d} x+C\right) . $$ In this derivation we have to divide by $m(x)$, but since $m(x)=e^{A(x)}$ and since exponentials never vanish we know that $m(x) \neq 0$, no matter which problem we're doing, so it's OK, we can always divide by $m(x)$. 34.2. Variation of constants for 1st order equations. Here is the second method of solving the inhomogeneous equation $(\ddagger)$. 
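Before turning to that second method, here is a short SymPy sketch, my own illustration rather than anything from these notes, which applies the integrating-factor recipe of 34.1 to one concrete (and arbitrarily chosen) equation, $y^{\prime}+2 x y=x$, and verifies by substitution that the resulting $y$ really solves it.

```python
# Integrating-factor recipe from 34.1 applied to  y' + 2x*y = x
# (illustrative choice: a(x) = 2x, k(x) = x), checked by substitution.
import sympy as sp

x, C = sp.symbols('x C')
a = 2*x          # a(x)
k = x            # k(x)

A = sp.integrate(a, x)             # A(x) = integral of a(x)
m = sp.exp(A)                      # integrating factor m(x) = e^{A(x)}
y = (sp.integrate(m*k, x) + C)/m   # y = (integral of m*k dx + C) / m(x)

# Plug y back into y' + a(x)*y and check that we recover k(x):
residual = sp.simplify(sp.diff(y, x) + a*y - k)

print(sp.expand(y))   # C*exp(-x**2) + 1/2
print(residual)       # 0
```

The `residual` being 0 confirms the formula $y=\frac{1}{m(x)}\left(\int m(x) k(x)\, \mathrm{d} x+C\right)$ for this example; other choices of $a(x)$ and $k(x)$ can be tested the same way, as long as SymPy can do the integrals.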
Recall again that the homogeneous equation associated with $(\ddagger)$ is $$ \frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) y=0 $$ The general solution of this equation is $$ y(x)=C e^{-A(x)} . $$ where the coefficient $C$ is an arbitrary constant. To solve the inhomogeneous equation $(\ddagger)$ we replace the constant $C$ by an unknown function $C(x)$, i.e. we look for a solution in the form $$ y=C(x) y_{0}(x) \text { where } y_{0}(x) \stackrel{\text { def }}{=} e^{-A(x)} . $$ (This is how the method gets its name: we are allowing the constant $C$ to vary.) Then $y_{0}^{\prime}(x)+a(x) y_{0}(x)=0$ (because $y_{0}(x)$ solves $(\dagger)$ ) and $$ y^{\prime}(x)+a(x) y(x)=C^{\prime}(x) y_{0}(x)+C(x) y_{0}^{\prime}(x)+a(x) C(x) y_{0}(x)=C^{\prime}(x) y_{0}(x) $$ so $y(x)=C(x) y_{0}(x)$ is a solution if $C^{\prime}(x) y_{0}(x)=k(x)$, i.e. $$ C(x)=\int \frac{k(x)}{y_{0}(x)} \mathrm{d} x $$ Once you notice that $y_{0}(x)=\frac{1}{m(x)}$, you realize that the resulting solution $$ y(x)=C(x) y_{0}(x)=y_{0}(x) \int \frac{k(x)}{y_{0}(x)} \mathrm{d} x $$ is the same solution we found before, using the integrating factor. Either method implies the following: 34.3. Theorem. The initial value problem $$ \frac{\mathrm{d} y}{\mathrm{~d} x}+a(x) y=0, \quad y(0)=y_{0} $$ has exactly one solution. It is given by $$ y=y_{0} e^{-A(x)}, \text { where } A(x)=\int_{0}^{x} a(t) \mathrm{d} t . $$ The theorem says three things: (1) there is a solution, (2) there is a formula for the solution, (3) there aren't any other solutions (if you insist on the initial value $y(0)=y_{0}$.) The last assertion is just as important as the other two, so I'll spend a whole section trying to explain why. ## Dynamical Systems and Determinism A differential equation which describes how something (e.g. the position of a particle) evolves in time is called a dynamical system. In this situation the independent variable is time, so it is customary to call it $t$ rather than $x$; the dependent variable, which depends on time is often denoted by $x$. In other words, one has a differential equation for a function $x=x(t)$. The simplest examples have form $$ \frac{\mathrm{d} x}{\mathrm{~d} t}=f(x, t) $$ In applications such a differential equation expresses a law according to which the quantity $x(t)$ evolves with time (synonyms: "evolutionary law", "dynamical law", "evolution equation for $\left.x^{\prime \prime}\right)$. A good law is deterministic, which means that any solution of (34) is completely determined by its value at one particular time $t_{0}$ : if you know $x$ at time $t=t_{0}$, then the "evolution law" (34) should predict the values of $x(t)$ at all other times, both in the past $\left(t<t_{0}\right)$ and in the future $\left(t>t_{0}\right)$. Our experience with solving differential equations so far ( $§ 33$ and $\S 34)$ tells us that the general solution to a differential equation like (34) contains an unknown integration constant $C$. Let's call the general solution $x(t ; C)$ to emphasize the presence of this constant. If the value of $x$ at some time $t_{0}$ is known to be, say, $x_{0}$, then you get an equation $$ x\left(t_{0} ; C\right)=x_{0} $$ which you can try to solve for $C$. 
If this equation always has exactly one solution $C$ then the evolutionary law (34) is deterministic (the value of $x\left(t_{0}\right)$ always determines $x(t)$ at all other times $t$); if for some prescribed value $x_{0}$ at some time $t_{0}$ the equation (35) has several solutions, then the evolutionary law (34) is not deterministic (because knowing $x(t)$ at time $t_{0}$ still does not determine the whole solution $x(t)$ at times other than $t_{0}$).

35.1. Example: Carbon Dating. Suppose we have a fossil, and we want to know how old it is.

All living things contain carbon, which naturally occurs in two isotopes, $\mathrm{C}_{14}$ (unstable) and $\mathrm{C}_{12}$ (stable). As long as the living thing is alive it eats \& breathes, and its ratio of $\mathrm{C}_{12}$ to $\mathrm{C}_{14}$ is kept constant. Once the thing dies the isotope $\mathrm{C}_{14}$ decays into $\mathrm{C}_{12}$ at a steady rate.

Let $x(t)$ be the ratio of $\mathrm{C}_{14}$ to $\mathrm{C}_{12}$ at time $t$. The law of radioactive decay says that there is a constant $k>0$ such that
$$
\frac{\mathrm{d} x(t)}{\mathrm{d} t}=-k x(t)
$$
Solve this differential equation (it is both separable and first order linear: you choose your method) to find the general solution
$$
x(t ; C)=C e^{-k t} .
$$
After some lab work it is found that the current $\mathrm{C}_{14} / \mathrm{C}_{12}$ ratio of our fossil is $x_{\text {now }}$. Thus we have
$$
x_{\text {now }}=C e^{-k t_{\text {now }}} \Longrightarrow C=x_{\text {now }} e^{k t_{\text {now }}} .
$$
Therefore our fossil's $\mathrm{C}_{14} / \mathrm{C}_{12}$ ratio at any other time $t$ is/was
$$
x(t)=x_{\text {now }} e^{k\left(t_{\text {now }}-t\right)} .
$$
This allows you to compute the time at which the fossil died. At this time the $\mathrm{C}_{14} / \mathrm{C}_{12}$ ratio must have been the common value in all living things, which can be measured, let's call it $x_{\text {life }}$. So at the time $t_{\text {demise }}$ when our fossil became a fossil you would have had $x\left(t_{\text {demise }}\right)=x_{\text {life }}$. Hence the age of the fossil would be given by
$$
x_{\text {life }}=x\left(t_{\text {demise }}\right)=x_{\text {now }} e^{k\left(t_{\text {now }}-t_{\text {demise }}\right)} \Longrightarrow t_{\text {now }}-t_{\text {demise }}=\frac{1}{k} \ln \frac{x_{\text {life }}}{x_{\text {now }}}
$$

35.2. Example: On Dating a Leaky Bucket. A bucket is filled with water. There's a hole in the bottom of the bucket so the water streams out at a certain rate. We write

$h(t)$ = the height of water in the bucket
$A$ = area of the cross section of the bucket
$a$ = area of the hole in the bucket
$v$ = velocity with which water goes through the hole.

The amount of water in the bucket is $A \times h(t)$; the rate at which water is leaving the bucket is $a \times v(t)$. Hence
$$
\frac{\mathrm{d}(A h(t))}{\mathrm{d} t}=-a v(t)
$$
In fluid mechanics it is shown that the velocity of the water as it passes through the hole only depends on the height $h(t)$ of the water, and that, for some constant $K$,
$$
v(t)=\sqrt{K h(t)} .
$$
The last two equations together give a differential equation for $h(t)$, namely,
$$
\frac{\mathrm{d} h(t)}{\mathrm{d} t}=-\frac{a}{A} \sqrt{K h(t)} .
$$
To make things a bit easier we assume that the constants are such that $\frac{a}{A} \sqrt{K}=2$. Then $h(t)$ satisfies
$$
h^{\prime}(t)=-2 \sqrt{h(t)}
$$
This equation is separable, and when you solve it you get
$$
\frac{h^{\prime}(t)}{2 \sqrt{h(t)}}=-1 \Longrightarrow \sqrt{h(t)}=-t+C .
$$
Figure 14. Several solutions $h(t ; C)$ of the Leaking Bucket Equation (36). Note how they all have the same values when $t \geq 1$.

This formula can't be valid for all values of $t$, for if you take $t>C$, the RHS becomes negative and can't be equal to the square root in the LHS. But when $t \leq C$ we do get a solution,
$$
h(t ; C)=(C-t)^{2} .
$$
This solution describes a bucket which is losing water until at time $C$ it is empty. Motivated by the physical interpretation of our solution it is natural to assume that the bucket stays empty when $t>C$, so that the solution with integration constant $C$ is given by
$$
h(t)= \begin{cases}(C-t)^{2} & \text { when } t \leq C \\ 0 & \text { for } t>C .\end{cases}
$$
We now come to the question: is the Leaky Bucket Equation deterministic? The answer is: NO. If you let $C$ be any negative number, then $h(t ; C)$ describes the water level of a bucket which long ago had water, but emptied out at time $C<0$. In particular, for all these solutions of the diffeq (36) you have $h(0)=0$, and knowing the value of $h(t)$ at $t=0$ in this case therefore doesn't tell you what $h(t)$ is at other times.

Once you put it in terms of the physical interpretation it is actually quite obvious why this system can't be deterministic: it's because you can't answer the question "If you know that the bucket once had water and that it is empty now, then how much water did it hold one hour ago?"

## Higher order equations

After looking at first order differential equations we now turn to higher order equations.

36.1. Example: Spring with a weight. A body of mass $m$ is suspended by a spring. There are two forces on the body: gravity and the tension in the spring. Let $F$ be the sum of these two forces. Newton's law says that the motion of the weight satisfies $F=m a$ where $a$ is the acceleration. The force of gravity is $m g$ where $g=32 \mathrm{ft} / \mathrm{sec}^{2}$; the quantity $m g$ is called the weight of the body. We assume Hooke's law which says that the tension in the spring is proportional to the amount by which the spring is stretched; the constant of proportionality is called the spring constant. We write $k$ for this spring constant.

The total force acting on the body is therefore
$$
F=m g-k y(t) .
$$
According to Newton's first/second/third law the acceleration $a$ of the body satisfies $F=m a$. Since the acceleration $a$ is the second derivative of position $y$ we get the following differential equation for $y(t)$
$$
m \frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}=m g-k y(t)
$$

36.2. Example: the pendulum. The velocity of the weight on the pendulum is $L \frac{\mathrm{d} \theta}{\mathrm{d} t}$, hence its acceleration is $a=L \mathrm{~d}^{2} \theta / \mathrm{d} t^{2}$. There are two forces acting on the weight: gravity (strength $m g$; direction vertically down) and the tension in the string (strength: whatever it takes to keep the weight on the circle of radius $L$ and center $P$; direction parallel to the string). Together they leave a force of size $F_{\text {gravity }} \cdot \sin \theta$ which accelerates the weight. By Newton's " $F=m a$ " law you get
$$
m L \frac{\mathrm{d}^{2} \theta}{\mathrm{d} t^{2}}=-m g \sin \theta(t)
$$
or, canceling the factor $m$,
$$
\frac{\mathrm{d}^{2} \theta}{\mathrm{d} t^{2}}+\frac{g}{L} \sin \theta(t)=0
$$

## Constant Coefficient Linear Homogeneous Equations

37.1. Differential operators.
In this section we study the homogeneous linear differential equation
$$
y^{(n)}+a_{1} y^{(n-1)}+\cdots+a_{n-1} y^{\prime}+a_{n} y=0
$$
where the coefficients $a_{1}, \ldots, a_{n}$ are constants.

37.2. Examples. The four equations
$$
\begin{gathered}
\frac{\mathrm{d} y}{\mathrm{~d} x}-y=0, \\
y^{\prime \prime}-y=0, \quad y^{\prime \prime}+y=0 \\
y^{(\mathrm{iv})}-y=0
\end{gathered}
$$
are homogeneous linear differential equations with constant coefficients. Their orders are $1,2,2$, and 4.

It will be handy to have an abbreviation for the Left Hand Side in (39), so we agree to write $\mathcal{L}[y]$ for the result of substituting a function $y$ in the LHS of (39). In other words, for any given function $y=y(x)$ we set
$$
\mathcal{L}[y](x) \stackrel{\text { def }}{=} y^{(n)}(x)+a_{1} y^{(n-1)}(x)+\cdots+a_{n-1} y^{\prime}(x)+a_{n} y(x) .
$$
We call $\mathcal{L}$ an operator. An operator is like a function in that you give it an input, it does a computation and gives you an output. The difference is that ordinary functions take a number as their input, while the operator $\mathcal{L}$ takes a function $y(x)$ as its input, and gives another function (the LHS of (39)) as its output. Since the computation of $\mathcal{L}[y]$ involves taking derivatives of $y$, the operator $\mathcal{L}$ is called a differential operator.

37.3. Example. The differential equations in the previous example correspond to the differential operators
$$
\begin{gathered}
\mathcal{L}_{1}[y]=y^{\prime}-y, \\
\mathcal{L}_{2}[y]=y^{\prime \prime}-y, \quad \mathcal{L}_{3}[y]=y^{\prime \prime}+y \\
\mathcal{L}_{4}[y]=y^{(\mathrm{iv})}-y .
\end{gathered}
$$
So one has
$$
\mathcal{L}_{3}[\sin 2 x]=\frac{\mathrm{d}^{2} \sin 2 x}{\mathrm{~d} x^{2}}+\sin 2 x=-4 \sin 2 x+\sin 2 x=-3 \sin 2 x .
$$

37.4. The superposition principle. The following theorem is the most important property of linear differential equations.

37.5. Superposition Principle. For any two functions $y_{1}$ and $y_{2}$ we have
$$
\mathcal{L}\left[y_{1}+y_{2}\right]=\mathcal{L}\left[y_{1}\right]+\mathcal{L}\left[y_{2}\right]
$$
For any function $y$ and any constant $c$ we have
$$
\mathcal{L}[c y]=c \mathcal{L}[y]
$$
The proof, which is rather straightforward once you know what to do, will be given in lecture.

It follows from this theorem that if $y_{1}, \ldots, y_{k}$ are given functions, and $c_{1}, \ldots, c_{k}$ are constants, then
$$
\mathcal{L}\left[c_{1} y_{1}+\cdots+c_{k} y_{k}\right]=c_{1} \mathcal{L}\left[y_{1}\right]+\cdots+c_{k} \mathcal{L}\left[y_{k}\right]
$$
The importance of the superposition principle is that it allows you to take old solutions to the homogeneous equation and make new ones. Namely, if $y_{1}, \ldots, y_{k}$ are solutions to the homogeneous equation $\mathcal{L}[y]=0$, then so is $c_{1} y_{1}+\cdots+c_{k} y_{k}$ for any choice of constants $c_{1}, \ldots, c_{k}$.

37.6. Example. Consider the equation
$$
y^{\prime \prime}-4 y=0 .
$$
My cousin Bruce says that the two functions $y_{1}(x)=e^{2 x}$ and $y_{2}(x)=e^{-2 x}$ both are solutions to this equation. You can check that Bruce is right just by substituting his solutions in the equation. The Superposition Principle now implies that
$$
y(x)=c_{1} e^{2 x}+c_{2} e^{-2 x}
$$
also is a solution, for any choice of constants $c_{1}, c_{2}$.

37.7. The characteristic polynomial. This example contains in it the general method for solving linear constant coefficient ODEs. Suppose we want to solve the equation (39), i.e.
$$
\mathcal{L}[y] \stackrel{\text { def }}{=} y^{(n)}+a_{1} y^{(n-1)}+\cdots+a_{n-1} y^{\prime}+a_{n} y=0 .
$$
Then the first thing to do is to see if there are any exponential functions $y=e^{r x}$ which satisfy the equation. Since
$$
\frac{\mathrm{d} e^{r x}}{\mathrm{~d} x}=r e^{r x}, \quad \frac{\mathrm{d}^{2} e^{r x}}{\mathrm{~d} x^{2}}=r^{2} e^{r x}, \quad \frac{\mathrm{d}^{3} e^{r x}}{\mathrm{~d} x^{3}}=r^{3} e^{r x}, \quad \text { etc. }
$$
we see that
$$
\mathcal{L}\left[e^{r x}\right]=\left(r^{n}+a_{1} r^{n-1}+\cdots+a_{n-1} r+a_{n}\right) e^{r x}
$$
The polynomial
$$
P(r)=r^{n}+a_{1} r^{n-1}+\cdots+a_{n-1} r+a_{n}
$$
is called the characteristic polynomial. We see that $y=e^{r x}$ is a solution of $\mathcal{L}[y]=0$ if and only if $P(r)=0$.

37.8. Example. We look for all exponential solutions of the equation
$$
y^{\prime \prime}-4 y=0 .
$$
Substitution of $y=e^{r x}$ gives
$$
y^{\prime \prime}-4 y=r^{2} e^{r x}-4 e^{r x}=\left(r^{2}-4\right) e^{r x} .
$$
The exponential $e^{r x}$ can't vanish, so $y^{\prime \prime}-4 y=0$ will hold exactly when $r^{2}-4=0$, i.e. when $r= \pm 2$. Therefore the only exponential functions which satisfy $y^{\prime \prime}-4 y=0$ are $y_{1}(x)=e^{2 x}$ and $y_{2}(x)=e^{-2 x}$.

37.9. Theorem. Suppose the polynomial $P(r)$ has $n$ distinct roots $r_{1}, r_{2}, \ldots, r_{n}$. Then the general solution of $\mathcal{L}[y]=0$ is
$$
y=c_{1} e^{r_{1} x}+c_{2} e^{r_{2} x}+\cdots+c_{n} e^{r_{n} x}
$$
where $c_{1}, c_{2}, \ldots, c_{n}$ are arbitrary constants.

Proof. We have just seen that the functions $y_{1}(x)=e^{r_{1} x}, y_{2}(x)=e^{r_{2} x}, y_{3}(x)=e^{r_{3} x}$, etc. are solutions of the equation $\mathcal{L}[y]=0$. In Math 320 (or 319, or ...) you prove that these are all the solutions (it also follows from the method of variation of parameters that there aren't any other solutions).

37.10. Complex roots and repeated roots. If the characteristic polynomial has $n$ distinct real roots then Theorem 37.9 tells you what the general solution to the equation $\mathcal{L}[y]=0$ is. In general a polynomial equation like $P(r)=0$ can have repeated roots, and it can have complex roots.

37.11. Example. Solve $y^{\prime \prime}+2 y^{\prime}+y=0$.

The characteristic polynomial is $P(r)=r^{2}+2 r+1=(r+1)^{2}$, so the only root of the characteristic equation $r^{2}+2 r+1=0$ is $r=-1$ (it's a repeated root). This means that for this equation we only get one exponential solution, namely $y(x)=e^{-x}$.

It turns out that for this equation there is another solution which is not exponential. It is $y_{2}(x)=x e^{-x}$. You can check that it really satisfies the equation $y^{\prime \prime}+2 y^{\prime}+y=0$.

When there are repeated roots there are other solutions: if $P(r)=0$, then $x^{j} e^{r x}$ is a solution if $j$ is a nonnegative integer less than the multiplicity of $r$. Also, if any of the roots are complex, the phrase general solution should be understood to mean general complex solution and the coefficients $c_{j}$ should be complex. If the equation is real, the real and imaginary part of a complex solution are again solutions. We only describe the case $n=2$ in detail.

37.12. Theorem. Consider the differential equation
$$
\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}+a_{1} \frac{\mathrm{d} y}{\mathrm{~d} x}+a_{2} y=0
$$
and suppose that $r_{1}$ and $r_{2}$ are the solutions of the characteristic equation $r^{2}+a_{1} r+a_{2}=0$.
Then (i): If $r_{1}$ and $r_{2}$ are distinct and real, the general solution of $(\dagger)$ is $$ y=c_{1} e^{r_{1} x}+c_{2} e^{r_{2} x} $$ (ii): If $r_{1}=r_{2}$, the general solution of $(\dagger)$ is $$ y=c_{1} e^{r_{1} x}+c_{2} x e^{r_{1} x} . $$ (iii): If $r_{1}=\alpha+\beta i$ and $r_{2}=\alpha-\beta i$, the general solution of $(\dagger)$ is $$ y=c_{1} e^{\alpha x} \cos (\beta x)+c_{2} e^{\alpha x} \sin (\beta x) . $$ In each case $c_{1}$ and $c_{2}$ are arbitrary constants. Case (i) and case (iii) can be subsumed into a single case using complex notation: $$ \begin{gathered} e^{(\alpha \pm \beta i) x}=e^{\alpha x} \cos \beta x \pm i e^{\alpha x} \sin \beta x \\ e^{\alpha x} \cos \beta x=\frac{e^{(\alpha+\beta i) x}+e^{(\alpha-\beta i) x}}{2}, \quad e^{\alpha x} \sin \beta x=\frac{e^{(\alpha+\beta i) x}-e^{(\alpha-\beta i) x}}{2 i} . \end{gathered} $$ ## Inhomogeneous Linear Equations In this section we study the inhomogeneous linear differential equation $$ y^{(n)}+a_{1} y^{(n-1)}+\cdots+a_{n-1} y^{\prime}+a_{n} y=k(x) $$ where the coefficients $a_{1}, \ldots, a_{n}$ are constants and the function $k(x)$ is a given function. In the operator notation this equation may be written $$ \mathcal{L}[y]=k(x) . $$ The following theorem says that once we know one particular solution $y_{p}$ of the inhomogeneous equation $\mathcal{L}[y]=k(x)$ we can find all the solutions $y$ to the inhomogeneous equation $\mathcal{L}[y]=k(x)$ by finding all the solutions $y_{h}$ to the homogeneous equation $\mathcal{L}[y]=0$. 38.1. Another Superposition Principle. Assume $\mathcal{L}\left[y_{p}\right]=k(x)$. Then $\mathcal{L}[y]=$ $k(x)$ if and only if $y=y_{p}+y_{h}$ where $\mathcal{L}\left[y_{h}\right]=0$. Proof. Suppose $\mathcal{L}\left[y_{p}\right]=k(x)$ and $y=y_{p}+y_{h}$. Then $$ \mathcal{L}[y]=\mathcal{L}\left[y_{p}+y_{h}\right]=\mathcal{L}\left[y_{p}\right]+\mathcal{L}\left[y_{h}\right]=k(x)+\mathcal{L}\left[y_{h}\right] . $$ Hence $\mathcal{L}[y]=k(x)$ if and only if $\mathcal{L}\left[y_{h}\right]=0$. ## Variation of Constants There is a method to find the general solution of a linear inhomogeneous equation of arbitrary order, provided you already know the solutions to the homogeneous equation. We won't explain this method here, but merely show you the answer you get in the case of second order equations. If $y_{1}(x)$ and $y_{2}(x)$ are solutions to the homogeneous equation $$ y^{\prime \prime}(x)+a(x) y^{\prime}(x)+b(x) y(x)=0 $$ for which $$ W(x) \stackrel{\text { def }}{=} y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x) \neq 0, $$ then the general solution of the inhomogeneous equation $$ y^{\prime \prime}(x)+a(x) y^{\prime}(x)+b(x) y(x)=f(x) $$ is given by $$ y(x)=-y_{1}(x) \int \frac{y_{2}(\xi) f(\xi)}{W(\xi)} \mathrm{d} \xi+y_{2}(x) \int \frac{y_{1}(\xi) f(\xi)}{W(\xi)} \mathrm{d} \xi . $$ For more details you should take a more advanced course like MATH 319 or 320. 39.1. Undetermined Coefficients. The easiest way to find a particular solution $y_{p}$ to the inhomogeneous equation is the method of undetermined coefficients or "educated guessing." Unlike the method of "variation of constants" which was (hardly) explained in the previous section, this method does not work for all equations. But it does give you the answer for a few equations which show up often enough to make it worth knowing the method. 
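Before we get to the list of standard right hand sides, here is a quick sanity check of the variation of constants formula from the previous section. These notes do not otherwise use a computer, so the Python/sympy script below is our own addition, and the test equation $y''+y=x$ together with all names in it are chosen purely for illustration (any computer algebra system would do just as well).

```python
import sympy as sp

x = sp.symbols('x')

# Two independent solutions of the homogeneous equation y'' + y = 0,
# and the right hand side f(x) = x of the inhomogeneous equation.
y1, y2, f = sp.cos(x), sp.sin(x), x

# The Wronskian W = y1*y2' - y1'*y2 (here it simplifies to 1).
W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)

# The variation of constants formula from the previous section.
y = -y1*sp.integrate(y2*f/W, x) + y2*sp.integrate(y1*f/W, x)
y = sp.simplify(y)

# y should solve y'' + y = x, so the residual below simplifies to 0.
print(y)
print(sp.simplify(sp.diff(y, x, 2) + y - f))
```

The script reproduces the particular solution $y_{p}(x)=x$, which you can also check by hand in a few lines. With that reassurance about the formula we were handed, let us get back to educated guessing.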
The basis of the "method" is this: it turns out that many of the second order equations with you run into have the form $$ y^{\prime \prime}+a y^{\prime}+b y=f(t), $$ where $a$ and $b$ are constants, and where the righthand side $f(t)$ comes from a fairly short list of functions. For all $f(t)$ in this list you memorize (yuck!) a particular solution $y_{p}$. With the particular solution in hand you can then find the general solution by adding it to the general solution of the homogeneous equation. Here is the list: $f(t)=$ polynomial in $t$ : In this case you try $y_{p}(t)=$ some other polynomial in $t$ with the same degree as $f(t)$. Exceptions: if $r=0$ is a root of the characteristic equation, then you must try a polynomial $y_{p}(t)$ of degree one higher than $f(t)$; if $r=0$ is a double root then the degree of $y_{p}(t)$ must be two more than the degree of $f(t)$. $$ f(t)=e^{a t}: \operatorname{try} y_{p}(t)=A e^{a t} \text {. } $$ Exceptions: if $r=a$ is a root of the characteristic equation, then you must $\operatorname{try} y_{p}(t)=$ Ate if $r=a$ is a double root then try $y_{p}(t)=A t^{2} e^{a t}$. $f(t)=\sin b t$ or $f(t)=\cos b t$ : In both cases, try $y_{p}(t)=A \cos b t+B \sin b t$. Exceptions: if $r=b i$ is a root of the characteristic equation, then you should try $y_{p}(t)=t(A \cos b t+B \sin b t)$. $f(t)=e^{a t} \sin b t$ or $f(t)=e^{a t} \cos b t$ : Try $y_{p}(t)=e^{a t}(A \cos b t+B \sin b t)$. Exceptions: if $r=a+b i$ is a root of the characteristic equation, then you should try $y_{p}(t)=t e^{a t}(A \cos b t+B \sin b t)$. 39.2. Example. Find the general solution to the following equations $$ \begin{gathered} y^{\prime \prime}+x y^{\prime}-y=2 e^{x} \\ y^{\prime \prime}-2 y^{\prime}+y=\sqrt{1+x^{2}} \end{gathered} $$ The first equation does not have constant coefficients so the method doesn't apply. Sorry, but we can't solve this equation in this course. ${ }^{7}$ The second equation does have constant coefficients, so we can solve the homogeneous equation $\left(y^{\prime \prime}-2 y^{\prime}+y=0\right)$, but the righthand side does not appear in our list. Again, the method doesn't work. 39.3. A more upbeat example. To find a particular solution of $$ y^{\prime \prime}-y^{\prime}+y=3 t^{2} $$ we note that (1) the equation is linear with constant coefficients, and (2) the right hand side is a polynomial, so it's in our list of "right hand sides for which we know what to guess." We try a polynomial of the same degree as the right hand side, namely 2 . We don't know which polynomial, so we leave its coefficients undetermined (whence the name of the method.) I.e. we try $$ y_{p}(t)=A+B t+C t^{2} . $$ To see if this is a solution, we compute $$ y_{p}^{\prime}(t)=B+2 C t, \quad y_{p}^{\prime \prime}(t)=2 C, $$ so that $$ y_{p}^{\prime \prime}-y_{p}^{\prime}+y_{p}=(A-B+2 C)+(B-2 C) t+C t^{2} . $$ Thus $y_{p}^{\prime \prime}-y_{p}^{\prime}+y_{p}=3 t^{2}$ if and only if $$ A-B+2 C=0, \quad B-2 C=0, \quad C=3 . $$ Solving these equations leads to $C=3, B=2 C=6$ and $A=B-2 C=0$. We conclude that is a particular solution. $$ y_{p}(t)=6 t+3 t^{2} $$ 39.4. Another example, which is rather long, but that's because it is meant to cover several cases. Find the general solution to the equation $$ y^{\prime \prime}+3 y^{\prime}+2 y=t+t^{3}-e^{t}+2 e^{-2 t}-e^{-t} \sin 2 t . $$ Solution: First we find the characteristic equation, $$ r^{2}+3 r+2=(r+2)(r+1)=0 . $$ The characteristic roots are $r_{1}=-1$, and $r_{2}=-2$. 
The general solution to the homogeneous equation is $$ y_{h}(t)=C_{1} e^{-t}+C_{2} e^{-2 t} . $$

${ }^{7}$ Who says you can't solve this equation? For equation (41) you can find a solution by computing its Taylor series! For more details you should again take a more advanced course (like MATH 319), or, in this case, give it a try yourself.

We now look for a particular solution. Initially it doesn't look very good as the righthand side does not appear in our list. However, the righthand side is a sum of five terms, each of which is in our list. Abbreviate $\mathcal{L}[y]=y^{\prime \prime}+3 y^{\prime}+2 y$. Then we will find functions $y_{1}, \ldots, y_{4}$ for which one has $$ \mathcal{L}\left[y_{1}\right]=t+t^{3}, \quad \mathcal{L}\left[y_{2}\right]=-e^{t}, \quad \mathcal{L}\left[y_{3}\right]=2 e^{-2 t}, \quad \mathcal{L}\left[y_{4}\right]=-e^{-t} \sin 2 t . $$ Then, by the Superposition Principle (Theorem 37.5) you get that $y_{p} \stackrel{\text { def }}{=} y_{1}+y_{2}+y_{3}+y_{4}$ satisfies $$ \mathcal{L}\left[y_{p}\right]=\mathcal{L}\left[y_{1}\right]+\mathcal{L}\left[y_{2}\right]+\mathcal{L}\left[y_{3}\right]+\mathcal{L}\left[y_{4}\right]=t+t^{3}-e^{t}+2 e^{-2 t}-e^{-t} \sin 2 t . $$ So $y_{p}$ (once we find it) is a particular solution. Now let's find $y_{1}, \ldots, y_{4}$.

$y_{1}(t)$ : the righthand side $t+t^{3}$ is a polynomial, and $r=0$ is not a root of the characteristic equation, so we try a polynomial of the same degree. Try $$ y_{1}(t)=A+B t+C t^{2}+D t^{3} . $$ Here $A, B, C, D$ are the undetermined coefficients that give the method its name. You compute $$ \begin{aligned} \mathcal{L}\left[y_{1}\right] & =y_{1}^{\prime \prime}+3 y_{1}^{\prime}+2 y_{1} \\ & =(2 C+6 D t)+3\left(B+2 C t+3 D t^{2}\right)+2\left(A+B t+C t^{2}+D t^{3}\right) \\ & =(2 C+3 B+2 A)+(2 B+6 C+6 D) t+(2 C+9 D) t^{2}+2 D t^{3} . \end{aligned} $$ So to get $\mathcal{L}\left[y_{1}\right]=t+t^{3}$ we must impose the equations $$ 2 D=1, \quad 2 C+9 D=0, \quad 2 B+6 C+6 D=1, \quad 2 A+3 B+2 C=0 . $$ You can solve these equations one-by-one, with result $$ D=\frac{1}{2}, \quad C=-\frac{9}{4}, \quad B=\frac{23}{4}, \quad A=-\frac{51}{8}, $$ and thus $$ y_{1}(t)=-\frac{51}{8}+\frac{23}{4} t-\frac{9}{4} t^{2}+\frac{1}{2} t^{3} . $$

$y_{2}(t)$ : We want $y_{2}(t)$ to satisfy $\mathcal{L}\left[y_{2}\right]=-e^{t}$. Since $e^{t}=e^{a t}$ with $a=1$, and $a=1$ is not a characteristic root, we simply try $y_{2}(t)=A e^{t}$. A quick calculation gives $$ \mathcal{L}\left[y_{2}\right]=A e^{t}+3 A e^{t}+2 A e^{t}=6 A e^{t} . $$ To achieve $\mathcal{L}\left[y_{2}\right]=-e^{t}$ we therefore need $6 A=-1$, i.e. $A=-\frac{1}{6}$. Thus $$ y_{2}(t)=-\frac{1}{6} e^{t} . $$

$y_{3}(t)$ : We want $y_{3}(t)$ to satisfy $\mathcal{L}\left[y_{3}\right]=2 e^{-2 t}$. Since $e^{-2 t}=e^{a t}$ with $a=-2$, and $a=-2$ is a characteristic root, we can't simply try $y_{3}(t)=A e^{-2 t}$. Instead you have to try $y_{3}(t)=A t e^{-2 t}$. Another calculation gives $$ \begin{aligned} \mathcal{L}\left[y_{3}\right] & =(4 t-4) A e^{-2 t}+3(-2 t+1) A e^{-2 t}+2 A t e^{-2 t} \quad\left(\text { factor out } A e^{-2 t}\right) \\ & =[(4+3(-2)+2) t+(-4+3)] A e^{-2 t} \\ & =-A e^{-2 t} . \end{aligned} $$ Note that all the terms with $t e^{-2 t}$ cancel: this is no accident, but a consequence of the fact that $a=-2$ is a characteristic root. To get $\mathcal{L}\left[y_{3}\right]=2 e^{-2 t}$ we see we have to choose $A=-2$. We find $$ y_{3}(t)=-2 t e^{-2 t} . $$
$y_{4}(t)$ : Finally, we need a function $y_{4}(t)$ for which one has $\mathcal{L}\left[y_{4}\right]=-e^{-t} \sin 2 t$. The list tells us to try $$ y_{4}(t)=e^{-t}(A \cos 2 t+B \sin 2 t) . $$ (Since $-1+2 i$ is not a root of the characteristic equation we are not in one of the exceptional cases.) Diligent computation yields $$ \begin{aligned} & y_{4}(t)=\quad A e^{-t} \cos 2 t+B e^{-t} \sin 2 t \\ & y_{4}^{\prime}(t)=(-A+2 B) e^{-t} \cos 2 t+(-B-2 A) e^{-t} \sin 2 t \\ & y_{4}^{\prime \prime}(t)=(-3 A-4 B) e^{-t} \cos 2 t+(-3 B+4 A) e^{-t} \sin 2 t \end{aligned} $$ so that $$ \mathcal{L}\left[y_{4}\right]=(-4 A+2 B) e^{-t} \cos 2 t+(-2 A-4 B) e^{-t} \sin 2 t . $$ We want this to equal $-e^{-t} \sin 2 t$, so we have to find $A, B$ with $$ -4 A+2 B=0, \quad-2 A-4 B=-1 . $$ The first equation implies $B=2 A$, the second then gives $-10 A=-1$, so $A=\frac{1}{10}$ and $B=\frac{2}{10}$. We have found $$ y_{4}(t)=\frac{1}{10} e^{-t} \cos 2 t+\frac{2}{10} e^{-t} \sin 2 t . $$ After all these calculations we get the following impressive particular solution of our differential equation, $$ y_{p}(t)=-\frac{51}{8}+\frac{23}{4} t-\frac{9}{4} t^{2}+\frac{1}{2} t^{3}-\frac{1}{6} e^{t}-2 t e^{-2 t}+\frac{1}{10} e^{-t} \cos 2 t+\frac{2}{10} e^{-t} \sin 2 t $$ and the even more impressive general solution to the equation, $$ \begin{aligned} y(t)= & y_{h}(t)+y_{p}(t) \\ = & C_{1} e^{-t}+C_{2} e^{-2 t} \\ & -\frac{51}{8}+\frac{23}{4} t-\frac{9}{4} t^{2}+\frac{1}{2} t^{3} \\ & \quad-\frac{1}{6} e^{t}-2 t e^{-2 t}+\frac{1}{10} e^{-t} \cos 2 t+\frac{2}{10} e^{-t} \sin 2 t . \end{aligned} $$ You shouldn't be put off by the fact that the result is a pretty long formula, and that the computations took up two pages. The approach is to (i) break up the right hand side into terms which are in the list at the beginning of this section, (ii) to compute the particular solutions for each of those terms, and (iii) to use the Superposition Principle (Theorem 37.5) to add the pieces together, resulting in a particular solution for the whole right hand side you started with.

## Applications of Second Order Linear Equations

40.1. Spring with a weight. In example 36.1 we showed that the height $y(t)$ of a mass $m$ suspended from a spring with constant $k$ satisfies $$ m y^{\prime \prime}(t)+k y(t)=m g, \quad \text { or } \quad y^{\prime \prime}(t)+\frac{k}{m} y(t)=g . $$ This is a Linear Inhomogeneous Equation whose homogeneous equation, $y^{\prime \prime}+\frac{k}{m} y=0$, has $$ y_{h}(t)=C_{1} \cos \omega t+C_{2} \sin \omega t $$ as general solution, where $\omega=\sqrt{k / m}$. The right hand side is a constant, which is a polynomial of degree zero, so the method of "educated guessing" applies, and we can find a particular solution by trying a constant $y_{p}=A$. You find that $y_{p}^{\prime \prime}+\frac{k}{m} y_{p}=\frac{k}{m} A$, which will equal $g$ if $A=\frac{m g}{k}$. Hence the general solution to the "spring with weight equation" is $$ y(t)=\frac{m g}{k}+C_{1} \cos \omega t+C_{2} \sin \omega t . $$ To solve the initial value problem $y(0)=y_{0}$ and $y^{\prime}(0)=v_{0}$ we solve for the constants $C_{1}$ and $C_{2}$ and get $$ y(t)=\frac{m g}{k}+\frac{v_{0}}{\omega} \sin (\omega t)+\left(y_{0}-\frac{m g}{k}\right) \cos (\omega t), $$ which you could rewrite as $$ y(t)=\frac{m g}{k}+Y \cos (\omega t-\phi) $$ for certain numbers $Y, \phi$.
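If you want to check the initial value computation above without redoing the algebra, here is a short sympy script. The use of Python/sympy is our own assumption (it is not part of these notes, and any computer algebra system would do); it simply verifies that the formula for $y(t)$ satisfies both the differential equation and the initial conditions.

```python
import sympy as sp

t, m, k, g, y0, v0 = sp.symbols('t m k g y_0 v_0', positive=True)
w = sp.sqrt(k/m)                                    # omega = sqrt(k/m)

# The claimed solution of y'' + (k/m) y = g with y(0) = y0, y'(0) = v0.
y = m*g/k + (v0/w)*sp.sin(w*t) + (y0 - m*g/k)*sp.cos(w*t)

print(sp.simplify(sp.diff(y, t, 2) + (k/m)*y - g))  # 0: the equation holds
print(sp.simplify(y.subs(t, 0) - y0))               # 0: y(0) = y0
print(sp.simplify(sp.diff(y, t).subs(t, 0) - v0))   # 0: y'(0) = v0
```

All three printed expressions simplify to zero, so the formula really is the solution of the initial value problem.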
The weight in this model just oscillates up and down forever: this motion is called a simple harmonic oscillation, and the equation (43) is called the equation of the Harmonic Oscillator. 40.2. The pendulum equation. In example 36.2 we saw that the angle $\theta(t)$ subtended by a swinging pendulum satisfies the pendulum equation, $$ \frac{\mathrm{d}^{2} \theta}{\mathrm{d} t^{2}}+\frac{g}{L} \sin \theta(t)=0 $$ This equation is not linear and cannot be solved by the methods you have learned in this course. However, if the oscillations of the pendulum are small, i.e. if $\theta$ is small, then we can approximate $\sin \theta$ by $\theta$. Remember that the error in this approximation is the remainder term in the Taylor expansion of $\sin \theta$ at $\theta=0$. According to Lagrange this is $$ \sin \theta=\theta+R_{2}(\theta), \quad R_{2}(\theta)=\cos \tilde{\theta} \frac{\theta^{3}}{3 !} \text { with }|\tilde{\theta}| \leq \theta . $$ When $\theta$ is small, e.g. if $|\theta| \leq 10^{\circ} \approx 0.175$ radians then compared to $\theta$ the error is at most $$ \left|\frac{R_{3}(\theta)}{\theta}\right| \leq \frac{(0.175)^{2}}{3 !} \approx 0.005, $$ in other words, the error is no more than half a percent. So for small angles we will assume that $\sin \theta \approx \theta$ and hence $\theta(t)$ almost satisfies the equation $$ \frac{\mathrm{d}^{2} \theta}{\mathrm{d} t^{2}}+\frac{g}{L} \theta(t)=0 $$ In contrast to the pendulum equation (38), this equation is linear, and we could solve it right now. The procedure of replacing inconvenient quantities like $\sin \theta$ by more manageable ones (like $\theta$ ) in order to end up with linear equations is called linearization. Note that the solutions to the linearized equation (44), which we will derive in a moment, are not solutions of the Pendulum Equation (38). However, if the solutions we find have small angles (have $|\theta|$ small), then the Pendulum Equation and its linearized form (44) are almost the same, and "you would think that their solutions should also be almost the same." I put that in quotation marks, because (1) it's not a very precise statement and (2) if it were more precise, you would have to prove it, which is not easy, and not a topic for this course (or even MATH 319 - take MATH 419 or 519 for more details.) Let's solve the linearized equation (44). Setting $\theta=e^{r t}$ you find the characteristic equation $$ r^{2}+\frac{g}{L}=0 $$ which has two complex roots, $r_{ \pm}= \pm i \sqrt{\frac{g}{L}}$. Therefore, the general solution to (44) is $$ \theta(t)=A \cos \left(\sqrt{\frac{g}{L}} t\right)+B \sin \left(\sqrt{\frac{g}{L}} t\right) $$ and you would expect the general solution of the Pendulum Equation (38) to be almost the same. So you see that a pendulum will oscillate, and that the period of its oscillation is given by $$ T=2 \pi \sqrt{\frac{L}{g}} . $$ Once again: because we have used a linearization, you should expect this statement to be valid only for small oscillations. When you study the Pendulum Equation instead of its linearization (44), you discover that the period $T$ of oscillation actually depends on the amplitude of the oscillation: the bigger the swings, the longer they take. 40.3. The effect of friction. A real weight suspended from a real spring will of course not oscillate forever. Various kinds of friction will slow it down and bring it to a stop. 
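(A short computational aside before we work out the friction term: the claim above that bigger swings take longer is easy to test numerically. The Python script below is our own addition and not part of the course; it relies on scipy, and the values $g=9.81$, $L=1$ and the chosen amplitudes are made up purely for illustration. It integrates the full, nonlinear pendulum equation and compares the resulting period with the linearized value $2 \pi \sqrt{L / g}$.)

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0                       # made-up values, only for illustration
T_lin = 2*np.pi*np.sqrt(L/g)           # period predicted by the linearization

def pendulum(t, u):
    theta, v = u
    return [v, -(g/L)*np.sin(theta)]   # the full, nonlinear pendulum equation

def true_period(theta0):
    # Release the pendulum from rest at angle theta0; the first time theta
    # crosses zero is a quarter of the (true) period.
    crossing = lambda t, u: u[0]
    crossing.terminal = True
    sol = solve_ivp(pendulum, [0, 10*T_lin], [theta0, 0.0],
                    events=crossing, rtol=1e-10, atol=1e-12)
    return 4*sol.t_events[0][0]

for degrees in (5, 30, 90):
    T = true_period(np.radians(degrees))
    print(degrees, T/T_lin)            # ratio of true period to 2*pi*sqrt(L/g)
```

For a $5^{\circ}$ amplitude the ratio comes out essentially equal to 1, while for a $90^{\circ}$ amplitude the true period is roughly 18% longer, in line with the remark above that the bigger the swings, the longer they take. Now back to the effect of friction on the spring.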
As an example let's assume that air drag is noticeable, so, as the weight moves, the surrounding air will exert a force on the weight. (To make this more likely, assume the weight is actually moving in some viscous liquid like salad oil.) This drag is stronger as the weight moves faster. A simple model is to assume that the friction force is proportional to the velocity of the weight, $$ F_{\text {friction }}=-h y^{\prime}(t) . $$ This adds an extra term to the oscillator equation (43), and gives $$ m y^{\prime \prime}(t)=F_{\text {grav }}+F_{\text {friction }}=-k y(t)+m g-h y^{\prime}(t), $$ i.e. $$ m y^{\prime \prime}(t)+h y^{\prime}(t)+k y(t)=m g . $$ This is a second order linear inhomogeneous differential equation with constant coefficients. A particular solution is easy to find: $y_{p}=m g / k$ works again.

To solve the homogeneous equation you try $y=e^{r t}$, which leads to the characteristic equation $$ m r^{2}+h r+k=0, $$ whose roots are $$ r_{ \pm}=\frac{-h \pm \sqrt{h^{2}-4 m k}}{2 m} . $$ If friction is large, i.e. if $h>\sqrt{4 k m}$, then the two roots $r_{ \pm}$ are real, and all solutions are of exponential type, $$ y(t)=\frac{m g}{k}+C_{+} e^{r_{+} t}+C_{-} e^{r_{-} t} . $$ Both roots $r_{ \pm}$ are negative, so the exponential terms die out and all solutions satisfy $$ \lim _{t \rightarrow \infty} y(t)=\frac{m g}{k} . $$ If friction is weak, more precisely, if $h<\sqrt{4 m k}$, then the two roots $r_{ \pm}$ are complex numbers, $$ r_{ \pm}=-\frac{h}{2 m} \pm i \omega, \quad \text { with } \omega=\frac{\sqrt{4 k m-h^{2}}}{2 m} . $$ The general solution in this case is $$ y(t)=\frac{m g}{k}+e^{-\frac{h}{2 m} t}(A \cos \omega t+B \sin \omega t) . $$ These solutions also tend to the equilibrium value $\frac{m g}{k}$ as $t \rightarrow \infty$, but on the way there they oscillate infinitely often.

40.4. Electric circuits. Many equations in physics and engineering have the form (45). For example in the electric circuit in the diagram a time varying voltage $V_{\text {in }}(t)$ is applied to a resistor $R$, an inductance $L$ and a capacitor $C$. This causes a current $I(t)$ to flow through the circuit. How much is this current, and how much is, say, the voltage across the resistor?

Electrical engineers will tell you that the total voltage $V_{\text {in }}(t)$ must equal the sum of the voltages $V_{R}(t), V_{L}(t)$ and $V_{C}(t)$ across the three components. These voltages are related to the current $I(t)$ which flows through the three components as follows: $$ \begin{aligned} V_{R}(t) & =R I(t) \\ \frac{\mathrm{d} V_{C}(t)}{\mathrm{d} t} & =\frac{1}{C} I(t) \\ V_{L}(t) & =L \frac{\mathrm{d} I(t)}{\mathrm{d} t} . \end{aligned} $$ Surprisingly, these little electrical components know calculus! (Here $R, C$ and $L$ are constants depending on the particular components in the circuit. They are measured in "Ohm," "Farad," and "Henry.") Starting from the equation $$ V_{\text {in }}(t)=V_{R}(t)+V_{L}(t)+V_{C}(t) $$ you get $$ \begin{aligned} V_{\text {in }}^{\prime}(t) & =V_{R}^{\prime}(t)+V_{L}^{\prime}(t)+V_{C}^{\prime}(t) \\ & =R I^{\prime}(t)+L I^{\prime \prime}(t)+\frac{1}{C} I(t) . \end{aligned} $$ In other words, for a given input voltage the current $I(t)$ satisfies a second order inhomogeneous linear differential equation $$ L \frac{\mathrm{d}^{2} I}{\mathrm{~d} t^{2}}+R \frac{\mathrm{d} I}{\mathrm{~d} t}+\frac{1}{C} I=V_{\text {in }}^{\prime}(t) . $$ Once you know the current $I(t)$ you get the output voltage $V_{\text {out }}(t)$ from $$ V_{\text {out }}(t)=R I(t) . $$ In general you can write down a differential equation for any electrical circuit.
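To see what such a circuit equation looks like in practice, here is a small numerical sketch. It is our own addition (the notes themselves do not use a computer): it uses Python with scipy, and the component values and the sinusoidal input signal are invented just to have something concrete. It integrates the circuit equation (46) for $V_{\text {in }}(t)=A \sin \omega t$ and looks at the output voltage after the transient has died out.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up component values and input signal, only to have something concrete:
L, R, C = 1.0, 0.5, 1.0            # henry, ohm, farad
A, w = 1.0, 2.0                    # V_in(t) = A*sin(w*t)

def rhs(t, u):
    I, Ip = u                      # the current and its derivative
    Vin_prime = A*w*np.cos(w*t)    # the right hand side is V_in'(t)
    return [Ip, (Vin_prime - R*Ip - I/C)/L]

sol = solve_ivp(rhs, [0.0, 60.0], [0.0, 0.0], max_step=0.01)
I = sol.y[0]

# After the transient (the solution of the homogeneous equation) has decayed,
# the output voltage V_out = R*I oscillates with a fixed amplitude:
print("late-time amplitude of V_out:", (R*np.abs(I[sol.t > 40.0])).max())
```

Compare with Problem 407 below, which asks you to find these solutions by hand.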
As you add more components the equation gets more complicated, but if you stick to resistors, inductances and capacitors the equations will always be linear, albeit of very high order. ## PROBLEMS ## GENERAL QUESTIONS 332. Classify each of the following as homogeneous linear, inhomogeneous linear, or nonlinear and specify the order. For each linear equation say whether or not the coefficients are constant. $$ \begin{aligned} \text { (i) } y^{\prime \prime}+y=0 & \text { (ii) } x y^{\prime \prime}+y y^{\prime}=0 \\ \text { (iii) } x y^{\prime \prime}-y^{\prime}=0 & \text { (iv) } x y^{\prime \prime}+y y^{\prime}=x \\ \text { (v) } x y^{\prime \prime}-y^{\prime}=x & \text { (vi) } y^{\prime}+y=x e^{x} \end{aligned} $$ 333. (i) Show that $y=x^{2}+5$ is a solution of $x y^{\prime \prime}-y^{\prime}=0$. (ii) Show that $y=C_{1} x^{2}+C_{2}$ is a solution of $x y^{\prime \prime}-y^{\prime}=0$. 334. (i) Show that $y=\left(\tan \left(c_{1} x+c_{2}\right)\right) / c_{1}$ is a solution of $y y^{\prime \prime}=2\left(y^{\prime}\right)^{2}-2 y^{\prime}$. (ii) Show that $y_{1}=\tan (x)$ and $y_{2}=1$ are solutions of this equation, but that $y_{1}+y_{2}$ is not. (iii) Is the equation linear homogeneous? ## SEPARATION OF VARIABLES 335. Consider the differential equation $$ \frac{\mathrm{d} y}{\mathrm{~d} t}=\frac{4-y^{2}}{4} . $$ (a) Find the solutions $y_{0}, y_{1}, y_{2}$, and $y_{3}$ which satisfy $y_{0}(0)=0, y_{1}(0)=1$, $y_{2}(0)=2$ and $y_{3}(0)=3$. (b) Find $\lim _{t \rightarrow \infty} y_{k}(t)$ for $k=$ (c) Find $\lim _{t \rightarrow-\infty} y_{k}(t)$ for $k=$ $1,2,3$. (d) Graph the four solutions $y_{0}, \ldots$, $y_{3}$. (e) Show that the quantity $x=$ $(y+2) / 4$ satisfies the so-called Logistic Equation $$ \frac{d x}{d t}=x(1-x) . $$ (Hint: if $x=(y+2) / 4$, then $y=4 x-2$; substitute this in both sides of the diffeq for $y$ ). ## $* * *$ In each of the following problems you should find the function $y$ of $x$ which satisfies the conditions ( $A$ is an unspecified constant: you should at least indicate for which values of $A$ your solution is valid.) 336. $\frac{\mathrm{d} y}{\mathrm{~d} x}+x^{2} y=0, y(1)=5$. 337. $\frac{\mathrm{d} y}{\mathrm{~d} x}+\left(1+3 x^{2}\right) y=0, y(1)=1$. 338. $\frac{\mathrm{d} y}{\mathrm{~d} x}+x \cos ^{2} y=0, y(0)=\frac{\pi}{3}$. 339. $\frac{\mathrm{d} y}{\mathrm{~d} x}+\frac{1+x}{1+y}=0, y(0)=A$. 340. $\frac{\mathrm{d} y}{\mathrm{~d} x}+1-y^{2}=0, y(0)=A$. 341. $\frac{\mathrm{d} y}{\mathrm{~d} x}+1+y^{2}=0, y(0)=A$. 342. Find the function $y$ of $x$ which satisfies the initial value problem: $$ \frac{d y}{d x}+\frac{x^{2}-1}{y}=0 \quad y(0)=1 $$ 343. Find the general solution of $$ \frac{d y}{d x}+2 y+e^{x} \equiv 0 $$ 344. $\frac{\mathrm{d} y}{\mathrm{~d} x}-(\cos x) y=e^{\sin x}, y(0)=A$. 345. $y^{2} \frac{\mathrm{d} y}{\mathrm{~d} x}+x^{3}=0, y(0)=A$. 346. Group problem. Read Example 35.2 on "Leaky bucket dating" again. In that example we assumed that $\frac{a}{A} \sqrt{K}=$ 2. (a) Solve diffeq for $h(t)$ without assuming $\frac{a}{A} \sqrt{K}=2$. Abbreviate $C=$ $\frac{a}{A} \sqrt{K}$. (b) If in an experiment one found that the bucket empties in 20 seconds after being filled to height $20 \mathrm{~cm}$, then how much is the constant $C$ ? ## LINEAR HOMOGENEOUS 347. (a) Show that $y=4 e^{x}+7 e^{2 x}$ is a solution of $y^{\prime \prime}-3 y^{\prime}+2 y=0$. (b) Show that $y=C_{1} e^{x}+C_{2} e^{2 x}$ is a solution of $y^{\prime \prime}-3 y^{\prime}+2 y=0$. 
(c) Find a solution of $y^{\prime \prime}-3 y^{\prime}+2 y=0$ such that $y(0)=7$ and $y^{\prime}(0)=9$. 348. (a) Find all solutions of $\frac{\mathrm{d} y}{\mathrm{~d} x}+2 y=0$. (b) Find all solutions of $\frac{\mathrm{d} y}{\mathrm{~d} x}+2 y=e^{-x}$. (c) Find $y$ if $\frac{\mathrm{d} y}{\mathrm{~d} x}+2 y=e^{-x}$ and $y=7$ when $x=0$. 349. (a) Find all real solutions of $$ \frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}-6 \frac{\mathrm{d} y}{\mathrm{~d} t}+10 y=0 . $$ (b) Find $y$ if $$ y^{\prime \prime}-6 y^{\prime}+10 y=0, $$ and in addition $y$ satisfies the initial conditions $y(0)=7$, and $y^{\prime}(0)=11$. 350. Solve the initial value problem: $$ \begin{gathered} y^{\prime \prime}-5 y^{\prime}+4 y \equiv 0 \\ y(0)=2 \\ y^{\prime}(0)=-1 \end{gathered} $$ 351. For $y$ as a function of $x$, find the general solution of the equation: $$ y^{\prime \prime}-2 y^{\prime}+10 y \equiv 0 $$ ## * * * Find the general solution $y=y(x)$ of the following differential equations 352. $\frac{\mathrm{d}^{4} y}{\mathrm{~d} x^{4}}=y$ 353. $\frac{\mathrm{d}^{4} y}{\mathrm{~d} x^{4}}+y=0$ 354. $\frac{\mathrm{d}^{4} y}{\mathrm{~d} x^{4}}-\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}=0$ 355. $\frac{\mathrm{d}^{4} y}{\mathrm{~d} x^{4}}+\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}=0$ 356. $\frac{\mathrm{d}^{3} y}{\mathrm{~d} x^{3}}+y=0$ 357. $\frac{\mathrm{d}^{3} y}{\mathrm{~d} x^{3}}-y=0$ 358. $y^{(4)}(t)-2 y^{\prime \prime}(t)-3 y(t)=0$ 359. $y^{(4)}(t)+4 y^{\prime \prime}(t)+3 y(t)=0$. 360. $y^{(4)}(t)+2 y^{\prime \prime}(t)+2 y(t)=0$. 361. $y^{(4)}(t)+y^{\prime \prime}(t)-6 y(t)=0$. 362. $y^{(4)}(t)-8 y^{\prime \prime}(t)+15 y(t)=0$. 363. $f^{\prime \prime \prime}(x)-125 f(x)=0$. 364. $u^{(5)}(x)-32 u(x)=0$. 365. $u^{(5)}(x)+32 u(x)=0$. 366. $y^{\prime \prime \prime}(t)-5 y^{\prime \prime}(t)+6 y^{\prime}(t)-2 y(t)=0$. 367. $h^{(4)}(t)-h^{(3)}(t)+4 h^{\prime \prime}(t)-4 h(t)=0$. 368. $z^{\prime \prime \prime}(x)-5 z^{\prime \prime}(x)+4 z(x)=0$. Solve each of the following initial value problems. Your final answer should not use complex numbers, but you may use complex numbers to find it. 369. $y^{\prime \prime}+9 y=0, y(0)=0, y^{\prime}(0)=-3$. 370. $y^{\prime \prime}+9 y=0, y(0)=-3, y^{\prime}(0)=0$. 371. $y^{\prime \prime}-5 y^{\prime}+6 y=0, y(0)=0, y^{\prime}(0)=1$. 372. $y^{\prime \prime}+5 y^{\prime}+6 y=0, y(0)=1, y^{\prime}(0)=0$. 373. $y^{\prime \prime}+5 y^{\prime}+6 y=0, y(0)=0, y^{\prime}(0)=1$. 374. $y^{\prime \prime}-6 y^{\prime}+5 y=0, y(0)=1, y^{\prime}(0)=0$. 375. $y^{\prime \prime}-6 y^{\prime}+5 y=0, y(0)=0, y^{\prime}(0)=1$. 376. $y^{\prime \prime}+6 y^{\prime}+5 y=0, y(0)=1, y^{\prime}(0)=0$. 377. $y^{\prime \prime}+6 y^{\prime}+5 y=0, y(0)=0, y^{\prime}(0)=1$. 378. $y^{\prime \prime}-4 y^{\prime}+5 y=0, y(0)=1, y^{\prime}(0)=0$. 379. $y^{\prime \prime}-4 y^{\prime}+5 y=0, y(0)=0, y^{\prime}(0)=1$. 380. $y^{\prime \prime}+4 y^{\prime}+5 y=0, y(0)=1, y^{\prime}(0)=0$. 381. $y^{\prime \prime}+4 y^{\prime}+5 y=0, y(0)=0, y^{\prime}(0)=1$. 382. $y^{\prime \prime}-5 y^{\prime}+6 y=0, y(0)=1, y^{\prime}(0)=0$. 383. $f^{\prime \prime \prime}(t)+f^{\prime \prime}(t)-f^{\prime}(t)+15 f(t)=0$, with initial conditions $f(0)=0, f^{\prime}(0)=$ $1, f^{\prime \prime}(0)=0$. ## LINEAR INHOMOGENEOUS 384. Find particular solutions of $$ \begin{aligned} & y^{\prime \prime}-3 y^{\prime}+2 y=e^{3 x} \\ & y^{\prime \prime}-3 y^{\prime}+2 y=e^{x} \\ & y^{\prime \prime}-3 y^{\prime}+2 y=4 e^{3 x}+5 e^{x} \end{aligned} $$ 385. 
Find a particular solution of the equation: $$ y^{\prime \prime}+y^{\prime}+2 y=e^{x}+x+1 $$ Find the general solution $y(t)$ of the following differential equations 386. $\frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}-y=2$ 387. $\frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}-y=2 e^{t}$ 388. $\frac{\mathrm{d}^{2} y}{d t^{2}}+9 y=\cos 3 t$ 389. $\frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}+9 y=\cos t$ 390. $\frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}+y=\cos t$ 391. $\frac{\mathrm{d}^{2} y}{\mathrm{~d} t^{2}}+y=\cos 3 t$. 392. Find $y$ if (a) $\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}+2 \frac{\mathrm{d} y}{\mathrm{~d} x}+y=0$ $y(0)=2$, $y^{\prime}(0)=3$ (b) $\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}+2 \frac{\mathrm{d} y}{\mathrm{~d} x}+y=e^{-x}$ $y(0)=0$, $y^{\prime}(0)=0$ (c) $\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}+2 \frac{\mathrm{d} y}{\mathrm{~d} x}+y=x e^{-x}$ $y(0)=0$, $y^{\prime}(0)=0$ (d) $\frac{\mathrm{d}^{2} y}{\mathrm{~d} x^{2}}+2 \frac{\mathrm{d} y}{\mathrm{~d} x}+y=e^{-x}+x e^{-x}$ $y(0)=2$, $y^{\prime}(0)=3$. Hint: Use the Superposition Principle to save work. 393. Group problem. (i) Find the general solution of $$ z^{\prime \prime}+4 z^{\prime}+5 z=e^{i t} $$ using complex exponentials. (ii) Solve $$ z^{\prime \prime}+4 z^{\prime}+5 z=\sin t $$ using your solution to question (i). (iii) Find a solution for the equation $$ z^{\prime \prime}+2 z^{\prime}+2 z=2 e^{-(1-i) t} $$ in the form $z(t)=u(t) e^{-(1-i) t}$. (iv) Find a solution for the equation $$ x^{\prime \prime}+2 x^{\prime}+2 x=2 e^{-t} \cos t . $$ Hint: Take the real part of the previous answer. (v) Find a solution for the equation $$ y^{\prime \prime}+2 y^{\prime}+2 y=2 e^{-t} \sin t $$ ## APPLICATIONS 394. A population of bacteria grows at a rate proportional to its size. Write and solve a differential equation which expresses this. If there are 1000 bacteria after one hour and 2000 bacteria after two hours, how many bacteria are there after three hours? 395. Rabbits in Madison have a birth rate of $5 \%$ per year and a death rate (from old age) of $2 \%$ per year. Each year 1000 rabbits get run over and 700 rabbits move in from Sun Prairie. (i) Write a differential equation which describes Madison's rabbit population at time $t$. (ii) If there were 12,000 rabbits in Madison in 1991, how many are there in 1994? 396. Group problem. According to Newton's law of cooling the rate $\mathrm{d} T / \mathrm{d} t$ at which an object cools is proportional to the difference $T-A$ between its temperature $T$ and the ambient temperature $A$. The differential equation which expresses this is $$ \frac{\mathrm{d} T}{\mathrm{~d} t}=k(T-A) $$ where $k<0$ and $A$ are constants. (i) Solve this equation and show that every solution satisfies $$ \lim _{t \rightarrow \infty} T=A \text {. } $$ (ii) A cup of coffee at a temperature of $180^{\circ} \mathrm{F}$ sits in a room whose temperature is $75^{\circ} \mathrm{F}$. In five minutes its temperature has dropped to $150^{\circ} \mathrm{F}$. When will its temperature be $90^{\circ} \mathrm{F}$ ? What is the limit of the temperature as $t \rightarrow \infty$ ? 397. Retaw is a mysterious living liquid; it grows at a rate of $5 \%$ of its volume per hour. A scientist has a tank initially holding $y_{0}$ gallons of retaw and removes retaw from the tank continuously at the rate of 3 gallons per hour. (i) Find a differential equation for the number $y(t)$ of gallons of retaw in the tank at time $t$. (ii) Solve this equation for $y$ as a function of $t$. 
(The initial volume $y_{0}$ will appear in your answer.) (iii) What is $\lim _{t \rightarrow \infty} y(t)$ if $y_{0}=100$ ? (iv) What should the value of $y_{0}$ be so that $y(t)$ remains constant? 398. A 1000 gallon vat is full of $25 \%$ solution of acid. Starting at time $t=0$ a $40 \%$ solution of acid is pumped into the vat at 20 gallons per minute. The solution is kept well mixed and drawn off at 20 gallons per minute so as to maintain the total value of 1000 gallons. Derive an expression for the acid concentration at times $t>0$. As $t \rightarrow \infty$ what percentage solution is approached? 399. The volume of a lake is $V=10^{9}$ cubic feet. Pollution $P$ runs into the lake at 3 cubic feet per minute, and clean water runs in at 21 cubic feet per minute. The lake drains at a rate of 24 cubic feet per minute so its volume is constant. Let $C$ be the concentration of pollution in the lake; i.e. $C=P / V$. (i) Give a differential equation for $C$. (ii) Solve the differential equation. Use the initial condition $C=C_{0}$ when $t=0$ to evaluate the constant of integration. (iii) There is a critical value $C^{*}$ with the property that for any solution $C=C(t)$ we have $$ \lim _{t \rightarrow \infty} C=C^{*} . $$ Find $C^{*}$. If $C_{0}=C^{*}$, what is $C(t) ?$ 400. Group problem. A philanthropist endows a chair. This means that she donates an amount of money $B_{0}$ to the university. The university invests the money (it earns interest) and pays the salary of a professor. Denote the interest rate on the investment by $r$ (e.g. if $r=.06$, then the investment earns interest at a rate of $6 \%$ per year) the salary of the professor by $a$ (e.g. $a=\$ 50,000$ per year), and the balance in the investment account at time $t$ by $B$. (i) Give a differential equation for $B$. (ii) Solve the differential equation. Use the initial condition $B=B_{0}$ when $t=0$ to evaluate the constant of integration. (iii) There is a critical value $B^{*}$ with the property that (1) if $B_{0}<B^{*}$, then there is a $t>0$ with $B(t)=0$ (i.e. the account runs out of money) while (2) if $B_{0}>B^{*}$, then $\lim _{t \rightarrow \infty} B=\infty$. Find $B^{*}$. (iv) This problem is like the pollution problem except for the signs of $r$ and $a$. Explain. 401. Group problem. A citizen pays social security taxes of a dollars per year for $T_{1}$ years, then retires, then receives payments of $b$ dollars per year for $T_{2}$ years, then dies. The account which receives and dispenses the money earns interest at a rate of $r \%$ per year and has no money at time $t=0$ and no money at the time $t=T_{1}+T_{2}$ of death. Find two differential equations for the balance $B(t)$ at time $t$; one valid for $0 \leq t \leq T_{1}$, the other valid for $T_{1} \leq t \leq T_{1}+T_{2}$. Express the ratio $b / a$ in terms of $T_{1}, T_{2}$, and $r$. Reasonable values for $T_{1}, T_{2}$, and $r$ are $T_{1}=40, T_{2}=20$, and $r=5 \%=.05$. This model ignores inflation. Notice that $0<d B / \mathrm{d} t$ for $0<t<T_{1}$, that $d B / \mathrm{d} t<0$ for $T_{1}<t<T_{1}+T_{2}$, and that the account earns interest even for $T_{1}<t<T_{1}+T_{2}$. 402. A 300 gallon tank is full of milk containing $2 \%$ butterfat. Milk containing $1 \%$ butterfat is pumped in a 10 gallons per minute starting at 10:00 AM and the well mixed milk is drained off at 15 gallons per minute. What is the percent butterfat in the milk in the tank 5 minutes later at 10:05 AM? Hint: How much milk is in the tank at time $t$ ? 
How much butterfat is in the milk at time $t=0$ ? 403. A sixteen pound weight is suspended from the lower end of a spring whose upper end is attached to a rigid support. The weight extends the spring by half a foot. It is struck by a sharp blow which gives it an initial downward velocity of eight feet per second. Find its position as a function of time. 404. A sixteen pound weight is suspended from the lower end of a spring whose upper end is attached to a rigid support. The weight extends the spring by half a foot. The weight is pulled down one feet and released. Find its position as a function of time. 405. The equation for the displacement $y(t)$ from equilibrium of a spring subject to a forced vibration of frequency $\omega$ is $$ \frac{d^{2} y}{\mathrm{~d} t^{2}}+4 y=\sin (\omega t) $$ (i) Find the solution $y=y(\omega, t)$ of (47) for $\omega \neq 2$ if $y(0)=0$ and $y^{\prime}(0)=0$. (ii) What is $\lim _{\omega \rightarrow 2} y(\omega, t)$ ? (iii) Find the solution $y(t)$ of $$ \frac{d^{2} y}{\mathrm{~d} t^{2}}+4 y=\sin (2 t) $$ if $y(0)=0$ and $y^{\prime}(0)=0$. (Hint: Compare with (47).) 406. Group problem. Suppose that an undamped spring is subjected to an external periodic force so that its position $y$ at time $t$ satisfies the differential equation $$ \frac{d^{2} y}{\mathrm{~d} t^{2}}+\omega_{0}^{2} y=c \sin (\omega t) $$ (i) Show that the general solution is $$ y=C_{1} \cos \omega_{0} t+C_{2} \sin \omega_{0} t+\frac{c}{\omega_{0}^{2}-\omega^{2}} \sin \omega t $$ when $\omega_{0} \neq \omega$. (ii) Solve the equation when $\omega=\omega_{0}$. (iii) Show that in part (i) the solution remains bounded as $t \rightarrow \infty$ but in part (ii) this is not so. (This phenomenon is called resonance. To see an example of resonance try Googling "Tacoma Bridge Disaster.") 407. Group problem. Have look at the electrical circuit equation (46) from $\S 40.4$. (i) Find the general solution of (46), assuming that $V_{\text {in }}(t)$ does not depend on time $t$. What is $\lim _{t \rightarrow \infty} I(t)$ ? (ii) Assume for simplicity that $L=C=1$, and that the resistor has been short circuited, i.e. that $R=0$. If the input voltage is a sinusoidal wave, $$ V_{\mathrm{in}}(t)=A \sin \omega t, \quad(\omega \neq 1) $$ then find a particular solution, and then the general solution. (iii) Repeat problem (ii) with $\omega=1$. (iv) Suppose again that $L=C=1$, but now assume that $R>0$. Find the general solution when $V_{\text {in }}(t)$ is constant. (v) Still assuming $L=C=1, R>0$ find a particular solution of the equation when the input voltage is a sinusoidal wave $$ V_{\text {in }}(t)=A \sin \omega t $$ 408. You are watching a buoy bobbing up and down in the water. Assume that the buoy height with respect to the surface level of the water satisfies the damped oscillator equation: $z^{\prime \prime}+b z^{\prime}+k z \equiv 0$ where $b$ and $k$ are positive constants. Something has initially disturbed the buoy causing it to go up and down, but friction will gradually cause its motion to die out. You make the following observations: At time zero the center of the buoy is at $z(0)=0$ , i.e., the position it would be in if it were at rest. It then rises up to a peak and falls down so that at time $t=2$ it again at zero, $z(2)=0$ descends downward and then comes back to 0 at time 4 , i.e, $z(4)=0$. Suppose $z(1)=25$ and $z(3)=-16$. (a) How high will $z$ be at time $t=5$ ? (b) What are $b$ and $k$ ? Hint: Use that $z=A e^{\alpha t} \sin (\omega t+B)$. 409. 
Contrary to what one may think the buoy does not reach its peak at time $t=1$ which is midway between its first two zeros, at $t=0$ and $t=2$. For example, suppose $z=e^{-t} \sin t$. Then $z$ is zero at both $t=0$ and $t=\pi$. Does $z$ have a local maximum at $t=\frac{\pi}{2}$ ? 410. In the buoy problem 408 suppose you make the following observations: It rises up to its first peak at $t=1$ where $z(1)=25$ and then descends downward to a local minimum at at $t=3$ where $z(3)=-16$. (a) When will the buoy reach its second peak and how high will that be? (b) What are $b$ and $k$ ? Note: It will not be the case that $z(0)=0$. Linear operators Given a polynomial $p=p(x)=a_{0}+a_{1} x+a_{2} x^{2}+\ldots+a_{n} x^{n}$ and $z=z(t)$ an infinitely differentiable function of $t$ define $$ \mathcal{L}_{p}(z)=a_{0} z+a_{1} z^{(1)}+a_{2} z^{(2)}+\ldots+a_{n} z^{(n)} $$ where $z^{(k)}=\frac{d^{k} z}{d t^{k}}$ is the $k^{t h}$ derivative of $z$ with respect to $t$. 411. Show for $\mathcal{L}=\mathcal{L}_{p}$ that (a) $\mathcal{L}\left(z_{1}+z_{2}\right)=\mathcal{L}\left(z_{1}\right)+\mathcal{L}\left(z_{2}\right)$ (b) $\mathcal{L}(C z)=C \mathcal{L}(z)$ where $C$ is any constant Such an $\mathcal{L}$ is called a linear operator. Operator because it takes as input a function and then ouputs another function. Linear refers to properties (a) and (b). 412. Let $r$ be any constant and $p$ any polynomial. Show that $\mathcal{L}_{p}\left(e^{r t}\right)=p(r) e^{r t}$. 413. For $p$ and $q$ polynomials show that $$ \mathcal{L}_{p+q}(z)=\mathcal{L}_{p}(z)+\mathcal{L}_{q}(z) $$ 414. For $p$ and $q$ polynomials show that $$ \mathcal{L}_{p \cdot q}(z)=\mathcal{L}_{p}\left(\mathcal{L}_{q}(z)\right) $$ Here $p \cdot q$ refers to the ordinary product of the two polynomials. 415. Let $\alpha$ be a constant. For any $u$ an infinitely differentiable function of $t$ show that (a) $\mathcal{L}_{x-\alpha}\left(u \cdot e^{\alpha t}\right)=u^{(1)} e^{\alpha t}$ (b) $\mathcal{L}_{(x-\alpha)^{n}}\left(u \cdot e^{\alpha t}\right)=u^{(n)} e^{\alpha t}$ 416. Let $\alpha$ be any constant, $p$ a polynomial, and suppose that $(x-\alpha)^{n}$ divides $p$. Show that for any $k<n$ $$ \mathcal{L}_{p}\left(t^{k} e^{\alpha t}\right) \equiv 0 $$ 417. Suppose that $p(x)=\left(x-\alpha_{1}\right)^{n_{1}} \cdots\left(x-\alpha_{m}\right)^{n_{m}}$ where the $\alpha_{i}$ are distinct constants. Suppose that $$ z=C_{1}^{1} e^{\alpha_{1} t}+C_{1}^{2} t e^{\alpha_{1} t}+\cdots C_{1}^{n_{1}} t^{n_{1}-1} e^{\alpha_{1} t}+\cdots+C_{m}^{1} e^{\alpha_{m} t}+C_{m}^{2} t e^{\alpha_{m} t}+\cdots C_{m}^{n_{m}} t^{n_{m}-1} e^{\alpha_{m} t} $$ Show that $\mathcal{L}_{p}(z) \equiv 0$. In a more advanced course in the theory of differential equations it would be proved that every solution of $\mathcal{L}_{p}(z) \equiv 0$ has this form, i.e., $z$ satisfies the above formula for some choice of the constants $C_{j}^{i}$. 418. Suppose $\mathcal{L}$ is a linear operator and $b=b(t)$ is a fixed function of $t$. Suppose that $z_{P}$ is one particular solution of $\mathcal{L}(z)=b$, i.e., $\mathcal{L}\left(z_{P}\right)=b$. Suppose that $z$ is any other solution of $\mathcal{L}(z)=b$. Show that $\mathcal{L}\left(z-z_{P}\right) \equiv 0$. Show that for any solution of the equation $\mathcal{L}(z)=b$ there is a solution $z_{H}$ of the associated homogenous equation such that $z=z_{P}+z_{H}$. Variations of Parameters 419. Given the equation $$ \mathcal{L}(z)=z^{\prime \prime}+a_{0} z^{\prime}+a_{1} z \equiv b $$ where $a_{0}, a_{1}, b$ are given functions of $t$. 
Then $$ \mathcal{L}\left(f z_{1}+g z_{2}\right)=b $$ where $$ z_{h}=C_{1} z_{1}+C_{2} z_{2} $$ is the general solution of the associated homogenous equation $\mathcal{L}(z) \equiv 0$ and the derivatives of $f$ and $g$ satisfy: $$ f^{\prime}=\frac{\operatorname{det}\left(\begin{array}{cc} 0 & z_{2} \\ b & z_{2}^{\prime} \end{array}\right)}{\operatorname{det}\left(\begin{array}{cc} z_{1} & z_{2} \\ z_{1}^{\prime} & z_{2}^{\prime} \end{array}\right)} \quad g^{\prime}=\frac{\operatorname{det}\left(\begin{array}{cc} z_{1} & 0 \\ z_{1}^{\prime} & b \end{array}\right)}{\operatorname{det}\left(\begin{array}{cc} z_{1} & z_{2} \\ z_{1}^{\prime} & z_{2}^{\prime} \end{array}\right)} $$ Use these formulas to find the general solution of $$ z^{\prime \prime}+z \equiv(\cos t)^{2} $$ 420. Use these formulas to find the general solution of $$ z^{\prime \prime}+z \equiv \frac{1}{\sin t} $$ 421. Solve the initial value problem: $$ \begin{gathered} z^{\prime \prime}+z \equiv(\tan t)^{2} \\ z(0)=1 \\ z^{\prime}(0)=-1 \end{gathered} $$ 422. Find the general solution of $$ z^{\prime \prime}-z \equiv \frac{1}{e^{t}+e^{-t}} $$ 423. Given a system of linear equations $$ \begin{aligned} & a x+b y=r \\ & c x+d y=s \end{aligned} $$ show that the solution is given by: $$ x=\frac{\operatorname{det}\left(\begin{array}{ll} r & b \\ s & d \end{array}\right)}{\operatorname{det}\left(\begin{array}{ll} a & b \\ c & d \end{array}\right)} \quad y=\frac{\operatorname{det}\left(\begin{array}{ll} a & r \\ c & s \end{array}\right)}{\operatorname{det}\left(\begin{array}{ll} a & b \\ c & d \end{array}\right)} $$ You may assume the determinant: $$ \operatorname{det}\left(\begin{array}{ll} a & b \\ c & d \end{array}\right)=a d-b c $$ is not zero. 424. Given the linear operator $\mathcal{L}(z)=z^{\prime \prime}+a_{0} z^{\prime}+a_{1} z$ suppose $\mathcal{L}\left(z_{1}\right) \equiv 0$ and $\mathcal{L}\left(z_{2}\right) \equiv 0$ and that $f$ and $g$ are functions of $t$ which satisfy $f^{\prime} z_{1}+g^{\prime} z_{2} \equiv 0$. Show that $$ \mathcal{L}\left(f z_{1}+g z_{2}\right)=f^{\prime} z_{1}^{\prime}+g^{\prime} z_{2}^{\prime} $$ 425. Prove that the formulas given problem 419 work. ## Chapter 5: Vectors ## Introduction to vectors 42.1. Definition. A vector is a column of two, three, or more numbers, written as $$ \overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c} a_{1} \\ a_{2} \end{array}\right) \quad \text { or } \quad \overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c} a_{1} \\ a_{2} \\ a_{3} \end{array}\right) \quad \text { or } \quad \overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c} a_{1} \\ \vdots \\ a_{n} \end{array}\right) $$ in general. The length of a vector $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c}a_{1} \\ a_{2} \\ a_{3}\end{array}\right)$ is defined by $$ \|\overrightarrow{\boldsymbol{a}}\|=\left\|\left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)\right\|=\sqrt{a_{1}^{2}+a_{2}^{2}+a_{3}^{2}} . $$ We will always deal with either the two or three dimensional cases, in other words, the cases $n=2$ or $n=3$, respectively. For these cases there is a geometric description of vectors which is very useful. In fact, the two and three dimensional theories have their origins in mechanics and geometry. In higher dimensions the geometric description fails, simply because we cannot visualize a four dimensional space, let alone a higher dimensional space. 
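Even though we cannot draw vectors with more than three components, a computer has no trouble with the length formula from the definition above. The little numpy script below is our own addition (numpy is an assumption, not part of the course, and the example vectors are invented for illustration); it computes the length of a three dimensional vector directly from the definition and then does the same for a seven dimensional one.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])          # a vector with three components
print(np.sqrt(np.sum(a**2)))           # 3.0, the length from the definition
print(np.linalg.norm(a))               # the same number, via numpy's norm

# The same formula makes sense for any number of components,
# even though we cannot draw the corresponding arrow:
b = np.ones(7)
print(np.linalg.norm(b))               # sqrt(7)
```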
Instead of a geometric description of vectors there is an abstract theory called Linear Algebra which deals with "vector spaces" of any dimension (even infinite!). This theory of vectors in higher dimensional spaces is very useful in science, engineering and economics. You can learn about it in courses like MATH 320 or 340/341. 42.2. Basic arithmetic of vectors. You can add and subtract vectors, and you can multiply them with arbitrary real numbers. this section tells you how. The sum of two vectors is defined by $$ \left(\begin{array}{l} a_{1} \\ a_{2} \end{array}\right)+\left(\begin{array}{l} b_{1} \\ b_{2} \end{array}\right)=\left(\begin{array}{l} a_{1}+b_{1} \\ a_{2}+b_{2} \end{array}\right), $$ and $$ \left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)+\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)=\left(\begin{array}{l} a_{1}+b_{1} \\ a_{2}+b_{2} \\ a_{3}+b_{3} \end{array}\right) . $$ The zero vector is defined by $$ \overrightarrow{\mathbf{0}}=\left(\begin{array}{l} 0 \\ 0 \end{array}\right) \quad \text { or } \quad \overrightarrow{\mathbf{0}}=\left(\begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) $$ It has the property that $$ \vec{a}+\overrightarrow{0}=\overrightarrow{0}+\vec{a}=\vec{a} $$ no matter what the vector $\overrightarrow{\boldsymbol{a}}$ is. You can multiply a vector $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}a_{1} \\ a_{2} \\ a_{3}\end{array}\right)$ with a real number $t$ according to the rule $$ t \vec{a}=\left(\begin{array}{c} t a_{1} \\ t a_{2} \\ t a_{3} \end{array}\right) $$ In particular, "minus a vector" is defined by $$ -\overrightarrow{\boldsymbol{a}}=(-1) \overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c} -a_{1} \\ -a_{2} \\ -a_{3} \end{array}\right) . $$ The difference of two vectors is defined by $$ \vec{a}-\vec{b}=\vec{a}+(-\vec{b}) . $$ So, to subtract two vectors you subtract their components, $$ \overrightarrow{\boldsymbol{a}}-\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)-\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)=\left(\begin{array}{l} a_{1}-b_{1} \\ a_{2}-b_{2} \\ a_{3}-b_{3} \end{array}\right) $$ 42.3. Some GOOD examples. $$ \begin{aligned} \left(\begin{array}{l} 2 \\ 3 \end{array}\right)+\left(\begin{array}{c} -3 \\ \pi \end{array}\right)=\left(\begin{array}{c} -1 \\ 3+\pi \end{array}\right) & 2\left(\begin{array}{l} 1 \\ 0 \end{array}\right)+3\left(\begin{array}{l} 0 \\ 1 \end{array}\right)=\left(\begin{array}{l} 2 \\ 3 \end{array}\right) \\ \left(\begin{array}{l} 1 \\ 0 \\ 3 \end{array}\right)-\left(\begin{array}{c} -1 \\ 12 \\ \sqrt{2} \end{array}\right)=\left(\begin{array}{c} 2 \\ -12 \\ 3-\sqrt{2} \end{array}\right) & a\left(\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right)+b\left(\begin{array}{l} 0 \\ 1 \\ 0 \end{array}\right)+c\left(\begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right)=\left(\begin{array}{l} a \\ b \\ c \end{array}\right) \\ 0 \cdot\left(\begin{array}{c} 12 \sqrt{ } 39 \\ \pi^{2}-\ln 3 \end{array}\right)=\left(\begin{array}{l} 0 \\ 0 \end{array}\right)=\overrightarrow{\mathbf{0}} & \left(\begin{array}{l} t+t^{2} \\ 1-t^{2} \end{array}\right)=(1+t)\left(\begin{array}{c} t \\ 1-t \end{array}\right) \end{aligned} $$ 42.4. Two very, very BAD examples. Vectors must have the same size to be added, therefore $$ \left(\begin{array}{l} 2 \\ 3 \end{array}\right)+\left(\begin{array}{l} 1 \\ 3 \\ 2 \end{array}\right)=\text { undefined!!! 
} $$ Vectors and numbers are different things, so an equation like $$ \vec{a}=3 \text { is nonsense! } $$ This equation says that some vector $(\overrightarrow{\boldsymbol{a}})$ is equal to some number (in this case: 3 ). Vectors and numbers are never equal! 42.5. Algebraic properties of vector addition and multiplication. Addition of vectors and multiplication of numbers and vectors were defined in such a way that the following always hold for any vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ (of the same size) and any real numbers $s, t$ $$ \begin{aligned} \vec{a}+\overrightarrow{\boldsymbol{b}} & =\overrightarrow{\boldsymbol{b}}+\overrightarrow{\boldsymbol{a}} & & \text { [vector addition is commutative] } \\ \overrightarrow{\boldsymbol{a}}+(\overrightarrow{\boldsymbol{b}}+\overrightarrow{\boldsymbol{c}}) & =(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}})+\overrightarrow{\boldsymbol{c}} & & \text { [vector addition is associative] } \\ t(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}) & =t \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}} & & \text { [first distributive property] } \\ (s+t) \overrightarrow{\boldsymbol{a}} & =s \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{a}} & & \text { [second distributive property] } \end{aligned} $$ 42.6. Prove (50). Let $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}a_{1} \\ a_{2} \\ a_{3}\end{array}\right)$ and $\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l}b_{1} \\ b_{2} \\ b_{3}\end{array}\right)$ be two vectors, and consider both possible ways of adding them: $$ \left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)+\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)=\left(\begin{array}{l} a_{1}+b_{1} \\ a_{2}+b_{2} \\ a_{3}+b_{3} \end{array}\right) \quad \text { and }\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)+\left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)=\left(\begin{array}{l} b_{1}+a_{1} \\ b_{2}+a_{2} \\ b_{3}+a_{3} \end{array}\right) $$ We know (or we have assumed long ago) that addition of real numbers is commutative, so that $a_{1}+b_{1}=b_{1}+a_{1}$, etc. Therefore $$ \overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l} a_{1}+b_{1} \\ a_{2}+b_{2} \\ a_{3}+b_{3} \end{array}\right)=\left(\begin{array}{l} b_{1}+a_{1} \\ b_{2}+a_{2} \\ b_{3}+a_{3} \end{array}\right)=\overrightarrow{\boldsymbol{b}}+\overrightarrow{\boldsymbol{a}} $$ This proves (50). 42.7. Example. If $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$ are two vectors, we define $$ \vec{a}=2 \vec{v}+3 \vec{w}, \quad \vec{b}=-\vec{v}+\vec{w} $$ Problem: Compute $\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}$ and $2 \overrightarrow{\boldsymbol{a}}-3 \overrightarrow{\boldsymbol{b}}$ in terms of $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$. 
Solution: $$ \begin{gathered} \overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}=(2 \overrightarrow{\boldsymbol{v}}+3 \overrightarrow{\boldsymbol{w}})+(-\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}})=(2-1) \overrightarrow{\boldsymbol{v}}+(3+1) \overrightarrow{\boldsymbol{w}}=\overrightarrow{\boldsymbol{v}}+4 \overrightarrow{\boldsymbol{w}} \\ 2 \overrightarrow{\boldsymbol{a}}-3 \overrightarrow{\boldsymbol{b}}=2(2 \overrightarrow{\boldsymbol{v}}+3 \overrightarrow{\boldsymbol{w}})-3(-\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}})=4 \overrightarrow{\boldsymbol{w}}+6 \overrightarrow{\boldsymbol{w}}+3 \overrightarrow{\boldsymbol{v}}-3 \overrightarrow{\boldsymbol{w}}=7 \overrightarrow{\boldsymbol{v}}+3 \overrightarrow{\boldsymbol{w}} \end{gathered} $$ Problem: Find $s, t$ so that $s \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{v}}$. Solution: Simplifying $s \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}}$ you find $$ s \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}}=s(2 \overrightarrow{\boldsymbol{v}}+3 \overrightarrow{\boldsymbol{w}})+t(-\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}})=(2 s-t) \overrightarrow{\boldsymbol{v}}+(3 s+t) \overrightarrow{\boldsymbol{w}} $$ One way to ensure that $s \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{v}}$ holds is therefore to choose $s$ and $t$ to be the solutions of $$ \begin{aligned} & 2 s-t=1 \\ & 3 s+t=0 \end{aligned} $$ The second equation says $t=-3 s$. The first equation then leads to $2 s+3 s=1$, i.e. $s=\frac{1}{5}$. Since $t=-3 s$ we get $t=-\frac{3}{5}$. The solution we have found is therefore $$ \frac{1}{5} \overrightarrow{\boldsymbol{a}}-\frac{3}{5} \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{v}} $$ 42.8. Geometric description of vectors. Vectors originally appeared in mechanics, where they represented forces: a force acting on some object has a magnitude and a direction. Thus a force can be thought of as an arrow, where the length of the arrow indicates how strong the force is (how hard it pushes or pulls). So we will think of vectors as arrows: if you specify two points $P$ and $Q$, then the arrow pointing from $P$ to $Q$ is a vector and we denote this vector by $\overrightarrow{P Q}$. The precise mathematical definition is as follows: position vectors in the plane and in space 102 42.9. Definition. For any pair of points $P$ and $Q$ whose coordinates are $\left(p_{1}, p_{2}, p_{3}\right)$ and $\left(q_{1}, q_{2}, q_{3}\right)$ one defines a vector $\overrightarrow{P Q}$ by $$ \overrightarrow{P Q}=\left(\begin{array}{l} q_{1}-p_{1} \\ q_{2}-p_{2} \\ q_{3}-p_{3} \end{array}\right) $$ If the initial point of an arrow is the origin $O$, and the final point is any point $Q$, then the vector $\overrightarrow{O Q}$ is called the position vector of the point $Q$. If $\overrightarrow{\boldsymbol{p}}$ and $\overrightarrow{\boldsymbol{q}}$ are the position vectors of $P$ and $Q$, then one can write $\overrightarrow{P Q}$ as $$ \overrightarrow{P Q}=\left(\begin{array}{l} q_{1} \\ q_{2} \\ q_{3} \end{array}\right)-\left(\begin{array}{l} p_{1} \\ p_{2} \\ p_{3} \end{array}\right)=\overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{p}} $$ For plane vectors we define $\overrightarrow{P Q}$ similarly, namely, $\overrightarrow{P Q}=\left(\begin{array}{c}q_{1}-p_{1} \\ q_{2}-p_{2}\end{array}\right)$. 
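If you like to experiment, the definition of $\overrightarrow{P Q}$ is easy to try out on a computer. The numpy sketch below is our own addition (numpy and the chosen coordinates are assumptions, not part of the notes); it forms the vector $\overrightarrow{P Q}=\overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{p}}$ by subtracting components and also prints its length. The next paragraph explains why that length is exactly the distance from $P$ to $Q$.

```python
import numpy as np

# Position vectors of two points P and Q (coordinates invented for illustration)
p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 6.0, 3.0])

PQ = q - p                      # the vector from P to Q: components q_i - p_i
print(PQ)                       # [3. 4. 0.]
print(np.linalg.norm(PQ))       # 5.0, the distance from P to Q (see below)
```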
The old formula for the distance between two points $P$ and $Q$ in the plane $$ \text { distance from } P \text { to } Q=\sqrt{\left(q_{1}-p_{1}\right)^{2}+\left(q_{2}-p_{2}\right)^{2}} $$ says that the length of the vector $\overrightarrow{P Q}$ is just the distance between the points $P$ and $Q$, i.e. $$ \text { distance from } P \text { to } Q=\|\overrightarrow{P Q}\| \text {. } $$ This formula is also valid if $P$ and $Q$ are points in space. 42.10. Example. The point $P$ has coordinates $(2,3)$; the point $Q$ has coordinates $(8,6)$. The vector $\overrightarrow{P Q}$ is therefore $$ \overrightarrow{P Q}=\left(\begin{array}{l} 8-2 \\ 6-3 \end{array}\right)=\left(\begin{array}{l} 6 \\ 3 \end{array}\right) $$ This vector is the position vector of the point $R$ whose coordinates are $(6,3)$. Thus $$ \overrightarrow{P Q}=\overrightarrow{O R}=\left(\begin{array}{l} 6 \\ 3 \end{array}\right) . $$ The distance from $P$ to $Q$ is the length of the vector $\overrightarrow{P Q}$, i.e. $$ \text { distance } P \text { to } Q=\left\|\left(\begin{array}{l} 6 \\ 3 \end{array}\right)\right\|=\sqrt{6^{2}+3^{2}}=3 \sqrt{ } 5 \text {. } $$ 42.11. Example. Find the distance between the points $A$ and $B$ whose position vectors are $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}1 \\ 1 \\ 0\end{array}\right)$ and $\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l}0 \\ 1 \\ 1\end{array}\right)$ respectively. ## Solution: One has $$ \text { distance } A \text { to } B=\|\overrightarrow{A B}\|=\|\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}\|=\left\|\left(\begin{array}{c} -1 \\ 0 \\ 1 \end{array}\right)\right\|=\sqrt{(-1)^{2}+0^{2}+1^{2}}=\sqrt{ } 2 $$ two pictures of the vector $\overrightarrow{P Q}=\overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{p}}$ 42.12. Geometric interpretation of vector addition and multiplication. Suppose you have two vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$. Consider them as position vectors, i.e. represent them by vectors that have the origin as initial point: $$ \overrightarrow{\boldsymbol{a}}=\overrightarrow{O A}, \quad \overrightarrow{\boldsymbol{b}}=\overrightarrow{O B} $$ Then the origin and the three endpoints of the vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$ and $\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}$ form a parallelogram. See figure 15. To multiply a vector $\overrightarrow{\boldsymbol{a}}$ with a real number $t$ you multiply its length with $|t|$; if $t<0$ you reverse the direction of $\overrightarrow{\boldsymbol{a}}$. Figure 15. Two ways of adding plane vectors, and an addition of space vectors Figure 16. Multiples of a vector, and the difference of two vectors. Figure 17. Picture proof that $\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{v}}+4 \overrightarrow{\boldsymbol{w}}$ in example 42.13 . 42.13. Example. In example 42.7 we assumed two vectors $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$ were given, and then defined $\overrightarrow{\boldsymbol{a}}=2 \overrightarrow{\boldsymbol{v}}+3 \overrightarrow{\boldsymbol{w}}$ and $\overrightarrow{\boldsymbol{b}}=-\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}}$. In figure 17 the vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ are constructed geometrically from some arbitrarily chosen $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$. 
We also found algebraically in example 42.7 that $\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{v}}+4 \overrightarrow{\boldsymbol{w}}$. The third drawing in figure 17 illustrates this.

## Parametric equations for lines and planes

Given two distinct points $A$ and $B$ we consider the line segment $A B$. If $X$ is any given point on $A B$ then we will now find a formula for the position vector of $X$.

Define $t$ to be the ratio between the lengths of the line segments $A X$ and $A B$,
$$
t=\frac{\text { length } A X}{\text { length } A B} .
$$
Then the vectors $\overrightarrow{A X}$ and $\overrightarrow{A B}$ are related by $\overrightarrow{A X}=t \overrightarrow{A B}$. Since $A X$ is shorter than $A B$ we have $0<t<1$.

The position vector of the point $X$ on the line segment $A B$ is
$$
\overrightarrow{O X}=\overrightarrow{O A}+\overrightarrow{A X}=\overrightarrow{O A}+t \overrightarrow{A B}
$$
If we write $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{x}}$ for the position vectors of $A, B, X$, then we get
$$
\overrightarrow{\boldsymbol{x}}=(1-t) \overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{a}}+t(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}) . \tag{54}
$$
This equation is called the parametric equation for the line through $A$ and $B$. In our derivation the parameter $t$ satisfied $0 \leq t \leq 1$, but there is nothing that keeps us from substituting negative values of $t$, or numbers $t>1$ in (54). The resulting vectors $\overrightarrow{\boldsymbol{x}}$ are position vectors of points $X$ which lie on the line $\ell$ through $A$ and $B$.

Figure 18. Constructing points on the line through $A$ and $B$

43.1. Example. [Find the parametric equation for the line $\ell$ through the points $A(1,2)$ and $B(3,-1)$, and determine where $\ell$ intersects the $x_{1}$ axis.]

Solution: The position vectors of $A, B$ are $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}1 \\ 2\end{array}\right)$ and $\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{c}3 \\ -1\end{array}\right)$, so the position vector of an arbitrary point on $\ell$ is given by
$$
\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{a}}+t(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}})=\left(\begin{array}{l}1 \\ 2\end{array}\right)+t\left(\begin{array}{c}3-1 \\ -1-2\end{array}\right)=\left(\begin{array}{l}1 \\ 2\end{array}\right)+t\left(\begin{array}{c}2 \\ -3\end{array}\right)=\left(\begin{array}{l}1+2 t \\ 2-3 t\end{array}\right)
$$
where $t$ is an arbitrary real number. This vector points to the point $X=(1+2 t, 2-3 t)$. By definition, a point lies on the $x_{1}$-axis if its $x_{2}$ component vanishes. Thus if the point
$$
X=(1+2 t, 2-3 t)
$$
lies on the $x_{1}$-axis, then $2-3 t=0$, i.e. $t=\frac{2}{3}$. The intersection point $X$ of $\ell$ and the $x_{1}$-axis is therefore $\left.X\right|_{t=2 / 3}=\left(1+2 \cdot \frac{2}{3}, 0\right)=\left(\frac{7}{3}, 0\right)$.

43.2. Midpoint of a line segment. If $M$ is the midpoint of the line segment $A B$, then the vectors $\overrightarrow{A M}$ and $\overrightarrow{M B}$ are both parallel and have the same direction and length (namely, half the length of the line segment $A B$ ). Hence they are equal: $\overrightarrow{A M}=\overrightarrow{M B}$.
If $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{m}}$, and $\overrightarrow{\boldsymbol{b}}$ are the position vectors of $A, M$ and $B$, then this means
$$
\overrightarrow{\boldsymbol{m}}-\overrightarrow{\boldsymbol{a}}=\overrightarrow{A M}=\overrightarrow{M B}=\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{m}} .
$$
Add $\overrightarrow{\boldsymbol{m}}$ and $\overrightarrow{\boldsymbol{a}}$ to both sides, and divide by 2 to get
$$
\overrightarrow{\boldsymbol{m}}=\frac{1}{2} \overrightarrow{\boldsymbol{a}}+\frac{1}{2} \overrightarrow{\boldsymbol{b}}=\frac{\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}}{2} .
$$
43.3. Parametric equations for planes in space*. You can specify a plane in three dimensional space by naming a point $A$ on the plane $\mathcal{P}$, and two vectors $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$ parallel to the plane $\mathcal{P}$, but not parallel to each other. Then any point on the plane $\mathcal{P}$ has position vector $\overrightarrow{\boldsymbol{x}}$ given by
$$
\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{a}}+s \overrightarrow{\boldsymbol{v}}+t \overrightarrow{\boldsymbol{w}} \tag{55}
$$

Figure 19. Generating points on a plane $\mathcal{P}$

The following construction explains why (55) will give you any point on the plane through $A$, parallel to $\overrightarrow{\boldsymbol{v}}, \overrightarrow{\boldsymbol{w}}$.

Let $A, \overrightarrow{\boldsymbol{v}}, \overrightarrow{\boldsymbol{w}}$ be given, and suppose we want to express the position vector of some other point $X$ on the plane $\mathcal{P}$ in terms of $\overrightarrow{\boldsymbol{a}}=\overrightarrow{O A}, \overrightarrow{\boldsymbol{v}}$, and $\overrightarrow{\boldsymbol{w}}$. First we note that
$$
\overrightarrow{O X}=\overrightarrow{O A}+\overrightarrow{A X}
$$
Next, you draw a parallelogram in the plane $\mathcal{P}$ whose sides are parallel to the vectors $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$, and whose diagonal is the line segment $A X$. The sides of this parallelogram represent vectors which are multiples of $\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{w}}$ and which add up to $\overrightarrow{A X}$. So, if one side of the parallelogram is $s \overrightarrow{\boldsymbol{v}}$ and the other $t \overrightarrow{\boldsymbol{w}}$ then we have $\overrightarrow{A X}=s \overrightarrow{\boldsymbol{v}}+t \overrightarrow{\boldsymbol{w}}$. With $\overrightarrow{O X}=\overrightarrow{O A}+\overrightarrow{A X}$ this implies (55).

## Vector Bases

44.1. The Standard Basis Vectors. The notation for vectors which we have been using so far is not the most traditional. In the late 19th century Gibbs and Heaviside adapted Hamilton's theory of Quaternions to deal with vectors. Their notation is still popular in texts on electromagnetism and fluid mechanics. Define the following three vectors:
$$
\overrightarrow{\boldsymbol{i}}=\left(\begin{array}{l}
1 \\
0 \\
0
\end{array}\right), \quad \overrightarrow{\boldsymbol{j}}=\left(\begin{array}{l}
0 \\
1 \\
0
\end{array}\right), \quad \overrightarrow{\boldsymbol{k}}=\left(\begin{array}{l}
0 \\
0 \\
1
\end{array}\right) .
$$
Then every vector can be written as a linear combination of $\overrightarrow{\boldsymbol{i}}, \overrightarrow{\boldsymbol{j}}$ and $\overrightarrow{\boldsymbol{k}}$, namely as follows:
$$
\left(\begin{array}{c}
a_{1} \\
a_{2} \\
a_{3}
\end{array}\right)=a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}+a_{3} \overrightarrow{\boldsymbol{k}} .
$$
Moreover, there is only one way to write a given vector as a linear combination of $\{\overrightarrow{\boldsymbol{i}}, \overrightarrow{\boldsymbol{j}}, \overrightarrow{\boldsymbol{k}}\}$. This means that
$$
a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}+a_{3} \overrightarrow{\boldsymbol{k}}=b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}} \Longleftrightarrow\left\{\begin{array}{l}
a_{1}=b_{1} \\
a_{2}=b_{2} \\
a_{3}=b_{3}
\end{array}\right.
$$
For plane vectors one defines
$$
\vec{i}=\left(\begin{array}{l}
1 \\
0
\end{array}\right), \quad \vec{j}=\left(\begin{array}{l}
0 \\
1
\end{array}\right)
$$
and just as for three dimensional vectors one can write every (plane) vector $\overrightarrow{\boldsymbol{a}}$ as a linear combination of $\vec{i}$ and $\vec{j}$,
$$
\left(\begin{array}{l}
a_{1} \\
a_{2}
\end{array}\right)=a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}
$$
Just as for space vectors, there is only one way to write a given vector as a linear combination of $\vec{i}$ and $\vec{j}$.

44.2. A Basis of Vectors (in general)*. The vectors $\overrightarrow{\boldsymbol{i}}, \overrightarrow{\boldsymbol{j}}, \overrightarrow{\boldsymbol{k}}$ are called the standard basis vectors. They are an example of what is called a "basis". Here is the definition in the case of space vectors:

44.3. Definition. A triple of space vectors $\{\overrightarrow{\boldsymbol{u}}, \overrightarrow{\boldsymbol{v}}, \overrightarrow{\boldsymbol{w}}\}$ is a basis if every space vector $\overrightarrow{\boldsymbol{a}}$ can be written as a linear combination of $\{\overrightarrow{\boldsymbol{u}}, \overrightarrow{\boldsymbol{v}}, \overrightarrow{\boldsymbol{w}}\}$, i.e.
$$
\overrightarrow{\boldsymbol{a}}=a_{u} \overrightarrow{\boldsymbol{u}}+a_{v} \overrightarrow{\boldsymbol{v}}+a_{w} \overrightarrow{\boldsymbol{w}}
$$
and if there is only one way to do so for any given vector $\overrightarrow{\boldsymbol{a}}$ (i.e. the vector $\overrightarrow{\boldsymbol{a}}$ determines the coefficients $a_{u}, a_{v}, a_{w}$ ).

For plane vectors the definition of a basis is almost the same, except that a basis consists of two vectors rather than three:

44.4. Definition. A pair of plane vectors $\{\overrightarrow{\boldsymbol{u}}, \overrightarrow{\boldsymbol{v}}\}$ is a basis if every plane vector $\overrightarrow{\boldsymbol{a}}$ can be written as a linear combination of $\{\overrightarrow{\boldsymbol{u}}, \overrightarrow{\boldsymbol{v}}\}$, i.e. $\overrightarrow{\boldsymbol{a}}=a_{u} \overrightarrow{\boldsymbol{u}}+a_{v} \overrightarrow{\boldsymbol{v}}$, and if there is only one way to do so for any given vector $\overrightarrow{\boldsymbol{a}}$ (i.e. the vector $\overrightarrow{\boldsymbol{a}}$ determines the coefficients $\left.a_{u}, a_{v}\right)$.

## Dot Product

45.1. Definition.
The "inner product" or "dot product" of two vectors is given by
$$
\left(\begin{array}{l}
a_{1} \\
a_{2} \\
a_{3}
\end{array}\right) \cdot\left(\begin{array}{l}
b_{1} \\
b_{2} \\
b_{3}
\end{array}\right)=a_{1} b_{1}+a_{2} b_{2}+a_{3} b_{3}
$$
Note that the dot-product of two vectors is a number!

The dot product of two plane vectors is (predictably) defined by
$$
\left(\begin{array}{l}
a_{1} \\
a_{2}
\end{array}\right) \cdot\left(\begin{array}{l}
b_{1} \\
b_{2}
\end{array}\right)=a_{1} b_{1}+a_{2} b_{2} .
$$
An important property of the dot product is its relation with the length of a vector:
$$
\|\overrightarrow{\boldsymbol{a}}\|^{2}=\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{a}} .
$$
45.2. Algebraic properties of the dot product. The dot product satisfies the following rules,
$$
\begin{aligned}
\vec{a} \cdot \vec{b} & =\vec{b} \cdot \vec{a} \\
\vec{a} \cdot(\vec{b}+\vec{c}) & =\vec{a} \cdot \vec{b}+\vec{a} \cdot \vec{c} \\
(\vec{b}+\vec{c}) \cdot \vec{a} & =\vec{b} \cdot \vec{a}+\vec{c} \cdot \vec{a} \\
t(\vec{a} \cdot \vec{b}) & =(t \vec{a}) \cdot \vec{b}
\end{aligned}
$$
which hold for all vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ and any real number $t$.

45.3. Example. Simplify $\|\vec{a}+\vec{b}\|^{2}$. One has
$$
\begin{aligned}
\|\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}\|^{2} & =(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}) \cdot(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}) \\
& =\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}})+\overrightarrow{\boldsymbol{b}} \cdot(\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}) \\
& =\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{a}}+\underbrace{\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}+\overrightarrow{\boldsymbol{b}} \cdot \overrightarrow{\boldsymbol{a}}}_{=2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}} \text { by }(57)}+\overrightarrow{\boldsymbol{b}} \cdot \overrightarrow{\boldsymbol{b}} \\
& =\|\overrightarrow{\boldsymbol{a}}\|^{2}+2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}+\|\overrightarrow{\boldsymbol{b}}\|^{2}
\end{aligned}
$$
45.4. The diagonals of a parallelogram. Here is an example of how you can use the algebra of the dot product to prove something in geometry. Suppose you have a parallelogram one of whose vertices is the origin. Label the vertices, starting at the origin and going around counterclockwise, $O, A, C$ and $B$. Let $\overrightarrow{\boldsymbol{a}}=\overrightarrow{O A}, \overrightarrow{\boldsymbol{b}}=\overrightarrow{O B}, \overrightarrow{\boldsymbol{c}}=\overrightarrow{O C}$. One has
$$
\overrightarrow{O C}=\overrightarrow{\boldsymbol{c}}=\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}, \quad \text { and } \quad \overrightarrow{A B}=\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}} .
$$
These vectors correspond to the diagonals $O C$ and $A B$.

45.5. Theorem. In a parallelogram $O A C B$ the sum of the squares of the lengths of the two diagonals equals the sum of the squares of the lengths of all four sides.

Proof.
The squared lengths of the diagonals are
$$
\begin{aligned}
& \|\overrightarrow{O C}\|^{2}=\|\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}\|^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}+2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}+\|\overrightarrow{\boldsymbol{b}}\|^{2} \\
& \|\overrightarrow{A B}\|^{2}=\|\overrightarrow{\boldsymbol{a}}-\overrightarrow{\boldsymbol{b}}\|^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}-2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}+\|\overrightarrow{\boldsymbol{b}}\|^{2}
\end{aligned}
$$
Adding both these equations you get
$$
\|\overrightarrow{O C}\|^{2}+\|\overrightarrow{A B}\|^{2}=2\left(\|\overrightarrow{\boldsymbol{a}}\|^{2}+\|\overrightarrow{\boldsymbol{b}}\|^{2}\right) .
$$
The squared lengths of the sides are
$$
\|\overrightarrow{O A}\|^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}, \quad\|\overrightarrow{A C}\|^{2}=\|\overrightarrow{\boldsymbol{b}}\|^{2}, \quad\|\overrightarrow{B C}\|^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}, \quad\|\overrightarrow{O B}\|^{2}=\|\overrightarrow{\boldsymbol{b}}\|^{2} .
$$
Together these also add up to $2\left(\|\overrightarrow{\boldsymbol{a}}\|^{2}+\|\overrightarrow{\boldsymbol{b}}\|^{2}\right)$.

Figure 20. Proof of the law of cosines

45.6. The dot product and the angle between two vectors. Here is the most important interpretation of the dot product:

45.7. Theorem. If the angle between two vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ is $\theta$, then one has
$$
\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}=\|\overrightarrow{\boldsymbol{a}}\| \cdot\|\overrightarrow{\boldsymbol{b}}\| \cos \theta
$$
Proof. We need the law of cosines from high-school trigonometry. Recall that for a triangle $O A B$ with angle $\theta$ at the point $O$, and with sides $O A$ and $O B$ of lengths $a$ and $b$, the length $c$ of the opposing side $A B$ is given by
$$
c^{2}=a^{2}+b^{2}-2 a b \cos \theta . \tag{61}
$$
In trigonometry this is proved by dropping a perpendicular line from $B$ onto the side $O A$. The triangle $O A B$ gets divided into two right triangles, one of which has $A B$ as hypotenuse. Pythagoras then implies
$$
c^{2}=(b \sin \theta)^{2}+(a-b \cos \theta)^{2} .
$$
After simplification you get (61).

To prove the theorem you let $O$ be the origin, and then observe that the length of the side $A B$ is the length of the vector $\overrightarrow{A B}=\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}$. Here $\overrightarrow{\boldsymbol{a}}=\overrightarrow{O A}, \overrightarrow{\boldsymbol{b}}=\overrightarrow{O B}$, and hence
$$
c^{2}=\|\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}\|^{2}=(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}) \cdot(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}})=\|\overrightarrow{\boldsymbol{b}}\|^{2}+\|\overrightarrow{\boldsymbol{a}}\|^{2}-2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}
$$
Compare this with (61), keeping in mind that $a=\|\overrightarrow{\boldsymbol{a}}\|$ and $b=\|\overrightarrow{\boldsymbol{b}}\|$: you are led to conclude that $-2 \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}=-2 a b \cos \theta$, and thus $\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}=\|\overrightarrow{\boldsymbol{a}}\| \cdot\|\overrightarrow{\boldsymbol{b}}\| \cos \theta$.

45.8. Orthogonal projection of one vector onto another.
The following construction comes up very often. Let $\overrightarrow{\boldsymbol{a}} \neq \overrightarrow{\boldsymbol{0}}$ be a given vector. Then for any other vector $\overrightarrow{\boldsymbol{x}}$ there is a number $\lambda$ such that
$$
\overrightarrow{\boldsymbol{x}}=\lambda \overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{y}}
$$
where $\overrightarrow{\boldsymbol{y}} \perp \overrightarrow{\boldsymbol{a}}$. In other words, you can write any vector $\overrightarrow{\boldsymbol{x}}$ as the sum of one vector parallel to $\overrightarrow{\boldsymbol{a}}$ and another vector orthogonal to $\overrightarrow{\boldsymbol{a}}$. The two vectors $\lambda \overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{y}}$ are called the parallel and orthogonal components of the vector $\overrightarrow{\boldsymbol{x}}$ (with respect to $\overrightarrow{\boldsymbol{a}}$ ), and sometimes the following notation is used
$$
\overrightarrow{\boldsymbol{x}}^{/ /}=\lambda \overrightarrow{\boldsymbol{a}}, \quad \overrightarrow{\boldsymbol{x}}^{\perp}=\overrightarrow{\boldsymbol{y}},
$$
so that
$$
\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{x}}^{/ /}+\overrightarrow{\boldsymbol{x}}^{\perp}
$$
There are moderately simple formulas for $\overrightarrow{\boldsymbol{x}}^{/ /}$ and $\overrightarrow{\boldsymbol{x}}^{\perp}$, but it is better to remember the following derivation of these formulas.

Assume that the vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{x}}$ are given. Then we look for a number $\lambda$ such that $\overrightarrow{\boldsymbol{y}}=\overrightarrow{\boldsymbol{x}}-\lambda \overrightarrow{\boldsymbol{a}}$ is perpendicular to $\overrightarrow{\boldsymbol{a}}$. Recall that $\overrightarrow{\boldsymbol{a}} \perp(\overrightarrow{\boldsymbol{x}}-\lambda \overrightarrow{\boldsymbol{a}})$ if and only if
$$
\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{x}}-\lambda \overrightarrow{\boldsymbol{a}})=0 .
$$
Expand the dot product and you get this equation for $\lambda$
$$
\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{x}}-\lambda \overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{a}}=0,
$$
whence
$$
\lambda=\frac{\vec{a} \cdot \vec{x}}{\vec{a} \cdot \vec{a}}=\frac{\vec{a} \cdot \vec{x}}{\|\vec{a}\|^{2}} \tag{62}
$$
To compute the parallel and orthogonal components of $\overrightarrow{\boldsymbol{x}}$ w.r.t. $\overrightarrow{\boldsymbol{a}}$ you first compute $\lambda$ according to (62), which tells you that the parallel component is given by
$$
\overrightarrow{\boldsymbol{x}}^{/ /}=\lambda \overrightarrow{\boldsymbol{a}}=\frac{\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{x}}}{\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{a}}} \overrightarrow{\boldsymbol{a}} .
$$
The orthogonal component is then "the rest," i.e. by definition $\overrightarrow{\boldsymbol{x}}^{\perp}=\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{x}}^{/ /}$, so
$$
\vec{x}^{\perp}=\vec{x}-\vec{x}^{/ /}=\vec{x}-\frac{\vec{a} \cdot \vec{x}}{\vec{a} \cdot \vec{a}} \vec{a} .
$$
45.9. Defining equations of lines. In $\S 43$ we saw how to generate points on a line given two points on that line by means of a "parametrization." I.e.
given points $A$ and $B$ on the line $\ell$ the point whose position vector is $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{a}}+t(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}})$ will be on $\ell$ for any value of the "parameter" $t$.

In this section we will use the dot-product to give a different description of lines in the plane (and planes in three dimensional space.) We will derive an equation for a line. Rather than generating points on the line $\ell$ this equation tells us if any given point $X$ in the plane is on the line or not.

Here is the derivation of the equation of a line in the plane. To produce the equation you need two ingredients:

1. One particular point on the line (let's call this point $A$, and write $\overrightarrow{\boldsymbol{a}}$ for its position vector),
2. a normal vector $\overrightarrow{\boldsymbol{n}}$ for the line, i.e. a nonzero vector which is perpendicular to the line.

Now let $X$ be any point in the plane, and consider the line segment $A X$.

- Clearly, $X$ will be on the line if and only if $A X$ is parallel to $\ell^{8}$
- Since $\ell$ is perpendicular to $\overrightarrow{\boldsymbol{n}}$, the segment $A X$ and the line $\ell$ will be parallel if and only if $A X \perp \overrightarrow{\boldsymbol{n}}$.
- $A X \perp \overrightarrow{\boldsymbol{n}}$ holds if and only if $\overrightarrow{A X} \cdot \overrightarrow{\boldsymbol{n}}=0$.

So in the end we see that $X$ lies on the line $\ell$ if and only if the following vector equation is satisfied:
$$
\overrightarrow{A X} \cdot \overrightarrow{\boldsymbol{n}}=0 \quad \text { or } \quad(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}}) \cdot \overrightarrow{\boldsymbol{n}}=0
$$
This equation is called a defining equation for the line $\ell$.

Any given line has many defining equations. Just by changing the length of the normal you get a different equation, which still describes the same line.

45.10. Line through one point and perpendicular to a given segment. Find a defining equation for the line $\ell$ which goes through $A(1,1)$ and is perpendicular to the line segment $A B$ where $B$ is the point $(3,-1)$.

Solution. We already know a point on the line, namely $A$, but we still need a normal vector. The line is required to be perpendicular to $A B$, so $\overrightarrow{\boldsymbol{n}}=\overrightarrow{A B}$ is a normal vector:
$$
\overrightarrow{\boldsymbol{n}}=\overrightarrow{A B}=\left(\begin{array}{c}3-1 \\ (-1)-1\end{array}\right)=\left(\begin{array}{c}2 \\ -2\end{array}\right)
$$
Of course any nonzero multiple of $\overrightarrow{\boldsymbol{n}}$ is also a normal vector, for instance
$$
\vec{m}=\frac{1}{2} \vec{n}=\left(\begin{array}{c}
1 \\
-1
\end{array}\right)
$$
With $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}1 \\ 1\end{array}\right)$ we then get the following equation for $\ell$
$$
\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=\left(\begin{array}{c}
2 \\
-2
\end{array}\right) \cdot\left(\begin{array}{c}
x_{1}-1 \\
x_{2}-1
\end{array}\right)=2 x_{1}-2 x_{2}=0 .
$$
If you choose the normal $\overrightarrow{\boldsymbol{m}}$ instead, you get
$$
\overrightarrow{\boldsymbol{m}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=\left(\begin{array}{c}
1 \\
-1
\end{array}\right) \cdot\left(\begin{array}{l}
x_{1}-1 \\
x_{2}-1
\end{array}\right)=x_{1}-x_{2}=0 .
$$
Both equations $2 x_{1}-2 x_{2}=0$ and $x_{1}-x_{2}=0$ are equivalent.
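A defining equation is easy to test mechanically. The sketch below is not from the notes; it assumes NumPy is available and uses a made-up helper name `on_line` to check points against the equation $\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=0$ from example 45.10.

```python
import numpy as np

def on_line(x, a, n, tol=1e-12):
    """True when the point x satisfies the defining equation n . (x - a) = 0."""
    x = np.asarray(x, dtype=float)
    return abs(np.dot(n, x - a)) < tol

# Data from example 45.10: the line through A(1,1) with normal m = (1, -1),
# i.e. the defining equation x1 - x2 = 0.
a = np.array([1.0, 1.0])
m = np.array([1.0, -1.0])

print(on_line((2.0, 2.0), a, m))    # True:  2 - 2 = 0
print(on_line((3.0, -1.0), a, m))   # False: B(3,-1) does not satisfy x1 - x2 = 0
```

Rescaling the normal (using $\overrightarrow{\boldsymbol{n}}=(2,-2)$ instead of $\overrightarrow{\boldsymbol{m}}$) multiplies the left-hand side by a constant but leaves the set of points where it vanishes, i.e. the line, unchanged.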
${ }^{8}$ From plane Euclidean geometry: parallel lines either don't intersect or they coincide. 45.11. Distance to a line. Let $\ell$ be a line in the plane and assume a point $A$ on the line as well as a vector $\overrightarrow{\boldsymbol{n}}$ perpendicular to $\ell$ are known. Using the dot product one can easily compute the distance from the line to any other given point $P$ in the plane. Here is how: Draw the line $m$ through $A$ perpendicular to $\ell$, and drop a perpendicular line from $P$ onto $m$. let $Q$ be the projection of $P$ onto $m$. The distance from $P$ to $\ell$ is then equal to the length of the line segment $A Q$. Since $A Q P$ is a right triangle one has $$ A Q=A P \cos \theta . $$ Here $\theta$ is the angle between the normal $\overrightarrow{\boldsymbol{n}}$ and the vector $\overrightarrow{A P}$. One also has $$ \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}})=\overrightarrow{\boldsymbol{n}} \cdot \overrightarrow{A P}=\|\overrightarrow{A P}\|\|\overrightarrow{\boldsymbol{n}}\| \cos \theta=A P\|\overrightarrow{\boldsymbol{n}}\| \cos \theta $$ Hence we get $$ \operatorname{dist}(P, \ell)=\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|} $$ This argument from a drawing contains a hidden assumption, namely that the point $P$ lies on the side of the line $\ell$ pointed to by the vector $\overrightarrow{\boldsymbol{n}}$. If this is not the case, so that $\overrightarrow{\boldsymbol{n}}$ and $\overrightarrow{A P}$ point to opposite sides of $\ell$, then the angle between them exceeds $90^{\circ}$, i.e. $\theta>\pi / 2$. In this case $\cos \theta<0$, and one has $A Q=-A P \cos \theta$. the distance formula therefore has to be modified to $$ \operatorname{dist}(P, \ell)=-\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|} . $$ 45.12. Defining equation of a plane. Just as we have seen how you can form the defining equation for a line in the plane from just one point on the line and one normal vector to the line, you can also form the defining equation for a plane in space, again knowing only one point on the plane, and a vector perpendicular to it. If $A$ is a point on some plane $\mathcal{P}$ and $\vec{n}$ is a vector perpendicular to $\mathcal{P}$, then any other point $X$ lies on $\mathcal{P}$ if and only if $\overrightarrow{A X} \perp \overrightarrow{\boldsymbol{n}}$. In other words, in terms of the position vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{x}}$ of $A$ and $X$, $$ \text { the point } X \text { is on } \mathcal{P} \Longleftrightarrow \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=0 . $$ Arguing just as in $\S 45.11$ you find that the distance of a point $X$ in space to the plane $\mathcal{P}$ is $$ \operatorname{dist}(X, \mathcal{P})= \pm \frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|} . $$ Here the sign is "+" if $X$ and the normal $\vec{n}$ are on the same side of the plane $\mathcal{P}$; otherwise the sign is "-". 45.13. Example. Find the defining equation for the plane $\mathcal{P}$ through the point $A(1,0,2)$ which is perpendicular to the vector $\left(\begin{array}{l}1 \\ 2 \\ 1\end{array}\right)$. 
Solution: We know a point $(A)$ and a normal vector $\overrightarrow{\boldsymbol{n}}=\left(\begin{array}{l}1 \\ 2 \\ 1\end{array}\right)$ for $\mathcal{P}$. Then any point $X$ with coordinates $\left(x_{1}, x_{2}, x_{3}\right)$, or, with position vector $\overrightarrow{\boldsymbol{x}}=\left(\begin{array}{l}x_{1} \\ x_{2} \\ x_{3}\end{array}\right)$, will lie on the plane $\mathcal{P}$ if and only if
$$
\begin{aligned}
\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=0 & \Longleftrightarrow\left(\begin{array}{l}
1 \\
2 \\
1
\end{array}\right) \cdot\left\{\left(\begin{array}{l}
x_{1} \\
x_{2} \\
x_{3}
\end{array}\right)-\left(\begin{array}{l}
1 \\
0 \\
2
\end{array}\right)\right\}=0 \\
& \Longleftrightarrow\left(\begin{array}{l}
1 \\
2 \\
1
\end{array}\right) \cdot\left(\begin{array}{c}
x_{1}-1 \\
x_{2} \\
x_{3}-2
\end{array}\right)=0 \\
& \Longleftrightarrow 1 \cdot\left(x_{1}-1\right)+2 \cdot\left(x_{2}\right)+1 \cdot\left(x_{3}-2\right)=0 \\
& \Longleftrightarrow x_{1}+2 x_{2}+x_{3}-3=0 .
\end{aligned}
$$
45.14. Example continued. Let $\mathcal{P}$ be the plane from the previous example. Which of the points $P(0,0,1), Q(0,0,2), R(-1,2,0)$ and $S(-1,0,5)$ lie on $\mathcal{P}$ ? Compute the distances from the points $P, Q, R, S$ to the plane $\mathcal{P}$. Separate the points which do not lie on $\mathcal{P}$ into two groups of points which lie on the same side of $\mathcal{P}$.

Solution: We apply (64) to the position vectors $\overrightarrow{\boldsymbol{p}}, \overrightarrow{\boldsymbol{q}}, \overrightarrow{\boldsymbol{r}}, \overrightarrow{\boldsymbol{s}}$ of the points $P, Q, R, S$. For each calculation we need
$$
\|\overrightarrow{\boldsymbol{n}}\|=\sqrt{1^{2}+2^{2}+1^{2}}=\sqrt{6} .
$$
The third component of the given normal $\vec{n}=\left(\begin{array}{l}1 \\ 2 \\ 1\end{array}\right)$ is positive, so $\vec{n}$ points "upwards." Therefore, if a point lies on the side of $\mathcal{P}$ pointed to by $\overrightarrow{\boldsymbol{n}}$, we shall say that the point lies above the plane.

$P: \overrightarrow{\boldsymbol{p}}=\left(\begin{array}{l}0 \\ 0 \\ 1\end{array}\right), \overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c}-1 \\ 0 \\ -1\end{array}\right), \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}})=1 \cdot(-1)+2 \cdot(0)+1 \cdot(-1)=-2$
$$
\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{p}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|}=-\frac{2}{\sqrt{6}}=-\frac{1}{3} \sqrt{6}
$$
This quantity is negative, so $P$ lies below $\mathcal{P}$. Its distance to $\mathcal{P}$ is $\frac{1}{3} \sqrt{6}$.

$Q: \overrightarrow{\boldsymbol{q}}=\left(\begin{array}{l}0 \\ 0 \\ 2\end{array}\right), \overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c}-1 \\ 0 \\ 0\end{array}\right), \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{a}})=1 \cdot(-1)+2 \cdot(0)+1 \cdot(0)=-1$
$$
\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{q}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|}=-\frac{1}{\sqrt{6}}=-\frac{1}{6} \sqrt{6}
$$
This quantity is negative, so $Q$ also lies below $\mathcal{P}$. Its distance to $\mathcal{P}$ is $\frac{1}{6} \sqrt{6}$.
$R: \overrightarrow{\boldsymbol{r}}=\left(\begin{array}{c}-1 \\ 2 \\ 0\end{array}\right), \overrightarrow{\boldsymbol{r}}-\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c}-2 \\ 2 \\ -2\end{array}\right), \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{r}}-\overrightarrow{\boldsymbol{a}})=1 \cdot(-2)+2 \cdot(2)+1 \cdot(-2)=0$
$$
\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{r}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|}=0 .
$$
Thus $R$ lies on the plane $\mathcal{P}$, and its distance to $\mathcal{P}$ is of course 0.

$S: \overrightarrow{\boldsymbol{s}}=\left(\begin{array}{c}-1 \\ 0 \\ 5\end{array}\right), \overrightarrow{\boldsymbol{s}}-\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c}-2 \\ 0 \\ 3\end{array}\right), \overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{s}}-\overrightarrow{\boldsymbol{a}})=1 \cdot(-2)+2 \cdot(0)+1 \cdot(3)=1$
$$
\frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{s}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|}=\frac{1}{\sqrt{6}}=\frac{1}{6} \sqrt{6} .
$$
This quantity is positive, so $S$ lies above $\mathcal{P}$. Its distance to $\mathcal{P}$ is $\frac{1}{6} \sqrt{6}$.

We have found that $P$ and $Q$ lie below the plane, $R$ lies on the plane, and $S$ is above the plane.

45.15. Where does the line through the points $B(2,0,0)$ and $C(0,1,2)$ intersect the plane $\mathcal{P}$ from example 45.13?

Solution: Let $\ell$ be the line through $B$ and $C$. We set up the parametric equation for $\ell$. According to $\S 43,(54)$ every point $X$ on $\ell$ has position vector $\overrightarrow{\boldsymbol{x}}$ given by
$$
\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{b}}+t(\overrightarrow{\boldsymbol{c}}-\overrightarrow{\boldsymbol{b}})=\left(\begin{array}{l}
2 \\
0 \\
0
\end{array}\right)+t\left(\begin{array}{l}
0-2 \\
1-0 \\
2-0
\end{array}\right)=\left(\begin{array}{c}
2-2 t \\
t \\
2 t
\end{array}\right) \tag{65}
$$
for some value of $t$.

The point $X$ whose position vector $\overrightarrow{\boldsymbol{x}}$ is given above lies on the plane $\mathcal{P}$ if $\overrightarrow{\boldsymbol{x}}$ satisfies the defining equation of the plane. In example 45.13 we found this defining equation. It was
$$
\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=0, \text { i.e. } x_{1}+2 x_{2}+x_{3}-3=0 . \tag{66}
$$
So to find the point of intersection of $\ell$ and $\mathcal{P}$ you substitute the parametrization (65) in the defining equation (66):
$$
0=x_{1}+2 x_{2}+x_{3}-3=(2-2 t)+2(t)+(2 t)-3=2 t-1 .
$$
This implies $t=\frac{1}{2}$, and thus the intersection point has position vector
$$
\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{b}}+\frac{1}{2}(\overrightarrow{\boldsymbol{c}}-\overrightarrow{\boldsymbol{b}})=\left(\begin{array}{c}
2-2 \cdot \frac{1}{2} \\
\frac{1}{2} \\
2 \cdot \frac{1}{2}
\end{array}\right)=\left(\begin{array}{c}
1 \\
\frac{1}{2} \\
1
\end{array}\right),
$$
i.e. $\ell$ and $\mathcal{P}$ intersect at $X\left(1, \frac{1}{2}, 1\right)$.
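The signed-distance and intersection computations of examples 45.13–45.15 can be reproduced in a few lines. This is only a sketch (it assumes NumPy; the helper name `signed_distance` is illustrative, not notation from the notes).

```python
import numpy as np

# Plane P from example 45.13: the point A(1,0,2) and the normal n = (1,2,1).
a = np.array([1.0, 0.0, 2.0])
n = np.array([1.0, 2.0, 1.0])

def signed_distance(x):
    """n . (x - a) / ||n||: positive above the plane, negative below, zero on it."""
    return np.dot(n, np.asarray(x, dtype=float) - a) / np.linalg.norm(n)

for name, point in [("P", (0, 0, 1)), ("Q", (0, 0, 2)), ("R", (-1, 2, 0)), ("S", (-1, 0, 5))]:
    print(name, round(signed_distance(point), 4))   # -0.8165, -0.4082, 0.0, 0.4082

# Example 45.15: intersect the line through B(2,0,0) and C(0,1,2) with the plane
# by solving n . (b + t (c - b) - a) = 0 for t.
b = np.array([2.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 2.0])
t = np.dot(n, a - b) / np.dot(n, c - b)
print(t, b + t * (c - b))   # 0.5 and the point (1, 0.5, 1)
```

The signs reproduce the above/below classification, the magnitudes match $\frac{1}{3}\sqrt{6} \approx 0.816$ and $\frac{1}{6}\sqrt{6} \approx 0.408$, and $t=\frac{1}{2}$ gives the intersection point $X\left(1, \frac{1}{2}, 1\right)$ found above.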
## Cross Product

46.1. Algebraic definition of the cross product. Here is the definition of the cross-product of two vectors. The definition looks a bit strange and arbitrary at first sight - it really makes you wonder who thought of this. We will just put up with that for now and explore the properties of the cross product. Later on we will see a geometric interpretation of the cross product which will show that this particular definition is really useful. We will also find a few tricks that will help you reproduce the formula without memorizing it.

46.2. Definition. The "outer product" or "cross product" of two vectors is given by
$$
\left(\begin{array}{l}
a_{1} \\
a_{2} \\
a_{3}
\end{array}\right) \times\left(\begin{array}{l}
b_{1} \\
b_{2} \\
b_{3}
\end{array}\right)=\left(\begin{array}{l}
a_{2} b_{3}-a_{3} b_{2} \\
a_{3} b_{1}-a_{1} b_{3} \\
a_{1} b_{2}-a_{2} b_{1}
\end{array}\right)
$$
Note that the cross-product of two vectors is again a vector!

46.3. Example. If you set $\overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{a}}$ in the definition you find the following important fact: The cross product of any vector with itself is the zero vector:
$$
\vec{a} \times \vec{a}=\overrightarrow{0} \quad \text { for any vector } \vec{a} .
$$
46.4. Example. Let $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}1 \\ 2 \\ 3\end{array}\right), \overrightarrow{\boldsymbol{b}}=\left(\begin{array}{c}-2 \\ 1 \\ 0\end{array}\right)$ and compute the cross product of these vectors.

Solution:
$$
\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l}
1 \\
2 \\
3
\end{array}\right) \times\left(\begin{array}{c}
-2 \\
1 \\
0
\end{array}\right)=\left(\begin{array}{c}
2 \cdot 0-3 \cdot 1 \\
3 \cdot(-2)-1 \cdot 0 \\
1 \cdot 1-2 \cdot(-2)
\end{array}\right)=\left(\begin{array}{c}
-3 \\
-6 \\
5
\end{array}\right)
$$
In terms of the standard basis vectors you can check the following multiplication table:

| $\times$ | $\vec{i}$ | $\vec{j}$ | $\vec{k}$ |
| :---: | :---: | :---: | :---: |
| $\vec{i}$ | $\overrightarrow{0}$ | $\vec{k}$ | $-\vec{j}$ |
| $\vec{j}$ | $-\vec{k}$ | $\overrightarrow{0}$ | $\vec{i}$ |
| $\vec{k}$ | $\vec{j}$ | $-\vec{i}$ | $\overrightarrow{0}$ |

An easy way to remember the multiplication table is to put the vectors $\overrightarrow{\boldsymbol{i}}, \overrightarrow{\boldsymbol{j}}, \overrightarrow{\boldsymbol{k}}$ clockwise in a circle. Given two of the three vectors their product is either plus or minus the remaining vector. To determine the sign you step from the first vector to the second, to the third: if this makes you go clockwise you have a plus sign, if you have to go counterclockwise, you get a minus.

The products of $\overrightarrow{\boldsymbol{i}}, \vec{j}$ and $\overrightarrow{\boldsymbol{k}}$ are all you need to know to compute the cross product.
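If you want to double-check a cross product by machine, the component formula of definition 46.2 can be typed in directly. The sketch below (assuming NumPy; `cross` is just an illustrative helper name) confirms the answer of example 46.4 and compares it with NumPy's built-in `np.cross`.

```python
import numpy as np

def cross(a, b):
    """Cross product written out exactly as in definition 46.2."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return np.array([a2 * b3 - a3 * b2,
                     a3 * b1 - a1 * b3,
                     a1 * b2 - a2 * b1])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 1.0, 0.0])

print(cross(a, b))                               # [-3. -6.  5.], as in example 46.4
print(np.allclose(cross(a, b), np.cross(a, b)))  # True
print(cross(a, a))                               # [0. 0. 0.], matching example 46.3
```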
Given two vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ write them as $\overrightarrow{\boldsymbol{a}}=a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}+a_{3} \overrightarrow{\boldsymbol{k}}$ and $\overrightarrow{\boldsymbol{b}}=b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}$, and multiply as follows $$ \begin{aligned} \overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}= & \left(a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}+a_{3} \overrightarrow{\boldsymbol{k}}\right) \times\left(b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}\right) \\ = & a_{1} \overrightarrow{\boldsymbol{i}} \times\left(b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}\right) \\ & +a_{2} \overrightarrow{\boldsymbol{j}} \times\left(b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}\right) \\ & +a_{3} \overrightarrow{\boldsymbol{k}} \times\left(b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}\right) \\ = & a_{1} b_{1} \overrightarrow{\boldsymbol{i}} \times \overrightarrow{\boldsymbol{i}}+a_{1} b_{2} \overrightarrow{\boldsymbol{i}} \times \overrightarrow{\boldsymbol{j}}+a_{1} b_{3} \overrightarrow{\boldsymbol{i}} \times \overrightarrow{\boldsymbol{k}}+ \\ & a_{2} b_{1} \overrightarrow{\boldsymbol{j}} \times \overrightarrow{\boldsymbol{i}}+a_{2} b_{2} \overrightarrow{\boldsymbol{j}} \times \overrightarrow{\boldsymbol{j}}+a_{2} b_{3} \overrightarrow{\boldsymbol{j}} \times \overrightarrow{\boldsymbol{k}}+ \\ & a_{3} b_{1} \overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{i}}+a_{3} b_{2} \overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{j}}+a_{3} b_{3} \overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{k}} \\ = & a_{1} b_{1} \overrightarrow{\boldsymbol{0}}+a_{1} b_{2} \overrightarrow{\boldsymbol{k}}-a_{1} b_{3} \overrightarrow{\boldsymbol{j}} \\ & -a_{2} b_{1} \overrightarrow{\boldsymbol{k}}+a_{2} b_{2} \overrightarrow{\mathbf{0}}+a_{2} b_{3} \overrightarrow{\boldsymbol{i}}+ \\ & a_{3} b_{1} \overrightarrow{\boldsymbol{j}}-a_{3} b_{2} \overrightarrow{\boldsymbol{i}}+a_{3} b_{3} \overrightarrow{\boldsymbol{0}} \\ = & \left(a_{2} b_{3}-a_{3} b_{2}\right) \overrightarrow{\boldsymbol{i}}+\left(a_{3} b_{1}-a_{1} b_{3}\right) \overrightarrow{\boldsymbol{j}}+\left(a_{1} b_{2}-a_{2} b_{1}\right) \overrightarrow{\boldsymbol{k}} \end{aligned} $$ This is a useful way of remembering how to compute the cross product, particularly when many of the components $a_{i}$ and $b_{j}$ are zero. 46.5. Example. 
Compute $\overrightarrow{\boldsymbol{k}} \times(p \overrightarrow{\boldsymbol{i}}+q \overrightarrow{\boldsymbol{j}}+r \overrightarrow{\boldsymbol{k}})$ : $$ \overrightarrow{\boldsymbol{k}} \times(p \overrightarrow{\boldsymbol{i}}+q \overrightarrow{\boldsymbol{j}}+r \overrightarrow{\boldsymbol{k}})=p(\overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{i}})+q(\overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{j}})+r(\overrightarrow{\boldsymbol{k}} \times \overrightarrow{\boldsymbol{k}})=-q \overrightarrow{\boldsymbol{i}}+p \overrightarrow{\boldsymbol{j}} $$ There is another way of remembering how to find $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$. It involves the "triple product" and determinants. See $\S 46.7$. 46.6. Algebraic properties of the cross product. Unlike the dot product, the cross product of two vectors behaves much less like ordinary multiplication. To begin with, the product is not commutative - instead one has $$ \vec{a} \times \vec{b}=-\vec{b} \times \vec{a} \quad \text { for all vectors } \vec{a} \text { and } \vec{b} $$ This property is sometimes called "anti-commutative." Since the crossproduct of two vectors is again a vector you can compute the cross product of three vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$. You now have a choice: do you first multiply $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$, or $\overrightarrow{\boldsymbol{b}}$ and $\overrightarrow{\boldsymbol{c}}$, or $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{c}}$ ? $$ \begin{gathered} \vec{i} \times(\vec{i} \times \vec{j})=\vec{i} \times \vec{k}=-\vec{j} \\ (\vec{i} \times \vec{i}) \times \vec{j}=\overrightarrow{0} \times \vec{j}=\overrightarrow{0} \\ \text { so " } \times \text { " is not associative } \end{gathered} $$ With numbers it makes no difference (e.g. $2 \times(3 \times 5)=2 \times 15=30$ and $(2 \times 3) \times 5=6 \times 5=$ also 30) but with the cross product of vectors it does matter: the cross product is not associative, i.e. $$ \vec{a} \times(\vec{b} \times \vec{c}) \neq(\vec{a} \times \vec{b}) \times \vec{c} \quad \text { for } \text { most vectors } \vec{a}, \vec{b}, \vec{c} . $$ The distributive law does hold, i.e. $$ \vec{a} \times(\vec{b}+\vec{c})=\vec{a} \times \vec{b}+\vec{a} \times \vec{c}, \quad \text { and } \quad(\vec{b}+\vec{c}) \times \vec{a}=\vec{b} \times \vec{a}+\vec{c} \times \vec{a} $$ is true for all vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$. Also, an associative law, where one of the factors is a number and the other two are vectors, does hold. I.e. $$ t(\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}})=(t \overrightarrow{\boldsymbol{a}}) \times \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{a}} \times(t \overrightarrow{\boldsymbol{b}}) $$ holds for all vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$ and any number $t$. We were already using these properties when we multiplied $\left(a_{1} \overrightarrow{\boldsymbol{i}}+a_{2} \overrightarrow{\boldsymbol{j}}+a_{3} \overrightarrow{\boldsymbol{k}}\right) \times\left(b_{1} \overrightarrow{\boldsymbol{i}}+b_{2} \overrightarrow{\boldsymbol{j}}+b_{3} \overrightarrow{\boldsymbol{k}}\right)$ in the previous section. Finally, the cross product is only defined for space vectors, not for plane vectors. 
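The algebraic rules above are also easy to test numerically. The following sketch is not from the notes; it assumes NumPy, uses randomly generated vectors, and checks anti-commutativity, distributivity, the scalar rule, and the failure of associativity for a generic triple.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three arbitrary space vectors
t = 2.5

print(np.allclose(np.cross(a, b), -np.cross(b, a)))          # True: anti-commutative
print(np.allclose(np.cross(a, b + c),
                  np.cross(a, b) + np.cross(a, c)))          # True: distributive
print(np.allclose(t * np.cross(a, b), np.cross(t * a, b)))   # True: scalars factor out
print(np.allclose(np.cross(a, np.cross(b, c)),
                  np.cross(np.cross(a, b), c)))              # False for generic a, b, c
```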
46.7. The triple product and determinants.

46.8. Definition. The triple product of three given vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$, and $\overrightarrow{\boldsymbol{c}}$ is defined to be
$$
\vec{a} \cdot(\vec{b} \times \vec{c}) .
$$
In terms of the components of $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$, and $\overrightarrow{\boldsymbol{c}}$ one has
$$
\begin{aligned}
\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{b}} \times \overrightarrow{\boldsymbol{c}}) & =\left(\begin{array}{l}
a_{1} \\
a_{2} \\
a_{3}
\end{array}\right) \cdot\left(\begin{array}{l}
b_{2} c_{3}-b_{3} c_{2} \\
b_{3} c_{1}-b_{1} c_{3} \\
b_{1} c_{2}-b_{2} c_{1}
\end{array}\right) \\
& =a_{1} b_{2} c_{3}-a_{1} b_{3} c_{2}+a_{2} b_{3} c_{1}-a_{2} b_{1} c_{3}+a_{3} b_{1} c_{2}-a_{3} b_{2} c_{1} .
\end{aligned}
$$
This quantity is called a determinant, and is written as follows
$$
\left|\begin{array}{lll}
a_{1} & b_{1} & c_{1} \\
a_{2} & b_{2} & c_{2} \\
a_{3} & b_{3} & c_{3}
\end{array}\right|=a_{1} b_{2} c_{3}-a_{1} b_{3} c_{2}+a_{2} b_{3} c_{1}-a_{2} b_{1} c_{3}+a_{3} b_{1} c_{2}-a_{3} b_{2} c_{1} \tag{68}
$$
There's a useful shortcut for computing such a determinant: after writing the determinant, append a fourth and a fifth column which are just copies of the first two columns of the determinant. The determinant then is the sum of six products, one for each dotted line in the drawing. Each term has a sign: if the factors are read from top-left to bottom-right, the term is positive, if they are read from top-right to bottom-left the term is negative.

This shortcut is also very useful for computing the cross product. To compute the cross product of two given vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ you arrange their components in the following determinant
$$
\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}=\left|\begin{array}{ccc}
\overrightarrow{\boldsymbol{i}} & a_{1} & b_{1} \\
\overrightarrow{\boldsymbol{j}} & a_{2} & b_{2} \\
\overrightarrow{\boldsymbol{k}} & a_{3} & b_{3}
\end{array}\right|=\left(a_{2} b_{3}-a_{3} b_{2}\right) \overrightarrow{\boldsymbol{i}}+\left(a_{3} b_{1}-a_{1} b_{3}\right) \overrightarrow{\boldsymbol{j}}+\left(a_{1} b_{2}-a_{2} b_{1}\right) \overrightarrow{\boldsymbol{k}} . \tag{69}
$$
This is not a normal determinant since some of its entries are vectors, but if you ignore that odd circumstance and simply compute the determinant according to the definition (68), you get (69).

An important property of the triple product is that it is much more symmetric in the factors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ than the notation $\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{b}} \times \overrightarrow{\boldsymbol{c}})$ suggests.

46.9. Theorem. For any triple of vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ one has
$$
\vec{a} \cdot(\vec{b} \times \vec{c})=\vec{b} \cdot(\vec{c} \times \vec{a})=\vec{c} \cdot(\vec{a} \times \vec{b}),
$$
and
$$
\vec{a} \cdot(\vec{b} \times \vec{c})=-\vec{b} \cdot(\vec{a} \times \vec{c})=-\vec{c} \cdot(\vec{b} \times \vec{a}) .
$$
In other words, if you exchange two factors in the product $\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{b}} \times \overrightarrow{\boldsymbol{c}})$ it changes its sign. If you "rotate the factors," i.e.
if you replace $\overrightarrow{\boldsymbol{a}}$ by $\overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{b}}$ by $\overrightarrow{\boldsymbol{c}}$ and $\overrightarrow{\boldsymbol{c}}$ by $\overrightarrow{\boldsymbol{a}}$, the product doesn't change at all.

46.10. Geometric description of the cross product.

46.11. Theorem.
$$
\vec{a} \times \vec{b} \perp \vec{a}, \vec{b}
$$
Proof. We use the triple product:
$$
\vec{a} \cdot(\vec{a} \times \vec{b})=\vec{b} \cdot(\vec{a} \times \vec{a})=0
$$
since $\vec{a} \times \vec{a}=\overrightarrow{0}$ for any vector $\vec{a}$. It follows that $\vec{a} \times \vec{b}$ is perpendicular to $\overrightarrow{\boldsymbol{a}}$. Similarly, $\overrightarrow{\boldsymbol{b}} \cdot(\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}})=\overrightarrow{\boldsymbol{a}} \cdot(\overrightarrow{\boldsymbol{b}} \times \overrightarrow{\boldsymbol{b}})=0$ shows that $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$ is perpendicular to $\vec{b}$.

46.12. Theorem. If $\theta$ is the angle between the vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$, then
$$
\|\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}\|=\|\overrightarrow{\boldsymbol{a}}\|\|\overrightarrow{\boldsymbol{b}}\| \sin \theta
$$
Proof. Bruce ${ }^{9}$ just slipped us a piece of paper with the following formula on it:
$$
\|\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}\|^{2}+(\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}})^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}\|\overrightarrow{\boldsymbol{b}}\|^{2} .
$$
After setting $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}a_{1} \\ a_{2} \\ a_{3}\end{array}\right)$ and $\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l}b_{1} \\ b_{2} \\ b_{3}\end{array}\right)$ and diligently computing both sides we find that this formula actually holds for any pair of vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$ ! The (long) computation which implies this identity will be presented in class (maybe).

If we assume that Lagrange's identity holds then we get
$$
\|\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}\|^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}\|\overrightarrow{\boldsymbol{b}}\|^{2}-(\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}})^{2}=\|\overrightarrow{\boldsymbol{a}}\|^{2}\|\overrightarrow{\boldsymbol{b}}\|^{2}-\|\overrightarrow{\boldsymbol{a}}\|^{2}\|\overrightarrow{\boldsymbol{b}}\|^{2} \cos ^{2} \theta=\|\overrightarrow{\boldsymbol{a}}\|^{2}\|\overrightarrow{\boldsymbol{b}}\|^{2} \sin ^{2} \theta
$$
since $1-\cos ^{2} \theta=\sin ^{2} \theta$. The theorem is proved.

These two theorems almost allow you to construct the cross product of two vectors geometrically. If $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ are two vectors, then their cross product satisfies the following description:

(1) If $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ are parallel, then the angle $\theta$ between them is $0$ or $\pi$, so $\sin \theta=0$ and their cross product is the zero vector. Assume from here on that $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ are not parallel.

(2) $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$ is perpendicular to both $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$.
In other words, since $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ are not parallel, they determine a plane, and their cross product is a vector perpendicular to this plane. (3) the length of the cross product $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$ is $\|\overrightarrow{\boldsymbol{a}}\| \cdot\|\overrightarrow{\boldsymbol{b}}\| \sin \theta$. There are only two vectors that satisfy conditions 2 and 3: to determine which one of these is the cross product you must apply the Right Hand Rule (screwdriver rule, corkscrew rule, etc.) for $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$ : if you turn a screw whose axis is perpendicular to $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ in the direction from $\overrightarrow{\boldsymbol{a}}$ to $\overrightarrow{\boldsymbol{b}}$, the screw moves in the direction of $\vec{a} \times \vec{b}$. Alternatively, without seriously injuring yourself, you should be able to make a fist with your right hand, and then stick out your thumb, index and middle fingers so that your thumb is $\overrightarrow{\boldsymbol{a}}$, your index finger is $\vec{b}$ and your middle finger is $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}$. Only people with the most flexible joints can do this with their left hand. ## A few applications of the cross product 47.1. Area of a parallelogram. Let $A B C D$ be a parallelogram. Its area is given by "height times base," a formula which should be familiar from high school geometry. If the angle between the sides $A B$ and $A D$ is $\theta$, then the height of the parallelogram is $\|\overrightarrow{A D}\| \sin \theta$, so that the area of $A B C D$ is $$ \text { area of } A B C D=\|\overrightarrow{A B}\| \cdot\|\overrightarrow{A D}\| \sin \theta=\|\overrightarrow{A B} \times \overrightarrow{A D}\| \text {. } $$ The area of the triangle $A B D$ is of course half as much, $$ \text { area of triangle } A B D=\frac{1}{2}\|\overrightarrow{A B} \times \overrightarrow{A D}\| \text {. } $$ ${ }^{9}$ It's actually called Lagrange's identity. Yes, the same Lagrange who found the formula for the remainder term. These formulae are valid even when the points $A, B, C$, and $D$ are points in space. Of course they must lie in one plane for otherwise $A B C D$ couldn't be a parallelogram. 47.2. Example. Let the points $A(1,0,2), B(2,0,0), C(3,1,-1)$ and $D(2,1,1)$ be given. Show that $A B C D$ is a parallelogram, and compute its area. Solution: $\quad A B C D$ will be a parallelogram if and only if $\overrightarrow{A C}=\overrightarrow{A B}+\overrightarrow{A D}$. In terms of the position vectors $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ and $\overrightarrow{\boldsymbol{d}}$ of $A, B, C, D$ this boils down to $$ \vec{c}-\vec{a}=(\vec{b}-\vec{a})+(\vec{d}-\vec{a}), \quad \text { i.e. } \quad \vec{a}+\vec{c}=\vec{b}+\vec{d} . 
$$
For our points we get
$$
\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{c}}=\left(\begin{array}{l}
1 \\
0 \\
2
\end{array}\right)+\left(\begin{array}{c}
3 \\
1 \\
-1
\end{array}\right)=\left(\begin{array}{l}
4 \\
1 \\
1
\end{array}\right), \quad \overrightarrow{\boldsymbol{b}}+\overrightarrow{\boldsymbol{d}}=\left(\begin{array}{l}
2 \\
0 \\
0
\end{array}\right)+\left(\begin{array}{l}
2 \\
1 \\
1
\end{array}\right)=\left(\begin{array}{l}
4 \\
1 \\
1
\end{array}\right) .
$$
So $A B C D$ is indeed a parallelogram. Its area is the length of
$$
\overrightarrow{A B} \times \overrightarrow{A D}=\left(\begin{array}{c}
2-1 \\
0 \\
0-2
\end{array}\right) \times\left(\begin{array}{l}
2-1 \\
1-0 \\
1-2
\end{array}\right)=\left(\begin{array}{c}
1 \\
0 \\
-2
\end{array}\right) \times\left(\begin{array}{c}
1 \\
1 \\
-1
\end{array}\right)=\left(\begin{array}{c}
2 \\
-1 \\
1
\end{array}\right) .
$$
So the area of $A B C D$ is $\sqrt{2^{2}+(-1)^{2}+1^{2}}=\sqrt{6}$.

47.3. Finding the normal to a plane. If you know two vectors $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$ which are parallel to a given plane $\mathcal{P}$ but not parallel to each other, then you can find a normal vector for the plane $\mathcal{P}$ by computing
$$
\vec{n}=\vec{a} \times \vec{b}
$$
We have just seen that the vector $\overrightarrow{\boldsymbol{n}}$ must be perpendicular to both $\overrightarrow{\boldsymbol{a}}$ and $\overrightarrow{\boldsymbol{b}}$, and hence ${ }^{10}$ it is perpendicular to the plane $\mathcal{P}$.

This trick is especially useful when you have three points $A, B$ and $C$, and you want to find the defining equation for the plane $\mathcal{P}$ through these points. We will assume that the three points do not all lie on one line, for otherwise there are many planes through $A$, $B$ and $C$.

To find the defining equation we need one point on the plane (we have three of them), and a normal vector to the plane. A normal vector can be obtained by computing the cross product of two vectors parallel to the plane. Since $\overrightarrow{A B}$ and $\overrightarrow{A C}$ are both parallel to $\mathcal{P}$, the vector $\overrightarrow{\boldsymbol{n}}=\overrightarrow{A B} \times \overrightarrow{A C}$ is such a normal vector.

Thus the defining equation for the plane through three given points $A, B$ and $C$ is
$$
\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=0, \quad \text { with } \quad \overrightarrow{\boldsymbol{n}}=\overrightarrow{A B} \times \overrightarrow{A C}=(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}) \times(\overrightarrow{\boldsymbol{c}}-\overrightarrow{\boldsymbol{a}})
$$
${ }^{10}$ This statement needs a proof which we will skip. Instead have a look at the picture.

47.4. Example. Find the defining equation of the plane $\mathcal{P}$ through the points $A(2,-1,0)$, $B(2,1,-1)$ and $C(-1,1,1)$. Find the intersections of $\mathcal{P}$ with the three coordinate axes, and find the distance from the origin to $\mathcal{P}$.
Solution: We have
$$
\overrightarrow{A B}=\left(\begin{array}{c}
0 \\
2 \\
-1
\end{array}\right) \quad \text { and } \quad \overrightarrow{A C}=\left(\begin{array}{c}
-3 \\
2 \\
1
\end{array}\right)
$$
so that
$$
\overrightarrow{\boldsymbol{n}}=\overrightarrow{A B} \times \overrightarrow{A C}=\left(\begin{array}{c}
0 \\
2 \\
-1
\end{array}\right) \times\left(\begin{array}{c}
-3 \\
2 \\
1
\end{array}\right)=\left(\begin{array}{l}
4 \\
3 \\
6
\end{array}\right)
$$
is a normal to the plane. The defining equation for $\mathcal{P}$ is therefore
$$
0=\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})=\left(\begin{array}{l}
4 \\
3 \\
6
\end{array}\right) \cdot\left(\begin{array}{l}
x_{1}-2 \\
x_{2}+1 \\
x_{3}-0
\end{array}\right)
$$
i.e.
$$
4 x_{1}+3 x_{2}+6 x_{3}-5=0 .
$$
The plane intersects the $x_{1}$ axis when $x_{2}=x_{3}=0$ and hence $4 x_{1}-5=0$, i.e. in the point $\left(\frac{5}{4}, 0,0\right)$. The intersections with the other two axes are $\left(0, \frac{5}{3}, 0\right)$ and $\left(0,0, \frac{5}{6}\right)$.

The distance from any point with position vector $\overrightarrow{\boldsymbol{x}}$ to $\mathcal{P}$ is given by
$$
\text { dist }= \pm \frac{\overrightarrow{\boldsymbol{n}} \cdot(\overrightarrow{\boldsymbol{x}}-\overrightarrow{\boldsymbol{a}})}{\|\overrightarrow{\boldsymbol{n}}\|}
$$
so the distance from the origin (whose position vector is $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\mathbf{0}}=\left(\begin{array}{l}0 \\ 0 \\ 0\end{array}\right)$ ) to $\mathcal{P}$ is
$$
\text { distance origin to } \mathcal{P}= \pm \frac{\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{n}}}{\|\overrightarrow{\boldsymbol{n}}\|}= \pm \frac{2 \cdot 4+(-1) \cdot 3+0 \cdot 6}{\sqrt{4^{2}+3^{2}+6^{2}}}=\frac{5}{\sqrt{61}}(\approx 0.64) .
$$
47.5. Volume of a parallelepiped. A parallelepiped is a three dimensional body whose sides are parallelograms. For instance, a cube is an example of a parallelepiped; a rectangular block (whose faces are rectangles, meeting at right angles) is also a parallelepiped. Any parallelepiped has 8 vertices (corner points), 12 edges and 6 faces.

Let $\begin{gathered}A B C D \\ E F G H\end{gathered}$ be a parallelepiped. If we call one of the faces, say $A B C D$, the base of the parallelepiped, then the opposite face $E F G H$ is parallel to the base. The height of the parallelepiped is the distance from any point in $E F G H$ to the base, e.g. to compute the height of $\begin{gathered}A B C D \\ E F G H\end{gathered}$ one could compute the distance from the point $E$ (or $F$, or $G$, or $H$ ) to the plane through $A B C D$.

The volume of the parallelepiped $\begin{gathered}A B C D \\ E F G H\end{gathered}$ is given by the formula
$$
\text { Volume } \begin{gathered}
A B C D \\
E F G H
\end{gathered}=\text { Area of base } \times \text { height. }
$$
Since the base is a parallelogram we know its area is given by
$$
\text { Area of base } A B C D=\|\overrightarrow{A B} \times \overrightarrow{A D}\|
$$
We also know that $\overrightarrow{\boldsymbol{n}}=\overrightarrow{A B} \times \overrightarrow{A D}$ is a vector perpendicular to the plane through $A B C D$, i.e. perpendicular to the base of the parallelepiped. If we let the angle between the edge $A E$ and the normal $\overrightarrow{\boldsymbol{n}}$ be $\psi$, then the height of the parallelepiped is given by
$$
\text { height }=\|\overrightarrow{A E}\| \cos \psi .
$$
Therefore the volume of the parallelepiped is given by the triple product of $\overrightarrow{A B}, \overrightarrow{A D}, \overrightarrow{A E}$: $$ \text { Volume } \begin{aligned} & A B C D \\ & E F G H \end{aligned}=\text { height } \times \text { Area of base }=\|\overrightarrow{A E}\| \cos \psi\,\|\overrightarrow{A B} \times \overrightarrow{A D}\|, $$ i.e. $$ \text { Volume } \begin{aligned} & A B C D \\ & E F G H \end{aligned}=\overrightarrow{A E} \cdot(\overrightarrow{A B} \times \overrightarrow{A D}) . $$

## Notation

In the next chapter we will be using vectors, so let's take a minute to summarize the concepts and notation we have been using. Given a point in the plane, or in space, you can form its position vector. So associated to a point we have three different objects: the point, its position vector and its coordinates. Here is the notation we use for these:

| OBJECT | Notation |
| :---: | :--- |
| Point | Upper case letters, $A, B$, etc. |
| Position vector | Lowercase letters with an arrow on top. The position vector $\overrightarrow{O A}$ of the point $A$ should be $\overrightarrow{\boldsymbol{a}}$, so that letters match across changes from upper to lower case. |
| Coordinates of a point | The coordinates of the point $A$ are the same as the components of its position vector $\overrightarrow{\boldsymbol{a}}$: we use lower case letters with a subscript to indicate which coordinate we have in mind: $\left(a_{1}, a_{2}\right)$. |

49. PROBLEMS

COMPUTING AND DRAWING VECTORS

426. Simplify the following $$ \begin{aligned} & \overrightarrow{\boldsymbol{a}}=\left(\begin{array}{c} 1 \\ -2 \\ 3 \end{array}\right)+3\left(\begin{array}{l} 0 \\ 1 \\ 3 \end{array}\right) ; \\ & \overrightarrow{\boldsymbol{b}}=12\left(\begin{array}{c} 1 \\ 1 / 3 \end{array}\right)-3\left(\begin{array}{c} 4 \\ 1 \end{array}\right) ; \\ & \overrightarrow{\boldsymbol{c}}=(1+t)\left(\begin{array}{c} 1 \\ 1-t \end{array}\right)-t\left(\begin{array}{c} 1 \\ -t \end{array}\right), \\ & \overrightarrow{\boldsymbol{d}}=t\left(\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right)+t^{2}\left(\begin{array}{c} 0 \\ -1 \\ 2 \end{array}\right)-\left(\begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right) . \end{aligned} $$

427. If $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ are as in the previous problem, then which of the following expressions mean anything? Compute those expressions that are well defined. (i) $\vec{a}+\vec{b}$ (ii) $\vec{b}+\vec{c}$ (iii) $\pi \overrightarrow{\boldsymbol{a}}$ (iv) $\vec{b}^{2}$ (v) $\vec{b} / \vec{c}$ (vi) $\|\vec{a}\|+\|\vec{b}\|$ (vii) $\|\vec{b}\|^{2}$ (viii) $\vec{b} /\|\vec{c}\|$

428. Let $\vec{a}=\left(\begin{array}{r}1 \\ -2 \\ 2\end{array}\right)$ and $\vec{b}=\left(\begin{array}{r}2 \\ -1 \\ 1\end{array}\right)$. Compute: (1) $\|\vec{a}\|$ (2) $2 \vec{a}$ (3) $\|2 \vec{a}\|^{2}$ (4) $\vec{a}+\vec{b}$ (5) $3 \vec{a}-\vec{b}$

429.
Let $\overrightarrow{\boldsymbol{u}}, \overrightarrow{\boldsymbol{v}}, \overrightarrow{\boldsymbol{w}}$ be three given vectors, and suppose $\overrightarrow{\boldsymbol{a}}=\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}}, \quad \overrightarrow{\boldsymbol{b}}=2 \overrightarrow{\boldsymbol{u}}-\overrightarrow{\boldsymbol{w}}, \quad \overrightarrow{\boldsymbol{c}}=\overrightarrow{\boldsymbol{u}}+\overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}}$. (a) Simplify $\overrightarrow{\boldsymbol{p}}=\overrightarrow{\boldsymbol{a}}+3 \overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{c}}$ and $\overrightarrow{\boldsymbol{q}}=$ $\overrightarrow{\boldsymbol{c}}-2(\overrightarrow{\boldsymbol{u}}+\overrightarrow{\boldsymbol{a}})$. (b) Find numbers $r, s, t$ such that $r \overrightarrow{\boldsymbol{a}}+$ $s \overrightarrow{\boldsymbol{b}}+t \overrightarrow{\boldsymbol{c}}=\overrightarrow{\boldsymbol{u}}$. (c) Find numbers $k, l, m$ such that $k \overrightarrow{\boldsymbol{a}}+$ $l \overrightarrow{\boldsymbol{b}}+m \overrightarrow{\boldsymbol{c}}=\overrightarrow{\boldsymbol{v}}$. 430. Prove the Algebraic Properties (50), (51), (52), and (53) in section 42.5. 431. (a) Does there exist a number $x$ such that $$ \left(\begin{array}{l} 1 \\ 2 \end{array}\right)+\left(\begin{array}{l} x \\ x \end{array}\right)=\left(\begin{array}{l} 2 \\ 1 \end{array}\right) ? $$ (b) Make a drawing of all points $P$ whose position vectors are given by $$ \vec{p}=\left(\begin{array}{l} 1 \\ 2 \end{array}\right)+\left(\begin{array}{l} x \\ x \end{array}\right) . $$ (c) Do there exist a numbers $x$ and $y$ such that $$ x\left(\begin{array}{l} 1 \\ 2 \end{array}\right)+y\left(\begin{array}{l} 1 \\ 1 \end{array}\right)=\left(\begin{array}{l} 2 \\ 1 \end{array}\right) ? $$ 432. Given points $A(2,1)$ and $B(-1,4)$ compute the vector $\overrightarrow{A B}$. Is $\overrightarrow{A B}$ a position vector? 433. Given: points $A(2,1), B(3,2), C(4,4)$ and $D(5,2)$. Is $A B C D$ a parallelogram? 434. Given: points $A(0,2,1), B(0,3,2)$, $C(4,1,4)$ and $D$. (a) If $A B C D$ is a parallelogram, then what are the coordinates of the point $D$ ? (b) If $A B D C$ is a parallelogram, then what are the coordinates of the point $D$ ? 435. You are given three points in the plane: $A$ has coordinates $(2,3), B$ has coordinates $(-1,2)$ and $C$ has coordinates $(4,-1)$. (a) Compute the vectors $\overrightarrow{A B}, \overrightarrow{B A}, \overrightarrow{A C}$, $\overrightarrow{C A}, \overrightarrow{B C}$ and $\overrightarrow{C B}$. (b) Find the points $P, Q, R$ and $S$ whose position vectors are $\overrightarrow{A B}, \overrightarrow{B A}, \overrightarrow{A C}$, and $\overrightarrow{B C}$, respectively. Make a precise drawing in figure 21. 436. Have a look at figure 22 (a) Draw the vectors $2 \overrightarrow{\boldsymbol{v}}+\frac{1}{2} \overrightarrow{\boldsymbol{w}},-\frac{1}{2} \overrightarrow{\boldsymbol{v}}+\overrightarrow{\boldsymbol{w}}$, and $\frac{3}{2} \overrightarrow{\boldsymbol{v}}-\frac{1}{2} \overrightarrow{\boldsymbol{w}}$ (b) Find real numbers $s, t$ such that $s \overrightarrow{\boldsymbol{v}}+t \overrightarrow{\boldsymbol{w}}=\overrightarrow{\boldsymbol{a}}$. (c) Find real numbers $p, q$ such that $p \overrightarrow{\boldsymbol{v}}+q \overrightarrow{\boldsymbol{w}}=\overrightarrow{\boldsymbol{b}}$. 
(d) Find real numbers $k, l, m, n$ such that $\overrightarrow{\boldsymbol{v}}=k \overrightarrow{\boldsymbol{a}}+l \overrightarrow{\boldsymbol{b}}$, and $\overrightarrow{\boldsymbol{w}}=m \overrightarrow{\boldsymbol{a}}+n \overrightarrow{\boldsymbol{b}}$.

PARAMETRIC EQUATIONS FOR A LINE

Figure 21. Your drawing for problem 435

Figure 22. Drawing for problem 436

437. In the figure above draw the points whose position vectors are given by $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{a}}+t(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}})$ for $t=0,1, \frac{1}{3}, \frac{3}{4},-1,2$. (As always, $\overrightarrow{\boldsymbol{a}}=\overrightarrow{O A}$, etc.)

438. In the figure above also draw the points whose position vectors are given by $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{b}}+s(\overrightarrow{\boldsymbol{a}}-\overrightarrow{\boldsymbol{b}})$ for $s=0,1, \frac{1}{3}, \frac{3}{4},-1,2$.

439. (a) Find a parametric equation for the line $\ell$ through the points $A(3,0,1)$ and $B(2,1,2)$. (b) Where does $\ell$ intersect the coordinate planes?

440. (a) Find a parametric equation for the line which contains the two vectors $\vec{a}=\left(\begin{array}{c}2 \\ 3 \\ 1\end{array}\right)$ and $\vec{b}=\left(\begin{array}{l}3 \\ 2 \\ 3\end{array}\right)$. (b) The vector $\vec{c}=\left(\begin{array}{c}c_{1} \\ 1 \\ c_{3}\end{array}\right)$ is on this line. What is $\vec{c}$ ?

441. Group problem. Consider a triangle $A B C$ and let $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}$ be the position vectors of $A, B$, and $C$. (a) Compute the position vector of the midpoint $P$ of the line segment $B C$. Also compute the position vectors of the midpoints $Q$ of $A C$ and $R$ of $A B$. (Make a drawing.) (b) Let $M$ be the point on the line segment $A P$ which is twice as far from $A$ as it is from $P$. Find the position vector of $M$. (c) Show that $M$ also lies on the line segments $B Q$ and $C R$.

442. Group problem. Let $A B C D$ be a tetrahedron, and let $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}, \overrightarrow{\boldsymbol{c}}, \overrightarrow{\boldsymbol{d}}$ be the position vectors of the points $A, B, C, D$. (i) Find position vectors of the midpoint $P$ of $A B$, the midpoint $Q$ of $C D$ and the midpoint $M$ of $P Q$. (ii) Find position vectors of the midpoint $R$ of $B C$, the midpoint $S$ of $A D$ and the midpoint $N$ of $R S$.

ORTHOGONAL DECOMPOSITION OF ONE VECTOR WITH RESPECT TO ANOTHER

443. Given the vectors $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}2 \\ 1 \\ 3\end{array}\right)$ and $\overrightarrow{\boldsymbol{b}}=\left(\begin{array}{l}1 \\ 1 \\ 0\end{array}\right)$ find $\overrightarrow{\boldsymbol{a}}^{\|}, \overrightarrow{\boldsymbol{a}}^{\perp}, \overrightarrow{\boldsymbol{b}}^{\|}, \overrightarrow{\boldsymbol{b}}^{\perp}$ for which $$ \overrightarrow{\boldsymbol{a}}=\overrightarrow{\boldsymbol{a}}^{\|}+\overrightarrow{\boldsymbol{a}}^{\perp}, \text { with } \overrightarrow{\boldsymbol{a}}^{\|} \parallel \overrightarrow{\boldsymbol{b}}, \ \overrightarrow{\boldsymbol{a}}^{\perp} \perp \overrightarrow{\boldsymbol{b}}, $$ and $$ \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{b}}^{\|}+\overrightarrow{\boldsymbol{b}}^{\perp}, \text { with } \overrightarrow{\boldsymbol{b}}^{\|} \parallel \overrightarrow{\boldsymbol{a}}, \ \overrightarrow{\boldsymbol{b}}^{\perp} \perp \overrightarrow{\boldsymbol{a}} . $$

444. Bruce left his backpack on a hill, which in some coordinate system happens to be the line with equation $12 x_{1}+5 x_{2}=130$.
The force exerted by gravity on the backpack is $\vec{f}_{\text {grav }}=\left(\begin{array}{c}0 \\ -m g\end{array}\right)$. Decompose this force into a part perpendicular to the hill, and a part parallel to the hill. 445. An eraser is lying on the plane $\mathcal{P}$ with equation $x_{1}+3 x_{2}+x_{3}=6$. Gravity pulls the eraser down, and exerts a force given by $$ \overrightarrow{\boldsymbol{f}}_{\mathrm{grav}}=\left(\begin{array}{c} 0 \\ 0 \\ -m g \end{array}\right) . $$ (a) Find a normal $\overrightarrow{\boldsymbol{n}}$ for the plane $\mathcal{P}$. (b) Decompose the force $\vec{f}$ into a part perpendicular to the plane $\mathcal{P}$ and a part perpendicular to $\overrightarrow{\boldsymbol{n}}$. 446. (i) Simplify $\|\vec{a}-\vec{b}\|^{2}$. (ii) Simplify $\|2 \overrightarrow{\boldsymbol{a}}-\overrightarrow{\boldsymbol{b}}\|^{2}$. (iii) If $\vec{a}$ has length $3, \vec{b}$ has length 7 and $\overrightarrow{\boldsymbol{a}} \cdot \overrightarrow{\boldsymbol{b}}=-2$, then compute $\|\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{b}}\|$, $\|\vec{a}-\vec{b}\|$ and $\|2 \vec{a}-\vec{b}\|$. 447. Simplify $(\vec{a}+\vec{b}) \cdot(\vec{a}-\vec{b})$. 448. Find the lengths of the sides, and the angles in the triangle $A B C$ whose vertices are $A(2,1), B(3,2)$, and $C(1,4)$. 449. Group problem. Given: $A(1,1)$, $B(3,2)$ and a point $C$ which lies on the line with parametric equation $\overrightarrow{\boldsymbol{c}}=$ $\left(\begin{array}{l}0 \\ 3\end{array}\right)+t\left(\begin{array}{c}1 \\ -1\end{array}\right)$. If $\triangle A B C$ is a right triangle, then where is $C$ ? (There are three possible answers, depending on whether you assume $A, B$ or $C$ is the right angle.) 450. (i) Find the defining equation and a normal vector $\vec{n}$ for the line $\ell$ which is the graph of $y=1+\frac{1}{2} x$. (ii) What is the distance from the origin to $\ell$ ? (iii) Answer the same two questions for the line $m$ which is the graph of $y=$ $2-3 x$. (iv) What is the angle between $\ell$ and $m$ ? 451. Let $\ell$ and $m$ be the lines with parametrizations $$ \ell: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{l} 2 \\ 0 \end{array}\right)+t\left(\begin{array}{l} 1 \\ 2 \end{array}\right), \quad m: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{c} 0 \\ -1 \end{array}\right)+s\left(\begin{array}{c} -2 \\ 3 \end{array}\right) $$ Where do they intersect, and find the angle between $\ell$ and $m$. 452. Let $\ell$ and $m$ be the lines with parametrizations $$ \ell: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{c} 2 \\ 1 \\ -4 \end{array}\right)+t\left(\begin{array}{l} 1 \\ 2 \\ 0 \end{array}\right), \quad m: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right)+s\left(\begin{array}{c} -2 \\ 0 \\ 3 \end{array}\right) $$ Do $\ell$ and $m$ intersect? Find the angle between $\ell$ and $m$. 453. Let $\ell$ and $m$ be the lines with parametrizations $$ \ell: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{l} 2 \\ \alpha \\ 1 \end{array}\right)+t\left(\begin{array}{l} 1 \\ 2 \\ 0 \end{array}\right), \quad m: \overrightarrow{\boldsymbol{x}}=\left(\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right)+s\left(\begin{array}{c} -2 \\ 0 \\ 3 \end{array}\right) $$ Here $\alpha$ is some unknown number. If it is known that the lines $\ell$ and $m$ intersect, what can you say about $\alpha$ ? ## THE CROSS PRODUCT 454. 
Compute the following cross products $$ \begin{aligned} & \text { (i) }\left(\begin{array}{l} 3 \\ 1 \\ 2 \end{array}\right) \times\left(\begin{array}{l} 3 \\ 2 \\ 1 \end{array}\right) \\ & \text { (ii) }\left(\begin{array}{c} 12 \\ -71 \\ 3 \frac{1}{2} \end{array}\right) \times\left(\begin{array}{c} 12 \\ -71 \\ 3 \frac{1}{2} \end{array}\right) \\ & \text { (iii) }\left(\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right) \times\left(\begin{array}{l} 1 \\ 1 \\ 0 \end{array}\right) \\ & \text { (iv) }\left(\begin{array}{c} \sqrt{ } 2 \\ 1 \\ 0 \end{array}\right) \times\left(\begin{array}{c} 0 \\ \sqrt{ } 2 \\ 0 \end{array}\right) \end{aligned} $$ 455. Compute the following cross products $$ \begin{aligned} & \text { (i) } \overrightarrow{\boldsymbol{i}} \times(\overrightarrow{\boldsymbol{i}}+\overrightarrow{\boldsymbol{j}}) \\ & \text { (ii) }(\sqrt{2} \overrightarrow{\boldsymbol{i}}+\overrightarrow{\boldsymbol{j}}) \times \sqrt{2} \overrightarrow{\boldsymbol{j}} \\ & \text { (iii) }(2 \overrightarrow{\boldsymbol{i}}+\overrightarrow{\boldsymbol{k}}) \times(\overrightarrow{\boldsymbol{j}}-\overrightarrow{\boldsymbol{k}}) \\ & \text { (iv) }(\cos \theta \overrightarrow{\boldsymbol{i}}+\sin \theta \overrightarrow{\boldsymbol{k}}) \times(\sin \theta \overrightarrow{\boldsymbol{i}}-\cos \theta \overrightarrow{\boldsymbol{k}}) \end{aligned} $$ 456. (i) Simplify $(\vec{a}+\vec{b}) \times(\vec{a}+\vec{b})$. $$ \begin{aligned} & \text { (ii) Simplify }(\vec{a}-\vec{b}) \times(\vec{a}-\vec{b}) \text {. } \\ & \text { (iii) Simplify }(\vec{a}+\vec{b}) \times(\vec{a}-\vec{b}) \text {. } \end{aligned} $$ 457. True or False: If $\overrightarrow{\boldsymbol{a}} \times \overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{c}} \times \overrightarrow{\boldsymbol{b}}$ and $\overrightarrow{\boldsymbol{b}} \neq \overrightarrow{\mathbf{0}}$ then $\overrightarrow{\boldsymbol{a}}=\overrightarrow{\boldsymbol{c}} ?$ 458. Group problem. Given $A(2,0,0)$, $B(0,0,2)$ and $C(2,2,2)$. Let $\mathcal{P}$ be the plane through $A, B$ and $C$. (i) Find a normal vector for $\mathcal{P}$. (ii) Find a defining equation for $\mathcal{P}$. (iii) What is the distance from $D(0,2,0)$ to $\mathcal{P}$ ? What is the distance from the origin $O(0,0,0)$ to $\mathcal{P}$ ? (iv) Do $D$ and $O$ lie on the same side of $\mathcal{P}$ ? (v) Find the area of the triangle $A B C$. (vi) Where does the plane $\mathcal{P}$ intersect the three coordinate axes? 459. (i) Does $D(2,1,3)$ lie on the plane $\mathcal{P}$ through the points $A(-1,0,0), B(0,2,1)$ and $C(0,3,0)$ ? (ii) The point $E(1,1, \alpha)$ lies on $\mathcal{P}$. What is $\alpha$ ? 460. Given points $A(1,-1,1), B(2,0,1)$ and $C(1,2,0)$. (i) Where is the point $D$ which makes $A B C D$ into a parallelogram? (ii) What is the area of the parallelogram $A B C D$ ? (iii) Find a defining equation for the plane $\mathcal{P}$ containing the parallelogram $A B C D$. (iv) Where does $\mathcal{P}$ intersect the coordinate axes? 461. Given points $A(1,0,0), B(0,2,0)$ and $D(-1,0,1)$ and $E(0,0,2)$. (i) If $\mathfrak{P}=\underset{E F G H}{A B C D}$ is a parallelepiped, then where are the points $C, F, G$ and $H$ ? (ii) Find the area of the base $A B C D$ of $\mathfrak{P}$. (iii) Find the height of $\mathfrak{P}$. (iv) Find the volume of $\mathfrak{P}$. 462. Group problem. Let ${ }_{E F G H}^{A B C D}$ be the cube with $A$ at the origin, $B(1,0,0)$, $D(0,1,0)$ and $E(0,0,1)$. (i) Find the coordinates of all the points $A, B, C, D, E, F, G, H$. 
(ii) Find the position vectors of the midpoints of the line segments $A G, B H, C E$ and $D F$. Make a drawing of the cube with these line segments.

(iii) Find the defining equation for the plane $B D E$. Do the same for the plane $C F H$. Show that these planes are parallel.

(iv) Find the parametric equation for the line through $A G$.

(v) Where do the planes $B D E$ and $C F H$ intersect the line $A G$ ?

(vi) Find the angle between the planes $B D E$ and $B G H$.

(vii) Find the angle between the planes $B D E$ and $B C H$. Draw these planes.

## Chapter 6: Vector Functions and Parametrized Curves

## Parametric Curves

50.1. Definition. A vector function $\overrightarrow{\boldsymbol{f}}$ of one variable is a function of one real variable, whose values $\overrightarrow{\boldsymbol{f}}(t)$ are vectors. In other words for any value of $t$ (from a domain of allowed values, usually an interval) the vector function $\overrightarrow{\boldsymbol{f}}$ produces a vector $\overrightarrow{\boldsymbol{f}}(t)$. Write $\vec{f}$ in components: $$ \vec{f}(t)=\left(\begin{array}{l} f_{1}(t) \\ f_{2}(t) \end{array}\right) . $$ The components of a vector function $\overrightarrow{\boldsymbol{f}}$ of $t$ are themselves functions of $t$. They are ordinary first-semester-calculus-style functions. An example of a vector function is $$ \overrightarrow{\boldsymbol{f}}(t)=\left(\begin{array}{c} t-2 t^{2} \\ 1+\cos ^{2} \pi t \end{array}\right), \quad \text { so } \overrightarrow{\boldsymbol{f}}(1)=\left(\begin{array}{c} 1-2(1)^{2} \\ 1+(\cos \pi)^{2} \end{array}\right)=\left(\begin{array}{c} -1 \\ 2 \end{array}\right) $$ (just to mention one.)

50.2. Definition. A parametric curve is a vector function $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{x}}(t)$ of one real variable $t$. The variable $t$ is called the parameter. Synonyms: "Parametrized curve," or "parametrization," or "vector function (of one variable)."

Logically speaking a parametrized curve is the same thing as a vector function. The name "parametrized curve" is used to remind you of a very natural and common interpretation of the concept "parametric curve." In this interpretation a vector function, or parametric curve $\overrightarrow{\boldsymbol{x}}(t)$ describes the motion of a point in the plane or space. Here $t$ stands for time, and $\overrightarrow{\boldsymbol{x}}(t)$ is the position vector at time $t$ of the moving point.

Instead of writing a parametrized curve as a vector function, one sometimes specifies the two (or three) components of the curve. Thus one will say that a parametric curve is given by $$ x_{1}=x_{1}(t), \quad x_{2}=x_{2}(t), \quad\left(\text { and } x_{3}=x_{3}(t)\right. \text { if we have a space curve). } $$

## Examples of parametrized curves

51.1. An example of Rectilinear Motion. Here's a parametric curve: $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} 1+t \\ 2+3 t \end{array}\right) . $$ The components of this vector function are $$ x_{1}(t)=1+t, \quad x_{2}(t)=2+3 t . \tag{73} $$ Both components are linear functions of time (i.e. the parameter $t$ ), so every time $t$ increases by an amount $\Delta t$ (every time $\Delta t$ seconds go by) the first component increases by $\Delta t$, and the $x_{2}$ component increases by $3 \Delta t$. So the point at $\overrightarrow{\boldsymbol{x}}(t)$ moves horizontally to the right with speed 1, and it moves vertically upwards with speed 3. Which curve is traced out by this vector function?
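One quick way to get a feel for the answer is to let a computer tabulate a few of the points $\overrightarrow{\boldsymbol{x}}(t)$. Here is a minimal sketch (assuming Python with NumPy; a calculator would do just as well, and this aside is not part of the text's argument):

```python
import numpy as np

# Tabulate a few points of x(t) = (1 + t, 2 + 3t).
for t in np.arange(-2.0, 2.5, 0.5):
    x1, x2 = 1 + t, 2 + 3 * t
    print(f"t = {t:4.1f}   x(t) = ({x1:4.1f}, {x2:4.1f})")
```

The pattern in these numbers is explained next.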
In this example we can find out by eliminating the parameter, i.e. solve one of the two equations (73) for $t$, and substitute the value of $t$ you find in the other equation. Here you can solve $x_{1}=1+t$ for $t$, with result $t=x_{1}-1$. From there you find that $$ x_{2}=2+3 t=2+3\left(x_{1}-1\right)=3 x_{1}-1 . $$ So for any $t$ the vector $\overrightarrow{\boldsymbol{x}}(t)$ is the position vector of a point on the line $x_{2}=3 x_{1}-1$ (or, if you prefer the old fashioned $x, y$ coordinates, $y=3 x-1)$. Conclusion: This particular parametric curve traces out a straight line with equation $x_{2}=3 x_{1}-1$, going from left to right. 51.2. Rectilinear Motion in general. This example generalizes the previous example. The parametric equation for a straight line from the previous chapter $$ \overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{v}} $$ is a parametric curve. We had $\overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}$ in $\S 43$. At time $t=0$ the object is at the point with position vector $\overrightarrow{\boldsymbol{a}}$, and every second (unit of time) the object translates by $\overrightarrow{\boldsymbol{v}}$. The vector $\overrightarrow{\boldsymbol{v}}$ is the velocity vector of this motion. In the first example we had $\overrightarrow{\boldsymbol{a}}=\left(\begin{array}{l}1 \\ 2\end{array}\right)$, and $\overrightarrow{\boldsymbol{v}}=\left(\begin{array}{l}1 \\ 3\end{array}\right)$. ### Going back and forth on a straight line. Consider $$ \overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{a}}+\sin (t) \overrightarrow{\boldsymbol{v}} . $$ At each moment in time the object whose motion is described by this parametric curve finds itself on the straight line $\ell$ with parametric equation $\overrightarrow{\boldsymbol{x}}=\overrightarrow{\boldsymbol{a}}+s(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}})$, where $\overrightarrow{\boldsymbol{b}}=\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{v}}$. However, instead of moving along the line from one end to the other, the point at $\overrightarrow{\boldsymbol{x}}(t)$ keeps moving back and forth along $\ell$ between $\overrightarrow{\boldsymbol{a}}+\overrightarrow{\boldsymbol{v}}$ and $\overrightarrow{\boldsymbol{a}}-\overrightarrow{\boldsymbol{v}}$. 51.4. Motion along a graph. Let $y=f(x)$ be some function of one variable (defined for $x$ in some interval) and consider the parametric curve given by $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} t \\ f(t) \end{array}\right)=t \overrightarrow{\boldsymbol{i}}+f(t) \overrightarrow{\boldsymbol{j}} $$ At any moment in time the point at $\overrightarrow{\boldsymbol{x}}(t)$ has $x_{1}$ coordinate equal to $t$, and $x_{2}=f(t)=$ $f\left(x_{1}\right)$, since $x_{1}=t$. So this parametric curve describes motion on the graph of $y=f(x)$ in which the horizontal coordinate increases at a constant rate. 51.5. The standard parametrization of a circle. Consider the parametric curve $$ \overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c} \cos \theta \\ \sin \theta \end{array}\right) . $$ The two components of this parametrization are $$ x_{1}(\theta)=\cos \theta, \quad x_{2}(\theta)=\sin \theta, $$ and they satisfy $$ x_{1}(\theta)^{2}+x_{2}(\theta)^{2}=\cos ^{2} \theta+\sin ^{2} \theta=1, $$ so that $\overrightarrow{\boldsymbol{x}}(\theta)$ always points at a point on the unit circle. 
As $\theta$ increases from $-\infty$ to $+\infty$ the point will rotate through the circle, going around infinitely often. Note that the point runs through the circle in the counterclockwise direction, which is the mathematician's favorite way of running around in circles. 51.6. The Cycloid. The Free Ferris Wheel Foundation is an organization whose goal is to empower fairground ferris wheels to roam freely and thus realize their potential. With blatant disregard for the public, members of the $\mathrm{F}^{2} \mathrm{WF}$ will clandestinely unhinge ferris wheels, thereby setting them free to roll throughout the fairground and surroundings. Suppose we were to step into the bottom of a ferris wheel at the moment of its liberation: what would happen? Where would the wheel carry us? Let our position be the point $X$, and let its position vector at time $t$ be $\overrightarrow{\boldsymbol{x}}(t)$. The parametric curve $\overrightarrow{\boldsymbol{x}}(t)$ which describes our motion is called the cycloid. In this example we are given a description of a motion, but no formula for the parametrization $\overrightarrow{\boldsymbol{x}}(t)$. We will have to derive this formula ourselves. The key to finding $\overrightarrow{\boldsymbol{x}}(t)$ is the fact that the arc $A X$ on the wheel is exactly as long as the line segment $O A$ on the ground (i.e. the $x_{1}$ axis). The length of the arc $A X$ is exactly the angle $\theta$ ("arc $=$ radius times angle in radians"), so the $x_{1}$ coordinate of $A$ and hence the center $C$ of the circle is $\theta$. To find $X$ consider the right triangle $B C X$. Its hypothenuse is the radius of the circle, i.e. $C X$ has length 1 . The angle at $C$ is $\theta$, and therefore you get $$ B X=\sin \theta, \quad B C=\cos \theta, $$ and $$ x_{1}=O A-B X=\theta-\sin \theta, \quad x_{2}=A C-B C=1-\cos \theta . $$ So the parametric curve defined in the beginning of this example is $$ \overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c} \theta-\sin \theta \\ 1-\cos \theta \end{array}\right) . $$ Here the angle $\theta$ is the parameter, and we can let it run from $\theta=-\infty$ to $\theta=\infty$. 51.7. A three dimensional example: the Helix. Consider the vector function $$ \overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c} \cos \theta \\ \sin \theta \\ a \theta \end{array}\right) $$ where $a>0$ is some constant. If you ignore the $x_{3}$ component of this vector function you get the parametrization of the circle from example 51.5. So as the parameter $\theta$ runs from $-\infty$ to $+\infty$, the $x_{1}, x_{2}$ part of $\overrightarrow{\boldsymbol{x}}(\theta)$ runs around on the unit circle infinitely often. While this happens the vertical component, i.e. $x_{3}(\theta)$ increases steadily from $-\infty$ to $\infty$ at a rate of $a$ units per second. ## The derivative of a vector function If $\overrightarrow{\boldsymbol{x}}(t)$ is a vector function, then we define its derivative to be $$ \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}}{\mathrm{d} t}=\lim _{h \rightarrow 0} \frac{\overrightarrow{\boldsymbol{x}}(t+h)-\overrightarrow{\boldsymbol{x}}(t)}{h} . $$ This definition looks very much like the first-semester-calculus-definition of the derivative of a function, but for it to make sense in the context of vector functions we have to explain what the limit of a vector function is. 
By definition, for a vector function $\overrightarrow{\boldsymbol{f}}(t)=\left(\begin{array}{c}f_{1}(t) \\ f_{2}(t)\end{array}\right)$ one has $$ \lim _{t \rightarrow a} \overrightarrow{\boldsymbol{f}}(t)=\lim _{t \rightarrow a}\left(\begin{array}{l} f_{1}(t) \\ f_{2}(t) \end{array}\right)=\left(\begin{array}{l} \lim _{t \rightarrow a} f_{1}(t) \\ \lim _{t \rightarrow a} f_{2}(t) \end{array}\right) $$ In other words, to compute the limit of a vector function you just compute the limits of its components (that will be our definition.) Let's look at the definition of the velocity vector again. Since $$ \begin{aligned} \frac{\overrightarrow{\boldsymbol{x}}(t+h)-\overrightarrow{\boldsymbol{x}}(t)}{h} & =\frac{1}{h}\left\{\left(\begin{array}{l} x_{1}(t+h) \\ x_{2}(t+h) \end{array}\right)-\left(\begin{array}{l} x_{1}(t) \\ x_{2}(t) \end{array}\right)\right\} \\ & =\left(\frac{\frac{x_{1}(t+h)-x_{1}(t)}{h}}{\frac{x_{2}(t+h)-x_{2}(t)}{h}}\right) \end{aligned} $$ we have $$ \begin{aligned} \overrightarrow{\boldsymbol{x}}^{\prime}(t) & =\lim _{h \rightarrow 0} \frac{\overrightarrow{\boldsymbol{x}}(t+h)-\overrightarrow{\boldsymbol{x}}(t)}{h} \\ & =\left(\begin{array}{l} \lim _{h \rightarrow 0} \frac{x_{1}(t+h)-x_{1}(t)}{h} \\ \lim _{h \rightarrow 0} \frac{x_{2}(t+h)-x_{2}(t)}{h} \end{array}\right) \\ & =\left(\begin{array}{c} x_{1}^{\prime}(t) \\ x_{2}^{\prime}(t) \end{array}\right) \end{aligned} $$ So: To compute the derivative of a vector function you must differentiate its components. 52.1. Example. Compute the derivative of $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} \cos t \\ \sin t \end{array}\right) \quad \text { and of } \quad \overrightarrow{\boldsymbol{y}}(t)=\left(\begin{array}{c} t-\sin t \\ 1-\cos t \end{array}\right) . $$ Solution: $$ \begin{aligned} & \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\frac{\mathrm{d}}{\mathrm{d} t}\left(\begin{array}{c} \cos t \\ \sin t \end{array}\right)=\left(\begin{array}{c} -\sin t \\ \cos t \end{array}\right) \\ & \overrightarrow{\boldsymbol{y}}^{\prime}(t)=\frac{\mathrm{d}}{\mathrm{d} t}\left(\begin{array}{c} t-\sin t \\ 1-\cos t \end{array}\right)=\left(\begin{array}{c} 1-\cos t \\ \sin t \end{array}\right) . \end{aligned} $$ ## Higher derivatives and product rules If you differentiate a vector function $\overrightarrow{\boldsymbol{x}}(t)$ you get another vector function, namely $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$, and you can try to differentiate that vector function again. If you succeed, the result is called the second derivative of $\overrightarrow{\boldsymbol{x}}(t)$. All this is very similar to how the second (and higher) derivative of ordinary functions were defined in 1st semester calculus. One even uses the same notation: ${ }^{11}$ $$ \overrightarrow{\boldsymbol{x}}^{\prime \prime}(t)=\frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}^{\prime}(t)}{\mathrm{d} t}=\frac{\mathrm{d}^{2} \overrightarrow{\boldsymbol{x}}}{\mathrm{d} t^{2}}=\left(\begin{array}{c} x_{1}^{\prime \prime}(t) \\ x_{2}^{\prime \prime}(t) \end{array}\right) . $$ 53.1. Example. Compute the second derivative of $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} \cos t \\ \sin t \end{array}\right) \quad \text { and of } \quad \overrightarrow{\boldsymbol{y}}(t)=\left(\begin{array}{c} t-\sin t \\ 1-\cos t \end{array}\right) . $$ Solution: In example 52.1 we already found the first derivatives, so you can use those. 
You find $$ \begin{aligned} & \overrightarrow{\boldsymbol{x}}^{\prime \prime}(t)=\frac{\mathrm{d}}{\mathrm{d} t}\left(\begin{array}{c} -\sin t \\ \cos t \end{array}\right)=\left(\begin{array}{c} -\cos t \\ -\sin t \end{array}\right) \\ & \overrightarrow{\boldsymbol{y}}^{\prime \prime}(t)=\frac{\mathrm{d}}{\mathrm{d} t}\left(\begin{array}{c} 1-\cos t \\ \sin t \end{array}\right)=\left(\begin{array}{c} \sin t \\ \cos t \end{array}\right) . \end{aligned} $$ Note that our standard parametrization $\overrightarrow{\boldsymbol{x}}(t)$ of the circle satisfies $$ \overrightarrow{\boldsymbol{x}}^{\prime \prime}(t)=-\overrightarrow{\boldsymbol{x}}(t) . $$ ${ }^{11}$ Not every function has a derivative, so it may happen that you can find $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ but not $\overrightarrow{\boldsymbol{x}}^{\prime \prime}(t)$ After defining the derivative in first semester calculus one quickly introduces the various rules (sum, product, quotient, chain rules) which make it possible to compute derivatives without ever actually having to use the limit-of-difference-quotient-definition. For vector functions there are similar rules which also turn out to be useful. The Sum Rule holds. It says that if $\overrightarrow{\boldsymbol{x}}(t)$ and $\overrightarrow{\boldsymbol{y}}(t)$ are differentiable ${ }^{12}$ vector functions, then so is $\overrightarrow{\boldsymbol{z}}(t)=\overrightarrow{\boldsymbol{x}}(t) \pm \overrightarrow{\boldsymbol{y}}(t)$, and one has $$ \frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t) \pm \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t}=\frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t)}{\mathrm{d} t} \pm \frac{\mathrm{d} \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t} . $$ The Product Rule also holds, but it is more complicated, because there are several different forms of multiplication when you have vector functions. The following three versions all hold: If $\overrightarrow{\boldsymbol{x}}(t)$ and $\overrightarrow{\boldsymbol{y}}(t)$ are differentiable vector functions and $f(t)$ is an ordinary differentiable function, then $$ \begin{aligned} \frac{\mathrm{d} f(t) \overrightarrow{\boldsymbol{x}}(t)}{\mathrm{d} t} & =f(t) \frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t)}{\mathrm{d} t}+\frac{\mathrm{d} f(t)}{\mathrm{d} t} \overrightarrow{\boldsymbol{x}}(t) \\ \frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t) \cdot \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t} & =\overrightarrow{\boldsymbol{x}}(t) \cdot \frac{\mathrm{d} \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t}+\frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t)}{\mathrm{d} t} \cdot \overrightarrow{\boldsymbol{y}}(t) \\ \frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t) \times \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t} & =\overrightarrow{\boldsymbol{x}}(t) \times \frac{\mathrm{d} \overrightarrow{\boldsymbol{y}}(t)}{\mathrm{d} t}+\frac{\mathrm{d} \overrightarrow{\boldsymbol{x}}(t)}{\mathrm{d} t} \times \overrightarrow{\boldsymbol{y}}(t) \end{aligned} $$ I hope these formulae look plausible because they look like the old fashioned product rule, but even if they do, you still have to prove them before you can accept their validity. I will prove one of these in lecture. You will do some more as an exercise. As an example of how these properties get used, consider this theorem: 53.2. Theorem. Let $\overrightarrow{\boldsymbol{f}}(t)$ be a vector function of constant length (i.e. $\|\overrightarrow{\boldsymbol{f}}(t)\|$ is constant.) 
Then $\overrightarrow{\boldsymbol{f}}^{\prime}(t) \perp \overrightarrow{\boldsymbol{f}}(t)$.

Proof. If $\|\overrightarrow{\boldsymbol{f}}\|$ is constant, then so is $\overrightarrow{\boldsymbol{f}}(t) \cdot \overrightarrow{\boldsymbol{f}}(t)=\|\overrightarrow{\boldsymbol{f}}(t)\|^{2}$. The derivative of a constant function is zero, so $$ 0=\frac{\mathrm{d}}{\mathrm{d} t}\left(\|\overrightarrow{\boldsymbol{f}}(t)\|^{2}\right)=\frac{\mathrm{d}}{\mathrm{d} t}\bigl(\overrightarrow{\boldsymbol{f}}(t) \cdot \overrightarrow{\boldsymbol{f}}(t)\bigr)=2 \overrightarrow{\boldsymbol{f}}(t) \cdot \frac{\mathrm{d} \overrightarrow{\boldsymbol{f}}(t)}{\mathrm{d} t} . $$ So we see that $\overrightarrow{\boldsymbol{f}} \cdot \overrightarrow{\boldsymbol{f}}^{\prime}=0$ which means that $\overrightarrow{\boldsymbol{f}}^{\prime} \perp \overrightarrow{\boldsymbol{f}}$.

## Interpretation of $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ as the velocity vector

Figure 23. The vector velocity of a motion in the plane.

${ }^{12}$ A vector function is differentiable if its derivative actually exists, i.e. if all its components are differentiable.

Let $\overrightarrow{\boldsymbol{x}}(t)$ be some vector function and interpret it as describing the motion of some point in the plane (or space). At time $t$ the point has position vector $\overrightarrow{\boldsymbol{x}}(t)$; a little later, more precisely, $h$ seconds later the point has position vector $\overrightarrow{\boldsymbol{x}}(t+h)$. Its displacement is the difference vector $$ \overrightarrow{\boldsymbol{x}}(t+h)-\overrightarrow{\boldsymbol{x}}(t) . $$ Its average velocity vector between times $t$ and $t+h$ is $$ \frac{\text { displacement vector }}{\text { time lapse }}=\frac{\overrightarrow{\boldsymbol{x}}(t+h)-\overrightarrow{\boldsymbol{x}}(t)}{h} . $$ If the average velocity between times $t$ and $t+h$ converges to one definite vector as $h \rightarrow 0$, then this limit is a reasonable candidate for the velocity vector at time $t$ of the parametric curve $\overrightarrow{\boldsymbol{x}}(t)$.

Being a vector, the velocity vector has both magnitude and direction. The length of the velocity vector is called the speed of the parametric curve. We use the following notation: we always write $$ \overrightarrow{\boldsymbol{v}}(t)=\overrightarrow{\boldsymbol{x}}^{\prime}(t) $$ for the velocity vector, and $$ v(t)=\|\overrightarrow{\boldsymbol{v}}(t)\|=\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| $$ for its length, i.e. the speed. The speed $v$ is always a nonnegative number; the velocity is always a vector.

54.1. Velocity of linear motion. If $\overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{v}}$, as in examples 51.1 and 51.2, then $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l} a_{1}+t v_{1} \\ a_{2}+t v_{2} \end{array}\right) $$ so that $$ \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{l} v_{1} \\ v_{2} \end{array}\right)=\overrightarrow{\boldsymbol{v}} . $$ So when you represent a line by a parametric equation $\overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{a}}+t \overrightarrow{\boldsymbol{v}}$, the vector $\overrightarrow{\boldsymbol{v}}$ is the velocity vector. The length of $\overrightarrow{\boldsymbol{v}}$ is the speed of the motion.
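The limit that defines the velocity vector is easy to explore numerically: for a small $h$ the average velocity vector should already be close to $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$. Here is a minimal sketch (assuming NumPy; the function name `x` is just for this illustration), using the line of example 51.1:

```python
import numpy as np

def x(t):
    # the parametric curve of example 51.1:  x(t) = a + t v
    return np.array([1 + t, 2 + 3 * t])

t, h = 0.7, 1e-6
avg_velocity = (x(t + h) - x(t)) / h      # average velocity over [t, t+h]
print(avg_velocity)                        # [1. 3.], the vector v
print(np.linalg.norm(avg_velocity))        # 3.1622... = sqrt(10), the speed
```

The printed vector is $(1,3)$ and the printed speed is $\sqrt{10}$, in agreement with the hand computation that follows.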
In example 51.1 we had $\overrightarrow{\boldsymbol{v}}=\left(\begin{array}{c}1 \\ 3\end{array}\right)$, so the speed with which the point at $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}1+t \\ 2+3 t\end{array}\right)$ traces out the line is $v=\|\overrightarrow{\boldsymbol{v}}\|=\sqrt{1^{2}+3^{2}}=\sqrt{10}$.

54.2. Motion on a circle. Consider the parametrization $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l} R \cos \omega t \\ R \sin \omega t \end{array}\right) . $$ The point $X$ at $\overrightarrow{\boldsymbol{x}}(t)$ is on the circle centered at the origin with radius $R$. The segment from the origin to $X$ makes an angle $\omega t$ with the $x$-axis; this angle clearly increases at a constant rate of $\omega$ radians per second. The velocity vector of this motion is $$ \overrightarrow{\boldsymbol{v}}(t)=\overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c} -\omega R \sin \omega t \\ \omega R \cos \omega t \end{array}\right)=\omega R\left(\begin{array}{c} -\sin \omega t \\ \cos \omega t \end{array}\right) . $$ This vector is not constant. However, if you calculate the speed of the point $X$, you find $$ v=\|\overrightarrow{\boldsymbol{v}}(t)\|=\omega R\left\|\left(\begin{array}{c} -\sin \omega t \\ \cos \omega t \end{array}\right)\right\|=\omega R . $$ So while the direction of the velocity vector $\overrightarrow{\boldsymbol{v}}(t)$ is changing all the time, its magnitude is constant. In this parametrization the point $X$ moves along the circle with constant speed $v=\omega R$.

54.3. Velocity of the cycloid. Think of the dot $X$ on the wheel in the cycloid example 51.6. We know its position vector and velocity at time $t$: $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} t-\sin t \\ 1-\cos t \end{array}\right), \quad \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c} 1-\cos t \\ \sin t \end{array}\right) . $$ The speed with which $X$ traces out the cycloid is $$ \begin{aligned} v & =\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| \\ & =\sqrt{(1-\cos t)^{2}+(\sin t)^{2}} \\ & =\sqrt{1-2 \cos t+\cos ^{2} t+\sin ^{2} t} \\ & =\sqrt{2(1-\cos t)} . \end{aligned} $$ You can use the double angle formula $\cos 2 \alpha=1-2 \sin ^{2} \alpha$ with $\alpha=\frac{t}{2}$ to simplify this to $$ v=\sqrt{4 \sin ^{2} \frac{t}{2}}=2\left|\sin \frac{t}{2}\right| . $$ The speed of the point $X$ on the cycloid is therefore always between 0 and 2. At times $t=0$ and other multiples of $2 \pi$ we have $\overrightarrow{\boldsymbol{x}}^{\prime}(t)=\overrightarrow{\mathbf{0}}$. At these times the point $X$ has come to a stop. At times $t=\pi+2 k \pi$ one has $v=2$ and $\overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{l}2 \\ 0\end{array}\right)$, i.e. the point $X$ is moving horizontally to the right with speed 2.

## Acceleration and Force

Just as the derivative $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ of a parametric curve can be interpreted as the velocity vector $\overrightarrow{\boldsymbol{v}}(t)$, the derivative of the velocity vector measures the rate of change with time of the velocity and is called the acceleration of the motion. The usual notation is $$ \overrightarrow{\boldsymbol{a}}(t)=\overrightarrow{\boldsymbol{v}}^{\prime}(t)=\frac{\mathrm{d} \overrightarrow{\boldsymbol{v}}(t)}{\mathrm{d} t}=\frac{\mathrm{d}^{2} \overrightarrow{\boldsymbol{x}}}{\mathrm{d} t^{2}}=\overrightarrow{\boldsymbol{x}}^{\prime \prime}(t) . $$
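Velocity and acceleration can also be approximated with difference quotients, which gives a quick check of the formulas above. Here is a small sketch (assuming NumPy; the first and second central difference quotients stand in for $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ and $\overrightarrow{\boldsymbol{x}}^{\prime \prime}(t)$, and this aside is not part of the text), applied to the cycloid of 54.3:

```python
import numpy as np

def cycloid(t):
    return np.array([t - np.sin(t), 1 - np.cos(t)])

t, h = 2.0, 1e-4
v = (cycloid(t + h) - cycloid(t - h)) / (2 * h)                 # ~ x'(t)
a = (cycloid(t + h) - 2 * cycloid(t) + cycloid(t - h)) / h**2   # ~ x''(t)
print(np.linalg.norm(v), 2 * abs(np.sin(t / 2)))   # both ~ 1.6829
print(a, np.array([np.sin(t), np.cos(t)]))         # both ~ [ 0.909 -0.416]
```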
Sir Isaac Newton's law relating force and acceleration via the formula " $F=m a$ " has a vector version. If an object's motion is given by a parametrized curve $\overrightarrow{\boldsymbol{x}}(t)$ then this motion is the result of a force $\overrightarrow{\boldsymbol{F}}$ being exerted on the object. The force $\overrightarrow{\boldsymbol{F}}$ is given by $$ \overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}=m \frac{\mathrm{d}^{2} \overrightarrow{\boldsymbol{x}}}{\mathrm{d} t^{2}} $$ where $m$ is the mass of the object. Somehow it is always assumed that the mass $m$ is a positive number.

55.1. How does an object move if no forces act on it? If $\overrightarrow{\boldsymbol{F}}(t)=\overrightarrow{\boldsymbol{0}}$ at all times, then, assuming $m \neq 0$, it follows from $\overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}$ that $\overrightarrow{\boldsymbol{a}}(t)=\overrightarrow{\mathbf{0}}$. Since $\overrightarrow{\boldsymbol{a}}(t)=\overrightarrow{\boldsymbol{v}}^{\prime}(t)$ you conclude that the velocity vector $\overrightarrow{\boldsymbol{v}}(t)$ must be constant, i.e. that there is some fixed vector $\overrightarrow{\boldsymbol{v}}$ such that $$ \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\overrightarrow{\boldsymbol{v}}(t)=\overrightarrow{\boldsymbol{v}} \text { for all } t \text {. } $$ This implies that $$ \overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{x}}(0)+t \overrightarrow{\boldsymbol{v}} . $$ So if no force acts on an object, then it will move with constant velocity vector along a straight line (said Newton; Archimedes, long before him, thought that the object would slow down and come to a complete stop unless there were a force to keep it going.)

55.2. Compute the forces acting on a point on a circle. Consider an object moving with constant angular velocity $\omega$ on a circle of radius $R$, i.e. consider $\overrightarrow{\boldsymbol{x}}(t)$ as in example 54.2, $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l} R \cos \omega t \\ R \sin \omega t \end{array}\right)=R\left(\begin{array}{c} \cos \omega t \\ \sin \omega t \end{array}\right) . $$ Then its velocity and acceleration vectors are $$ \overrightarrow{\boldsymbol{v}}(t)=\omega R\left(\begin{array}{c} -\sin \omega t \\ \cos \omega t \end{array}\right) $$ and $$ \begin{aligned} \overrightarrow{\boldsymbol{a}}(t) & =\overrightarrow{\boldsymbol{v}}^{\prime}(t)=\omega^{2} R\left(\begin{array}{l} -\cos \omega t \\ -\sin \omega t \end{array}\right) \\ & =-\omega^{2} R\left(\begin{array}{c} \cos \omega t \\ \sin \omega t \end{array}\right) . \end{aligned} $$ Since both $\left(\begin{array}{c}\cos \theta \\ \sin \theta\end{array}\right)$ and $\left(\begin{array}{c}-\sin \theta \\ \cos \theta\end{array}\right)$ are unit vectors, we see that the velocity vector changes its direction but not its size: at all times you have $v=\|\overrightarrow{\boldsymbol{v}}\|=\omega R$. The acceleration also keeps changing its direction, but its magnitude is always $$ a=\|\overrightarrow{\boldsymbol{a}}\|=\omega^{2} R=\left(\frac{v}{R}\right)^{2} R=\frac{v^{2}}{R} . $$ The force which must be acting on the object to make it go through this motion is $$ \overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}=-m \omega^{2} R\left(\begin{array}{c} \cos \omega t \\ \sin \omega t \end{array}\right) .
$$ To conclude this example note that you can write this force as $$ \overrightarrow{\boldsymbol{F}}=-m \omega^{2} \overrightarrow{\boldsymbol{x}}(t) $$ which tells you which way the force is directed: towards the center of the circle. 55.3. How does it feel, to be on the Ferris wheel? In other words, which force acts on us if we get carried away by a "liberated ferris wheel," as in example 51.6? Well, you get pushed around by a force $\overrightarrow{\boldsymbol{F}}$, which according to Newton is given by $\overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}$, where $m$ is your mass and $\overrightarrow{\boldsymbol{a}}$ is your acceleration, which we now compute: $$ \begin{aligned} \overrightarrow{\boldsymbol{a}}(t) & =\overrightarrow{\boldsymbol{v}}^{\prime}(t) \\ & =\frac{\mathrm{d}}{\mathrm{d} t}\left(\begin{array}{c} 1-\cos t \\ \sin t \end{array}\right) \\ & =\left(\begin{array}{c} \sin t \\ \cos t \end{array}\right) . \end{aligned} $$ This is a unit vector: the force that's pushing you around is constantly changing its direction but its strength stays the same. If you remember that $t$ is the angle $\angle A C X$ you see that the force $\overrightarrow{\boldsymbol{F}}$ is always pointed at the center of the wheel: its direction is given by the vector $\overrightarrow{X C}$. ## Tangents and the unit tangent vector Here we address the problem of finding the tangent line at a point on a parametric curve. Let $\overrightarrow{\boldsymbol{x}}(t)$ be a parametric curve, and let's try to find the tangent line at a particular point $X_{0}$, with position vector $\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)$ on this curve. We follow the same strategy as in 1st semester calculus: pick a point $X_{h}$ on the curve near $X_{0}$, draw the line through $X_{0}$ and $X_{h}$ and let $X_{h} \rightarrow X_{0}$ The line through two points on a curve is often called a secant to the curve. So we are going to construct a tangent to the curve as a limit of secants. The point $X_{0}$ has position vector $\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)$, the point $X_{h}$ is at $\overrightarrow{\boldsymbol{x}}\left(t_{0}+h\right)$. Consider the line $\ell_{h}$ parametrized by $$ \overrightarrow{\boldsymbol{y}}(s ; h)=\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)+s \frac{\overrightarrow{\boldsymbol{x}}\left(t_{0}+h\right)-\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)}{h}, $$ in which $s$ is the parameter we use to parametrize the line. The line $\ell_{h}$ contains both $X_{0}$ (set $s=0$ ) and $X_{h}$ (set $s=h$ ), so it is the line through $X_{0}$ and $X_{h}$, i.e. a secant to the curve. 
Now we let $h \rightarrow 0$, which gives $$ \overrightarrow{\boldsymbol{y}}(s) \stackrel{\text { def }}{=} \lim _{h \rightarrow 0} \overrightarrow{\boldsymbol{y}}(s ; h)=\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)+s \lim _{h \rightarrow 0} \frac{\overrightarrow{\boldsymbol{x}}\left(t_{0}+h\right)-\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)}{h}=\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)+s \overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right), $$ In other words, the tangent line to the curve $\overrightarrow{\boldsymbol{x}}(t)$ at the point with position vector $\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)$ has parametric equation $$ \overrightarrow{\boldsymbol{y}}(s)=\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)+s \overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right), $$ and the vector $\overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right)=\overrightarrow{\boldsymbol{v}}\left(t_{0}\right)$ is parallel to the tangent line $\ell$. Because of this one calls the vector $\overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right)$ a tangent vector to the curve. Any multiple $\lambda \overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right)$ with $\lambda \neq 0$ is still parallel to the tangent line $\ell$ and is therefore also called a tangent vector. A tangent vector of length 1 is called a unit tangent vector. If $\overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{0}\right) \neq 0$ then there are exactly two unit tangent vectors. They are $$ \overrightarrow{\boldsymbol{T}}\left(t_{0}\right)= \pm \frac{\overrightarrow{\boldsymbol{v}}\left(t_{0}\right)}{\left\|\overrightarrow{\boldsymbol{v}}\left(t_{0}\right)\right\|}= \pm \frac{\overrightarrow{\boldsymbol{v}}\left(t_{0}\right)}{v\left(t_{0}\right)} $$ 56.1. Example. Find Tangent line, and unit tangent vector at $\overrightarrow{\boldsymbol{x}}(1)$, where $\overrightarrow{\boldsymbol{x}}(t)$ is the parametric curve given by $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} t \\ t^{2} \end{array}\right), \quad \text { so that } \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c} 1 \\ 2 t \end{array}\right) . $$ parabola with tangent line parabola with unit tangent vectors Solution: For $t=1$ we have $\overrightarrow{\boldsymbol{x}}^{\prime}(1)=\left(\begin{array}{l}1 \\ 2\end{array}\right)$, so the tangent line has parametric equation $$ \overrightarrow{\boldsymbol{y}}(s)=\overrightarrow{\boldsymbol{x}}(1)+s \overrightarrow{\boldsymbol{x}}^{\prime}(1)=\left(\begin{array}{l} 1 \\ 1 \end{array}\right)+s\left(\begin{array}{l} 1 \\ 2 \end{array}\right)=\left(\begin{array}{c} 1+s \\ 1+2 s \end{array}\right) . $$ In components one could write this as $y_{1}(s)=1+s, y_{2}(s)=1+2 s$. After eliminating $s$ you find that on the tangent line one has $$ y_{2}=1+2 s=1+2\left(y_{1}-1\right)=2 y_{1}-1 . $$ The vector $\overrightarrow{\boldsymbol{x}}^{\prime}(1)=\left(\begin{array}{l}1 \\ 2\end{array}\right)$ is a tangent vector to the parabola at $\overrightarrow{\boldsymbol{x}}(1)$. To get a unit tangent vector we normalize this vector to have length one, i.e. we divide i by its length. Thus $$ \overrightarrow{\boldsymbol{T}}(1)=\frac{1}{\sqrt{1^{2}+2^{2}}}\left(\begin{array}{l} 1 \\ 2 \end{array}\right)=\left(\begin{array}{l} \frac{1}{5} \sqrt{ } 5 \\ \frac{2}{5} \sqrt{ } 5 \end{array}\right) $$ is a unit tangent vector. 
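The same recipe works for any parametric curve, so it is natural to wrap it in a small helper. The sketch below is only an illustration, not part of the text: `unit_tangent` is a made-up name, and the exact derivative is replaced by a central difference quotient. It reproduces the unit tangent vector just found.

```python
import numpy as np

def unit_tangent(curve, t0, h=1e-6):
    """Numerically estimate a unit tangent vector to `curve` at parameter t0.

    (Illustrative helper; a central difference quotient stands in for x'(t0).)
    """
    v = (curve(t0 + h) - curve(t0 - h)) / (2 * h)
    return v / np.linalg.norm(v)

parabola = lambda t: np.array([t, t**2])
print(unit_tangent(parabola, 1.0))   # [0.4472 0.8944], i.e. (1, 2)/sqrt(5)
```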
There is another unit tangent vector, namely $$ -\overrightarrow{\boldsymbol{T}}(1)=-\left(\begin{array}{c} \frac{1}{5} \sqrt{5} \\ \frac{2}{5} \sqrt{5} \end{array}\right) . $$

56.2. Tangent line and unit tangent vector to Circle. In example 51.5 and 52.1 we had parametrized the circle and found the velocity vector of this parametrization, $$ \overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c} \cos \theta \\ \sin \theta \end{array}\right), \quad \overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=\left(\begin{array}{c} -\sin \theta \\ \cos \theta \end{array}\right) . $$ If we pick a particular value $\theta_{0}$ then the tangent line to the circle at $\overrightarrow{\boldsymbol{x}}\left(\theta_{0}\right)$ has parametric equation (we write $\theta$ instead of $\theta_{0}$ to keep the formulas short) $$ \overrightarrow{\boldsymbol{y}}(s)=\overrightarrow{\boldsymbol{x}}(\theta)+s \overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=\left(\begin{array}{c} \cos \theta-s \sin \theta \\ \sin \theta+s \cos \theta \end{array}\right) . $$ This equation completely describes the tangent line, but you can try to write it in a more familiar form as a graph $$ y_{2}=m y_{1}+n . $$ To do this you have to eliminate the parameter $s$ from the parametric equations $$ y_{1}=\cos \theta-s \sin \theta, \quad y_{2}=\sin \theta+s \cos \theta . $$ When $\sin \theta \neq 0$ you can solve $y_{1}=\cos \theta-s \sin \theta$ for $s$, with result $$ s=\frac{\cos \theta-y_{1}}{\sin \theta} . $$ So on the tangent line you have $$ y_{2}=\sin \theta+s \cos \theta=\sin \theta+\cos \theta \frac{\cos \theta-y_{1}}{\sin \theta}, $$ which after a little algebra (add fractions and use $\sin ^{2} \theta+\cos ^{2} \theta=1$ ) turns out to be the same as $$ y_{2}=\frac{1}{\sin \theta}-\frac{\cos \theta}{\sin \theta} y_{1} . $$ The tangent line therefore hits the vertical axis when $y_{1}=0$, at height $n=1 / \sin \theta$, and it has slope $m=-\cot \theta$. For this example you could have found the tangent line without using any calculus by studying the drawing above carefully.

Finally, let's find a unit tangent vector. A unit tangent is a multiple of $\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)$ whose length is one. But the vector $\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=\left(\begin{array}{c}-\sin \theta \\ \cos \theta\end{array}\right)$ already has length one, so the two possible unit vectors are $$ \overrightarrow{\boldsymbol{T}}(\theta)=\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=\left(\begin{array}{c} -\sin \theta \\ \cos \theta \end{array}\right) \quad \text { and }-\overrightarrow{\boldsymbol{T}}(\theta)=\left(\begin{array}{c} \sin \theta \\ -\cos \theta \end{array}\right) . $$

## Sketching a parametric curve

For a given parametric curve, like $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} 1-t^{2} \\ 3 t-t^{3} \end{array}\right) \tag{75} $$ you might want to know what the curve looks like. The most straightforward way of getting a picture is to compute $x_{1}(t)$ and $x_{2}(t)$ for as many values of $t$ as you feel like, and then plotting the computed points. This computation is the kind of repetitive task that computers are very good at, and there are many software packages and graphing calculators that will attempt to do the computation and drawing for you.
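For instance, here is a minimal sketch of how one might plot the curve (75) with Python's matplotlib library (assuming it and NumPy are installed; this is an aside, not part of the text):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2.0, 2.0, 400)
x1 = 1 - t**2            # first component of x(t)
x2 = 3 * t - t**3        # second component of x(t)
plt.plot(x1, x2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title(r"the curve $\vec{x}(t) = (1 - t^2,\ 3t - t^3)$")
plt.show()
```

Sampling $t$ between $-2$ and $2$ is enough to show the whole curve near the origin, including the self-intersection at $(-2,0)$, which the curve visits at $t=\pm \sqrt{3}$.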
If the vector function has a constant whose value is not (completely) known, e.g. if we wanted to graph the parametric curve $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} 1-t^{2} \\ 3 a t-t^{3} \end{array}\right) \quad(a \text { is a constant }) $$ then plugging parameter values and plotting the points becomes harder, since the unknown constant $a$ shows up in the computed points. On a graphing calculator you would have to choose different values of $a$ and see what kind of pictures you get (you would expect different pictures for different values of $a$).

In this section we will use the information stored in the derivative $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ to create a rough sketch of the graph by hand. Let's do the specific curve (75) first. The derivative (or velocity vector) is $$ \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c} -2 t \\ 3-3 t^{2} \end{array}\right), \quad \text { so }\left\{\begin{array}{l} x_{1}^{\prime}(t)=-2 t \\ x_{2}^{\prime}(t)=3\left(1-t^{2}\right) \end{array}\right. $$ We see that $x_{1}^{\prime}(t)$ changes its sign at $t=0$, while $x_{2}^{\prime}(t)=3(1-t)(1+t)$ changes its sign twice, at $t=-1$ and then at $t=+1$. You can summarize this in a drawing: The arrows indicate the wind direction of the velocity vector $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ for the various values of $t$.

For instance, when $t<-1$ you have $x_{1}^{\prime}(t)>0$ and $x_{2}^{\prime}(t)<0$, so that the vector $\overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c}x_{1}^{\prime}(t) \\ x_{2}^{\prime}(t)\end{array}\right)=\left(\begin{array}{c}+ \\ -\end{array}\right)$ points in the direction "South-East." You see that there are three special $t$ values at which $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ is either purely horizontal or vertical. Let's compute $\overrightarrow{\boldsymbol{x}}(t)$ at those values: $$ \begin{aligned} t=-1 \quad & \overrightarrow{\boldsymbol{x}}(-1)=\left(\begin{array}{c} 0 \\ -2 \end{array}\right) \quad \overrightarrow{\boldsymbol{x}}^{\prime}(-1)=\left(\begin{array}{l} 2 \\ 0 \end{array}\right) \\ t=0 \quad & \overrightarrow{\boldsymbol{x}}(0)=\left(\begin{array}{l} 1 \\ 0 \end{array}\right) \quad \overrightarrow{\boldsymbol{x}}^{\prime}(0)=\left(\begin{array}{l} 0 \\ 3 \end{array}\right) \\ t=+1 \quad & \overrightarrow{\boldsymbol{x}}(1)=\left(\begin{array}{l} 0 \\ 2 \end{array}\right) \quad \overrightarrow{\boldsymbol{x}}^{\prime}(1)=\left(\begin{array}{c} -2 \\ 0 \end{array}\right) \end{aligned} $$ This leads you to the following sketch: If you use a plotting program like GNUPLOT you get this picture.

## Length of a curve

If you have a parametric curve $\overrightarrow{\boldsymbol{x}}(t), a \leq t \leq b$, then there is a formula for the length of the curve it traces out. We'll go through a brief derivation of this formula before stating it. To compute the length of the curve $\{\overrightarrow{\boldsymbol{x}}(t)$ : $a \leq t \leq b\}$ we divide it into lots of short pieces. If the pieces are short enough they will be almost straight line segments, and we know how to compute the length of a line segment. After computing the lengths of all the short line segments, you add them to get an approximation to the length of the curve. As you divide the curve into finer and finer pieces this approximation should get better and better. You can smell an integral in this description of what's coming. Here are some more details: Divide the parameter interval into $N$ pieces, $$ a=t_{0}<t_{1}<t_{2}<\cdots<t_{N-1}<t_{N}=b .
Then we approximate the curve by the polygon with vertices at $\overrightarrow{\boldsymbol{x}}\left(t_{0}\right)=\overrightarrow{\boldsymbol{x}}(a), \overrightarrow{\boldsymbol{x}}\left(t_{1}\right), \ldots$, $\overrightarrow{\boldsymbol{x}}\left(t_{N}\right)$. The distance between two consecutive points $\overrightarrow{\boldsymbol{x}}\left(t_{i-1}\right)$ and $\overrightarrow{\boldsymbol{x}}\left(t_{i}\right)$ on this polygon is
$$
\left\|\overrightarrow{\boldsymbol{x}}\left(t_{i}\right)-\overrightarrow{\boldsymbol{x}}\left(t_{i-1}\right)\right\| .
$$
Since we are going to take $t_{i}-t_{i-1}$ "very small," we can use the derivative to approximate the distance by
$$
\overrightarrow{\boldsymbol{x}}\left(t_{i}\right)-\overrightarrow{\boldsymbol{x}}\left(t_{i-1}\right)=\frac{\overrightarrow{\boldsymbol{x}}\left(t_{i}\right)-\overrightarrow{\boldsymbol{x}}\left(t_{i-1}\right)}{t_{i}-t_{i-1}}\left(t_{i}-t_{i-1}\right) \approx \overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{i}\right)\left(t_{i}-t_{i-1}\right),
$$
so that
$$
\left\|\overrightarrow{\boldsymbol{x}}\left(t_{i}\right)-\overrightarrow{\boldsymbol{x}}\left(t_{i-1}\right)\right\| \approx\left\|\overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{i}\right)\right\|\left(t_{i}-t_{i-1}\right) .
$$
Now add all these distances and you get
$$
\text { Length polygon } \approx \sum_{i=1}^{N}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}\left(t_{i}\right)\right\|\left(t_{i}-t_{i-1}\right) \approx \int_{t=a}^{b}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| \mathrm{d} t .
$$
This is our formula for the length of a curve. Just in case you think this was a proof, it isn't! First, we have used the symbol $\approx$ which stands for "approximately equal," and we said "very small" in quotation marks, so there are several places where the preceding discussion is vague. But most of all, we can't prove that this integral is the length of the curve, since we don't have a definition of "the length of a curve." This is an opportunity, since it leaves us free to adopt the formula we found as our formal definition of the length of a curve. Here goes:
58.1. Definition. If $\{\overrightarrow{\boldsymbol{x}}(t): a \leq t \leq b\}$ is a parametric curve, then its length is given by
$$
\text { Length }=\int_{a}^{b}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| \mathrm{d} t
$$
provided the derivative $\overrightarrow{\boldsymbol{x}}^{\prime}(t)$ exists, and provided $\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\|$ is a Riemann-integrable function. In this course we will not worry too much about the two caveats about differentiability and integrability at the end of the definition.
58.2. Length of a line segment. How long is the line segment $A B$ connecting two points $A\left(a_{1}, a_{2}\right)$ and $B\left(b_{1}, b_{2}\right)$ ? Solution: Parametrize the segment by
$$
\overrightarrow{\boldsymbol{x}}(t)=\overrightarrow{\boldsymbol{a}}+t(\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}), \quad(0 \leq t \leq 1) .
$$
Then
$$
\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\|=\|\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}\|,
$$
and thus
$$
\operatorname{Length}(A B)=\int_{0}^{1}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| \mathrm{d} t=\int_{0}^{1}\|\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}\| \mathrm{d} t=\|\overrightarrow{\boldsymbol{b}}-\overrightarrow{\boldsymbol{a}}\| .
$$
In other words, the length of the line segment $A B$ is the distance between the two points $A$ and $B$. It looks like we already knew this, but no, we didn't: what this example shows is that the length of the line segment $A B$ as defined in definition 58.1 is the distance between the points $A$ and $B$. So definition 58.1 gives the right answer in this example. If we had found anything else in this example we would have had to change the definition.
58.3. Perimeter of a circle of radius $R$. What is the length of the circle of radius $R$ centered at the origin? This is another example where we know the answer in advance. The following computation should give us $2 \pi R$ or else there's something wrong with definition 58.1. We parametrize the circle as follows:
$$
\overrightarrow{\boldsymbol{x}}(\theta)=R \cos \theta \overrightarrow{\boldsymbol{i}}+R \sin \theta \overrightarrow{\boldsymbol{j}}, \quad(0 \leq \theta \leq 2 \pi) .
$$
Then
$$
\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=-R \sin \theta \overrightarrow{\boldsymbol{i}}+R \cos \theta \overrightarrow{\boldsymbol{j}}, \quad \text { and }\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)\right\|=\sqrt{R^{2} \sin ^{2} \theta+R^{2} \cos ^{2} \theta}=R .
$$
The length of this circle is therefore
$$
\text { Length of circle }=\int_{0}^{2 \pi} R \mathrm{~d} \theta=2 \pi R \text {. }
$$
Fortunately we don't have to fix the definition! And now the bad news: The integral in the definition of the length looks innocent enough and hasn't caused us any problems in the two examples we have done so far. It is however a reliable source of very difficult integrals. To see why, you must write the integral in terms of the components $x_{1}(t), x_{2}(t)$ of $\overrightarrow{\boldsymbol{x}}(t)$. Since
$$
\overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{l} x_{1}^{\prime}(t) \\ x_{2}^{\prime}(t) \end{array}\right) \text { and thus }\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\|=\sqrt{x_{1}^{\prime}(t)^{2}+x_{2}^{\prime}(t)^{2}}
$$
the length of the curve parametrized by $\{\overrightarrow{\boldsymbol{x}}(t): a \leq t \leq b\}$ is
$$
\text { Length }=\int_{a}^{b} \sqrt{x_{1}^{\prime}(t)^{2}+x_{2}^{\prime}(t)^{2}} \mathrm{~d} t .
$$
For most choices of $x_{1}(t), x_{2}(t)$ the sum of squares under the square root cannot be simplified, and, at best, leads to a difficult integral, but more often to an impossible integral. But, chin up, sometimes, as if by a miracle, the two squares add up to an expression whose square root can be simplified, and the integral is actually not too bad. Here is an example:
58.4. Length of the Cycloid. After getting in at the bottom of a liberated ferris wheel we are propelled through the air along the cycloid whose parametrization is given in example 51.6,
$$
\overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{l} \theta-\sin \theta \\ 1-\cos \theta \end{array}\right) .
$$
How long is one arc of the Cycloid? Solution: Compute $\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)$ and you find
$$
\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=\left(\begin{array}{c} 1-\cos \theta \\ \sin \theta \end{array}\right)
$$
so that
$$
\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)\right\|=\sqrt{(1-\cos \theta)^{2}+(\sin \theta)^{2}}=\sqrt{2-2 \cos \theta} .
$$ This doesn't look promising (this is the function we must integrate!), but just as in example 54.3 we can put the double angle formula $\cos \theta=1-2 \sin ^{2} \frac{\theta}{2}$ to our advantage: $$ \left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)\right\|=\sqrt{2-2 \cos \theta}=\sqrt{4 \sin ^{2} \frac{\theta}{2}}=2\left|\sin \frac{\theta}{2}\right| . $$ We are concerned with only one arc of the Cycloid, so we have $0 \leq \theta<2 \pi$, which implies $0 \leq \frac{\theta}{2} \leq \pi$, which in turn tells us that $\sin \frac{\theta}{2}>0$ for all $\theta$ we are considering. Therefore the length of one arc of the Cycloid is $$ \begin{aligned} \text { Length } & =\int_{0}^{2 \pi}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)\right\| \mathrm{d} \theta \\ & =\int_{0}^{2 \pi} 2\left|\sin \frac{\theta}{2}\right| \mathrm{d} \theta \\ & =2 \int_{0}^{2 \pi} \sin \frac{\theta}{2} \mathrm{~d} \theta \\ & =\left[-4 \cos \frac{\theta}{2}\right]_{0}^{2 \pi} \\ & =8 \end{aligned} $$ To visualize this answer: the height of the cycloid is 2 (twice the radius of the circle), so the length of one arc of the Cycloid is four times its height (Look at the drawing on page 126.) ## The arclength function If you have a parametric curve $\overrightarrow{\boldsymbol{x}}(t)$ and you pick a particular point on this curve, say, the point corresponding to parameter value $t_{0}$, then one defines the arclength function (starting at $t_{0}$ ) to be $$ s(t)=\int_{t_{0}}^{t}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\tau)\right\| \mathrm{d} \tau $$ Thus $s(t)$ is the length of the curve segment $\left\{\overrightarrow{\boldsymbol{x}}(\tau): t_{0} \leq \tau \leq t\right\}$. ( $\tau$ is a dummy variable.) If you interpret the parametric curve $\overrightarrow{\boldsymbol{x}}(t)$ as a description of the motion of some object, then the length $s(t)$ of the curve $\left\{\overrightarrow{\boldsymbol{x}}(\tau): t_{0} \leq \tau \leq t\right\}$ is the distance traveled by the object since time $t_{0}$. If you differentiate the distance traveled with respect to time you should get the speed, and indeed, by the Fundamental Theorem of Calculus one has $$ s^{\prime}(t)=\frac{\mathrm{d}}{\mathrm{d} t} \int_{t_{0}}^{t}\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\tau)\right\| \mathrm{d} \tau=\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\| $$ which we had called the speed $v(t)$ in $\S 54$. ## Graphs in Cartesian and in Polar Coordinates Cartesian graphs. Most of first-semester-calculus deals with a particular kind of curve, namely, the graph of a function, " $y=f(x)$ ". You can regard such a curve as a special kind of parametric curve, where the parametrization is $$ \overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c} t \\ f(t) \end{array}\right) $$ and we switch notation from " $(x, y)$ " to " $\left(x_{1}, x_{2}\right) . "$ For this special case the velocity vector is always given by $$ \overrightarrow{\boldsymbol{x}}^{\prime}(t)=\left(\begin{array}{c} 1 \\ f^{\prime}(t) \end{array}\right) $$ the speed is $$ v(t)=\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(t)\right\|=\sqrt{1+f^{\prime}(t)^{2}}, $$ and the length of the segment between $t=a$ and $t=b$ is $$ \text { Length }=\int_{a}^{b} \sqrt{1+f^{\prime}(t)^{2}} \mathrm{~d} t . $$ Polar graphs. Instead of choosing Cartesian coordinates $\left(x_{1}, x_{2}\right)$ one can consider so-called Polar Coordinates in the plane. 
We have seen these before in the section on complex numbers: to specify the location of a point in the plane you can give its $x_{1}, x_{2}$ coordinates, but you could also give the absolute value and argument of the complex number $x_{1}+i x_{2}$ (see §24.) Or, to say it without mentioning complex numbers, you can say where a point $P$ in the plane is by saying (1) how far it is from the origin, and (2) how large the angle between the line segment $O P$ and a fixed half line (usually the positive $x$-axis) is. The Cartesian coordinates of a point with polar coordinates $(r, \theta)$ are
$$
x_{1}=r \cos \theta, \quad x_{2}=r \sin \theta,
$$
or, in our older notation,
$$
x=r \cos \theta, \quad y=r \sin \theta .
$$
These are the same formulas as in $\S 24$, where we had " $r=|z|$ and $\theta=\arg z$." Often a curve is given as a graph in polar coordinates, i.e. for each angle $\theta$ there is one point $X$ on the curve, and its distance $r$ to the origin is some function $f(\theta)$ of the angle. In other words, the curve consists of all points whose polar coordinates satisfy the equation $r=f(\theta)$. You can parametrize such a curve by
$$
\overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c} f(\theta) \cos \theta \\ f(\theta) \sin \theta \end{array}\right), \quad \text { or } \quad \overrightarrow{\boldsymbol{x}}(\theta)=f(\theta) \cos \theta \overrightarrow{\boldsymbol{i}}+f(\theta) \sin \theta \overrightarrow{\boldsymbol{j}} .
$$
You can apply the formulas for velocity, speed and arclength to this parametrization, but instead of doing the straightforward calculation, let's introduce some more notation. For any angle $\theta$ we define the vector
$$
\overrightarrow{\boldsymbol{u}}(\theta)=\left(\begin{array}{c} \cos \theta \\ \sin \theta \end{array}\right)=\cos \theta \overrightarrow{\boldsymbol{i}}+\sin \theta \overrightarrow{\boldsymbol{j}} .
$$
The derivative of $\overrightarrow{\boldsymbol{u}}$ is
$$
\overrightarrow{\boldsymbol{u}}^{\prime}(\theta)=\left(\begin{array}{c} -\sin \theta \\ \cos \theta \end{array}\right)=-\sin \theta \overrightarrow{\boldsymbol{i}}+\cos \theta \overrightarrow{\boldsymbol{j}} .
$$
The vectors $\overrightarrow{\boldsymbol{u}}(\theta)$ and $\overrightarrow{\boldsymbol{u}}^{\prime}(\theta)$ are perpendicular unit vectors. Then we have
$$
\overrightarrow{\boldsymbol{x}}(\theta)=f(\theta) \overrightarrow{\boldsymbol{u}}(\theta),
$$
so by the product rule one has
$$
\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)=f^{\prime}(\theta) \overrightarrow{\boldsymbol{u}}(\theta)+f(\theta) \overrightarrow{\boldsymbol{u}}^{\prime}(\theta) .
$$
Since $\overrightarrow{\boldsymbol{u}}(\theta)$ and $\overrightarrow{\boldsymbol{u}}^{\prime}(\theta)$ are perpendicular unit vectors this implies
$$
v(\theta)=\left\|\overrightarrow{\boldsymbol{x}}^{\prime}(\theta)\right\|=\sqrt{f^{\prime}(\theta)^{2}+f(\theta)^{2}} .
$$
The length of the piece of the curve between polar angles $\alpha$ and $\beta$ is therefore
$$
\text { Length }=\int_{\alpha}^{\beta} \sqrt{f^{\prime}(\theta)^{2}+f(\theta)^{2}} \mathrm{~d} \theta .
$$
You can also read off that the angle $\psi$ between the radius $O X$ and the tangent to the curve satisfies
$$
\tan \psi=\frac{f(\theta)}{f^{\prime}(\theta)} .
$$
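Neither of these length formulas has to be evaluated by hand just to be checked. A short computation in, say, Python (a minimal sketch, assuming numpy and scipy are available; the polar test curve $r=2+\cos\theta$ is an arbitrary choice and not one of the problems below) approximates both kinds of length integral numerically. For the cycloid the printed value should be close to 8, in agreement with example 58.4.

```python
import numpy as np
from scipy.integrate import quad

# (a) Length of one arc of the cycloid (example 58.4): integrate the speed
#     ||x'(theta)|| = sqrt((1 - cos theta)^2 + sin^2 theta) over 0 <= theta <= 2 pi.
cycloid_speed = lambda th: np.hypot(1 - np.cos(th), np.sin(th))
length_cycloid, _ = quad(cycloid_speed, 0, 2*np.pi)
print(length_cycloid)            # approximately 8

# (b) Length of the polar graph r = f(theta) = 2 + cos(theta), an arbitrary
#     test curve: integrate sqrt(f'(theta)^2 + f(theta)^2).
f  = lambda th: 2 + np.cos(th)
fp = lambda th: -np.sin(th)
length_polar, _ = quad(lambda th: np.hypot(fp(th), f(th)), 0, 2*np.pi)
print(length_polar)
```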
## PROBLEMS
## SKETCHING PARAMETRIZED CURVES
Sketch the curves which are traced out by the following parametrizations. Describe the motion (is the curve you draw traced out once or several times? In which direction?) In all cases the parameter is allowed to take all values from $-\infty$ to $\infty$. If a curve happens to be the graph of some function $x_{2}=f\left(x_{1}\right)$ (or $y=f(x)$ if you prefer), then find the function $f(\cdots)$. Is there a geometric interpretation of the parameter as an angle, or a distance, etc?
463. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l}1-t \\ 2-t\end{array}\right)$
464. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l}3 t+2 \\ 3 t+2\end{array}\right)$
465. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l}e^{t} \\ e^{t}\end{array}\right)$
466. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}e^{t} \\ t\end{array}\right)$
467. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}e^{t} \\ e^{-t}\end{array}\right)$
468. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t \\ t^{2}\end{array}\right)$
469. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\sin t \\ t\end{array}\right)$
470. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\sin t \\ \cos 2 t\end{array}\right)$
471. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\sin 25 t \\ \cos 25 t\end{array}\right)$
472. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{l}1+\cos t \\ 1+\sin t\end{array}\right)$
473. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}2 \cos t \\ \sin t\end{array}\right)$
474. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t^{2} \\ t^{3}\end{array}\right)$
* * *
Find parametric equations for the curve traced out by the point $X$ in each of the following descriptions.
475. A circle of radius 1 rolls over the $x_{1}$ axis, and $X$ is a point on a spoke of the circle at a distance $a>0$ from the center of the circle (the case $a=1$ gives the cycloid.)
476. Group problem. A circle of radius $r>0$ rolls on the outside of the unit circle. $X$ is a point on the rolling circle (these curves are called epicycloids.)
477. Group problem. A circle of radius $0<r<1$ rolls on the inside of the unit circle. $X$ is a point on the rolling circle.
478. Let $O$ be the origin, $A$ the point $(1,0)$, and $B$ the point on the unit circle for which the angle $\angle A O B=\theta$. Then $X$ is the point on the tangent to the unit circle through $B$ for which the distance $B X$ equals the length of the circle arc $A B$.
479. $X$ is the point where the tangent line at $\overrightarrow{\boldsymbol{x}}(\theta)$ to the helix of example 51.7 intersects the $x_{1} x_{2}$ plane.
## PRODUCT RULES
480. Group problem. If a moving object has position vector $\overrightarrow{\boldsymbol{x}}(t)$ at time $t$, and if its speed is constant, then show that the acceleration vector is always perpendicular to the velocity vector. [Hint: differentiate $v^{2}=\overrightarrow{\boldsymbol{v}} \cdot \overrightarrow{\boldsymbol{v}}$ with respect to time and use some of the product rules from $\S 53$.]
481. Group problem. If a charged particle moves in a magnetic field $\vec{B}$, then the laws of electromagnetism say that the magnetic field exerts a force on the particle and that this force is given by the following miraculous formula:
$$
\overrightarrow{\boldsymbol{F}}=q \overrightarrow{\boldsymbol{v}} \times \overrightarrow{\boldsymbol{B}},
$$
where $q$ is the charge of the particle, and $\overrightarrow{\boldsymbol{v}}$ is its velocity. Not only does the particle know calculus (since Newton found $\overrightarrow{\boldsymbol{F}}=m \overrightarrow{\boldsymbol{a}}$ ), it also knows vector geometry! Show that even though the magnetic field is pushing the particle around, and even though its velocity vector may be changing with time, its speed $v=\|\overrightarrow{\boldsymbol{v}}\|$ remains constant.
482. Group problem.
Newton's law of gravitation states that the Earth pulls any object of mass $m$ towards its center with a force inversely proportional to the squared distance of the object to the Earth's center.
(i) Show that if the Earth's center is the origin, and $\vec{r}$ is the position vector of the object of mass $m$, then the gravitational force is given by
$$
\overrightarrow{\boldsymbol{F}}=-C \frac{\overrightarrow{\boldsymbol{r}}}{\|\overrightarrow{\boldsymbol{r}}\|^{3}} \quad(C \text { is a positive constant. })
$$
[No calculus required. You are supposed to check that this vector satisfies the description in the beginning of the problem, i.e. that it has the right length and direction.]
(ii) If the object is moving, then its angular momentum is defined in physics books by the formula $\overrightarrow{\boldsymbol{L}}=m \overrightarrow{\boldsymbol{r}} \times \overrightarrow{\boldsymbol{v}}$. Show that, if the Earth's gravitational field is the only force acting on the object, then its angular momentum remains constant. [Hint: you should differentiate $\overrightarrow{\boldsymbol{L}}$ with respect to time, and use a product rule.]
## CURVE SKETCHING, USING THE TANGENT VECTOR
483. Consider a triangle $A B C$ and let $\overrightarrow{\boldsymbol{a}}, \overrightarrow{\boldsymbol{b}}$ and $\overrightarrow{\boldsymbol{c}}$ be the position vectors of $A, B$ and $C$.
(i) Show that the parametric curve given by
$$
\overrightarrow{\boldsymbol{x}}(t)=(1-t)^{2} \overrightarrow{\boldsymbol{a}}+2 t(1-t) \overrightarrow{\boldsymbol{b}}+t^{2} \overrightarrow{\boldsymbol{c}},
$$
goes through the points $A$ and $C$, and that at these points it is tangent to the sides of the triangle. Make a drawing.
(ii) At which point on this curve is the tangent parallel to the side $A C$ of the triangle?
484. Let $\vec{a}, \vec{b}, \vec{c}, \vec{d}$ be four given vectors. Consider the parametric curve (known as a Bezier curve)
$$
\overrightarrow{\boldsymbol{x}}(t)=(1-t)^{3} \overrightarrow{\boldsymbol{a}}+3 t(1-t)^{2} \overrightarrow{\boldsymbol{b}}+3 t^{2}(1-t) \overrightarrow{\boldsymbol{c}}+t^{3} \overrightarrow{\boldsymbol{d}}
$$
where $0 \leq t \leq 1$. Compute $\overrightarrow{\boldsymbol{x}}(0), \overrightarrow{\boldsymbol{x}}(1), \overrightarrow{\boldsymbol{x}}^{\prime}(0)$, and $\overrightarrow{\boldsymbol{x}}^{\prime}(1)$. The characters in most fonts (like the fonts used for these notes) are made up of lots of Bezier curves.
485. Sketch the following curves by finding all points at which the tangent is either horizontal or vertical (in these problems, $a$ is a positive constant.)
(i) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}1-t^{2} \\ t+2 t^{2}\end{array}\right)$
(ii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\sin t \\ \sin 2 t\end{array}\right)$
(iii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\cos t \\ \sin 2 t\end{array}\right)$
(iv) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}1-t^{2} \\ 3 a t-t^{3}\end{array}\right)$
(v) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}1-t^{2} \\ 3 a t+t^{3}\end{array}\right)$
(vi) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\cos 2 t \\ \sin 3 t\end{array}\right)$
(vii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t /\left(1+t^{2}\right) \\ t^{2}\end{array}\right)$
(viii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t^{2} \\ \sin t\end{array}\right)$
(ix) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}1+t^{2} \\ 2 t^{4}\end{array}\right)$
## LENGTHS OF CURVES
486. Find the length of each of the following curve segments. An " $\left[\int\right]$ " indicates a difficult but possible integral which you should do; " $\left[\iint\right]$ " indicates that the resulting integral cannot reasonably be done with the methods explained in this course - you may leave an integral in your answer after simplifying it as much as you can. All other problems lead to integrals that shouldn't be too hard.
(i) The cycloid $\overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{l}R(\theta-\sin \theta) \\ R(1-\cos \theta)\end{array}\right)$, with $0 \leq \theta \leq 2 \pi$.
(ii) $\left[\iint\right]$ The ellipse $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}\cos t \\ A \sin t\end{array}\right)$ with $0 \leq t \leq 2 \pi$.
(iii) $\left[\int\right]$ The parabola $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t \\ t^{2}\end{array}\right)$ with $0 \leq t \leq 1$.
(iv) $\left[\iint\right]$ The Sine graph $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t \\ \sin t\end{array}\right)$ with $0 \leq t \leq \pi$.
(v) The evolute of the circle $\overrightarrow{\boldsymbol{x}}=\left(\begin{array}{c}\cos t+t \sin t \\ \sin t-t \cos t\end{array}\right)$ (with $0 \leq t \leq L$ ).
(vi) The Catenary, i.e. the graph of $y=\cosh x=\frac{e^{x}+e^{-x}}{2}$ for $-a \leqslant x \leqslant a$.
(vii) The Cardioid, which in polar coordinates is given by $r=1+\cos \theta,(|\theta|<\pi)$, so $\overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{l}(1+\cos \theta) \cos \theta \\ (1+\cos \theta) \sin \theta\end{array}\right)$.
(viii) The Helix from example 51.7, $\overrightarrow{\boldsymbol{x}}(\theta)=\left(\begin{array}{c}\cos \theta \\ \sin \theta \\ a \theta\end{array}\right), 0 \leq \theta \leq 2 \pi$.
487. Below are a number of parametrized curves. For each of these curves find all points with horizontal or vertical tangents; also find all points for which the tangent is parallel to the diagonal. Finally, find the length of the piece of these curves corresponding to the indicated parameter interval (I tried hard to find examples where the integral can be done).
(i) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t^{1 / 3}-\frac{9}{20} t^{5 / 3} \\ t\end{array}\right)$, $\quad 0 \leq t \leq 1$
(ii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t^{2} \\ t^{2} \sqrt{t}\end{array}\right)$, $\quad 1 \leq t \leq 2$
(iii) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t^{2} \\ t-t^{3} / 3\end{array}\right)$, $\quad 0 \leq t \leq \sqrt{3}$
(iv) $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}8 \sin t \\ 7 t-\sin t \cos t\end{array}\right)$, $\quad|t| \leq \frac{\pi}{2}$
(v) Group problem. $\overrightarrow{\boldsymbol{x}}(t)=\left(\begin{array}{c}t \\ \sqrt{1+t}\end{array}\right)$, $\quad 0 \leq t \leq 1$
(The last problem is harder, but it can be done. In all the other ones the quantity under the square root that appears when you set up the integral for the length of the curve is a perfect square.)
488. Consider the polar graph $r=e^{k \theta}$, with $-\infty<\theta<\infty$, where $k$ is a positive constant. This curve is called the logarithmic spiral.
(i) Find a parametrization for the polar graph of $r=e^{k \theta}$.
(ii) Compute the arclength function $s(\theta)$ starting at $\theta_{0}=0$.
(iii) Show that the angle between the radius and the tangent is the same at all points on the logarithmic spiral.
(iv) Which points on this curve have horizontal tangents?
489. Group problem. The Archimedean spiral is the polar graph of $r=\theta$, where $\theta \geq 0$.
(i) Which points on the part of the spiral with $0<\theta<\pi$ have a horizontal tangent? Which have a vertical tangent?
(ii) Find all points on the whole spiral (allowing all $\theta>0$ ) which have a horizontal tangent.
(iii) Show that the part of the spiral with $0<\theta<\pi$ is exactly as long as the piece of the parabola $y=\frac{1}{2} x^{2}$ between $x=0$ and $x=\pi$. (It is not impossible to compute the lengths of both curves, but you don't have to in order to answer this problem!)
## KEPLER'S LAWS
Kepler's first law: Planets move in a plane in an ellipse with the sun at one focus.
Kepler's second law: The position vector from the sun to a planet sweeps out area at a constant rate.
Kepler's third law: The square of the period of a planet is proportional to the cube of its mean distance from the sun. The mean distance is the average of the closest distance and the furthest distance. The period is the time required to go once around the sun.
Let $\vec{p}=x \vec{i}+y \vec{j}+z \vec{k}$ be the position of a planet in space where $x, y$ and $z$ are all functions of time $t$. Assume the sun is at the origin. Newton's law of gravity implies that
$$
\frac{d^{2} \vec{p}}{d t^{2}}=\alpha \frac{\vec{p}}{\|\vec{p}\|^{3}} \tag{1}
$$
where $\alpha$ is $-G M$, $G$ is a universal gravitational constant and $M$ is the mass of the sun. It does not depend on the mass of the planet. First let us show that planets move in a plane. By the product rule
$$
\frac{d}{d t}\left(\vec{p} \times \frac{d \vec{p}}{d t}\right)=\left(\frac{d \vec{p}}{d t} \times \frac{d \vec{p}}{d t}\right)+\left(\vec{p} \times \frac{d^{2} \vec{p}}{d t^{2}}\right) \tag{2}
$$
By (1) and the fact that the cross product of parallel vectors is $\overrightarrow{0}$, the right hand side of (2) is $\overrightarrow{0}$.
It follows that there is a constant vector $\vec{c}$ such that at all times
$$
\vec{p} \times \frac{d \vec{p}}{d t}=\vec{c} \tag{3}
$$
Thus we can conclude that both the position and velocity vector lie in the plane with normal vector $\vec{c}$. Without loss of generality we assume that $\vec{c}=\beta \vec{k}$ for some scalar $\beta$ and $\vec{p}=x \vec{i}+y \vec{j}$. Let $x=r \cos (\theta)$ and $y=r \sin (\theta)$ where we consider $r$ and $\theta$ as functions of $t$. If we calculate the derivative of $\vec{p}$ we get
$$
\frac{d \vec{p}}{d t}=\left[\frac{d r}{d t} \cos (\theta)-r \sin (\theta) \frac{d \theta}{d t}\right] \vec{i}+\left[\frac{d r}{d t} \sin (\theta)+r \cos (\theta) \frac{d \theta}{d t}\right] \vec{j} \tag{4}
$$
Since $\vec{p} \times \frac{d \vec{p}}{d t}=\beta \vec{k}$ we have
$$
r \cos (\theta)\left(\frac{d r}{d t} \sin (\theta)+r \cos (\theta) \frac{d \theta}{d t}\right)-r \sin (\theta)\left(\frac{d r}{d t} \cos (\theta)-r \sin (\theta) \frac{d \theta}{d t}\right)=\beta \tag{5}
$$
After multiplying out and simplifying this reduces to
$$
r^{2} \frac{d \theta}{d t}=\beta \tag{6}
$$
The area swept out from time $t_{0}$ to time $t_{1}$ by a curve in polar coordinates is
$$
A=\frac{1}{2} \int_{t_{0}}^{t_{1}} r^{2} \frac{d \theta}{d t} d t \tag{7}
$$
By (6), $A$ is proportional to $t_{1}-t_{0}$. This is Kepler's second law. We will now prove Kepler's third law for the special case of a circle. So let $T$ be the time it takes the planet to go around the sun one time and let $r$ be its distance from the sun. We will show that
$$
\frac{T^{2}}{r^{3}}=-\frac{(2 \pi)^{2}}{\alpha} \tag{8}
$$
The second law implies that $\theta(t)$ is a linear function of $t$ and so in fact
$$
\frac{d \theta}{d t}=\frac{2 \pi}{T}
$$
Since $r$ is constant we have that $\frac{d r}{d t}=0$ and so (4) simplifies to
$$
\frac{d \vec{p}}{d t}=\left[-r \sin (\theta) \frac{2 \pi}{T}\right] \vec{i}+\left[r \cos (\theta) \frac{2 \pi}{T}\right] \vec{j}
$$
Differentiating once more we get
$$
\frac{d^{2} \vec{p}}{d t^{2}}=\left[-r \cos (\theta)\left(\frac{2 \pi}{T}\right)^{2}\right] \vec{i}+\left[-r \sin (\theta)\left(\frac{2 \pi}{T}\right)^{2}\right] \vec{j}=-\left(\frac{2 \pi}{T}\right)^{2} \vec{p}
$$
Noting that $r=\|\vec{p}\|$ and using (1) we get
$$
\frac{\alpha}{r^{3}}=-\left(\frac{2 \pi}{T}\right)^{2}
$$
from which (8) immediately follows. Complete derivations of the three laws from Newton's law of gravity can be found in T. M. Apostol, Calculus, vol. I, Blaisdell (1967), pp. 545-548. Newton deduced the law of gravity from Kepler's laws. The argument can be found in L. Bers, Calculus, vol. II, Holt, Rinehart and Winston (1969), pp. 748-754.
The planet earth is 93 million miles from the sun. The year has 365 days. The moon is 250,000 miles from the earth and circles the earth once every 28 days. The earth's diameter is 7850 miles. In the first four problems you may assume orbits are circular. Use only the data in this paragraph.
490. The former planet Pluto takes 248 years to orbit the sun. How far is Pluto from the sun? Mercury is 36 million miles from the sun. How many (Earth) days does it take for Mercury to complete one revolution of the sun?
491. The Soviet Union launched the first orbital satellite in 1957. Sputnik orbited the earth every 96 minutes. How high off the surface of the earth was this satellite?
492. A communication satellite is to orbit the earth around the equator at such a distance so as to remain above the same spot on the earth's surface at all times.
What is the distance from the center of the earth such a satellite should orbit?
493. Find the ratio of the masses of the sun and the earth.
494. The Kmart7 satellite is to be launched into polar earth orbit by firing it from a large cannon. This is possible since the satellite is very small, consisting of a single blinking blue light. Polar orbit means that the orbit passes over both the north and south poles. Let $p(t)$ be the point on the earth's surface at which the blinking blue light is directly overhead at time $t$. Find the largest orbit that the Kmart7 can have so that every person on earth will be within 1000 miles of $p(t)$ at least once a day. You may assume that the satellite orbits the earth exactly $n$ times per day for some integer $n$.
495. Let $A$ be the total area swept out by an elliptical orbit. Show that $\beta=\frac{2 A}{T}$.
496. Let $E$ be an ellipse with one of the focal points $f$. Let $d$ be the minimum distance from some point of the ellipse to $f$ and let $D$ be the maximum distance. In terms of $d$ and $D$ only, what is the area of the ellipse $E$ ? Hint: The area of an ellipse is $\pi a b$ where $a$ is its minimum radius and $b$ its maximum radius (both from the center of the ellipse). If $f_{1}$ and $f_{2}$ are the focal points of $E$ then the sum of the distances from $f_{1}$ to $p$ and $f_{2}$ to $p$ is constant for all points $p$ on $E$.
497. Halley's comet orbits the sun every 77 years. Its closest approach is 53 million miles. What is its furthest distance from the sun? What is the maximum speed of the comet and what is the minimum speed?
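If you want to check answers to the circular-orbit problems numerically, a short Python helper along the following lines can be used (a rough sketch under the stated assumptions, using only the earth-sun data given above to fix the constant in Kepler's third law; the function names are just illustrative choices):

```python
import math

# Kepler's third law for circular orbits: T^2 / r^3 is the same constant for
# every body orbiting the same central mass.  Calibrate it with the earth-sun
# data given above: r = 93 million miles, T = 365 days.
R_EARTH_ORBIT = 93e6                     # miles
T_EARTH = 365.0                          # days
K_SUN = T_EARTH**2 / R_EARTH_ORBIT**3    # days^2 per mile^3

def period_from_radius(r_miles, k=K_SUN):
    """Period (days) of a circular orbit of radius r_miles."""
    return math.sqrt(k * r_miles**3)

def radius_from_period(T_days, k=K_SUN):
    """Radius (miles) of a circular orbit with period T_days."""
    return (T_days**2 / k) ** (1.0 / 3.0)

# Sanity check: the earth itself should come out right.
print(period_from_radius(93e6))    # 365.0
print(radius_from_period(365.0))   # 93,000,000 (up to rounding)
```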
Does the Maxwell-Boltzmann distribution apply to gases only?
The Maxwell-Boltzmann distribution can be used to determine the fraction of particles with sufficient energy to react. I know that the curve applies to gaseous reactants and would like to know whether solids and/or liquids are also described by a similar distribution. In other words, can I use a Maxwell-Boltzmann distribution to interpret reaction rates at the molecular level in liquids and/or solids?
physical-chemistry temperature kinetic-theory-of-gases
Jaywalker
I'm going to go against the grain here: the Maxwell distribution does describe the distribution of molecule speeds in any (3-D) matter, regardless of phase. Suppose we have a system of $N$ molecules with masses $m_i$, positions $\vec{r}_i$, and velocities $\vec{v}_i$ (with $i = 1, ..., N$). Assume that the total energy of this system is of the form
$$ E(\vec{r}_1, \dots, \vec{r}_N; \vec{p}_1, \dots, \vec{p}_N) = U(\vec{r}_1, \dots, \vec{r}_N) + \sum_{i = 1}^N \frac{\vec{p}_i^2}{2 m_i }, $$
i.e., a potential energy depending on the molecules' positions and a kinetic energy. According to Boltzmann statistics, the probability of finding this system within a small volume of phase space $d^{3N} \vec{r} \, d^{3N} \vec{p}$ is
$$ \mathcal{P}(\vec{r}_1, \dots, \vec{r}_N; \vec{p}_1, \dots, \vec{p}_N) d^{3N} \vec{r} \, d^{3N} \vec{p} \propto e^{-E/kT} d^{3N} \vec{r} \, d^{3N} \vec{p} \\= e^{-U(\vec{r}_1, \dots, \vec{r}_N)/kT} \left[ \prod_{i=1}^N e^{-p_i^2/2m_i k T}\right]d^{3N} \vec{r} \, d^{3N} \vec{p} $$
To find the probability distribution of finding molecule #1 with a particular momentum $\vec{p}_1$, we integrate over all other configuration variables (i.e., $\vec{r}_1$ through $\vec{r}_N$ and $\vec{p}_2$ through $\vec{p}_N$). Because $E$ is the sum of a contribution from the positions and a contribution from the momenta, the Boltzmann factors can be split between these integrals, with the result that
$$ \mathcal{P}(\vec{p}_1) \, d^3\vec{p}_1 \propto e^{-p_1^2/2 m_1 kT} d^3\vec{p}_1 \left[ \int e^{-U(\vec{r}_1, \dots, \vec{r}_N)/kT} d^{3N}\vec{r}\right] \left\{ \prod_{i = 2}^N \int e^{-p_i^2/2 m_i kT} d^3 \vec{p}_i \right\} $$
These integrals are nasty (especially the one over all $N$ position vectors), but they're just constants with respect to $\vec{p}_1$, which means that they can be folded into the proportionality constant:
$$ \mathcal{P}(\vec{p}_1) \, d^3\vec{p}_1 \propto e^{-p_1^2/2 m_1 kT} d^3\vec{p}_1. $$
A similar logic applies to every other molecule in my system. In other words, the probability of finding a molecule with a momentum $\vec{p}$ doesn't depend at all on how they interact with each other, assuming that their interaction energy is dependent only on their collective positions. Since these molecules obey the same momentum distribution, they must also have the same velocity distribution, and in particular they obey the Maxwell speed distribution. So the answer to the question asked in your title, "Does the Maxwell-Boltzmann distribution apply to gases only?" is "no"; it applies to all phases of matter, in the sense that it describes the distribution of particle speeds and energies. However, the answer to the question you ask at the bottom of your post, "Can I use a Maxwell-Boltzmann distribution to interpret reaction rates at the molecular level in liquids and/or solids?" may also be "no"; the connection between reaction rate and activation energy is not straightforward if the medium is dense, as was pointed out by @porphyrin in their answer.
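For concreteness, here is a minimal numerical sketch of this conclusion (Python, assuming numpy is available, in reduced units where $kT = m = 1$): draw each Cartesian velocity component from the Gaussian factor $e^{-p^2/2mkT}$ obtained above and compare a couple of moments of the resulting speeds with the predictions of the Maxwell speed distribution.

```python
import numpy as np

# Reduced units: m = kT = 1, so each velocity component is standard normal,
# exactly the e^{-p^2/2mkT} factor derived above.
rng = np.random.default_rng(0)
v = rng.normal(size=(100_000, 3))        # velocity components of 10^5 "molecules"
speeds = np.linalg.norm(v, axis=1)

# The Maxwell speed distribution predicts <v> = sqrt(8 kT / (pi m)) and <v^2> = 3 kT / m.
print(speeds.mean(), np.sqrt(8 / np.pi))   # both approximately 1.596
print((speeds**2).mean(), 3.0)             # both approximately 3
```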
Michael Seifert
This is fine for a gas but for a condensed phase the coordinates - and therefore the integrals - are not separable. The position and momentum of any particle depends on coordinates of other interacting particles. You can assume a mean-field suffices but that is not the same thing, and it is often not a sufficient approximation. – Buck Thorn Nov 9 '19 at 9:54
@Thorn: Interesting. I agree that the position coordinates are often not separable, but I wasn't aware that the same could be true for the momenta. Can you provide a citation discussing non-quadratic momenta so I can edit my answer accordingly? (Note that my derivation can be extended to any system where there are "cross terms" between the momenta of the form $p_1 p_2$, so long as the kinetic energy can be described by a positive definite quadratic form.) – Michael Seifert Nov 9 '19 at 14:36
Hmm, as sure as I was that I'd spotted something wrong in your line of argument, I am now grasping at straws. A search produced this: scirp.org/journal/PaperInformation.aspx?PaperID=83351 That however used MD with a classical harmonic internuclear potential to confirm a MB distribution in a low T solid. I wonder how particles interacting with a nonlinear potential might behave? – Buck Thorn Nov 9 '19 at 18:19
I'm willing to put myself out there and say that no, a Maxwell-Boltzmann distribution will not be able to make any statement about the ability of liquids or solids to react. Meaning also that you could not study reaction rates with a Maxwell-Boltzmann distribution for the condensed phases. The reason why I am fairly confident this is true is because the Maxwell-Boltzmann distribution makes the assumption that it is working with an ideal gas. Making that assumption is both powerful and very limiting. The ideal gas approximation is quite good in a lot of cases, but it could not even come close to describing the behavior of a condensed phase. If some such distribution were used to study condensed phase reaction rates, it simply wouldn't be a Maxwell-Boltzmann distribution. I would also point out that the Maxwell-Boltzmann distribution doesn't describe all gas-phase reactions all that well. This can be inferred quite easily from the fact that the M-B distribution treats all gases as ideal, so the only possible difference between two systems is the mass of the particle and the temperature. That means also that any gas phase reaction which depends strongly on the orientation of the collision would likely be assumed by M-B to happen much more frequently than it really does. All that to say, M-B is deeply rooted in the gas phase, and even there it falls short for certain systems, so it probably isn't possible to apply it to the condensed phases. On the other hand, it is possible that a partition function for a liquid or a solid phase could be used to study reactions in some way, and the partition function for a generic gas particle can be used to derive the M-B distribution so . . . There's that. The problem with that is that the partition function requires you to sum the Boltzmann Factor over all states, and for a condensed phase system . . . That's basically gonna be an infinite number of states. Anytime you approach stat. mech. or the name Boltzmann, it's wise to keep this quote by David Goodstein in mind: "Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand.
Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics. Perhaps it will be wise to approach the subject cautiously."

jheindel

In liquids the Boltzmann distribution limits reaction in what are called 'activation limited' reactions. In these reactions the two species collide but do not have sufficient energy to react because the activation energy is high, so they diffuse apart again. The number of particles with sufficient energy to react is given by the Boltzmann distribution. This falls with increase in energy so rapidly that reaction becomes a rare event, perhaps once in 100 million collisions. Reactions with a low activation energy can react at first approach, since by the Boltzmann distribution there is enough energy to surmount the activation barrier to reaction. However, in these cases the reaction rate is limited by how fast the species can diffuse together, so the reaction rate becomes a property of the solvent.

porphyrin

I agree with the answer of Michael Seifert. I just wanted to point out that this applies to classical systems only (even if they are non-ideal). For example, for systems in which relativistic effects play a role (Maxwell–Jüttner distribution), or for systems in which quantum effects play a role, the Maxwell-Boltzmann distribution will be violated.

thepith

No. Maxwell-Boltzmann statistics are not followed. The Equipartition Theorem is followed (at equilibrium) but the distribution tends to be more like a Planck Distribution, giving different equations for $v_{avg}$. Consider that solids arguably don't have an atomic $v_{avg}$ (or rather it is zero) and that liquids differ from gases because they have additional degrees of freedom. (That is, they have Potential Energy modes due to their molecule-to-molecule interactions which Ideal Gases do not have by definition. The assumption of non-correlated (random) movement can't be made (at sufficiently low densities)). This area is difficult to get a handle on. Here are two references I found useful (along with the Wikipedia articles on Phonons, the Equipartition Theorem, Diffusion, and Molecular Diffusion): https://physics.stackexchange.com/questions/111743/is-there-an-equation-to-calculate-the-average-speed-of-liquid-molecules http://cpb.iphy.ac.cn/fileup/PDF/2013-8-083101.pdf. The 2nd demonstrates that the way in which liquids move (are correctly modeled (hopefully!)) depends on their "modes" or potential energy degrees of freedom (potential function). There are both internal degrees of freedom (rotations, twistings, vibrations, ...) and external (Van der Waals forces, etc.), not to mention steric constraints (is there a size above which the concept of average molecular velocity has no meaning? (or does the molecular velocity concept transition into the macroscopic particle concept as size increases? Does it matter if "size" is significantly nonisotropic? (e.g. linear, planar, or spherical molecules?))

alphonse

Perhaps you mean: "The assumption of non-correlated (random) movement can't be made (at sufficiently high densities))" – Buck Thorn Nov 9 '19 at 9:50

The distribution of energy in liquids and solids falls off faster than the Maxwell-Boltzmann distribution - if a molecule in a liquid or solid has high energy, it is going to transfer it to other molecules much sooner than in a gas.

H. Tomasz Grzybowski
I think the difference in distributions is due to the number of degrees of freedom arising from interparticle interactions, and not due to a faster interaction rate. Otherwise you could make the argument that a hotter gas is less likely to follow the MB distribution. – Buck Thorn Nov 9 '19 at 10:01
CommonCrawl
SSC CPO Solved question Set 1, Algebra 1

Submitted by Atanu Chaudhuri on Mon, 15/06/2020 - 22:11

1st SSC CPO Solved Question Set, 1st on Algebra

This is the 1st solved question set of 10 practice problem exercise for SSC CPO exam and the 1st on topic Algebra. It contains,

Question set on Algebra for SSC CPO to be answered in 15 minutes (10 chosen questions)
Answers to the questions, and
Detailed conceptual solutions to the questions.

For maximum gains, the test should be taken first, and then the solutions are to be read.

IMPORTANT: To absorb the concepts, techniques and reasoning explained in the solutions fully and apply those in solving problems on Algebra quickly, one must solve many problems in a systematic manner using the conceptual analytical approach. Learning by doing is the best learning. There is no other alternative towards achieving excellence.

1st Question set - 10 problems for SSC CPO exam: 1st on topic Algebra - answering time 15 mins

Q1. What is the value of $\displaystyle\frac{(a^2+b^2)(a-b)-(a-b)^3}{a^2 b-ab^2}$?

$-1$
$0$

Q2. What is the value of $\left(\displaystyle\frac{x^2-x-6}{x^2+x-12}\right) \div \left(\displaystyle\frac{x^2+5x+6}{x^2+7x+12}\right)$?

$\displaystyle\frac{x-3}{x+4}$
$\displaystyle\frac{x+4}{x-3}$

Q3. If $\displaystyle\frac{1}{x+2}=\displaystyle\frac{3}{y+3}=\displaystyle\frac{1331}{z+1331}=\displaystyle\frac{1}{3}$, then what is the value of $\displaystyle\frac{x}{x+1}+\displaystyle\frac{4}{y+2}+\displaystyle\frac{z}{z+2662}$?

$\displaystyle\frac{3}{2}$

Q4. If $p=\displaystyle\frac{5}{18}$, then $27p^3-\displaystyle\frac{1}{216}-\displaystyle\frac{9}{2}p^2+\displaystyle\frac{1}{4}p$ is equal to,

$\displaystyle\frac{10}{27}$
$\displaystyle\frac{8}{27}$

Q5. If $p(x+y)^2=5$ and $q(x-y)^2=3$, then the simplified value of $p^2(x+y)^2+4pqxy-q^2(x-y)^2$ is,

$-(p+q)$
$-2(p+q)$
$2(p+q)$
$(p+q)$

Q6. What is the simplified value of $\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^8+\displaystyle\frac{1}{x^8}\right)\left(x-\displaystyle\frac{1}{x}\right)$ $\hspace{10mm}\times{\left(x^{16}+\displaystyle\frac{1}{x^{16}}\right)\left(x+\displaystyle\frac{1}{x}\right)\left(x^4+\displaystyle\frac{1}{x^4}\right)}$?

$\displaystyle\frac{x^{32}-\displaystyle\frac{1}{x^{32}}}{x+\displaystyle\frac{1}{x}}$
$\displaystyle\frac{x^{64}-\displaystyle\frac{1}{x^{64}}}{x^2+\displaystyle\frac{1}{x^2}}$
$x^{64}+\displaystyle\frac{1}{x^{64}}$

Q7. If $\displaystyle\frac{1}{x}+\displaystyle\frac{1}{y}+\displaystyle\frac{1}{z}=0$ and $x+y+z=11$, then what is the value of $x^3+y^3+z^3-3xyz$?

$14641$

Q8. If $x=\sqrt[3]{7}+3$, then the value of $x^3-9x^2+27x-34$ is,

Q9. If $\left(x+\displaystyle\frac{1}{x}\right)=3\sqrt{2}$, then what is the value of $\left(x^5+\displaystyle\frac{1}{x^5}\right)$?

$717\sqrt{2}$
$1581\sqrt{2}$

Q10. If $\displaystyle\frac{x+\sqrt{x^2-1}}{x-\sqrt{x^2-1}}+\displaystyle\frac{x-\sqrt{x^2-1}}{x+\sqrt{x^2-1}}=62$, then what is the value of $x$ $(x \lt 0)$?

$16$

Answers to the questions

Q1. Answer: Option d: $2$.
Q2. Answer: Option a: $1$. Q3. Answer: Option d: $\displaystyle\frac{3}{2}$. Q4. Answer: Option b: $\displaystyle\frac{8}{27}$. Q5. Answer: Option c: $2(p+q)$. Q6. Answer: Option c: $\displaystyle\frac{x^{64}-\displaystyle\frac{1}{x^{64}}}{x^2+\displaystyle\frac{1}{x^2}}$. Q7. Answer: Option c: $1331$. Q8. Answer: Option b: $0$. Q9. Answer: Option a: $717\sqrt{2}$. Q10. Answer: Option b: $-4$.

1st solution set - 10 problems for SSC CPO exam: 1st on topic Algebra - answering time 15 mins

Solution 1: Quick solution by key pattern identification of common factor between numerator and denominator First identify the common factor of $(a-b)$ between numerator and denominator and eliminate it. Further simplification is straightforward. The target expression is, $\displaystyle\frac{(a^2+b^2)(a-b)-(a-b)^3}{a^2b-ab^2}$ $=\displaystyle\frac{(a-b)[(a^2+b^2)-(a-b)^2]}{ab(a-b)}$ $=\displaystyle\frac{2ab}{ab}=2$. Answer: Option d: $2$. Key concepts used: Key pattern identification -- Factorization and common factor elimination -- Solving in mind.

Solution 2: Factorization of quadratic equation and common factor identification and elimination Identify the key pattern that all four quadratic expressions in the target expression can easily be factorized. Next step is just cancellation of common factors between numerator and denominator. The target expression, $\left(\displaystyle\frac{x^2-x-6}{x^2+x-12}\right) \div \left(\displaystyle\frac{x^2+5x+6}{x^2+7x+12}\right)$ $=\displaystyle\frac{(x-3)(x+2)}{(x+4)(x-3)} \times{ \displaystyle\frac{(x+4)(x+3)}{(x+2)(x+3)}}$, inverting the second term which is the divisor $=1$, all four pairs of factors cancel out between numerator and denominator. Answer: Option a: $1$. Key concepts used: Quadratic equation factorization -- Key pattern identification -- Solving in mind.

Solution 3: Quick solution by splitting chained equation into three independent equations, evaluation of variable values and substitution The given is a chained equation that is to be first split into three standalone equations and $x$, $y$ and $z$ evaluated. The given equation is, $\displaystyle\frac{1}{x+2}=\displaystyle\frac{3}{y+3}=\displaystyle\frac{1331}{z+1331}=\displaystyle\frac{1}{3}$. First split the chained equation into three independent equations equating each of the first three LHS with the numeric RHS, the fourth term from left. This would enable you to get the values of $x$, $y$ and $z$ directly. The results are, $\displaystyle\frac{1}{x+2}=\displaystyle\frac{1}{3}$, Or, $x+2=3$, Or, $x=1$. $\displaystyle\frac{3}{y+3}=\displaystyle\frac{1}{3}$, Or, $y+3=9$, Or, $y=6$. $\displaystyle\frac{1331}{z+1331}=\displaystyle\frac{1}{3}$, Or, $z+1331=3\times{1331}$, Or, $z=2662$. Substitute these variable values in the target expression, $\displaystyle\frac{x}{x+1}+\displaystyle\frac{4}{y+2}+\displaystyle\frac{z}{z+2662}$ $=\displaystyle\frac{1}{2}+\displaystyle\frac{4}{6+2}+\displaystyle\frac{2\times{1331}}{4\times{1331}}$ $=\displaystyle\frac{3}{2}$. Answer: Option d: $\displaystyle\frac{3}{2}$. Key concepts used: Splitting of chained equation into three independent equations to evaluate $x$, $y$ and $z$ -- Chained equation treatment technique -- Substitution -- Solving in mind.
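If you want to double-check such simplifications mechanically, a few lines of symbolic computation will do it. The snippet below (a quick check using the sympy library, not part of the solution method itself; the symbol names are arbitrary) verifies the answers of Solutions 1 to 3.

import sympy as sp

a, b = sp.symbols('a b')
q1 = ((a**2 + b**2)*(a - b) - (a - b)**3) / (a**2*b - a*b**2)
print(sp.simplify(q1))            # prints 2, matching Solution 1

x = sp.symbols('x')
q2 = ((x**2 - x - 6)/(x**2 + x - 12)) / ((x**2 + 5*x + 6)/(x**2 + 7*x + 12))
print(sp.simplify(q2))            # prints 1, matching Solution 2

# Q3: x = 1, y = 6, z = 2662 from the three split equations
xv, yv, zv = 1, 6, 2662
q3 = sp.Rational(xv, xv + 1) + sp.Rational(4, yv + 2) + sp.Rational(zv, zv + 2662)
print(q3)                         # prints 3/2, matching Solution 3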
Solution 4: Solving in mind by key pattern identification of target expression as an expanded cube of sum and then getting the value of cubed expression from the given equation Identify that the target expression is the expanded form of a cube of sum by rearranging the terms, $27p^3-\displaystyle\frac{1}{216}-\displaystyle\frac{9}{2}p^2+\displaystyle\frac{1}{4}p$ $=27p^3-\displaystyle\frac{9}{2}p^2+\displaystyle\frac{1}{4}p-\displaystyle\frac{1}{216}$ $=(3p)^3-3.(3p)^2.\left(\displaystyle\frac{1}{6}\right)+3.(3p).\left(\displaystyle\frac{1}{6}\right)^2-\left(\displaystyle\frac{1}{6}\right)^3$ $=\left(3p-\displaystyle\frac{1}{6}\right)^3$. From the given equation get the value of $\left(3p-\displaystyle\frac{1}{6}\right)$ as, $p=\displaystyle\frac{5}{18}$, Or, $3p=\displaystyle\frac{5}{6}$, Or, $\left(3p-\displaystyle\frac{1}{6}\right)=\displaystyle\frac{4}{6}=\frac{2}{3}$. Substitute this value of cubed expression in the transformed target expression, $\left(3p-\displaystyle\frac{1}{6}\right)^3=\left(\displaystyle\frac{2}{3}\right)^3=\displaystyle\frac{8}{27}$. Answer: Option b: $\displaystyle\frac{8}{27}$. Key concepts used: Key pattern identification -- Cube of sum expansion -- Substitution-- Solving in mind. Solution 5: Solve quickly by key pattern identification, base equalization and two stage substitution It is apparent that first and third term of the target expression are easily simplified by direct substitution of the RHSs from the given equations, $p^2(x+y)^2+4pqxy-q^2(x-y)^2$ $=5p+4pqxy-3q$. Question is, how to transform the middle term $4pqxy$ in terms of $p$ and $q$. Easiest way to do this is to first equalize the factors $p$ and $q$ in the LHSs of the two given equations to $pq$ by multiplying the first equation by $q$ and the second by $p$. This is one form of application of base equalization technique, where bases are the factors, $p$ and $q$. Now if you subtract the second result from the first, the difference of $(x+y)^2$ and $(x-y)^2$ becomes simply $4xy$ giving you $4pqxy$ in the LHS of the subtraction result. The RHS is in terms of only $p$ and $q$. Let us show you the deductions. Multiplying the first given equation by $q$, the second given equation by $p$ and subtracting the second result from the first, $pq(x+y)^2-pq(x-y)^2=5q-3p$, Or, $5q-3p=pq\left[(x+y)^2-(x-y)^2\right]$, Or, $5q-3p=pq(4xy)=4pqxy$. Substitute this result in the transformed target expression, $5p+4pqxy-3q$ $=5p+(5q-3p)-3q=2(p+q)$. Answer: Option c. $2(p+q)$. Key concepts used: Key pattern identification -- Base equalization technique -- Two stage given expression transformation and substitution -- Solving in mind. Solution 6: Solving in mind by missing element identification and combining the like factors Identify that if you multiply the third and fifth factors you would get a promising result, $\left(x-\displaystyle\frac{1}{x}\right)\left(x+\displaystyle\frac{1}{x}\right)=\left(x^2-\displaystyle\frac{1}{x^2}\right)$. But, now among the four other remaining factors, you don't have $\left(x^2+\displaystyle\frac{1}{x^2}\right)$. If you had it, you could have multiplied your current result with this sum of inverses of squares in $x$ getting another promising result of subtractive sum of inverses in 4th power of $x$, that is, $\left(x^4-\displaystyle\frac{1}{x^4}\right)$. This is your missing element. You don't have it in the problem expression. No problem, introduce it by multiplying and dividing the last resultant expression by $\left(x^2+\displaystyle\frac{1}{x^2}\right)$. 
This is the technique of missing element identification and introduction. Let us show you the result, $\displaystyle\frac{1}{\left(x^2+\displaystyle\frac{1}{x^2}\right)}\times{\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^8+\displaystyle\frac{1}{x^8}\right)\left(x^{16}+\displaystyle\frac{1}{x^{16}}\right)}$ $\hspace{8mm}\times{\left(x^2-\displaystyle\frac{1}{x^2}\right)\left(x^2+\displaystyle\frac{1}{x^2}\right)\left(x^4+\displaystyle\frac{1}{x^4}\right)}$. Now it is straightforward combining like factors (or in general, like terms). Combine 4th and 5th factors to get the promising factor of $\left(x^4-\displaystyle\frac{1}{x^4}\right)$. Next combine it with $\left(x^4+\displaystyle\frac{1}{x^4}\right)$ to get, $\left(x^8-\displaystyle\frac{1}{x^8}\right)$. And continue to combine this way. Following are the results, $\displaystyle\frac{1}{\left(x^2+\displaystyle\frac{1}{x^2}\right)}\times{\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^{16}+\displaystyle\frac{1}{x^{16}}\right)\left(x^8+\displaystyle\frac{1}{x^8}\right)}$ $\hspace{8mm}\times{\left(x^4-\displaystyle\frac{1}{x^4}\right)\left(x^4+\displaystyle\frac{1}{x^4}\right)}$ $=\displaystyle\frac{1}{\left(x^2+\displaystyle\frac{1}{x^2}\right)}\times{\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^{16}+\displaystyle\frac{1}{x^{16}}\right)\left(x^8+\displaystyle\frac{1}{x^8}\right)}$ $\hspace{8mm}\times{\left(x^8-\displaystyle\frac{1}{x^8}\right)}$ $=\displaystyle\frac{1}{\left(x^2+\displaystyle\frac{1}{x^2}\right)}\times{\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^{16}+\displaystyle\frac{1}{x^{16}}\right)\left(x^{16}-\displaystyle\frac{1}{x^{16}}\right)}$ $=\displaystyle\frac{1}{\left(x^2+\displaystyle\frac{1}{x^2}\right)}\times{\left(x^{32}+\displaystyle\frac{1}{x^{32}}\right)\left(x^{32}-\displaystyle\frac{1}{x^{32}}\right)}$ $=\displaystyle\frac{x^{64}-\displaystyle\frac{1}{x^{64}}}{x^2+\displaystyle\frac{1}{x^2}}$. Answer: Option c: $\displaystyle\frac{x^{64}-\displaystyle\frac{1}{x^{64}}}{x^2+\displaystyle\frac{1}{x^2}}$. Key concepts used: Key pattern identification -- Technique of missing element identification and introduction -- Combining like factors (terms) -- Principle of collection of like terms -- Solving in mind. If you can identify the missing element and are able to introduce it, solution should take a few tens of seconds. Explaining and writing the deductive steps take large space and time, you know. Solution 7: Quick solution by key pattern identification, square of three variable sum and expanded form of three variable sum of cubes The first key pattern identified is by evaluation of $xy+yz+zx$ from given equation, $\displaystyle\frac{1}{x}+\displaystyle\frac{1}{y}+\displaystyle\frac{1}{z}=\displaystyle\frac{xy+yz+zx}{xyz}=0$, Or, $xy+yz+zx=0$. Now let us use the expanded form of three variable sum of cubes to determine what more are required to evaluate the target expression, $x^3+y^3+z^3=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)+3xyz$, So target expression, $E=x^3+y^3+z^3-3xyz=11(x^2+y^2+z^2)$. It remains only to evaluate $x^2+y^2+z^2$. Value of this expression we would get easily by the three variable square of sum, $(x+y+z)^2=x^2+y^2+z^2+2(xy+yz+zx)$, Or, $x^2+y^2+z^2=(x+y+z)^2=(11)^2=121$. So value of target expression is, $E=11(x^2+y^2+z^2)=11\times{121}=1331$. Answer: Option: c: $1331$. Key concepts used: Key pattern identification -- Three variable square of sum -- Three variable sum of cubes -- Solving in mind. 
If you know the expanded relations well, you should be able to solve the problem mentally. Solution 8: Quick solution by key pattern identification of similarity of target expression with expanded $(x-3)^3$ from given expression The key pattern identified from the given expression is the similarity of the expanded $(x-3)^3$ with the target expression, $(x-3)^3=x^3-9x^2+27x-27$. The first three terms of the target expression are same as the expanded form of $(x-3)^3$. This is use of End state analysis. So we would transform the given expression to, $x=\sqrt[3]{7}+3$, Or, $x-3=\sqrt[3]{7}$. And then raise this transformed equation to its cube to get, $(x-3)^3=7$, Or, $x^3-9x^2+27x-27=7$, Or, $x^3-9x^2+27x-34=0$. Answer: Option b: $0$. Key concepts used: Key pattern identification of maximum similarity of cube of modified given expression with target expression -- End state analysis approach -- Cube of sum expansion -- Solving in mind. Solution 9: Quick solution by raising power of $x$ in sum of inverses using principle of inetraction of inverses If you raise the given sum of inverses to its square, the mutually inverse variables in the middle term cancel out to leave just a numeric value, $\left(x+\displaystyle\frac{1}{x}\right)=3\sqrt{2}$, Or, $\left(x+\displaystyle\frac{1}{x}\right)^2=18$, Or, $x^2+\displaystyle\frac{1}{x^2}=16$. Now we will use the two factor expanded form of $\left(x^3+\displaystyle\frac{1}{x^3}\right)$, $x^3+\displaystyle\frac{1}{x^3}=\left(x+\displaystyle\frac{1}{x}\right)\left(x^2-1+\displaystyle\frac{1}{x^2}\right)$ $=3\sqrt{2}\times{(16-1)}=45\sqrt{2}$. In the last step, we would multiply $\left(x^2+\displaystyle\frac{1}{x^2}\right)$ with $\left(x^3+\displaystyle\frac{1}{x^3}\right)$ to get the value of $\left(x^5+\displaystyle\frac{1}{x^5}\right)$, $\left(x^2+\displaystyle\frac{1}{x^2}\right)\left(x^3+\displaystyle\frac{1}{x^3}\right)$ $=\left(x^5+\displaystyle\frac{1}{x^5}\right)+\left(x+\displaystyle\frac{1}{x}\right)$. $\left(x^5+\displaystyle\frac{1}{x^5}\right)=16\times{45\sqrt{2}}-3\sqrt{2}=717\sqrt{2}$. Answer: Option a: $717\sqrt{2}$. Key concepts used: Key pattern identification of similarity of target expression with product of sum of inverses of squares with sum of inverses of cubes -- Two factor expansion of sum of cubes -- Principle of interaction of inverses -- Solving in mind. With clear concepts, quick decision making based on key pattern identification and reasonably accurate mental math skill, it should easily be possible to solve this problem in mind. Solution 10: Quick solution by identifying simple numerator and denominator of two LHS terms combined Assume dummy variable $p$ for $\sqrt{x^2-1}$, $p=\sqrt{x^2-1}$. This dummy variable substitution is not necessary but it makes mental visualization of the solution steps comfortably easy. By this substitution the given equation is simplified to, $\displaystyle\frac{x+p}{x-p}+\displaystyle\frac{x-p}{x+p}=62$, Or, $\displaystyle\frac{(x+p)^2+(x-p)^2}{x^2-p^2}=62$, Or, $4x^2-2=62$, as denominator $x^2-p^2=1$, Or, $x^2=16$, Or, $x=-4$, as by given condition $x$ must be negative. Answer: Option b: $-4$. Key concepts used: Dummy variable substitution -- Simplified result of sum of $(x+p)^2$ and $(x-p)^2$ as $2(x^2+p^2)$ -- Solving in mind. Observe that, each of the problems could be quickly and cleanly solved in minimum number of mental steps using special key patterns and methods in each case. 
This is the hallmark of quick problem solving:

Concept based pattern and method formation, and,
Identification of the key pattern and use of the method associated with it.

Every special pattern has its own method, and not many such patterns are there. What matters is concept based pattern identification and use of the quick problem solving method.

Guided help on Algebra in Suresolv

To get the best results out of the extensive range of articles of tutorials, questions and solutions on Algebra in Suresolv, follow the guide, Suresolv Algebra Reading and Practice Guide for SSC CHSL, SSC CGL, SSC CGL Tier II and Other Competitive exams. The guide list of articles includes ALL articles on Algebra in Suresolv and is up-to-date.

Copyright: Atanu Chaudhuri and respective Authors
CommonCrawl
\begin{document}
\title{Escape to \mizar{}}
\begin{abstract}
We announce a tool for mapping \eprover{} derivations to \mizar{} proofs. Our mapping complements earlier work that generates problems for automated theorem provers from \mizar{} inference checking problems. We describe the tool, explain the mapping, and show how we solved some of the difficulties that arise in mapping proofs between different logical formalisms, even when they are based on the same notion of logical consequence, as \mizar{} and \eprover{} are (namely, first-order classical logic with identity).
\end{abstract}
\section{Introduction}\label{sec:intro}
The problem of generating a mapping between proofs in different formats is an important research problem. Proofs coming from many sources can be found today. There are about as many implemented proof formats as there are different systems for interactive and automated theorem proving, not to mention the ``pure'' proof formats coming from mathematical logic. Even within the latter we find a plethora of possibilities. If we pick a Hilbert-style system, there is a choice about which axioms and rules of inference to pick. Even natural deduction comes in a number of shapes: Jáskowski, Gentzen, Fitch, Suppes\dots~\cite{pelletier1999brief}. It seems likely that as the use of proof systems grows we will need better tools for mapping between different formats; this need has been recognized for decades~\cite{wos1990problem,andrews1991more}, and it still seems we have some way to go.
This paper discusses the problem of transforming derivations output by the \eprover{} automated theorem prover into \mizar{} texts. \mizar{} is a language for writing mathematical texts in a ``natural'' style. It features a kind of natural deduction proof language. The library of knowledge formalized in \mizar{}, the \mizar{} Mathematical Library (MML), is quite advanced, going from the axioms of set theory to graduate-level pure mathematics. For the purposes of this paper we are not interested in the MML. Instead, we view \mizar{} as a language and a suite of tools for carrying out arbitrary reasoning in first-order classical logic. Our work is available at
\begin{quotation}
\url{https://github.com/jessealama/tptp4mizar}
\end{quotation}
Related work is discussed in Section~\ref{sec:related-work}. Section~\ref{sec:translating-tptp} discusses an important preliminary exercise to mapping derivations, one which is perhaps already of interest: mapping an arbitrary TPTP problem (not necessarily a derivation) into a corresponding \mizar{} article. The generated \mizar{} text has the same flat structure as the initial TPTP problem from which it comes. Section~\ref{sec:translating-derivations} is the heart of the paper; it discusses in detail the translation from \eprover{} derivations to \mizar{} proofs. Because of the fine-grained level of detail offered by \eprover{} and the simple multi-premise ``obvious inference'' rule of \mizar{}, the mapping is more or less straightforward, save for \emph{skolemization} and \emph{resolution}, neither of which has a direct analogue in ``human friendly'' \mizar{} texts. Skolemization is discussed in Section~\ref{sec:skolemization} and our treatment of resolution is discussed in Section~\ref{sec:resolution}. The problem of making the generated \mizar{} texts more humanly comprehensible is discussed in Section~\ref{sec:compressing}. Section~\ref{sec:conclusion} concludes and proposes applications and further opportunities for development.
Appendix~A is a complete example of a text (a solution to the Dreadbury Mansion puzzle found by \eprover{}, translated to \mizar) produced by our translation. \section{Related work}\label{sec:related-work} In recent years there is an interest in adding automation to interactive theorem proving systems. An important challenge is to make sense, at the level of the interactive theorem prover, of the solution produced by external automated reasoning tools. Such \emph{proof reconstruction} has been done for \isabelle/HOL~\cite{paulson2007reconstruction}. There, the problem of finding an \isabelle{}/HOL text suitable for solving an inference problem $P$ is done as follows: \begin{enumerate} \item Translate $P$ to a first-order theorem proving problem $P^{*}$. \item Solve $P^{*}$ using an automated theorem prover, yielding solution $S^{*}$. \item Translate $S^{*}$ into a \isabelle/HOL text, yielding a solution $S$ of the original problem. \end{enumerate} The work described in this paper could be used to provide a similar service for \mizar. It is interesting to note that in the case of \mizar{} the semantics of the source logic and the logic of the external theorem prover are the same: first-order classical logic with identity. In the \isabelle/HOL case, at step (1) there is a potential loss of information because of a mismatch of \isabelle/HOL's logic and the logic of the ATPs used to solve problems (which may not in any case matter at step~(3)). In the \mizar{} context, two-thirds (steps (1) and (2)) of the problem has been solved~\cite{rudnicki2011escape}; our work was motivated by that paper. Steps toward (3) have been taken in the form of Urban's \otttomiz{}\footnote{See its homepage \url{https://github.com/JUrban/ott2miz} and its announcement \url{http://mizar.uwb.edu.pl/forum/archive/0306/msg00000.html} on the \mizar{} users mailing list.}. In fact, more than 2/3 of the problem is solved. Our work here builds on \otttomiz{} by accounting for the clause normal form transformation, rather than starting with the clause normal form of a problem. Our translated proofs thus start with (the \mizar{} form of) the relevant initial formulas, which arguably improves the readability of the proofs. Moreover, our tool works with arbitrary TPTP problems and TSTP derivations (produced by \eprover), rather than with \otter{} proof objects. The restriction to \eprover{} is not essential; there is no inherent obstacle to extending our work to handle TSTP derivations produced by other automated theorem provers, provided that these derivations are sufficiently detailed, like \eprover{}'s. One must acknowledge, of course, that providing high-quality, fine-grained proof objects is a challenging practical problem for automated theorem provers. To account for the clausal normal form transformation, one needs to deal with skolemization. This is a well-known issue in discussions surrounding proof objects for automated theorem provers~\cite{denivelle2002extraction}. Interestingly, our method for handling skolemization is quite analogous to the handling of quantifiers in the problem opposite ours, namely, converting \mizar{} proofs to TSTP derivations~\cite{urban2008atp} in the setting of MPTP (\mizar{} Problems for Theorem Provers)~\cite{urban2006mptp}. There, Henkin-type implications are a natural solution to the problem of justifying a substitution instance of a formula given that its generalization is justified. 
Our justification of skolemization steps is virtually the same as this; see Section~\ref{sec:skolemization} for details.
An export and cross-verification of \mizar{} proofs by ATPs has been carried out~\cite{urban2008atp}. Such work is an inverse of ours because it goes from \mizar{} proofs to ATP problems.
We do not intend to enter into a discussion about the proof identity problem. For a discussion, see Do{\v{s}}en~\cite{dosen-proof-identity}. Certainly the intention behind the mapping is to preserve whatever abstract proof is expressed by the \eprover{} derivation. That the \eprover{} derivation and the \mizar{} text generated from it are isomorphic will be clarified by the discussion below of the translation algorithm. A mapping such as the one discussed here can contribute to a concrete investigation of the proof identity problem, which in fact motivates the project reported here. The reader need not share the author's interest in the proof identity problem to understand what follows.
It is well-known that derivations carried out in clause-based calculi (such as resolution and kindred methods) tend to be difficult to understand, if not downright inscrutable. An important problem for the automated reasoning community for many years has been to find methods whereby we can understand machine-discovered proofs, such as resolution refutations. One approach to this problem is to map resolution derivations into natural deduction proofs. Much work has been done in this direction~\cite{miller1984expansion,miller1987compact,felty1987proof,egly1997structuring,lingenfelder1989structuring,meier2000tramp}. The transformations we employ are rather simple. Because of the coarseness of \mizar{}'s proof apparatus (there is essentially only one rule of inference that subsumes most of the traditional introduction and elimination rules of natural deduction), we need not be concerned with a translation that preserves the fine structure of an \eprover{} derivation. To ``clean up'' the generated text, we take advantage of the various proof ``enhancers'' bundled with the standard \mizar{} distribution~\cite[\S 4.6]{grabowski2010mizar}. These enhancers suggest compressions of a \mizar{} text that make it more parsimonious while preserving its semantics. In the end, though, it would seem that the judgment of whether an ``enhanced'' \mizar{} text is the best representative of a resolution proof is something that has to be left to the reader.
\section{Translating TPTP problems into \mizar{} texts}\label{sec:translating-tptp}
In this section we describe a method for generating a \mizar{} text from an arbitrary (first-order) TPTP problem~\cite{sutcliffe2009tptp}. TPTP problems are not themselves derivations, so this mapping is not the heart of our work. However, it was an important first step to mapping derivations to \mizar{} proofs because it revealed some difficulties that had to be solved in the formula-translation part of the mapping of derivations to \mizar{} proofs. The next section is devoted to the proof mapping problem.
TPTP is a language for specifying automated reasoning problems. One states some axioms and definitions, and perhaps a conjecture. Although TPTP has in recent years been extended to support various extensions of the language of first-order logic, we are interested in this paper only in the first-order part of TPTP.
To construct a \mizar{} text from a TPTP problem, one first identifies the function and predicate symbols of the TPTP problem and creates an \emph{environment} for the text.
This step is necessary because \mizar{} is a richer language than TPTP. Given a well-formed TPTP file, one can simply determine, for each symbol appearing in it, whether it is a function or a predicate, and what its arity is. Since (at the time of writing) TPTP focuses only on the case of one-sorted first-order logic, there is no issue about the sorts of the arguments and values. The language of \mizar{}, on the other hand, permits overloading of various kinds and has (dependent) types. There is no question of inferring from a purported \mizar{} text what the predicate and function symbols are. To handle this complexity, when working with \mizar{} one specifies in advance its so-called environment. The environment provides the necessary information to make sense of the text. Constructing an environment for a \mizar{} text amounts to creating a handful of XML files.
Normally, one does not develop \mizar{} texts from scratch but rather builds on some preexisting formalizations. Since we are not interested in using the \mizar{} library, we cannot use the usual toolchain. Instead, we create a fresh environment with respect to which the generated \mizar{} text is sensible. This environment gives a meaning to the TPTP problem even if the TPTP ``problem'' is actually a derivation. Constructing \mizar{} proofs from \eprover{} derivations (expressed in the TSTP notation) is the subject of the next section.
\section{Translating \eprover{} derivations into \mizar{} texts}\label{sec:translating-derivations}
This section discusses the main part of our contribution: mapping \eprover{} derivations to \mizar{} texts. The input to our procedure is an \eprover{} derivation in TSTP format~\cite{sutcliffe2006using} (the standard \eprover{} distribution comes with a tool, \epclextract, which can translate derivations expressed in \eprover's custom proof language into proofs in the desired format). The \mizar{} proof is isomorphic to the \eprover{} derivation in the sense that the premises $P_{\text{\eprover}}$ of the \eprover{} derivation map to a set $P_{\text{\mizar}}$ of the same cardinality and the same logical form, and the conclusion $c_{\text{\eprover}}$ of the \eprover{} derivation maps directly to the sole theorem $c_{\text{\mizar}}$ of the \mizar{} text. The logical content of the two proofs is the same because \eprover{} and \mizar{} are both based on first-order classical logic. Because \eprover{}'s calculus is based essentially on clauses while \mizar{} works with formulas, some hurdles need to be overcome when mapping (i)~the part of an \eprover{} derivation dealing with converting the input problem to clause normal form, and (ii)~applications of the rule of resolution. We describe the mapping and our solution to these difficulties.
As one might expect, the mapping of an \eprover{} derivation, which operates essentially on clauses, to a \mizar{} text is not a simple one-to-one mapping of formulas (more precisely, clauses) to formulas. \eprover{}'s calculus can to a large extent be recognized by \mizar{} in the sense that most steps in an \eprover{} derivation do map directly to (single) steps in the generated \mizar{} text. Two classes of inferences, though, raise some problems: skolemization and resolution, which are the heart of a resolution calculus such as the one behind \eprover{}. It seems to be a hard AI problem to transform arbitrary resolution proofs into human-comprehensible natural deductions. There often seems to be an artificial ``flavor'' of such proofs that no spice can overcome.
Still, some simple organizational principles can help to make the proof more manageable. (Later, in Section~\ref{sec:compressing}, we will see some stronger syntactic and semantic methods, going beyond the simple structural guidelines we are about to discuss, for ``enhancing'' the generated proofs even further.) Section~\ref{sec:overall} discusses the overall organization of the generated proof. In Section~\ref{sec:skolemization} we discuss the skolemization problem. In Section~\ref{sec:resolution} we discuss the problem of resolution.
\subsection{Global and local organization of the proof}
\label{sec:overall}
The first batch of transformations does not compress the derivations in any way: every step in the TSTP derivation appears in the \mizar{} output. However, the refutation is ``groomed'' in the following ways:
\begin{enumerate}
\item Linearly order the formulas. Unlike TPTP/TSTP problems, where the order of formulas is immaterial, the order of formulas in \mizar{} has to be coherent. We topologically sort the input, ordered in the obvious way (if conclusion $A$ uses formula $B$ as a premise, then $B$ should appear earlier than $A$), and work with a linear order.
\item Because one can ``reserve'' variables globally in \mizar{}, one can strip away the initial universal prefix of clauses-as-formulas. This transformation not only makes the formulas appearing in the proof shorter and hence more readable, it helps to keep \mizar{}'s \verb+by+ rule of inference aligned with the various clause-oriented rules of inference in \eprover{}'s calculus (clauses don't have quantifiers).
\item Separate reasoning done among the axioms (establishing lemmas) from the application of lemmas toward the derivation of $\bottom$. In other words, we distinguish conclusions that depend on the conjecture from conclusions that are independent of it.
\item Separate those lemmas that are used in the refutation proper from those that are not used. (I.e., distinguish lemmas that are used in the refutation proper from the lemmas that are used only to prove other lemmas.)
\end{enumerate}
Step~(1) is strongly necessary because if a conclusion is drawn in a \mizar{} text from a premise that has not yet been introduced, this is a fatal error. Step~(2) is needed for a deeper reason: if we were to deal always with explicit universal closures of formulas, we would quickly start to outstrip the notion of obvious inference on which \mizar{} is based. Steps~(3) and~(4) are not necessary; there is nothing wrong with disregarding those organizational principles. However, there is a cost: abandoning them results in an undifferentiated, disorganized melange of inferences, a mere ``print out'' in \mizar{} form of the \eprover{} derivation.
A refutation starts with some axioms and a conjecture, and proceeds by negating the conjecture formula and deriving $\bottom$ by reasoning with the axioms and the negation of the conjecture. \mizar{} texts in the \mizar{} Mathematical Library, on the other hand, if read at their toplevel, are intended to be consistent: given some axioms and lemmas, one states theorems. The \emph{proofs} of these lemmas and theorems may use proof by contradiction, but that is done inside a proof block, outside of which any contradictory assumptions and conclusions derived therein are no longer ``accessible''.
However, a TSTP representation of a refutation is a flat sequence of formulas ending with a contradiction: the axioms, the conjecture, the negation of the conjecture, and conclusions drawn among the axioms and the negation of the conjecture, all at the same level. To capture the spirit of proof by contradiction while ensuring that the toplevel content of the generated \mizar{} article is coherent (or at least not manifestly incoherent), we refactor \eprover{} refutations into so-called diffuse reasoning blocks. We write:
\begin{lstlisting}[language=Mizar]
theorem @$\phi$@
proof
  now
    assume @$\neg \phi$@;
    S1: @$\langle \text{conclusion 1} \rangle$@ by @\dots@;
    S2: @$\langle \text{conclusion 2} \rangle$@ by @\dots@;
    @\dots@
    S@$n$@: @$\langle \text{conclusion $n$} \rangle$@ by @\dots@;
    thus contradiction by @$\mathtt{S}_{a_{1}}$@, @$\mathtt{S}_{a_{2}}$@, @\dots@, @$\mathtt{S}_{a_{m}}$@
  end;
  hence thesis;
end;
\end{lstlisting}
This concludes the discussion of the organization of the generated \mizar{} proof.
\subsection{Skolemization}\label{sec:skolemization}
\eprover{}'s finely detailed proof output contains not simply the derivation of $\bottom$ starting from the clause form of the input formulas. \eprover{} can also record the transformation of the input formulas into clause form. It is important to preserve these inferences because they give information about what was actually given to \eprover{}; throwing away this information strikes us as unwelcome because one would have to work harder to make sense of the overall proof.
If we insist on preserving skolemization steps in the \mizar{} output, then we have a difficulty in accounting for them. Carrying out this task is a well-known issue in generating proof objects~\cite{denivelle2002extraction,denivelle2005translation}. The difficulty is that skolem functions are curious creatures in an interactive setting like \mizar{}'s. Introducing a function into a \mizar{} text requires that the user prove existence and uniqueness of its definiens. But what is the definiens of a skolem function?
We solve the problem by introducing, as part of the environment of an article (and not in the generated text), a ``definition'' for skolem functions in the following manner. To take a simple example, suppose we have proved $\forall x \exists y \phi$ and we have that $\forall x \phi [y \coloneqq f(x)]$ is ``derived'' from this, in the sense that it is the conclusion of a skolemization step. We covertly introduce at this point a new definition:
\[ (\forall x \exists y \phi) \rightarrow \forall x \phi [y \coloneqq f(x)] \]
This formula does not have the usual shape of an explicit definition of a function. One wonders how one would prove existence and uniqueness for this definiens. We do not address these problems; in effect, the above implication is treated as a new axiom. Our approach seems defensible to us. After all, \eprover{} does not give a proof that introducing the skolem function is acceptable, so there is no step in the \eprover{} derivation that would contain the needed information.
Giving a proof in \mizar{} that would justify skolemization steps is in fact possible. One introduces a new type $\tau_{f}$ inhabited by definition by those objects that satisfy the sentence $\forall x \exists y \phi$, proves that the type is inhabited by exploiting the fact that the domain of interpretation of any first-order structure is non-empty, and finally defines $f$ outright using \mizar{}'s built-in Hilbert choice operator.
Initial experiments with this approach to skolemization led us to turn off this feature by default because it introduces ``noise'' into the \mizar{} proof. We know that skolemization is a valid transformation, so it seems excessive to us to include an explicit justification of every skolemization step.
There is one limitation of the current approach to skolemization: we require that all skolemization steps introduce exactly one skolem function.
\subsection{Resolution}\label{sec:resolution}
Targeting \mizar{} is sensible because of the presence of a single rule of inference, called \verb+by+, which takes a variable number of premises. The intended meaning of an application
\begin{prooftree}
\AxiomC{$\phi_{1}$, \dots, $\phi_{n}$}
\RightLabel{$\mathtt{by}$}
\UnaryInfC{$\phi$}
\end{prooftree}
of \verb+by+ is that $\phi$ is an ``obvious'' inference from premises $\phi_{1}$, \dots, $\phi_{n}$. See Davis~\cite{davis1981obvious} and Rudnicki~\cite{rudnicki1987obvious} for more information about the tradition of ``obvious inference'' in which \mizar{} works. The implementation in \mizar{} diverges somewhat from these proposals, but roughly speaking a conclusion is obtained by an ``obvious inference'' from some premises if there is a Herbrand proof of the conclusion in which we have chosen at most one substitution instance of each premise.
One important difficulty for mapping arbitrary resolution proofs to \mizar{} texts is that \mizar{}'s notion of ``obvious inference'' overlaps with various forms of resolution, but is neither weaker nor stronger than resolution. The consequence of this is that it is generally not the case that an application of resolution can be mapped to a single acceptable application of \mizar{}'s \verb+by+ rule. Consider the following example:
\begin{example}[Non-obvious resolution inference]
Consider the inference
\begin{prooftree}
\AxiomC{$\neg l(x) \vee d(x)$}
\AxiomC{$\neg l(x) \vee \neg d(x) \vee \neg d(y)$}
\RightLabel{Resolution}
\BinaryInfC{$\neg l(x) \vee \neg d(y)$}
\end{prooftree}
Here $l$ and $d$ are unary predicate symbols and $x$ and $y$ are variables; all formulas should be read as implicitly universally quantified. This application of resolution simply eliminates $d(x)$ from the premises. If we map the two premises and the conclusion of the application of resolution to three \mizar{} theorems and attempt to justify the mapped conclusion simply by appealing by name to the two mapped premises, then we are asking to check an application of \verb+by+, as follows:
\begin{prooftree}
\AxiomC{$\forall x \left [ \neg l(x) \vee d(x) \right ]$}
\AxiomC{$\forall x, y \left [ \neg l(x) \vee \neg d(x) \vee \neg d(y) \right ]$}
\RightLabel{\texttt{by}}
\BinaryInfC{$\forall x,y \left [ \neg l(x) \vee \neg d(y) \right ]$}
\end{prooftree}
The problem here is that we cannot choose a single substitution instance of the premises such that we can find a Herbrand derivation, and hence the inference is non-obvious even though it is essentially (i.e., at the clause level) a single application of propositional resolution. The reason for the difficulty is that we are making things difficult for ourselves by working at the level of formulas rather than clauses.
A solution is available: map the application of resolution not to a single application of \mizar{}'s \verb+by+ rule, but to a proof: \begin{lstlisting}[language=Mizar] ((not l x) or (not d y)) proof A: (not l x) or (not d x) by Premise1; B: (not l x) or (not d x) or (not d y) by Premise2; thus thesis by A,B; end; \end{lstlisting} There is an application of \mizar's \verb+by+ rule at the end, whose conclusion is \verb+thesis+, i.e., the formula to be proved at that point in the proof. We solve the problem by reasoning with substitution instances of the premises, obtained by taking instances of the premises (these are \verb+A+ and \verb+B+, respectively) rather than with whole universal formulas. Note that the substitution instances are not built from constants and function symbols, but from (fixed) variables. \end{example} \subsection{Compressing \mizar{} proofs}\label{sec:compressing} The ``epicycles'' of resolution notwithstanding, \mizar{} is able to compress many of \eprover{}'s proof steps: many steps can be combined into a single acceptable application of \mizar{}'s \verb+by+ rule of inference. For example, if $\phi$ is inferred from $\phi^{\prime}$ from variable renaming, and $\phi^{\prime}$ is inferred by an application of conjunction elimination to $\phi^{\prime\prime}$, typically in the \mizar{} setting $\phi$ can be inferred from $\phi^{\prime\prime}$ alone by a single application of \verb+by+. This is typical for most of the fine-grained rules of \eprover{}'s calculus: their applications are acceptable according to \mizar{}'s \verb+by+, and often they can be composed (sometimes multiple times) while still being acceptable to \verb+by+. Other rules in \eprover's proof calculus that can often be eliminated are variable rewritings, putting formulas into negation normal form, reordering of literals in clauses (but recall that \mizar{} proofs are written at the level of full first-order logic, not in a clause language). More interesting compressions exploit the gap between ``obvious inference'' and \eprover{}'s more articulated calculus. Compressing proofs helps us to get a sense of what the proof is about. The \mizar{} notion of obvious inference has been tested through daily work with substantial mathematical proofs for decades, and thus enjoys a time-tested robustness (though it is not always uncontroversial). It seems to be an open problem to specify what we mean by the ``true'' or ``best'' view of a proof. When \mizar{} texts come from \eprover{} proofs, \mizar{} finds that the steps are usually excessively detailed (i.e., most steps are obvious) and can be compressed. On the other hand, often the whole proof cannot be compressed into a single application of \verb+by+. We employ the algorithm discussed in~\cite{rudnicki2011escape}: a simple fixed-point algorithm is used to maximally compress a \mizar{} text. Thus, by repeatedly attempting to compress the proof until we reach the limits of \verb+by+, we obtain a more parsimonious presentation of the proof. Proof compression is not without its pitfalls; if one compresses \mizar{} proofs too much, the \mizar{} text can become as ``inhuman'' as the resolution proof from which it comes. This is a well-known phenomenon in the \mizar{} community. Applying the proof compression tools seems to require a human's \emph{bon sens}. Experience with texts generated by our translation shows that often considerable compression is possible, but at the cost of introducing a new artificial ``scent'' into the \mizar{} text. 
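The fixed-point idea just described can be rendered schematically. The following sketch (in Python) is only an illustration, not the actual implementation; the predicate \verb+is_obvious+ stands in for a call to the \mizar{} checker, and the representation of a proof as a map from step labels to cited premise labels is a simplification.
\begin{lstlisting}[language=Python]
# Schematic sketch of the fixed-point compression: repeatedly try to justify
# each step directly from the premises of its premises, until no application
# of "by" can be enlarged any further.  Dropping steps that become unused is
# omitted here.
def compress(proof, is_obvious):
    # proof: dict mapping a step label to the set of labels it cites
    changed = True
    while changed:                   # iterate until a fixed point is reached
        changed = False
        for step in list(proof):
            premises = set(proof[step])
            for p in list(premises):
                if p not in proof:   # p is an input formula: keep it
                    continue
                candidate = (premises - {p}) | set(proof[p])
                if is_obvious(step, candidate):   # still a single "by" step?
                    premises = candidate
                    changed = True
            proof[step] = premises
    return proof

# Toy example with an oracle that accepts up to three premises per step:
toy = {"S1": {"Ax1"}, "S2": {"S1", "Ax2"}, "Goal": {"S2"}}
print(compress(toy, lambda step, prems: len(prems) <= 3))
\end{lstlisting}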
\section{Conclusion and future work}\label{sec:conclusion} One naturally wants to extend the work here to work with output of other theorem provers, such as \vampire{}. There is no inherent difficulty in that, though it appears that the TSTP derivations output by \vampire{} contain different information compared to \eprover{} proofs; the generic transformations described in Section~\ref{sec:overall} would carry over, but the mapping of skolemization and resolution steps of Sections~\ref{sec:skolemization} and~\ref{sec:resolution} will likely need to be customized for \vampire. The TPTP language recognizes definitions, but whether an automated theorem prover treats them differently from an axiom is unspecified. In \mizar{}, definitions play a vital role. After all, \mizar{} is designed to be a language for developing mathematical theories; only secondarily is it a language for representing solutions to arbitrary reasoning problems, as we are using it in this paper. One could try to detect definitions either by scanning the problem looking for formulas that have the form of definitions, or, if the original TPTP problem is available, one can extract the formulas whose TPTP status is \verb+definition+. Such definition detection and synthesis has no semantic effect, but could make the generated \mizar{} texts more manageable and perhaps even facilitate new compressions. At the moment the tool simply translates \eprover{} derivations to \mizar{} proofs. A web-based frontend to the translator could help to spur increased usage (and testing) of our system. One can even imagine our tool as part of the \systemontptp{} suite~\cite{sutcliffe2009tptp}. An important incompleteness of the current solution is the treatment of equality. Some atomic equational reasoning steps (specifically, inferences involving non-ground equality literals) in \eprover{} derivations can be non-\mizar{}-obvious. One possible solution is to use \provernine's \ivy{} proof objects. \ivy{} derivations provide some information (namely, which instances of which variables in non-ground literals) that (at present) is missing from \eprover{}'s proof object output. For the sake of clarity in the mapping of skolemization steps in \eprover{} derivation to \mizar{} steps, we restricted attention to those \eprover{} derivations in which each skolemization step introduces exactly one new skolem function. The restriction does not reflect a weakness of \mizar{}; it is a merely technical limitation and we intend to remove it. We have thus completed the cycle started in~\cite{rudnicki2011escape} and returned from ATPs to \mizar{}. We leave it to the reader to decide whether he wishes to escape again. 
\appendix
\section{Pelletier's Dreadbury Mansion Puzzle: From \eprover{} to \mizar{}}
\begin{lstlisting}[language=Mizar,basicstyle=\ttfamily\scriptsize]
Ax1: ex X1 st (lives X1 & killed X1,agatha) by AXIOMS:1;
Ax2: lives X1 implies (X1 = agatha or X1 = butler or X1 = charles) by AXIOMS:2;
Ax3: killed X1,X2 implies hates X1,X2 by AXIOMS:3;
Ax4: killed X1,X2 implies (not richer X1,X2) by AXIOMS:4;
Ax5: hates agatha,X1 implies (not hates charles,X1) by AXIOMS:5;
Ax6: (not X1 = butler) implies hates agatha,X1 by AXIOMS:6;
Ax7: (not richer X1,agatha) implies hates butler,X1 by AXIOMS:7;
Ax8: hates agatha,X1 implies hates butler,X1 by AXIOMS:8;
Ax9: ex X2 st (not hates X1,X2) by AXIOMS:9;
Ax10: not agatha = butler by AXIOMS:10;
S1: killed skolem1,agatha by Ax1,SKOLEM:def 1;
S2: agatha = skolem1 or butler = skolem1 or charles = skolem1 by Ax2,Ax1,SKOLEM:def 1;
S3: not hates agatha,(skolem2 butler) by Ax9,SKOLEM:def 2,Ax8;
S4: hates charles,agatha or skolem1 = butler or skolem1 = agatha by Ax3,Ax1,SKOLEM:def 1,S2;
S5: butler = (skolem2 butler) by S3,Ax6;
S6: not hates butler,butler by Ax9,SKOLEM:def 2,S5;
S7: hates butler,butler or skolem1 = agatha by Ax4,Ax7,Ax1,SKOLEM:def 1,Ax5,S4,Ax6,Ax10;
S8: skolem1 = agatha by S7,S6;
theorem killed agatha,agatha
proof
  now
    assume S9: not killed agatha,agatha;
    thus contradiction by S1,S8,S9;
  end;
  hence thesis;
end;
\end{lstlisting}
Pelletier's Dreadbury Mansion~\cite{Pel86-JAR} goes as follows:
\begin{quotation}
{\small Someone who lives in Dreadbury Mansion killed Aunt Agatha. Agatha, the butler, and Charles live in Dreadbury Mansion, and are the only people who live therein. A killer always hates his victim, and is never richer than his victim. Charles hates no one that Aunt Agatha hates. Agatha hates everyone except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Aunt Agatha hates. No one hates everyone. Agatha is not the butler.}
\end{quotation}
The problem is: Who killed Aunt Agatha? (Answer: she killed herself.) The problem belongs to the TPTP Problem Library (it is known there as \tptpproblemlink{PUZ001+1}) and can easily be solved by many automated theorem provers. Above is the result of mapping \eprover's solution to a standalone \mizar{} text and then compressing it as described in Section~\ref{sec:compressing}. Two skolem functions \verb+skolem1+ (arity~0) and \verb+skolem2+ (arity~2) are introduced. There are 10 axioms and 8 steps that do not depend on the negation of the conjecture (\verb+killed agatha,agatha+). This problem is solved essentially by forward reasoning from the axioms; proof by contradiction is unnecessary, but that is the nature of \eprover's solution.
\end{document}
arXiv
The generalized ratios intrinsic dimension estimator

Francesco Denti, Diego Doimo, Alessandro Laio & Antonietta Mira

Scientific Reports volume 12, Article number: 20005 (2022)

Modern datasets are characterized by numerous features related by complex dependency structures. To deal with these data, dimensionality reduction techniques are essential. Many of these techniques rely on the concept of intrinsic dimension (id), a measure of the complexity of the dataset. However, the estimation of this quantity is not trivial: often, the id depends rather dramatically on the scale of the distances among data points. At short distances, the id can be grossly overestimated due to the presence of noise, becoming smaller and approximately scale-independent only at large distances. An immediate approach to examining the scale dependence consists in decimating the dataset, which unavoidably induces non-negligible statistical errors at large scale. This article introduces a novel statistical method, Gride, that allows estimating the id as an explicit function of the scale without performing any decimation. Our approach is based on rigorous distributional results that enable the quantification of uncertainty of the estimates. Moreover, our method is simple and computationally efficient since it relies only on the distances among data points. Through simulation studies, we show that Gride is asymptotically unbiased, provides comparable estimates to other state-of-the-art methods, and is more robust to short-scale noise than other likelihood-based approaches.

In recent years, we have witnessed an unimaginable growth in data production. From personalized medicine to finance, datasets characterized by a large number of features are ubiquitous in modern data analyses. The availability of these high-dimensional datasets poses novel and engaging challenges for the statistical community, called to devise new techniques to extract meaningful information from the data in a limited amount of computational time and memory. Fortunately, data contained in high-dimensional embeddings can often be described by a handful of variables: a subset of the original ones or a combination—not necessarily linear—thereof. In other words, one can effectively map the features of a dataset onto spaces of much lower dimension, typically nonlinear manifolds1. Estimating the dimensionality of these manifolds is of paramount importance. We will call this quantity the intrinsic dimension (id from now on) of a dataset, i.e., the number of relevant coordinates needed to describe the data-generating process accurately2.
Many definitions of id have been proposed in the literature since this concept has been investigated in a wide range of disciplines ranging from mathematics, physics, and engineering to computer science and statistics. For example, Fukunaga3 expressed the id as the minimum number of parameters needed to describe the essential characteristics of a system accurately. For4, the id is the dimension of the subspace in which the data are entirely located, without information loss. An alternative definition, which exploits the language of pattern recognition, is provided by5. In this framework, a set of points is viewed as a uniform sample obtained from a distribution over an unknown smooth (or locally smooth) manifold structure (its support), eventually embedded in a higher-dimensional space through a nonlinear smooth mapping. Thus, the id represents the topological dimension of the manifold. All these definitions are useful for delineating different aspects of the multi-faceted concept that is the id. The literature on statistical methods for dimensionality reduction and id estimation is extraordinarily vast and heterogeneous. We refer to5,6 for comprehensive reviews of state-of-the-art methods, where the strengths and weaknesses of numerous methodologies are outlined and compared. Generally, methods for the estimation of the \(\texttt {id }\) can be divided into two main families: projective methods and geometric methods. On the one hand, projective methods estimate the low-dimensional embedding of interest through transformations of the data, which can be linear, such as Principal Component Analysis (PCA)7 and its Probabilistic8, Bayesian9, and Sparse10 extensions; or nonlinear, as Local Linear Embedding11, Isomap12, and others13,14. See also15 and the references therein. On the other hand, geometric methods rely on the topology of a dataset, exploiting the properties of distances among data points. Within this family, we can distinguish among fractal methods16, graphical methods17,18, and methods based on nearest neighbor distances (e.g., IDEA19) and angles (e.g., DANCo20). We will focus on the latter category, which is directly related to our proposal. Nearest neighbors (NNs) methods rely on the assumption that points close to each other can be considered as uniformly drawn from d-dimensional balls (hyperspheres). More formally, consider a generic data point \(\varvec{x}\) and denote with \({\mathcal {B}}_{d}(\varvec{x}, r)\) a hypersphere, characterized by a small radius \(r \in {\mathbb {R}}^{+}\), centered in \(\varvec{x}\). Let \(\rho (\varvec{x})\) be the density function of the points in \({\mathbb {R}}^{d}\). Intuitively, the proportion of points of a given sample of size n from \(\rho (\varvec{x})\) that falls into \({\mathcal {B}}_{d}(\varvec{x}, r)\) is approximately \(\rho (\varvec{x})\) times the volume of the ball. This intuition gives rise to the following formal relationship: \(\frac{k}{n} \approx \rho (\varvec{x})\, \omega _{d}\, r^{d}\). Here, k is the number of NNs of \(\varvec{x}\) within the hypersphere \({\mathcal {B}}_{d}(\varvec{x}, r)\), while \(\omega _{d}\) is the volume of the d-dimensional unit hypersphere in \({\mathbb {R}}^{d}\). If, in the previous relationship, the density is assumed to be constant, one can estimate the id as a function of the average of the distances among the sample points and their respective k-th NN21. This type of approach gives rise to the question of how to effectively select k, the number of considered NNs.
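As a toy numerical illustration of this relationship (not one of the estimators discussed in this paper), one can compare neighbor counts at two radii: under an approximately constant density, \(k/n \approx \rho \,\omega _d\, r^d\) implies \(d \approx \log (k_{2r}/k_{r})/\log 2\). A minimal Python sketch with illustrative sample size and radius:

```python
# Toy illustration (not one of the estimators studied in this paper): with an
# approximately constant density, k/n ~ rho * omega_d * r^d, so comparing the
# number of neighbors within radius r and radius 2r gives d ~ log(k_2r/k_r)/log(2).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 3))           # points with intrinsic dimension 3
tree = cKDTree(X)
n = X.shape[0]
r = 0.05                                  # illustrative (small) radius
k_r = tree.count_neighbors(tree, r) - n         # subtract the n self-pairs
k_2r = tree.count_neighbors(tree, 2 * r) - n
d_hat = np.log(k_2r / k_r) / np.log(2)
print(d_hat)   # roughly 3 (boundary effects bias it slightly downward)
```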
From a different perspective, various authors adopted model-based frameworks for manifold learning and id estimation. One possible approach is to specify a model for the distribution of the distances among the data points. Amsaleg et al.22, exploiting results from23, suggested modeling the distances as a Generalized Pareto distribution since they showed that a (local) id can be recovered, asymptotically, as a function of its parameters. In a Bayesian framework, Duan and Dunson24 proposed modeling the pairwise distances among data points to coherently estimate a clustering structure. Furthermore, some model-based methods to explore the topology of datasets have recently been developed, pioneered by the likelihood approach discussed in1. Mukhopadhyay et al.25 used Fisher-Gaussian kernels to estimate densities of data embedded in nonlinear subspaces. Li et al.26 proposed to learn the structure of latent manifolds by approximating them with spherelets instead of locally linear approximation, developing a spherical version of PCA. In the same spirit, Li and Dunson27 applied this idea to the classification of data lying on complex, nonlinear, overlapping, and intersecting supports. Similarly, Li and Dunson28 proposed to use the spherical PCA to estimate a geodesic distance matrix, which takes into account the structure of the latent embedding manifolds, and created a spherical version of the k-medoids algorithm29. Alternatively, Gomtsyan et al.30 directly extended the maximum likelihood estimator (MLE) by1 proposing a geometry-aware estimator to correct the negative bias that often plagues MLE approaches in high dimensions. The geometric properties of a dataset are also exploited by the ESS estimator31, which is based on the evaluation of simplex volumes spanned by data points. Finally, Serra and Mandjes32 and Qiu et al.33 estimated the id via random graph models applied to the adjacency matrices among data points, recovered by connecting observations whose distances do not exceed a certain threshold. This paper introduces a likelihood-based approach to derive a novel \(\texttt {id }\) estimator. Our result stems from the geometrical probabilistic properties of the NNs distances. Specifically, we build on the two nearest neighbors (TWO-NN) estimator, recently introduced by2. Similarly to1,34, the TWO-NN is a model-based id estimator derived from the properties of a Poisson point process, whose realizations occur on a manifold of dimension d. Facco et al.2 proved that the ratio of distances between the second and first NNs of a given point is Pareto distributed with unitary scale parameter and shape parameter precisely equal to d. To estimate the id, they suggested fitting a Pareto distribution to a proper transformation of the data. Their result holds under mild assumptions on the data-generating process, which we will discuss in detail. We extend the TWO-NN theoretical framework by deriving closed-form distributions for the product of consecutive ratios of distances and, more importantly, for the ratio of distances among NNs of generic order. These theoretical derivations have relevant practical consequences. By leveraging our distributional results, we attain an estimator that is more robust to the noise present in a dataset, as we will show with various simulation studies. Moreover, the new estimator allows the investigation of the \(\texttt {id }\) evolution as a function of the distances among NNs. Monitoring this evolution is beneficial for two reasons. 
First, it is a way to examine how the \(\texttt {id }\) depends on the size of the neighborhood at hand. Second, as the size of the neighborhood increases, our estimator can reduce the bias induced by potential measurement noise. Finally, the principled derivation of our results enables the immediate specification of methods to perform uncertainty estimation. The article is organized as follows. Section "Likelihood-based TWO-NN estimators" briefly introduces the TWO-NN modeling framework developed by2 and discusses the MLE and Bayesian alternatives. In "Gride, the generalized ratios intrinsic dimension estimator", we contribute to the Poisson point process theory by providing closed-form distributions for functions of distances between a point and its NNs. We exploit these results to devise a new estimator for the \(\texttt {id }\) of a dataset that we name Gride. Section "Results" presents several numerical experiments that illustrate the behavior of Gride. We compare our proposal with other relevant estimators in terms of estimated values, robustness to noise, and computational cost. In "Discussion", we discuss possible future research directions. The interested reader is also referred to the Supplementary Material, where we report the proofs of our theoretical results, along with extended simulation studies.

Likelihood-based TWO-NN estimators

In this section, we briefly introduce the modeling framework that led to the development of the TWO-NN estimator, propose maximum likelihood and Bayesian counterparts, and discuss its shortcomings when applied to noisy datasets. More details about this estimator and its assumptions are deferred to the Supplementary Material. Consider a dataset \(\varvec{X}=\{\varvec{x}_i\}_{i=1}^n\) composed of n observations measured over D distinct features, i.e., \(\varvec{x}_i\in {\mathbb {R}}^D\), for \(i=1,\ldots ,n\). Denote with \(\Delta :{\mathbb {R}}^D\times {\mathbb {R}}^D \rightarrow {\mathbb {R}}^+\) a generic distance function between pairs of elements in \({\mathbb {R}}^D\). We assume that the dataset \(\varvec{X}\) is a particular realization of a Poisson point process characterized by density function (that is, normalized intensity function) \(\rho \left( \varvec{x}\right)\). We also suppose that the density of the considered stochastic process has its support on a manifold of unknown intrinsic dimension \(d\le D\). We expect, generally, that \(d \ll D\). For any fixed point \(\varvec{x}_i\), we sort the remaining \(n-1\) observations according to their distance from \(\varvec{x}_i\) by increasing order. Let us denote with \(\varvec{x}_{(i,l)}\) the l-th NN of \(\varvec{x}_i\) and with \(r_{i,l}=\Delta (\varvec{x}_{i},\varvec{x}_{(i,l)})\) their distance, with \(l=1,\ldots , n-1\). For notation purposes, we define \(\varvec{x}_{i,0}\equiv \varvec{x}_{i}\) and \(r_{i,0}=0\). A crucial quantity in this context is the volume of the hyperspherical shell enclosed between two successive neighbors of \(\varvec{x}_i\), defined as
$$\begin{aligned} v_{i,l}=\omega _{d}\left( r_{i,l}^{d}-r_{i,l-1}^{d}\right) , \quad \quad \text {for }l =1,\ldots ,n-1,\text { and }i=1,\dots ,n, \end{aligned}$$
where d is the dimension of the space in which the points are embedded (the id) and \(\omega _{d}\) is the volume of the d-dimensional sphere with unitary radius. We also assume that the density function \(\rho\) is constant. Under these premises, we have \(v_{i,l}\sim Exp(\rho )\), for \(l =1,\ldots ,n-1,\) and \(i=1,\dots ,n\).
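The quantities introduced above are straightforward to compute from a data matrix. The following minimal sketch (an illustration, not the implementation used by the authors) computes the sorted NN distances \(r_{i,l}\) with the Euclidean distance as \(\Delta\), together with the shell volumes of Eq. (1) for a tentative value of d; the helper names are ours:

```python
# Sketch: nearest-neighbor distances r_{i,l} and hyperspherical shell volumes
# v_{i,l} = omega_d * (r_{i,l}^d - r_{i,l-1}^d), for a tentative dimension d.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def nn_distances(X, L):
    """Return an (n, L) array whose column l-1 holds the distance of x_i to its l-th NN."""
    tree = cKDTree(X)
    dist, _ = tree.query(X, k=L + 1)      # k = L + 1: the closest match is the point itself
    return dist[:, 1:]

def shell_volumes(r, d):
    """Volumes of the shells between consecutive NNs, assuming dimension d."""
    omega_d = np.pi ** (d / 2) / gamma(d / 2 + 1)          # volume of the unit d-ball
    padded = np.hstack([np.zeros((r.shape[0], 1)), r])     # prepend r_{i,0} = 0
    return omega_d * (padded[:, 1:] ** d - padded[:, :-1] ** d)

X = np.random.default_rng(1).uniform(size=(2000, 2))
v = shell_volumes(nn_distances(X, L=10), d=2)   # under the model, each v_{i,l} ~ Exp(rho)
```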
Theorem 2.1 2 Consider a distance function \(\Delta\) taking values in \({\mathbb {R}}^+\) defined among the data points \(\{\varvec{x}_i\}_{i=1}^n\), which are a realization of a Poisson point process with constant density \(\rho\). Let \(r_{i,l}\) be the value of this distance between observation i and its l-th NN. Then, $$\begin{aligned} \mu _i = \dfrac{r_{i,2}}{r_{i,1}} \sim Pareto(1,d), \quad \quad \mu _i \in \left( 1,+\infty \right) . \end{aligned}$$ An alternative proof for this result is reported in the Supplementary Material. We remark that, while the theorem can be proven only if the density \(\rho\) is constant, the result and the \(\texttt {id }\) estimator are empirically valid as long as the density is approximately constant on the scale defined by the distance of the second NN \(r_{i,2}\). We refer to this weakened assumption as local homogeneity. The TWO-NN estimator treats the ratios in \(\varvec{\mu }=\{\mu _i\}_{i=1}^n\) as independent, \(i=1,\ldots ,n\), and estimates the global \(\texttt {id }\) employing a least-squares approach. In detail, Facco et al.2 proposed to consider the cumulative distribution function (c.d.f.) of each ratio \(\mu _i\) given by \(F({\mu _i})= (1-\mu _i^{-d})\), and to linearize it into \(\log (1-F({\mu _i}))=-d\log (\mu _i)\). Then, a linear regression with no intercept is fitted to the pairs \(\{-\log (1-{\tilde{F}}(\mu _{(i)})),\log (\mu _{(i)}) \}_{i=1}^n\), where \({\tilde{F}}(\mu _{(i)})\) denotes the empirical c.d.f. of the sample \(\varvec{\mu }\) sorted by increasing order. To improve the estimation, the authors also suggested discarding the ratios \(\mu _i\)'s that fall above a given high percentile (e.g., 90%), usually generated by observations that fail to comply with the local homogeneity assumption. Since it is based on a simple linear regression, the TWO-NN estimator provides a fast and accurate estimation of the \(\texttt {id }\), even when the sample size is large. Nonetheless, from (2) we can immediately derive the corresponding maximum likelihood estimator (MLE) and the posterior distribution of d within a Bayesian setting. First, let us discuss the MLE and the relative confidence intervals (CI). For the shape parameter of a Pareto distribution, the (unbiased) MLE is given by: $$\begin{aligned} {\hat{d}} = \frac{n-1}{\sum _{i}^n\log (\mu _i)}. \end{aligned}$$ Moreover, \({\hat{d}}/d \sim IG(n,(n-1))\), where IG denotes an Inverse-Gamma distribution. Therefore, the corresponding confidence interval (CI) of level \(1-\alpha\) is given by $$\begin{aligned} CI(d,1-\alpha )=\left[ \frac{{\hat{d}}}{q^{1-\alpha /2}_{IG_{n,(n-1)}}}; \frac{{\hat{d}}}{q^{\alpha /2}_{IG_{n,(n-1)}}}\right] , \end{aligned}$$ where \(q^{\alpha /2}_{IG}\) denotes the quantile of order \(\alpha /2\) of an Inverse-Gamma distribution. Alternatively, to carry out inference under the Bayesian approach we specify a prior distribution on the parameter d. The most convenient prior choice is \(d\sim Gamma(a,b)\) because of its conjugacy property. In this case, it is immediate to derive the posterior distribution of the id: $$\begin{aligned} d|\varvec{\mu } \sim Gamma\left( a+n, b+\sum _{i=1}^n \log (\mu _i)\right) . \end{aligned}$$ Under the Bayesian paradigm, we obtain the credible intervals by taking the relevant quantiles of the posterior distribution. 
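A compact sketch of the estimators in Eqs. (3)–(5), reusing the nn_distances helper defined in the previous sketch (again an illustration rather than the authors' code; the prior hyperparameters a and b are arbitrary):

```python
# Sketch of the TWO-NN estimators of Eqs. (3)-(5): MLE, Inverse-Gamma-based
# confidence interval, and conjugate Gamma posterior (nn_distances defined above).
import numpy as np
from scipy.stats import invgamma, gamma as gamma_dist

def twonn_mle(X, alpha=0.05, a=1.0, b=1.0):
    r = nn_distances(X, L=2)
    log_mu = np.log(r[:, 1] / r[:, 0])           # log(mu_i) with mu_i = r_{i,2} / r_{i,1}
    n, s = len(log_mu), log_mu.sum()
    d_hat = (n - 1) / s                          # Eq. (3)
    # Eq. (4): d_hat / d ~ InverseGamma(shape n, scale n - 1)
    ci = (d_hat / invgamma.ppf(1 - alpha / 2, a=n, scale=n - 1),
          d_hat / invgamma.ppf(alpha / 2, a=n, scale=n - 1))
    # Eq. (5): Gamma(a, b) prior  ->  Gamma(a + n, b + sum_i log(mu_i)) posterior (rate b)
    posterior = gamma_dist(a=a + n, scale=1.0 / (b + s))
    return d_hat, ci, posterior

d_hat, ci, post = twonn_mle(np.random.default_rng(2).normal(size=(5000, 2)))
print(round(d_hat, 2), ci, post.interval(0.95))   # all close to the true id = 2
```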
Moreover, one can immediately derive the posterior predictive distribution
$$\begin{aligned} p({\tilde{\mu }}|\varvec{\mu })= \frac{a^*}{b^*\;{\tilde{\mu }}}\left( 1+\frac{\log ({\tilde{\mu }})}{b^*}\right) ^{-a^*-1}, \text { with } \, {\tilde{\mu }} \in \left( 1,+\infty \right) , \end{aligned}$$
where \(a^*=a+n\) and \(b^*=b+\sum _{i=1}^n \log (\mu _i)\). The posterior predictive distribution is useful to assess the model's goodness of fit. For example, one can compute the discrepancy between synthetic data generated from the distribution in (6) and the dataset at hand to assess the validity of the assumed data-generating mechanism35. From Eq. (6), it can be easily shown that the posterior predictive law for \(\log ({\tilde{\mu }})\) follows a \(Lomax(a^{*},b^{*})\) distribution, for which samplers are readily available. The derivations in (3)–(5) lead to alternative ways to estimate—by point or confidence/credible intervals—the \(\texttt {id }\) within the TWO-NN model, enabling immediate uncertainty quantification, an aspect that was not developed in detail in2.

Figure 1. Three-dimensional Spiral dataset, with \(n=5000\), \({\bar{S}}=1\), \(\sigma ^2_x=\sigma ^2_y=0.5\), and \(\sigma ^2_z=1\). The resulting data points are displayed on the left. On the right, we show how observing the data at different scales can produce different insights regarding the dimensionality of the dataset.

Table 1. MLE point estimates and confidence intervals computed on the Spiral dataset (\(d=1\)) with the TWO-NN and Gride estimators.

The TWO-NN modeling framework presents a potential shortcoming: it does not account for the presence of noise in the data. Measurement errors can significantly impact the estimates since the \(\texttt {id }\) estimators are sensitive to different sizes of the considered neighborhood. As an example, consider a dataset of n observations measured in \({\mathbb {R}}^3\) created as follows. The first two coordinates are obtained from the spiral defined by the parametric equations \(x=u \cos (u+2\pi )\) and \(y= u\sin (u+2\pi )\), where \(u = 2\pi \sqrt{u_0}\) and \(u_0\) is obtained from an evenly-spaced grid of n points over the support \(\left[ \frac{1}{4\pi },{\bar{S}}\right]\). The third coordinate is defined as a function of the previous two, \(z = x^2 + y^2\). Gaussian random noise (with standard deviations \(\sigma _x,\,\sigma _y\), and \(\sigma _z\)) is added to all three coordinates. We simulated a first Spiral dataset setting \(n=5000\), \({\bar{S}}=1\), \(\sigma _x=\sigma _y=0.5\), and \(\sigma _z=1\). A three-dimensional depiction of the resulting dataset is reported in the left part of Fig. 1. The value of the \(\texttt {id }\) estimated with the TWO-NN model is 2.99. However, \(u_0\) is the only free variable since the three coordinates are deterministic functions of \(u_0\). Therefore, only one degree of freedom is used in the data generating process. In other words, the true \(\texttt {id }\) is 1, and the noise at short scale misleads the TWO-NN estimator. For a visual example of how the \(\texttt {id }\) may change with the size of the considered neighborhood, see the right part of Fig. 1. As a strategy to mitigate the local noise effect, Facco et al.2 proposed to subsample the dataset at hand and consider only a fraction c of the points. By doing this, we effectively extend the average size of the neighborhood considered by the estimator.
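For concreteness, the Spiral construction and the decimation strategy just described can be sketched as follows (illustrative seeds and fractions; twonn_mle is the helper from the previous sketch):

```python
# Sketch: the noisy Spiral dataset described above and the decimation strategy
# (TWO-NN MLE applied to a random fraction c of the points; twonn_mle defined above).
import numpy as np

def spiral(n=5000, S_bar=1.0, sd_x=0.5, sd_y=0.5, sd_z=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u0 = np.linspace(1 / (4 * np.pi), S_bar, n)        # evenly spaced grid on [1/(4*pi), S_bar]
    u = 2 * np.pi * np.sqrt(u0)
    x, y = u * np.cos(u + 2 * np.pi), u * np.sin(u + 2 * np.pi)
    z = x ** 2 + y ** 2                                # deterministic function of (x, y)
    noise = rng.normal(0.0, [sd_x, sd_y, sd_z], size=(n, 3))
    return np.column_stack([x, y, z]) + noise

X = spiral()                                           # n = 5000, true id = 1
rng = np.random.default_rng(1)
for c in [1.0, 0.20, 0.01]:                            # decimation fractions
    keep = rng.choice(len(X), size=int(c * len(X)), replace=False)
    d_hat, ci, _ = twonn_mle(X[keep])
    print(c, round(d_hat, 2), ci)                      # the estimate should drop towards 1 as c decreases
```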
Although this decimation strategy helps understand how the TWO-NN is affected by the resolution of the considered neighborhood, it comes at a critical cost in terms of statistical power. As the value of c decreases, this procedure discards the majority of the data points. Moreover, little guidance is available on how to fix an optimal value. Facco et al.2 proposed to monitor the \(\texttt {id }\) evolution as a function of c, looking for a plateau in the estimates. Thus, the best value for c would be the highest proportion such that the \(\texttt {id }\) is close to the plateau values. In the next section, we will introduce Gride, which is based on ratios of NNs distances of order higher than the second. Let us denote the orders of the considered NNs with \(n_1\) and \(n_2\), respectively. This novel estimator can go beyond the local reach of the TWO-NN, effectively reducing the impact of noise on the id estimate. Moreover, by increasing the order of the considered NNs, we can monitor how the id estimate changes as a function of the neighborhood size without discarding any data point. As a preliminary result, we compare the performance of Gride with the decimated TWO-NN on the Spiral dataset. We report in Table 1 the point estimates (obtained via MLE) and confidence intervals, along with the corresponding bias and interval width. The first four columns show the results for the TWO-NN estimator applied to a fraction \(c\in \{1;\,0.20;\,0.01;\,0.001\}\) of the original dataset. The remaining four columns contain the results for Gride with different NN orders: \((n_1,n_2)\in \{(2,4);\,(100,200);\,(250,500);\,(750,1500)\}\). We aim to monitor the evolution of the estimate as a function of the NN orders to assess the model's sensitivity to the noise. On the one hand, the TWO-NN estimator applied to a decimated dataset leads to reasonable point estimates when minimal values of c are considered. However, this comes at the price of greater uncertainty, which is reflected by the wider confidence intervals. Gride, on the other hand, escapes the positive bias induced by the noise for large values of \(n_1\) and \(n_2\) while maintaining narrow confidence intervals. Note that low values of c and high values of \(n_2\) induce the TWO-NN and Gride, respectively, to cover broader neighborhoods. However, the smaller uncertainty of Gride highlights that our method does not have to discard any information to reach this goal. This preliminary result suggests that, by extending the orders of NNs distances that we consider, Gride can escape the short, "local reach" of the TWO-NN model, which is extremely sensitive to data noise. Thus, extending the neighborhood of a point to further NNs allows extracting meaningful information about the topology and the dataset's structure at different distance resolutions.

Gride, the generalized ratios intrinsic dimension estimator

In this section, we develop novel theoretical results that contribute to Poisson point process theory. We will then exploit these results to devise a better estimator for d. In detail, we first extend the distributional results of "Likelihood-based TWO-NN estimators", providing closed-form distributions for vectors of consecutive ratios of distances. Then, building upon that, we move a step further and derive the closed-form expression for the distribution of ratios of NNs of generic order.
Distribution of consecutive ratios, generic ratios, and related estimators

Consider the same setting introduced in the previous section and define \(V_{i,l} = \omega _d \, r^d_{i,l}\) as the volume of the hypersphere centered in \(\varvec{x}_i\) with radius equal to the distance between \(\varvec{x}_i\) and its l-th NN. Because of their definitions, for \(l=2,\ldots ,L\), we have that \(v_{i,l}\) and \(V_{i,l-1}=v_{i,1}+\cdots +v_{i,l-1}\) are independent. Moreover, \(V_{i,l}\sim Erlang(1,l-1)\). Then, we can write
$$\begin{aligned} \frac{v_{i,l}}{V_{i,l-1}} = \frac{\omega _d \left( r_{i,l}^d-r_{i,l-1}^d \right) }{\omega _d r^d_{i,l-1}} =\left( \frac{r_{i,l} }{ r_{i,l-1}}\right) ^d-1, \end{aligned}$$
which can be re-expressed as
$$\begin{aligned} \mu _{i,l}=\frac{r_{i,l} }{ r_{i,l-1}} = \left( \frac{v_{i,l}}{V_{i,l-1}}+1\right) ^{1/d}. \end{aligned}$$
Given these premises, the following theorem holds.

Theorem 2.2

Consider a distance \(\Delta\) taking values in \({\mathbb {R}}^+\) defined among the data points \(\{\varvec{x}_i\}_{i=1}^n\), which are realizations of a Poisson point process with constant density \(\rho\). Let \(r_{i,l}\) be the value of the distance between observation i and its l-th NN. Define \(\mu _{i,l} =r_{i,l} / r_{i,l-1}\). It follows that
$$\begin{aligned} \begin{aligned} \mu _{i,l}&\sim Pareto\,(1,(l-1)d), \quad \text { for }\quad l = 2,\ldots ,L. \end{aligned} \end{aligned}$$
Moreover, the elements of the vector \(\varvec{\mu }_{i,L}=\{ \mu _{i,l} \}_{l=2}^L\) are jointly independent.

The proof is deferred to the Supplementary Material. Theorem 2.2 provides a way to characterize the distributions of consecutive ratios of distances. Remarkably, given the homogeneity assumption, the different ratios are all independent. Building on the previous statements, we can derive more general results about the distances among NNs from a Poisson point process realization. The following theorem characterizes the distribution of the ratio of distances from two NNs of generic order. It will be the foundation of the estimator that we propose in this paper.

Theorem 2.3

Consider a distance \(\Delta\) taking values in \({\mathbb {R}}^+\) defined among the data points \(\{\varvec{x}_i\}_{i=1}^n\), which are realizations of a Poisson point process with constant density \(\rho\). Let \(r_{i,l}\) be the value of this distance between observation i and its l-th NN. Consider two integers \(1\le n_1<n_2\) and define \({\dot{\mu }}=\mu _{i,n_1,n_2} = r_{i,n_2} / r_{i,n_1}\). The random variable \({\dot{\mu }}\) is characterized by density function
$$\begin{aligned} f_{\mu _{i,n_1,n_2}}({\dot{\mu }})= \frac{d({\dot{\mu }}^d-1)^{n_2-n_1-1}}{{\dot{\mu }}^{(n_2-1)d+1}B(n_2-n_1,n_1)}, \quad {\dot{\mu }}>1, \end{aligned}$$
where \(B(\cdot ,\cdot )\) denotes the Beta function. Moreover, \({\dot{\mu }}\) has k-th moment given by
$$\begin{aligned} {\mathbb {E}}\left[ {\dot{\mu }}^k\right] =\frac{B(n_2-n_1,n_1-k/d)}{B(n_2-n_1,n_1)}. \end{aligned}$$
The proof is given in the Supplementary Material. Moreover, we also report a figure with some examples of the shapes of the density functions defined in Eq. (10). We now state some important remarks.
Given the expression of the generic moment of \({\dot{\mu }}\), we can derive its expected value and variance:
$$\begin{aligned} {\mathbb {E}}\left[ {\dot{\mu }}\right] = \frac{B(n_2-n_1,n_1-1/d)}{B(n_2-n_1,n_1)}\quad \text {and} \quad {\mathbb {V}}\left[ {\dot{\mu }}\right] =\frac{B(n_2-n_1,n_1-2/d)}{B(n_2-n_1,n_1)} -\frac{B(n_2-n_1,n_1-1/d)^2}{B(n_2-n_1,n_1)^2}, \end{aligned}$$
both well-defined when \(d>2\). From the first equation, it is straightforward to derive an estimator based on the method of moments. Formula (10) can be specialized to the case where \(n_1=n_0\) and \(n_2=2n_0\). We obtain
$$\begin{aligned} f_{\mu _{i,n_0,2n_0}}({\dot{\mu }})= \frac{(2n_0-1)!}{(n_0-1)!^2} \cdot \frac{d({\dot{\mu }}^d-1)^{n_0-1}}{{\dot{\mu }}^{(2n_0-1)d+1}} = \frac{d({\dot{\mu }}^d-1)^{n_0-1}}{B(n_0,n_0)\cdot {\dot{\mu }}^{(2n_0-1)d+1}} , \quad {\dot{\mu }}>1. \end{aligned}$$
The result in Eq. (9) can be derived as a special case of formula (10). Consequently, we can say the same for the TWO-NN model in Eq. (2). Specifically, if we set \(n_1=n_0\) and \(n_2=n_0+1\), we obtain
$$\begin{aligned} f_{\mu _{i,n_0,n_0+1}}({\dot{\mu }})= n_0 d {\dot{\mu }}^{-n_0d-1}, \quad {\dot{\mu }}>1, \end{aligned}$$
which is the density of a \(Pareto(1,n_0d)\) distribution. Given the previous results, it is also possible to show that, within our theoretical framework, the joint density of the random distances between a point and its first L NNs follows a Generalized Gamma distribution. We report a formal statement of this result and its proof in the Supplementary Material. The distributions reported in Eqs. (10) and (13) allow us to devise a novel estimator for the \(\texttt {id }\) parameter based on the properties of the distances measured between a point and two of its NNs of generic order. We name this method the Generalized ratios \(\texttt {id }\) estimator (Gride). From Eq. (10), by assuming that the n observations are independent, we derive the expression of the log-likelihood:
$$\begin{aligned} \log {\mathcal {L}}=n\log (d)+(n_2-n_1-1)\sum _i\log ({\dot{\mu }}^d_i-1) - n\log (B(n_2-n_1,n_1)) - ((n_2-1)d+1)\sum _i\log ({\dot{\mu }}_i). \end{aligned}$$
Following a maximum likelihood approach, we estimate d by finding the root of the following score function:
$$\begin{aligned} \frac{\partial \log {\mathcal {L}}}{\partial d}=\frac{n}{d}+(n_2-n_1-1)\sum _i \frac{{\dot{\mu }}^d_i\log ({\dot{\mu }}_i)}{{\dot{\mu }}^d_i-1}-(n_2-1)\sum _i \log ({\dot{\mu }}_i)=0. \end{aligned}$$
This equation cannot be solved in closed form, but the second derivative of the log-likelihood function \(\log {\mathcal {L}}\) for n observations is always negative on the entire parameter space \(d\in \left[ 1,+\infty \right)\):
$$\begin{aligned} \frac{\partial ^2 \log {\mathcal {L}}}{\partial d^2} = -\frac{n}{d^2}-(n_2-n_1-1)\sum _i \frac{{\dot{\mu }}^d_i\log ({\dot{\mu }}_i)^2}{({\dot{\mu }}^d_i-1)^2} <0. \end{aligned}$$
Therefore, the log-likelihood function is concave, and univariate numerical optimization routines can obtain the MLE. Moreover, one can exploit numerical methods for uncertainty quantification: for example, one can estimate the confidence intervals with parametric bootstrap36. A more straightforward alternative estimator can be devised by setting \(n_2=n_1+1\) and leveraging the consecutive-ratios independence result presented in Theorem 2.2.
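Before turning to that simpler alternative, the numerical route just outlined can be sketched as follows: the concave log-likelihood of Eq. (14) is maximized over d with a bounded scalar optimizer (the Beta-function term is dropped because it does not depend on d; nn_distances is the helper from an earlier sketch):

```python
# Sketch of the Gride MLE: maximize the concave log-likelihood of Eq. (14) over d,
# with mu_i = r_{i,n2} / r_{i,n1}; the Beta-function term is constant in d and omitted.
import numpy as np
from scipy.optimize import minimize_scalar

def gride_mle(X, n1, n2):
    r = nn_distances(X, L=n2)                  # helper defined in an earlier sketch
    log_mu = np.log(r[:, n2 - 1] / r[:, n1 - 1])
    n = len(log_mu)

    def neg_loglik(d):
        t = d * log_mu
        # log(mu^d - 1) = d*log(mu) + log(1 - mu^(-d)), written in a stable form
        log_term = t + np.log1p(-np.exp(-t))
        return -(n * np.log(d)
                 + (n2 - n1 - 1) * np.sum(log_term)
                 - ((n2 - 1) * d + 1) * np.sum(log_mu))

    return minimize_scalar(neg_loglik, bounds=(1e-3, 1e3), method="bounded").x

X = np.random.default_rng(3).normal(size=(5000, 5))
print(round(gride_mle(X, n1=10, n2=20), 2))    # should be close to the true id = 5
```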
In this specific case, we can derive an estimator that is the direct extension of the MLE version of the TWO-NN:
$$\begin{aligned} {\hat{d}}_L = \frac{n(L-1)-1}{\sum _{i=1}^n\sum _{l=2}^L (l-1)\log (\mu _{i,l})}, \end{aligned}$$
by focusing on the properties of consecutive ratios of distances contained in the vectors \(\varvec{\mu }_{i,L}\), for \(i=1,\ldots , n\). The estimator in (16) has variance \({\mathbb {V}}\left[ {\hat{d}}_L\right] =d^2/(n(L-1)-2)\), which is smaller than the variance of the MLE estimator in (3), which is recovered when \(L=2\). The confidence interval is analogous to (4), with n substituted by \(n(L-1)\). From a Bayesian perspective, we can, as before, specify a conjugate Gamma prior for d, obtaining the posterior distribution
$$\begin{aligned} {\hat{d}}_L|\varvec{\mu }_{L} \sim Gamma \left( a+n(L-1),b+\sum _{i=1}^n\sum _{l=2}^L (l-1)\log (\mu _{i,l})\right) . \end{aligned}$$
We note that the expression in (16) is equivalent to the corrected estimator proposed by34 in a famous online comment. Thus, we refer to it as the MG estimator. We will discuss this equivalence in more detail in "Connection with existing likelihood-based methods". Although the availability of a closed-form expression is appealing, in "A comparison of the assumptions behind the TWO-NN, MG, and Gride" we will motivate why Gride is preferable to MG.

Connection with existing likelihood-based methods

Here, we discuss how our proposals are closely related to estimators introduced in the seminal work of1 (LB) and the subsequent comment of34 (MG). This relationship is not surprising, since the two estimators were derived within the same theoretical framework. Recall that we defined \(\mu _{i,j,k} = r_{i,k} / r_{i,j}\). Given two integer values \(q_1<q_2\), the LB estimator is defined as
$$\begin{aligned} \texttt {LB}(q_1,q_2) = \frac{1}{q_2-q_1+1}\sum _{k=q_1}^{q_2}{\hat{m}}_k,\quad \quad {\hat{m}}_k = \frac{k-1}{n}\sum _{i=1}^n \left( \sum _{j=1}^{k-1}\log \left( \mu _{i,j,k}\right) \right) ^{-1}, \end{aligned}$$
where we exploit the equality \(\sum _{l=2}^L (l-1) \log (\mu _{i,l}) = \sum _{l=1}^{L-1} \log (r_{i,L}/r_{i,l})\) to re-express their estimators in terms of the \(\mu\)'s. The estimator proposed in34 considers a different expression for \({\hat{m}}_k\), that we denote by \({\hat{m}}'_k\):
$$\begin{aligned} {\hat{m}}'_k = \frac{n(k-1)}{\sum _{i=1}^n \left( \sum _{j=1}^{k-1}\log \left( \mu _{i,j,k}\right) \right) }. \end{aligned}$$
The LB estimator combines the terms contributing to the likelihood through a simple average. These estimators are evaluated for different values of the larger NN order, considered between \(q_1\) and \(q_2\), and then averaged together again. MacKay and Ghahramani34 noted that the authors should have instead averaged the inverse of the contributions to be coherent with the proposed theoretical framework. This correction leads to the expression in (19), which is equivalent to the MLE for MG, as stated in Equation (16). Although the expressions are the same, we believe that our derivation presents an advantage. Indeed, starting from the distributions of the ratios of NNs distances, we can effortlessly derive uncertainty quantification estimates, as in (4), by simply exploiting well-known properties of the Pareto distribution. Following the LB strategy, one can pool together different estimates obtained with MG over a set of different NN orders \(L\in \{L_1;\,\ldots ;\,L_2\}\) by considering the value \(\sum _{l=L_1}^{L_2}{\hat{m}}'_l/(L_2-L_1+1)\).
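A short sketch of the closed-form estimator in Eq. (16) and of the LB-style pooling just described (again reusing nn_distances; for simplicity the pooled version averages the Eq. (16) estimates rather than the \({\hat{m}}'_k\) terms):

```python
# Sketch: the closed-form MG estimator of Eq. (16) and an LB-style pooling of MG
# estimates over a range of NN orders (nn_distances defined in an earlier sketch).
import numpy as np

def mg_estimate(X, L):
    r = nn_distances(X, L=L)
    weights = np.arange(1, L)                               # (l - 1) for l = 2, ..., L
    s = np.sum(weights * np.log(r[:, 1:] / r[:, :-1]))      # sum_i sum_l (l-1) log(mu_{i,l})
    return (X.shape[0] * (L - 1) - 1) / s                   # Eq. (16)

def mg_pooled(X, L1, L2):
    return np.mean([mg_estimate(X, L) for L in range(L1, L2 + 1)])

X = np.random.default_rng(4).normal(size=(3000, 4))
print(round(mg_estimate(X, L=20), 2), round(mg_pooled(X, 5, 20), 2))   # both roughly 4
```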
Unless otherwise stated, when computing the MG estimator in "Results" we will adopt this averaging approach, as implemented in the R package Rdimtools37. Among all the discussed estimators, Gride is the genuinely novel contribution of this work, and it is also the most general and versatile. Indeed, it relies on a single ratio of distances for each data point (similarly to the TWO-NN) while considering information collected on higher-order neighbors (similarly to MG) and, therefore, is likely to be more compliant with the independence assumption.

A comparison of the assumptions behind the TWO-NN, MG, and Gride

We now discuss the similarities and differences among the three estimators presented so far. The first point we need to make is that, similarly to Theorem 2.1, Theorems 2.2 and 2.3 can be proved only assuming \(\rho\) to be constant. However, from a practical perspective, the novel estimators are empirically valid as long as the density \(\rho\) is approximately constant on the scale defined by the distance of the L-th NN \(r_{i,L}\) (MG) and the \(n_2\)-th NN \(r_{i,n_2}\) (Gride), respectively. Again, we will refer to this assumption as local homogeneity. In the following, when we need to underline the dependence of the introduced families of estimators on specified NN orders, we will write \(\texttt {MG}(L)\) and \(\texttt {Gride}(n_1,n_2)\). Both MG and Gride extend the TWO-NN rationale, estimating the \(\texttt {id }\) on broader neighborhoods. By considering the ratio of two NNs of generic order, Gride extracts more information regarding the topology of the data configuration. Moreover, monitoring how Gride's estimates vary for different NNs orders allows the investigation of the relationship between the dataset's \(\texttt {id }\) and the scale of the neighborhood. That way, it is possible to escape the strict, extremely local point of view of the TWO-NN. This property reduces the distortion produced by noisy observations in estimating the \(\texttt {id }\). With its alternative formulation, MG reaches a similar goal exploiting the properties of all the consecutive ratios up to the highest NNs order that we consider. MG is appealing, being an intuitive extension of the TWO-NN model, and possessing a closed-form expression for its MLE and confidence interval. However, we are going to show that Gride is more reliable when it comes to real datasets. To support this statement, we need to discuss the validity of the assumptions required for deriving these estimators. As mentioned in "Likelihood-based TWO-NN estimators", the main modeling assumptions are two: the local homogeneity of the underlying Poisson point process density and the independence among ratios of distances centered in different data points. These assumptions affect the three estimators differently. To provide visual intuition, in Fig. 2 we display 500 points generated from a bidimensional Uniform distribution over the unit square. Then, we randomly select four points (in blue) and highlight (in red) the NNs involved in the computation of the ratios that are used by the TWO-NN, MG, and Gride models. We consider \(\texttt {MG}(40)\) and \(\texttt {Gride}(20,40)\).

Figure 2. Neighboring points (in red) and distances (dotted lines) involved in the \(\texttt {id }\) estimation centered in four data points (in blue). Each panel corresponds to one model: TWO-NN, MG\((L=40)\), and Gride\((n_1=20,n_2=40)\), respectively.
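The overlap argument can also be checked numerically. The toy sketch below reproduces the setting of Fig. 2 (500 uniform points on the unit square) and counts how many neighbor indices are shared between two nearby centers under MG(40), which uses all 40 NNs, versus Gride(20, 40), which uses only the 20th and 40th:

```python
# Toy check of the overlap argument of Fig. 2: MG(L) uses all L nearest neighbors of
# each center, Gride(n1, n2) only two of them, so fewer indices are shared between
# neighborhoods centered at nearby points.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))                 # uniform points on the unit square
_, idx = cKDTree(X).query(X, k=41)             # the point itself + its 40 nearest neighbors
nn = idx[:, 1:]

i = 0
j = nn[i, 0]                                   # the closest neighbor of point i
mg_shared = len(set(nn[i]) & set(nn[j]))                                # MG(40)
gride_shared = len({nn[i, 19], nn[i, 39]} & {nn[j, 19], nn[j, 39]})     # Gride(20, 40)
print(mg_shared, gride_shared)                 # typically many shared NNs vs. very few
```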
For both \(\texttt {MG}(L)\) and \(\texttt {Gride}(n_1,n_2)\), the local homogeneity hypothesis has to hold for larger neighborhoods, up to the NN of orders \(L>2\) and \(n_2>2\), respectively. We will empirically show that while \(\texttt {MG}\) and \(\texttt {Gride}\) are more reliable than TWO-NN if used on dense configurations, care should be taken when interpreting the results obtained from sparse datasets. Although the stricter local homogeneity assumption affects the two estimators similarly, they are not equally impacted by the assumption of independence of the ratios. By comparing the second and third panels of Fig. 2, we observe that MG, in its computation, needs to take into account all the distances among points and its NNs up to the L-th order. When L is large and the sample size is limited, neighborhoods centered in different data points may overlap, inducing dependence across the ratios and violating one of our fundamental assumptions. Gride instead uses only two distances, and the probability of shared NNs across different data points is lower, especially if large values of \(n_1\) and \(n_2\) are chosen. Given the previous points, in the experiments outlined in the next section, we set \(n_2=2n_1\). Our simulation studies showed that this choice is robust to the dimensionality of the dataset and provides a good trade-off between the scalability of the algorithm and the careful assessment of the dependence of the id on the scale.

Results

The numerical experiments carried out in this section are based on the functions implemented in the Python package DADApy38 (available at the GitHub repository sissa-data-science/DADApy) and in the R package intRinsic39, unless otherwise stated.

Simulation studies

Gride is asymptotically unbiased

First, we empirically show the consistency of Gride. This result represents an important gain with respect to the TWO-NN estimator. We sample 10000 observations from a bivariate Gaussian distribution, and aim to estimate the true \(\texttt {id}=2\). To assess the variance of the numerical estimator devised from the log-likelihood in (15), we resort to a parametric bootstrap technique. We collect 5000 simulations as bootstrap samples under four different scenarios that we report in the panels displayed in the top row of Fig. 3. A similar analysis can be performed within the Bayesian setting, studying the concentration of the posterior distribution. We display the posterior simulations in the Supplementary Material (Fig. S2). We see that, as the NNs order increases, the bootstrap samples are progressively more concentrated around the truth, with minor remaining bias due to the lack of perfect homogeneity in the data generating process.

Figure 3. Top row: histograms of the Gride parametric bootstrap samples estimated within the frequentist framework for different NN orders. Note that the first panel corresponds to the TWO-NN model. Bottom row: average Gride MLE obtained over different NN orders. The various panels showcase the different neighborhood sizes that are considered for computing the estimates. The error bands display the \(95\%\) confidence intervals on the average MLE.

As a second analysis, we show that high-order Gride estimates are also empirically unbiased when the homogeneity assumption of the underlying Poisson process holds.
To create a dataset that complies as much as possible with the theoretical data-generating mechanism, we start by fixing a pivot point, and we generate a sequence of \(n=30000\) volumes of hyperspherical shells from an exponential distribution under the homogeneous Poisson process framework. Let us denote the sequence of these volumes with \(\{v_i\}_{i=1}^{n}\). Once the volumes are collected, we compute the actual distance (radius) from the pivot point by using Eq. (1) with \(d=2\) and \(r_0=0\). Thus, we have \(r_1=\sqrt{v_1/\omega _2}\,\), \(r_2=\sqrt{(v_1+v_2)/\omega _2}\), and so on. Then, we generate the position of the i-th point at a distance \(r_i\) from the pivot by sampling its angular coordinates from a uniform distribution with support \(\left[ 0, 2\pi \right)\) for each i. The panels in the bottom row of Fig. 3 show the id estimates as a function of the number of points closest to the pivot \(j \in \{128;\, 512;\, 2048;\, 8192\}\). We employ different NN orders keeping the ratio \({n_2}/{n_1}=2\) fixed and we increase \(n_1\) geometrically from 1 to 256 (x-axis). In this experiment, the id is estimated via MLE on 1000 repeated samples. Given the sample of 1000 estimates \({\hat{d}}\), we compute its average along with its 95% confidence bands. The first three panels show a small but consistent bias for the id estimated with \(n_1=1\) (TWO-NN) and \(n_1=2\). The most viable explanation for the behavior of the estimator at small \(n_1\) is the statistical correlation: the \({\dot{\mu }}\)'s entering in the likelihood (see Eq. 10) are computed at nearby points and, as a consequence, they cannot be considered purely independent realizations. But, remarkably, this correlation effect is significantly reduced when larger values of \(n_1\) are considered. Moreover, the slight bias we may observe at large NN orders is likely due to numerical error accumulation. Recall that the radii of the produced points are obtained from the sum of l volumes sampled from a homogeneous Poisson process. Given the data generating mechanism we used, the statistical error might compound across different stages.

Gride performance as the dimensionality grows

We investigate the evolution of the \(\texttt {id }\) estimates produced by Gride as we vary the size of the neighborhoods considered in the estimation and the true \(\texttt {id }\). To simultaneously assess the variability of our estimates, we generate 50 replicated datasets from a Uniform random variable over hypercubes in dimensions \(d\in \{2;\,4;\,6;\,8;\,10\},\) with sample size \(n = 10000.\) We choose to keep the sample size of this experiment relatively low (w.r.t. high \(\texttt {id }\) values, such as \(d=10\)) to showcase the effect of the negative bias that is known to affect many id estimators in large dimensions. For each dataset, we apply a sequence of Gride models with varying NN orders, fixing \(n_2=2n_1\), with \(n_1 \in \{ 1;\,10;\,20;\,\ldots ;\,n/2-10\}\). We average the results over the 50 Monte Carlo (MC) replicas and plot them as functions of the ratio \(n/n_1\), along with their MC standard errors (shaded areas). We display the results in Fig. 4. Note that plotting the resulting \(\texttt {id }\) as a function of \(n/n_1\) provides an idea of the evolution of the estimates as the considered scale goes from extended neighborhoods (\(n/n_1\approx 2\)) to highly local neighborhoods (\(n/n_1 = n\)).
Indeed, this graphical representation allows us to monitor the effect induced by the scale: the negative bias becomes more prominent as the sizes of the considered neighborhoods increase, collapsing the estimates towards 1 as \(n/n_1\rightarrow 2\), as expected. Focusing on highly local neighborhoods (i.e., the TWO-NN case) produces more accurate estimates on average since the underlying modeling assumptions are more likely to be met. This accuracy is achieved at the cost of high dispersion, which is mitigated by the increment of NN orders. In the Supplementary Material, we report similar results obtained using smaller sample sizes, \(n\in \{500;\, 5000\}\), to assess how the uncertainty of the \(\texttt {id }\) estimates changes as a function of n (Fig. S4).

Figure 4. Evolution of the id estimates as a function of the ratio \(n/n_1\) (logarithmic scale) computed on uniform hypercubes characterized by different sample sizes and increasing true \(\texttt {id }\). Gride is computed setting \(n_2=2n_1\). The horizontal lines highlight the true values of the id.

Comparison of the evolution of likelihood-based id estimates in the presence of noise

We present different studies on the evolution of the estimates of the \(\texttt {id }\) applied to datasets contaminated with noise, focusing on the comparison of model-based id estimators such as Gride, TWO-NN, LB, and MG. Facco et al.2 showed that a scale-dependent analysis of the id is essential to identify the correct number of relevant directions in noisy data. In their work, the authors proposed to subsample the dataset to increase the average distance from the second NN (and thus the average neighborhood size) involved in the TWO-NN estimate. With the same aim, we instead adopt a different approach. Again, we apply a sequence of Gride models on the entire dataset to explore larger regions: the higher \(n_1\) and \(n_2\) are, the larger the average neighborhood size analyzed. As a first example, we focus on a second Spiral dataset generated as described in "Likelihood-based TWO-NN estimators". We generate a sample of size \(n=5000\) setting \({\bar{S}}=6\), \(\sigma _x=\sigma _y=\sigma _z=0.1\). Specifically, we study the \(\texttt {id }\) as a function of the size of the neighborhood by comparing three estimators: Gride with \(n_2=2n_1\), MG with \(L=n_2\) (single estimate, not averaged), and the decimated TWO-NN (\(n_2=2\)). In this simulation, we compute the estimates setting \(n_1\in \{2^j\}_{j=1}^{10}\). The results are displayed in the top row of Fig. 5, where the x-axis reports the \(\log_{10}\) average distance from the furthest nearest neighbor \(n_2\) at each step. Gride plateaus around the true \(\texttt {id }\) value faster than the competitors. Eventually, MG reaches a similar result, but much larger neighborhoods are required. Lastly, the decimated TWO-NN shows an \(\texttt {id }\) evolution pointed in the right direction, but as the fraction of data considered decreases, its performance deteriorates.

Figure 5. Top panel: study of the evolution of the \(\texttt {id }\) on a simulated Spiral dataset. The three lines represent three different models: Gride, MG, and the decimated TWO-NN. Bottom panels: analysis of the impact of the scale on the id estimates comparing Gride models of order ratios \(n_{2,1}=2\) vs. the TWO-NN estimator performed on a 2D noisy Gaussian dataset. The error bounds (\(\pm 2\) std. err.) are visible only for one model.
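A scale analysis of this kind requires only a few lines of code; the sketch below (reusing the spiral, nn_distances, and gride_mle helpers from the earlier sketches) computes Gride estimates with \(n_2=2n_1\) for geometrically increasing \(n_1\) and reports the corresponding average neighborhood scale:

```python
# Sketch of the scale analysis: Gride estimates with n2 = 2*n1 for geometrically
# increasing n1, reported against the average distance from the n2-th neighbor
# (spiral, nn_distances and gride_mle are the helpers defined in earlier sketches).
import numpy as np

def gride_scale_analysis(X, max_j=10):
    out = []
    for n1 in [2 ** j for j in range(1, max_j + 1)]:
        n2 = 2 * n1
        if n2 >= len(X):
            break
        scale = nn_distances(X, L=n2)[:, -1].mean()   # average distance from the n2-th NN
        out.append((n1, n2, scale, gride_mle(X, n1, n2)))
    return out

X = spiral(n=5000, S_bar=6.0, sd_x=0.1, sd_y=0.1, sd_z=0.1)   # the second Spiral dataset
for n1, n2, scale, d_hat in gride_scale_analysis(X):
    print(n1, n2, round(scale, 3), round(d_hat, 2))   # d_hat should approach 1 at larger scales
```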
As a second experiment devised to investigate the impact of the scale on the id estimates, we simulate 50000 data points from a two-dimensional Gaussian distribution and perturb them with orthogonal Gaussian white noise. We compare the results obtained in two cases: one-dimensional (1D) and twenty-dimensional (20D) noise; in both cases, the perturbation variance is set to \(\sigma ^2 = 0.0001\). The second row of Fig. 5 reports the results of the scale analysis done with TWO-NN and Gride with \(n_{2, 1} =2\). Following2, we apply the TWO-NN estimator on several subsets of the original data and report the average id with its 95% confidence intervals. Both in the case of high and low dimensional noise, Gride reaches the true value 2 at smaller scales than the TWO-NN estimator. The left panel also shows that the decimation protocol of TWO-NN can introduce a bias at large scales when the size of the replicates becomes small. In our experiment, by halving the sample size at each decimation step, we use subsets with 12 data points when \({\bar{r}} \approx 0.8\). At a comparable scale, Gride performs much better since it always maximizes the likelihood utilizing all of the original 50000 data points. In our last experiment on simulated data, we compare the performance of the MLEs introduced by1 (LB) and modified by34 (MG) with Gride in terms of robustness to noise. To compute the first two estimators, we rely on the implementation contained in the R package Rdimtools37. As in the previous experiments, we want to compare how well the different estimators can escape the overestimation of the id induced by the presence of noise in the data. We have already established that Gride can exhibit a plateau around the true \(\texttt {id }\) when enough signal is available (conveyed both in terms of large sample size and low level of noise). Instead, we now test our estimator in a similar but more challenging context, considering the limited sample size and the increasing noise level. Thus, we generate 30 replicas of \(n\in \{1000;\,5000\}\) observations sampled from a Gaussian distribution. We consider two possible values for the intrinsic dimension: \(d \in \{2;\,5\}\). Each dataset was then embedded in a \(D = d+5\) dimensional space and contaminated with independent Gaussian noise \(N(0,\sigma ^2)\), with \(\sigma \in \{0;\,0.1;\,0.25;\,0.50\}\), expecting the random noise to induce an incremental positive bias in the id estimation. To let the estimators gather information from increasingly wider neighborhoods, we consider the relation \(n_2 = 2n_1\), with \(n_1\in \{2,\ldots ,50\}\). The same range is considered for the averages computed with LB and MG. In the Supplementary Material, we report the plots summarizing all the results (Fig. S8). Here, we focus on the representative scenario where \(n=1000\) and \(\sigma =0.1\). The results are shown in Fig. 6.

Figure 6. Each panel shows the average estimates of the \(\texttt {id }\) over 30 replicates with three different methods: Gride (with \(n_2=2n_1\)), LB, and MG. Each dataset has 1000 observations. The confidence bands are drawn at \(\pm 2\) standard errors. In the left panel the true \(\texttt {id }\) is \(d=2\), in the right panel the true \(\texttt {id }\) is \(d=5\).

From the panels in Fig. 6 we observe that the estimators present similar patterns for the two considered \(\texttt {id }\) values. As expected, the id estimates are inflated by the addition of noise to the data.
For small neighborhoods, Gride and MG show similar behaviors, while as \(n_1\) increases MG tends to perform similarly to LB. Gride instead decreases faster than the two competitors. Thus, our proposal is more robust than the two model-based competitors when handling noisy datasets.

Comparisons with other estimators

In the remainder of the paper, we investigate the evolution of the id estimates obtained on simulated and real benchmark datasets, comparing Gride and the TWO-NN models to three other state-of-the-art estimators: DANCo20, GeoMLE30, and ESS31. In our analyses, we employ both the MATLAB package of40 and the R package intrinsicDimension to compute DANCo, the code available at the GeoMLE GitHub repository to compute GeoMLE, and again the R package intrinsicDimension41 to obtain the ESS values (employed here as a global id estimator). For each model, we adopted the default parameter specifications available in the code, whenever possible. Finally, let us denote the number of observations and the number of features with n and D, respectively.

Application to datasets with known id

To start, we employ four synthetic datasets with known id. The datasets we use are generated with (1) the Spiral transform we introduced earlier (\(D=3,\) id=1), (2) the Swissroll mapping11 (\(D=3,\) id=2), (3) a five-dimensional (id=5) Normally distributed cloud of points embedded in dimension \(D=7\), and (4) the 10-Möbius dataset17,42 (\(D=3,\) id=2). In all datasets, we slightly perturb the original coordinates with Gaussian random values to assess if the estimators are robust to noise and to study the effect of the scale of the considered neighborhoods. We perform the estimations of the ids over 30 replications of size \(n=1000\), and then we average the results. To monitor the effect of the scale, we decimate the data by considering \(n/n_1\) observations, where \(n_1\in \{2^j\}_{j=0}^4\). This procedure holds for all the estimators but Gride, for which we change the NN-orders. The results are summarized in Fig. 7. In the Supplementary Material, we report an additional figure containing the error bands of the estimates (Fig. S5). All the competitors behave similarly, returning estimates that decrease as broader neighborhoods are considered, except for ESS, which remains relatively constant regardless of the dataset size, and GeoMLE. ESS performs best on the Gaussian data but tends to slightly inflate the estimates in the Swissroll case. Gride almost always outperforms the decimated TWO-NN, successfully overcoming the noise effect. That said, a uniformly better estimator does not emerge. For example, DANCo works extremely well for the Swissroll data while obtaining worse performance than its competitors in the other datasets, especially when the full datasets, with no decimation, are considered (\(n/n_1=1000\)). Nonetheless, we are reassured by the fact that Gride provides results that are either better or, at worst, in line with the other state-of-the-art estimators. Furthermore, an important feature of Gride appears from Fig. S5 in the Supplementary Material: its uncertainty decreases as larger neighborhoods are considered. At the same time, as decimation increases, results become more volatile for most of the competitors.

Figure 7. Evolution of the estimates of the id of the Spiral, Swissroll, Gaussian, and Möbius datasets computed via DANCo, GeoMLE, ESS, TWO-NN, and Gride while varying the considered neighborhood size.
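Benchmark datasets of this kind are easy to regenerate; as an example, the sketch below builds a noisy Swissroll with scikit-learn's generator (the noise level is illustrative) and applies two of the estimators discussed above at a local and at a broader scale:

```python
# Sketch: a noisy Swissroll (intrinsic dimension 2 embedded in D = 3) estimated at a
# local and at a broader scale (twonn_mle and gride_mle defined in earlier sketches).
import numpy as np
from sklearn.datasets import make_swiss_roll

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
X = X + np.random.default_rng(0).normal(scale=0.05, size=X.shape)   # illustrative noise level

d_local, _, _ = twonn_mle(X)               # TWO-NN: very local scale
d_broad = gride_mle(X, n1=16, n2=32)       # Gride on a broader neighborhood
print(round(d_local, 2), round(d_broad, 2))   # both should be close to 2
```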
Application to real datasets

Following20, we consider the MNIST (focusing on the training points representing digit 1: \(n = 6742,\; D = 784\)) and the Isolet datasets (\(n = 7797,\; D =617\)). Moreover, we consider the Isomap faces dataset (\(n = 698,\; D =4096\)) as in33,43, and the CIFAR-10 dataset as in44 (training data, \(n = 50000,\; D =3072\)).

id estimation as a function of the sample size. We study how the estimates returned by the five considered models change when applied to the Isolet, Isomap faces, and MNIST datasets, as we consider different sample sizes. For each dataset, we randomly extract six sub-samples of size n/k, where \(k\in \{1;\,2;\,4;\,8\}\), and use them to estimate the \(\texttt {id }\). To obtain more robust estimates, each sub-sample of size n/k is replicated k times, and the resulting estimates are subsequently averaged. We report the results in Fig. 8. First, we observe that most estimators yield heterogeneous results across the data sizes, with the only exception of ESS, which produces coherent estimates regardless of the sample size. However, in line with previous studies, the ESS estimator tends to struggle when the sample size is limited w.r.t. the number of features. Indeed, as noted in43, from the second panel we observe that ESS overestimates the expected value for the \(\texttt {id }\) of the Isomap faces dataset. GeoMLE obtains mixed results: while producing reasonably consistent results on MNIST, it provides widely variable estimates on the remaining two datasets. Gride and TWO-NN provide results that are, overall, very close to the ones obtained with DANCo. This result is remarkable, especially when considering the high-dimensional nature of the datasets. Moreover, although our proposal is exclusively based on the information provided by the distances among data points (while all the competitors utilize some additional topological features), we do not observe any systematic bias or abnormal pattern in the estimates.

Figure 8. Evolution of the estimates of the id of the datasets Isolet, Isomap faces, and MNIST (digit 1) obtained with DANCo, GeoMLE, ESS, TWO-NN, and Gride varying the considered sample size.

Differences in computational costs. Finally, we investigate the differences in computational costs for various estimators. To this end, we consider two versions of the CIFAR-10 dataset, chosen because of its high dimensionality in both the numbers of instances and features. On the one hand, to study how the different algorithms scale as the considered sample size increases, we utilize the CIFAR-10 (training) dataset. We compute the \(\texttt {id }\) estimates after sub-sampling the datasets producing samples of size n/k, with \(k\in \{2^j\}_{j=0}^7\), while leaving D unaltered and equal to 3072. The results of this experiment are shown in Fig. 9, where we display the retrieved \(\texttt {id }\)s and the elapsed time in seconds. On the other hand, we also explore how the algorithms scale as the number of features increases by employing a subset of the CIFAR-10 dataset, where we focus on \(n=5000\) pictures of cats. These images were re-sized (both shrunk and enlarged) to \(q\times q\) pictures, where q assumes values between 8 and 181. Notice that the datasets encode the RGB information for each picture. Therefore, the number of features is \(D =3q^2\), ranging from a minimum of 192 to a maximum of 98283. We defer the results of the latter experiment to the Supplementary Material (Fig. S6).
In both cases, we observe that GeoMLE presents highly varying results, especially when we consider a variable number of features. The other estimators, on the contrary, deliver consistent results, with Gride providing similar estimates to DANCo. Moreover, while the \(\texttt {id }\) estimates are still on par with the competitors, we observe an important gain in computational speed: Gride is considerably faster than its competitors, and it is second only to the TWO-NN when dealing with small datasets. For example, to run the model on the complete CIFAR-10 (training) dataset, ESS takes 1.43 times the time needed to run Gride, DANCo 6.66 times, and GeoMLE 21 times.

Figure 9. Trajectories of estimated \(\texttt {id }\)s (left panel) and elapsed times in seconds (right panel) obtained on the CIFAR-10 (training) dataset with DANCo, GeoMLE, ESS, TWO-NN, and Gride.

Discussion

In this paper, we introduced and developed novel distributional results concerning the homogeneous Poisson point process related to the estimate of the id, a crucial quantity for many dimensionality reduction techniques. The results extend the theoretical framework of the TWO-NN estimator. In detail, we derived closed-form density functions for the ratios of distances between a point and its nearest neighbors, ranked in increasing order. The distributional results have a theoretical importance per se but are also useful to improve the model-based estimation of the id. Specifically, we have discussed two estimators: MG and Gride. The first one builds on the independence of the elements of the vector of consecutive ratios \(\varvec{\mu }_{L}\), which we exploited to derive a closed-form estimator with lower variance than the TWO-NN. We showed how this estimator is equivalent to the one proposed in34. However, in real cases, considering multiple ratios of distances for each point in the sample can violate the assumed independence. Our main proposal is Gride, an estimator based on NNs of generic order capable of mitigating the issues mentioned above. We showed that the latter estimator is also more robust to the presence of noise in the data than the other model-based methods. We remark that the inclusion of NNs of higher orders has to be accompanied by stronger assumptions on the homogeneity of the density of the data-generating process. Nonetheless, by dedicated computational experiments, we have shown that one can weaken the assumption of homogeneity of the Poisson point process. Indeed, given a specific point in the configuration, the homogeneity hypothesis should only hold up to the scale of the distance of the furthest nearest neighbor entering the estimator. To summarize, when dealing with real data, we face a complex trade-off among the assumptions of density homogeneity, independence of the ratios, and robustness to noise. On the one hand, the TWO-NN is more likely to respect the local homogeneity hypothesis but is extremely sensitive to measurement noise since it only involves a narrow neighborhood of each point. On the other hand, MG focuses on broader neighborhoods, which makes it more robust to noisy data. However, its definition also imposes a more substantial local homogeneity requirement. It is also more likely to induce dependencies among different sequences of ratios. We believe that Gride provides a reliable alternative to the previous two maximum likelihood estimators, being both robust to noise and more likely to comply with the independence assumptions.
Moreover, we have also compared Gride with other state-of-the-art methodologies, such as DANCo, ESS, and GeoMLE, over various simulated and well-known benchmark datasets. We have observed that Gride obtains performance on par with its competitors in terms of id estimation, and especially similar to DANCo. This fact is even more remarkable if we consider that, unlike its competitors, our estimator is exclusively based on the information extracted from the distances among data points. Therefore, Gride represents a valuable tool, primarily because of its simplicity and computational efficiency. The results in this paper pave the way for many other possible research avenues. First, we have implicitly assumed the existence of a single manifold of constant id. However, it is reasonable to expect that a complex dataset can be characterized by multiple latent manifolds with heterogeneous \(\texttt{id}\)s. Allegra et al. [45] extended the TWO-NN model in this direction by proposing Hidalgo, a tailored mixture of Pareto distributions that partitions the data points into clusters driven by different \(\texttt{id}\) values. It would be interesting to combine the Hidalgo modeling framework with our results, where the distribution in Eq. (10) can replace the Pareto mixture kernels. Second, the estimators derived from the models do not directly consider any source of error in the observed sample. Although we showed how one can reduce the bias generated by this shortcoming by considering higher-order nearest neighbors, which allow escaping the local distortions, we are still investigating how to address this issue more broadly. For example, a simple solution would be to model the measurement errors at the level of the ratios, accounting for Gaussian noise that can distort each \(\mu_i\). By focusing directly on the distribution of the distances among data points in an ideal, theoretical setting, we can obtain informative insights on how to best model the measurement noise.

Data availability. The scripts used to generate and analyze the datasets discussed in the current study are available in the GRIDE_repo GitHub repository at https://github.com/Fradenti/GRIDE_repo. The real datasets utilized in the manuscript (Isolet, Isomap, MNIST, and CIFAR-10) are openly available online.

References
1. Levina, E. & Bickel, P. J. Maximum likelihood estimation of intrinsic dimension. In Advances in Neural Information Processing Systems Vol. 17 (eds Saul, L. K. et al.) 777–784 (MIT Press, 2005).
2. Facco, E., D'Errico, M., Rodriguez, A. & Laio, A. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. Sci. Rep. 7, 1–8. https://doi.org/10.1038/s41598-017-11873-y (2017).
3. Fukunaga, K. Introduction to Statistical Pattern Recognition (Academic Press, 1990).
4. Bishop, C. M. Neural Networks for Pattern Recognition (Oxford University Press Inc, 1995).
5. Campadelli, P., Casiraghi, E., Ceruti, C. & Rozza, A. Intrinsic dimension estimation: Relevant techniques and a benchmark framework. Math. Probl. Eng. https://doi.org/10.1155/2015/759567 (2015).
6. Camastra, F. & Staiano, A. Intrinsic dimension estimation: Advances and open problems. Inf. Sci. 328, 26–41. https://doi.org/10.1016/j.ins.2015.08.029 (2016).
7. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24, 498–520. https://doi.org/10.1037/h0070888 (1933).
8. Tipping, M. E. & Bishop, C. M. Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B. https://doi.org/10.1111/1467-9868.00196 (1999).
9. Bishop, C. M. Bayesian PCA. Adv. Neural Inf. Process. Syst. 20, 382–388 (1999).
10. Zou, H., Hastie, T. & Tibshirani, R. Sparse principal component analysis. J. Comput. Graph. Stat. 15, 265–286. https://doi.org/10.1198/106186006X113430 (2006).
11. Roweis, S. T. & Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000).
12. Tenenbaum, J. B., De Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323. https://doi.org/10.1126/science.290.5500.2319 (2000).
13. Belkin, M. & Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. Adv. Neural Inf. Process. Syst. https://doi.org/10.7551/mitpress/1120.003.0080 (2002).
14. Donoho, D. L. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. USA 100, 5591–5596. https://doi.org/10.1073/pnas.1031596100 (2003).
15. Jolliffe, I. T. & Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. https://doi.org/10.1098/rsta.2015.0202 (2016).
16. Falconer, K. Fractal Geometry: Mathematical Foundations and Applications 2nd edn. (Wiley, 2003).
17. Granata, D. & Carnevale, V. Accurate estimation of the intrinsic dimension using graph distances: Unraveling the geometric complexity of datasets. Sci. Rep. https://doi.org/10.1038/srep31377 (2016).
18. Costa, J. A. & Hero, A. O. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Trans. Signal Process. 52, 2210–2221. https://doi.org/10.1109/TSP.2004.831130 (2004).
19. Rozza, A., Lombardi, G., Rosa, M., Casiraghi, E. & Campadelli, P. IDEA: Intrinsic dimension estimation algorithm. Lect. Notes Comput. Sci. 6978, 433–442. https://doi.org/10.1007/978-3-642-24085-0_45 (2011).
20. Ceruti, C. et al. DANCo: An intrinsic dimensionality estimator exploiting angle and norm concentration. Pattern Recogn. 47, 2569–2581. https://doi.org/10.1016/j.patcog.2014.02.013 (2014).
21. Pettis, K. W., Bailey, T. A., Jain, A. K. & Dubes, R. C. An intrinsic dimensionality estimator from near-neighbor information. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-1, 25–37. https://doi.org/10.1109/TPAMI.1979.4766873 (1979).
22. Amsaleg, L. et al. Extreme-value-theoretic estimation of local intrinsic dimensionality. Data Min. Knowl. Disc. 32, 1768–1805. https://doi.org/10.1007/s10618-018-0578-6 (2018).
23. Houle, M. E. Dimensionality, Discriminability, Density and Distance Distributions (ICDMW, 2013).
24. Duan, L. L. & Dunson, D. B. Bayesian distance clustering. J. Mach. Learn. Res. 22, 1–27 (2021). arXiv:1810.08537.
25. Mukhopadhyay, M., Li, D. & Dunson, D. B. Estimating densities with non-linear support by using Fisher–Gaussian kernels. J. R. Stat. Soc. Ser. B Stat. Methodol. 82, 1249–1271. https://doi.org/10.1111/rssb.12390 (2020). arXiv:1907.05918.
26. Li, D., Mukhopadhyay, M. & Dunson, D. B. Efficient manifold approximation with spherelets (2017). arXiv:1706.08263.
27. Li, D. & Dunson, D. B. Classification via local manifold approximation. Biometrika 107, 1013–1020. https://doi.org/10.1093/biomet/asaa033 (2020). arXiv:1903.00985.
28. Li, D. & Dunson, D. B. Geodesic distance estimation with spherelets (2019). arXiv:1907.00296.
29. Kaufman, L. & Rousseeuw, P. J. Clustering by means of medoids. In Statistical Data Analysis Based on the L1 Norm, 405–416 (1987).
30. Gomtsyan, M., Mokrov, N., Panov, M. & Yanovich, Y. Geometry-aware maximum likelihood estimation of intrinsic dimension. In Asian Conference on Machine Learning, 1126–1141 (2019). arXiv:1904.06151.
31. Johnsson, K., Soneson, C. & Fontes, M. Low bias local intrinsic dimension estimation from expected simplex skewness. IEEE Trans. Pattern Anal. Mach. Intell. 37, 196–202. https://doi.org/10.1109/TPAMI.2014.2343220 (2015).
32. Serra, P. & Mandjes, M. Dimension estimation using random connection models. J. Mach. Learn. Res. 18, 25 (2017).
33. Qiu, H., Yang, Y. & Li, B. Intrinsic dimension estimation based on local adjacency information. Inf. Sci. 558, 21–33. https://doi.org/10.1016/j.ins.2021.01.017 (2021).
34. MacKay, D. & Ghahramani, Z. Comments on 'Maximum Likelihood Estimation of Intrinsic Dimension' by E. Levina and P. Bickel (2004). Comment on personal webpage (2005).
35. Gelman, A., Meng, X. L. & Stern, H. Posterior predictive assessment of model fitness via realized discrepancies. Stat. Sin. 6, 733–807 (1996).
36. Davison, A. C. & Hinkley, D. V. Bootstrap Methods and Their Application Vol. 1 (Cambridge University Press, 1997).
37. You, K. Rdimtools: Dimension Reduction and Estimation Methods (2021). R package version 1.0.8.
38. Glielmo, A. et al. DADApy: Distance-based analysis of DAta-manifolds in Python. https://doi.org/10.48550/ARXIV.2205.03373 (2022).
39. Denti, F. intRinsic: An R package for model-based estimation of the intrinsic dimension of a dataset (2021). arXiv:2102.11425.
40. Lombardi, G. Intrinsic dimensionality estimation techniques (2022). MATLAB Central File Exchange.
41. Johnsson, K. (Lund University). intrinsicDimension: Intrinsic Dimension Estimation (2019). R package version 1.2.0.
42. Hein, M. & Audibert, J. Y. Intrinsic dimensionality estimation of submanifolds in \(\mathbb{R}^d\). In ICML 2005, Proceedings of the 22nd International Conference on Machine Learning, 289–296. https://doi.org/10.1145/1102351.1102388 (2005).
43. Bac, J. & Zinovyev, A. Local intrinsic dimensionality estimators based on concentration of measure. In Proceedings of the International Joint Conference on Neural Networks. https://doi.org/10.1109/IJCNN48605.2020.9207096 (2020). arXiv:2001.11739.
44. Pope, P., Zhu, C., Abdelkader, A., Goldblum, M. & Goldstein, T. The intrinsic dimension of images and its impact on learning. Conference paper at ICLR 2021 (2021). arXiv:2104.08894.
45. Allegra, M., Facco, E., Denti, F., Laio, A. & Mira, A. Data segmentation based on the local intrinsic dimension. Sci. Rep. 10, 1–27. https://doi.org/10.1038/s41598-020-72222-0 (2020). arXiv:1902.10459.

Author affiliations: Department of Statistics, Università Cattolica del Sacro Cuore, Milan, Italy (Francesco Denti); SISSA, Via Bonomea 265, Trieste, Italy (Diego Doimo & Alessandro Laio); ICTP, Strada Costiera 11, 34151, Trieste, Italy (Alessandro Laio); Faculty of Economics, Università della Svizzera italiana, Lugano, Switzerland (Antonietta Mira); University of Insubria, Varese, Italy (Diego Doimo).
Author contributions: F.D., D.D., A.L., and A.M. designed and performed research; F.D., D.D., A.L., and A.M. analyzed data; F.D., D.D., A.L., and A.M. wrote the paper.
Correspondence to Francesco Denti or Antonietta Mira.
Supplementary Information.
Denti, F., Doimo, D., Laio, A. et al.
The generalized ratios intrinsic dimension estimator. Sci Rep 12, 20005 (2022). https://doi.org/10.1038/s41598-022-20991-1
CommonCrawl
Draw a line segment of length 7.6 cm and divide it in the ratio 5:8. Measure the two parts.

a) Consider the circuit in the figure. How much energy is absorbed by electrons from the initial state of no current to the state of drift velocity? b) Electrons give up energy at the rate of RI^2 per second to the thermal energy. What time scale would one associate with energy in problem (a)? (n = number of electrons per unit volume = 10^29 per m^3, length of circuit = 10 cm, cross-sectional area A = 1 mm^2.)
Answer: a) Current is given by Ohm's law as I = V/R; therefore, I = 1 A. But I = neAv_d, so v_d = I/(neA). Substituting the values above, v_d = 1/(10^29 × 1.6 × 10^-19 × 10^-6) = (1/1.6) × 10^-4 m/s. The KE = (KE of one...

In an experiment with a potentiometer, VB = 10 V. R is adjusted to be 50 Ω. A student wanting to measure voltage E1 of a battery finds no null point possible. He then diminishes R to 10 Ω and is able to locate the null point on the last segment of the potentiometer. Find the resistance of the potentiometer wire and the potential drop per unit length across the wire in the second case.
Answer: Equivalent resistance of the potentiometer circuit = 50 Ω + R'. Equivalent voltage across the potentiometer = 10 V. Current through the main circuit I = 10/(50 Ω + R'). Potential difference across wire of...

A room has an AC run for 5 hours a day at a voltage of 220 V. The wiring of the room consists of Cu of 1 mm radius and a length of 10 m. Power consumption per day is 10 commercial units. What fraction of it goes into Joule heating in the wires? What would happen if the wiring were made of aluminium of the same dimensions?
Answer: Power consumption in a day = 10 units, so power consumption per hour = 2 units. Power consumption = 2 units = 2 kW = 2000 J/s. Power consumption in the resistors, P = VI, which gives I = 9 A. We know that...

Two cells of voltage 10 V and 2 V and internal resistances 10 Ω and 5 Ω respectively are connected in parallel with the positive end of the 10 V battery connected to the negative pole of the 2 V battery. Find the effective voltage and effective resistance of the combination.
Answer: Applying Kirchhoff's junction rule at c: I1 = I + I2. Applying Kirchhoff's loop rule to efbae: 10 − IR − 10I1 = 0, i.e. 10 = IR + 10I1. Applying Kirchhoff's loop rule to cbadc: −2 − IR + 5I2 = 0, i.e. 2 = 5I2 − RI, with I2 = I1 − I ...

Suppose there is a circuit consisting of only resistances and batteries, and we have to double all voltages and all resistances. Show that the currents are unaltered.
Answer: Let Reff be the equivalent internal resistance of the battery and Veff the equivalent voltage of the battery. Using Ohm's law, I = Veff/(Reff + R). When the resistance and effective voltage are increased...

(i) Consider a thin lens placed between a source (S) and an observer (O). Let the thickness of the lens vary as w(b) = w0 − b^2/α, where b is the vertical distance from the pole and w0 is a constant. Using Fermat's principle, i.e. that the time of transit for a ray between the source and observer is an extremum, find the condition that all paraxial rays starting from the source will converge at a point O on the axis. Find the focal length. (ii) A gravitational lens may be assumed to have a varying width of the form w(b) = k1 ln(k2/b) for bmin < b, and w(b) = k1 ln(k2/bmin) for b ≤ bmin.
Show that an observer will see an image of a point object as a ring about the centre of the lens with an angular radius. β = √(n-1)k1 u/v / u + v CBSE, Class 12, NCERT Exemplar, Physics, Ray Optics and Optical Instruments i) Time taken by the ray to travel from S to P1 is = t1 = √u2 + b2/c Time taken by the ray to travel from P1 to O is = t2 = v/c (1+ ½ b2/v2) Time taken to travel through the lens is =... An infinitely long cylinder of radius R is made of an unusual exotic material with refractive index –1. The cylinder is placed between two planes whose normals are along the y-direction. The centre of the cylinder O lies along the y-axis. A narrow laser beam is directed along the y-direction from the lower plate. The laser source is at a horizontal distance x from the diameter in the y-direction. Find the range of x such that light emitted from the lower plane does not reach the upper plane. The refractive index of the cylinder is -1 and is placed in the air of μ = 1 AB is incident at B to the cylinder such that θr will be negative θ1= θi = θr Total deviation of the outcoming ray is... If light passes near a massive object, the gravitational interaction causes a bending of the ray. This can be thought of as happening due to a change in the effective refractive index of the medium given by n(r) = 1 + 2 GM/rc2 where r is the distance of the point of consideration from the centre of the mass of the massive body, G is the universal gravitational constant, M the mass of the body and c the speed of light in vacuum. Considering a spherical object find the deviation of the ray from the original path as it grazes the object. n(r) = 1 + 2GM/rc2 The mixture a pure liquid and a solution in a long vertical column (i.e, horizontal dimensions << vertical dimensions) produces diffusion of solute particles and hence a refractive index gradient along the vertical dimension. A ray of light entering the column at right angles to the vertical deviates from its original path. Find the deviation in travelling a horizontal distance d << h, the height of the column. Let the height of the long vertical column with transparent liquid be h and dx be the thickness The angle at which the ray AB enters is θ Let y be the new height of the liquid (θ + d θ) is the... Show that for a material with refractive index µ ≥ 2 , light incident at any angle shall be guided along a length perpendicular to the incident face. Answer: Let the refractive index of the rectangular slab be μ ≥ √2. μ = 1/sin ic sin ic > 1/ μ cos r ≥ 1/ μ sin i/sin r = μ From Snell's law Sin I = μ sin r i = 90o 1 + 1 ≤ μ2 2 ≤ μ2 Taking the square... A myopic adult has a far point at 0.1 m. His power of accomodation is 4 diopters. (i) What power lenses are required to see distant objects? (ii) What is his near point without glasses? (iii) What is his near point with glasses? (Take the image distance from the lens of the eye to the retina to be 2 cm.) i) Power lenses are required to see distant objects 1/f = 1/v – 1/u 1/f = 1/10 f = -10 cm = -0.1 m P = 1/f P = 1/(-0.1) P = -10 diopter ii) When no corrective lens used Pn = Pf + Pa u = -10 cm =... A jar of height h is filled with a transparent liquid of refractive index µ. At the centre of the jar on the bottom surface is a dot. Find the minimum diameter of a disc, such that when placed on the top surface symmetrically about the centre, the dot is invisible. 
tan ic d/2/h ic = d/2h d = 2h tan ic d = 2h ×1/√μ2 – 1 In many experimental set-ups the source and screen are fixed at a distance say D and the lens is movable. Show that there are two positions for the lens for which an image is formed on the screen. Find the distance between these points and the ratio of the image sizes for these two points. u = -x1 v = +(D – x1) 1/D – x1 – 1/(-x1) = 1/f u = -x2 v = +(D – x2) 1/D – x2 – 1/(-x2) = 1/f D = x1 + x2 d = x2 – x1 x1 = D – d/2 D – x1 = D + d/2 u = D/2 + d/2 v = D/2 – d/2 m1 = D – d/D + d m2/m1... A thin convex lens of focal length 25 cm is cut into two pieces 0.5 cm above the principal axis. The top part is placed at (0,0) and an object placed at (–50 cm, 0). Find the coordinates of the image. 1/v = 1/u + 1/f = 1/-50 + 1/25 = 1/50 v = 50 cm Magnification is m = v/u = -50/50 = -1 Therefore, the coordinates of the image are (50 cm, -1 cm) ... A circular disc of radius 'R' is placed co-axially and horizontally inside an opaque hemispherical bowl of radius 'a'. The far edge of the disc is just visible when viewed from the edge of the bowl. The bowl is filled with transparent liquid of refractive index µ and the near edge of the disc becomes just visible. How far below the top of the bowl is the disc placed? Distance at which the bowl should be placed in the disc is given as: d = μ(a2 – b2)/√(a + r)2 – μ(a – r)2 A short object of length L is placed along the principal axis of a concave mirror away from focus. The object distance is u. If the mirror has a focal length f, what will be the length of the image? You may take L << |v-f| The mirror formula is 1/v + 1/u = 1/f u is the object distance v is the image distance du = |u1 – u2| = L Differentiating on the both sides we get, dv/v2 = -du/u2 v/u = f/u-f du = L, therefore,... For a glass prism (µ = √3 ) the angle of minimum deviation is equal to the angle of the prism. Find the angle of the prism. μ = sin[(A + δm)/2]/sin (A/2) Three immiscible liquids of densities d1 > d2 > d3 and refractive indices µ1 > µ2 > µ3 are put in a beaker. The height of each liquid column is h/3. A dot is made at the bottom of the beaker. For near-normal vision, find the apparent depth of the dot. Answer: An unsymmetrical double convex thin lens forms the image of a point object on its axis. Will the position of the image change if the lens is reversed? The near vision of an average person is 25cm. To view an object with an angular magnification of 10, what should be the power of the microscope? Will the focal length of a lens for red light be more, same or less than that for blue light? An astronomical refractive telescope has an objective of focal length 20m and an eyepiece of focal length 2cm.(a) The length of the telescope tube is 20.02m. (b) The magnification is 1000. (c) The image formed is inverted. (d) An objective of a larger aperture will increase the brightness and reduce chromatic aberration of the image. Answer: (a) The length of the telescope tube is 20.02m. (b) The magnification is 1000. (c) The image formed is inverted. ... A magnifying glass is used, as the object to be viewed can be brought closer to the eye than the normal near point. This results in (a) a larger angle to be subtended by the object at the eye and hence viewed in greater detail. (b) the formation of a virtual erect image. (c) increase in the field of view. (d) infinite magnification at the near point. Answer: (a) a larger angle to be subtended by the object at the eye and hence viewed in greater detail. 
(b) the formation of a virtual erect image. ... Between the primary and secondary rainbows, there is a dark band known as Alexandar's dark band. This is because (a) light scattered into this region interfere destructively. (b) there is no light scattered into this region (c) light is absorbed in this region. (d) angle made at the eye by the scattered rays with respect to the incident light of the sun lies between approximately 42° and 50°. Answer: (a) light scattered into this region interfere destructively. (d) angle made at the eye by the scattered rays with respect to the incident light of the sun lies between approximately 42° and... A rectangular block of glass ABCD has a refractive index 1.6. A pin is placed midway on the face AB. When observed from the face AD, the pin shall (a) appear to be near A. (b) appear to be near D. (c) appear to be at the centre of AD. (d) not be seen at all. Answer: (a) appear to be near A. (d) not be seen at all. The pin will appear to be near A as long as the angle of incidence on AD of the ray emerging from the pin is smaller... Consider an extended object immersed in water contained in a plane trough. When seen from close to the edge of the trough the object looks distorted because (a) the apparent depth of the points close to the edge is nearer the surface of the water compared to the points away from the edge. (b) the angle subtended by the image of the object at the eye is smaller than the actual angle subtended by the object in the air. (c) some of the points of the object far away from the edge may not be visible because of total internal reflection. (d) water in a trough acts as a lens and magnifies the object. Answer: (a) the apparent depth of the points close to the edge is nearer the surface of the water compared to the points away from the edge. (b) the angle subtended by the image of the object at the... There are certain material developed in laboratories which have a negative refractive index (Fig. 9.3). A ray incident from the air (medium 1) into such a medium (medium 2) shall follow a path given by Answer: (a) The speed of the car in the rear is 65 km h–1. Negative refractive index materials react to Snell's law in the exact opposite direction. When a... A car is moving with at a constant speed of 60 km h–1 on a straight road. Looking at the rearview mirror, the driver finds that the car following him is at a distance of 100 m and is approaching with a speed of 5 km h –1. In order to keep track of the car in the rear, the driver begins to glance alternatively at the rear and side mirror of his car after every 2 still the other car overtakes. If the two cars were maintaining their speeds, which of the following statement (s) is/are correct? (a) The speed of the car in the rear is 65 km h–1. (b) In the side mirror, the car in the rear would appear to approach with a speed of 5 km h–1 to the driver of the leading car. (c) In the rearview mirror the speed of the approaching car would appear to decrease as the distance between the cars decreases. (d) In the side mirror, the speed of the approaching car would appear to increase as the distance between the cars decreases. Answer: (d) In the side mirror, the speed of the approaching car would appear to increase as the distance between the cars decreases. The optical density of turpentine is higher than that of water while its mass density is lower. The figure shows a layer of turpentine floating over water in a container. 
For which one of the four rays incident on turpentine in the figure, the path shown is correct? b) 2 c) 3 Answer: b) 2 When light travels from (optically) rarer medium air to optically denser medium turpentine, it bends towards the normal, i.e., θ1 >... The direction of a ray of light incident on a concave mirror as shown by PQ while directions in which the ray would travel after reflection is shown by four rays marked 1, 2, 3, and 4. Which of the four rays correctly shows the direction of reflected ray? Answer: b) 2 After reflection, the ray PQ of light that passes through focus F and strikes the concave mirror should become parallel to the primary... The phenomena involved in the reflection of radiowaves by ionosphere is similar to a) reflection of light by a plane mirror b) total internal reflection of light in the air during a mirage c) dispersion of light by water molecules during the formation of a rainbow d) scattering of light by the particles of air Answer: b) total internal reflection of light in the air during a mirage The ionosphere, a layer of the atmosphere, reflects radio waves, allowing them to reach far-flung portions of the globe.... The radius of curvature of the curved surface of a plano-convex lens is 20 cm. If the refractive index of the material of the lens be 1.5, it will a) act as a convex lens only for the objects that lie on its curved side b) act as a concave lens only for the objects that lie on its curved side c) act as a convex lens irrespective of the side on which the object lies d) act as a concave lens irrespective of the side on which the object lies Answer: c) act as a convex lens irrespective of the side on which the object lies You are given four sources of light each one providing a light of a single colour- red, blue, green, and yellow. Suppose the angle of refraction for a beam of yellow light corresponding to a particular angle of incidence at the interface of two media is 90o. Which of the following statements is correct if the source of yellow light is replaced with that of other lights without changing the angle of incidence? a) the beam of red light would undergo total internal reflection b) the beam of red light would bend towards normal while it gets refracted through the second medium c) the beam of blue light would undergo total internal reflection d) the beam of green light would bend away from the normal as it gets refracted through the second medium Answer: c) the beam of blue light would undergo total internal reflection A passenger in an aeroplane shall a) never see a rainbow b) may see a primary and a secondary rainbow as concentric circles c) may see a primary and a secondary rainbow as concentric arcs d) shall never see a secondary rainbow Answer: b) may see a primary and a secondary rainbow as concentric circles As an aeroplane flies higher in the sky, passengers may notice a primary and secondary rainbow in the form of concentric... An object approaches a convergent lens from the left of the lens with a uniform speed 5 m/s and stops at the focus. The image a) moves away from the lens with a uniform speed 5 m/s b) moves away from the lens with a uniform acceleration c) moves away from the lens with a non-uniform acceleration d) moves towards the lens with a non-uniform acceleration Answer: c) moves away from the lens with a non-uniform acceleration In our case, the object approaches a convergent lens from the left at a uniform speed of 5 m/s, causing the image to travel away... 
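The claim in the convergent-lens answer just above, that the image recedes with non-uniform acceleration while the object moves at a constant 5 m/s toward the focus, can be checked numerically with the thin-lens equation. The focal length, starting distance, and time step below are illustrative choices, not values from the original problem.

```python
# Numerical check: object approaching a convex lens at a constant 5 m/s.
# With 1/p + 1/q = 1/f (real-positive sign convention), the image distance
# q = p*f/(p - f) grows ever faster as p approaches f, i.e. the image recedes
# with non-uniform acceleration. f, the start at 3f, and dt are illustrative.
f, speed = 0.20, 5.0                     # focal length [m], object speed [m/s]
dt = 0.001                               # time step [s]
p = 3 * f                                # object starts at 3f
prev_q = p * f / (p - f)
while p - speed * dt > 1.05 * f:         # stop shortly before the focus
    p -= speed * dt
    q = p * f / (p - f)
    image_speed = (q - prev_q) / dt      # keeps increasing step after step
    prev_q = q
print(f"image speed near the focus ~ {image_speed:.1f} m/s (and still growing)")
```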
A short pulse of white light is incident from air to a glass slab at normal incidence. After travelling through the slab, the first colour to emerge is a) blue b) green c) violet d) red Answer: d) Red The relation v = fλ describes the velocity of a wave. The frequency of light does not change when it travels from one medium to another. As a result, the bigger the wavelength, the... Two cells of same emf E but internal resistance r1 and r2 are connected in series to an external resistor R. What should be the value of R so that the potential difference across the terminals of the first cell becomes zero. Effective emf of two cells = E + E = 2E Effective resistance = R + r1 + r2 Electric current is given as I = 2E/R+r1+r2 Potential difference is given as V1 – E – Ir1 = 0 Which f=gives R = r1 –... Let there be n resistors R1……..Rn with Rmax = max(R1……Rn) and Rmin = min(R1…….Rn). Show that when they are connected in parallel, the resultant resistance Rp < Rmin and when they are connected in series, the resultant resistance Rs > Rmax. Interpret the result physically. The current is represented as I = E/R+nR when the resistors are connected in series. Current is expressed as 10I = E/(R+R/n) when the resistors are connected in parallel.... First, a set of n equal resistors of R each are connected in series to a battery of emf E and internal resistance R. A current I is observed to flow. Then the n resistors are connected in parallel to the same battery. It is observed that the current is increased 10 times. What is 'n'? The current is represented as I = E/R+nR when the resistors are connected in series. Current is expressed as 10I = E/(R+R/n) when the resistors are connected in parallel. We get n = 10 by solving... . A cell of emf E and internal resistance r is connected across an external resistance R. Plot a graph showing the variation of PD across R versus R. CBSE, Class 12, Current Electricity, NCERT Exemplar, Physics The graphic depiction is as follows: The resistance r is connected across the external resistance R, and E is the cell's emf. V = ER/R+r is the connection between voltage and R.... While doing an experiment with potentiometer it was found that the deflection is one-sided and i) the deflection decreased while moving from one end A of the wire to the end B; ii) the deflection increased, while the jockey was moved towards the end B. i) Which terminal +ve or –ve of the cell E, is connected at X in case i) and how is E1 related to E? ii) Which terminal of the cell E1 is connected at X in case ii)? Class 12, Current Electricity, NCERT, NCERT Exemplar, Physics The positive terminal of cell E1 is linked to E, and E is connected to X. Furthermore, E1 > E ii) cell E1's negative terminal is linked to X. AB is a potentiometer wire. If the value of R is increased, in which direction will the balance point J shift? CBSE, Class 12, NCERT, NCERT Exemplar, Physics The potential difference across AB dropped as the value of R grew, and therefore the potential gradient across AB decreased. E' = kl is the formula for this.... Power P is to be delivered to a device via transmission cables having resistance Rc. If V is the voltage across R and I the current through it, find the power wasted and how can it be reduced. P = i2Rc is the power utilised by transmission lines. The resistance of connecting wires is denoted by Rc. P = VI is the formula for calculating power. Power... 
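The transmission-line answer above can be turned into a small worked example: for a fixed delivered power P, the wasted power is I^2 Rc = (P/V)^2 Rc, so raising the transmission voltage cuts the loss with the square of V. The power, cable resistance, and voltages below are illustrative numbers, not values from the original question.

```python
# Line loss for a fixed delivered power: P_wasted = (P / V)^2 * Rc.
# P, Rc and the two voltages are assumed, illustrative values.
def line_loss(P_watts, V_volts, Rc_ohms):
    I = P_watts / V_volts              # current needed to deliver P at voltage V
    return I**2 * Rc_ohms              # power dissipated in the cables

P, Rc = 100_000.0, 10.0                # 100 kW delivered through 10-ohm cables
for V in (1_000.0, 10_000.0):
    print(f"V = {V:>7.0f} V  ->  wasted {line_loss(P, V, Rc):,.0f} W")
# V =    1000 V  ->  wasted 100,000 W
# V =   10000 V  ->  wasted 1,000 W
```

This is why power is transmitted at high voltage and stepped down near the consumer: a tenfold increase in V reduces the Joule loss in the same cables by a factor of one hundred.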
Match the example given in Column I with the name of the reaction in Column II Chemistry, Class 12, NCERT Exemplar Solution: (i) is e (ii) is d (iii) is a (iv) is b (v) is f (vi) is c Match the reactions given in Column I with the suitable reagents given in Column II. Solution: (i) is c (ii) is d (iii) is a (iv) is b Match the acids given in Column I with their correct IUPAC names given in Column II. Solution: (i) is b (ii) is e (iii) is d (iv) is a (v) is c Match the common names given in Column I with the IUPAC names given in Column II Solution: (i) is d (ii) is e (iii) is a (iv) is b (v) is c Can Gatterman-Koch reaction be considered similar to Friedel Craft's acylation? Discuss. Solution: Both reactions resemble each other. In Friedel Craft's acylation reaction, an aryl group or benzene is treated with an acid chloride in the presence of anhydrous AlCl3 and corresponding... Ethylbenzene is generally prepared by acetylation of benzene followed by reduction and not by direct alkylation. Think of a possible reason. Solution: This is due to the formation of polysubstituted products. To avoid the formation of polysubstituted products Friedel-craft's alkylation reaction is not used for the preparation of... Complete the following reaction sequence. Why are carboxylic acids more acidic than alcohols or phenols although all of them have a hydrogen atom attached to an oxygen atom (—O—H)? Solution: Due to the resonance in carboxylic acids, the negative charge is at the more electronegative oxygen whereas, in alcohols or phenols, the negative charge is on a less electronegative atom.... . Identify the compounds A, B and C in the following reaction. Solution: Compound A = CH3-MgBr Compound B = CH3-COOH Compound C = CH3COOCH3 Carboxylic acids contain carbonyl group but do not show the nucleophilic addition reaction like aldehydes or ketones. Why? Solution: The oxygen atom in carbonyl compound pull more shared pair of electron towards itself and so, carbon acquires partial positive charge and oxygen acquires partial negative charge in... Alkenes and carbonyl compounds both contain a π bond but alkenes show electrophilic addition reactions whereas carbonyl compounds show nucleophilic addition reactions. Explain. Solution: Both the compounds carbon atom is attached to the electronegative atom oxygen. Thus the oxygen pulls more shared pair of electron towards them and a partial positive charge will be... Arrange the following in decreasing order of their acidic strength. Explain the arrangement. C6H5COOH, FCH2COOH, NO2CH2COOH Solution: NO2CH2COOH > FCH2COOH > C6H5COOH. NO2CH2COOH is most acidic among the given three compounds. Electron withdrawing groups like -NO2, increases the acidity of carboxylic acids by... Compound 'A' was prepared by oxidation of compound 'B' with alkaline KMnO4. Compound 'A' on reduction with lithium aluminium hydride gets converted back to compound 'B'. When compound 'A' is heated with compound B in the presence of H2SO4 it produces the fruity smell of compound C to which family the compounds 'A', 'B' and 'C' belong to? Solution: Compound 'A' belongs to the carboxylic acid. Compound 'B' belongs to alcohol. Compound 'C' belongs to an ester group. What product will be formed on reaction of propanal with 2-methyl propanal in the presence of NaOH? What products will be formed? Write the name of the reaction also. Solution: When propanal reacts with 2-methyl propanal in the presence of NaOH, the mixture of aldehydes are formed. 
Both the reactants have an alpha-hydrogen and hence, can undergo cross aldol... Arrange the following in decreasing order of their acidic strength and give the reason for your answer. Solution: FCH2COOH > ClCH2COOH > C6H5CH2COOH > CH3COOH > CH3CH2OH. CH3CH2OH is least acidic among the given compounds. C6H5CH2COOH is more acidic than CH3COOH due to the resonance in... Oxidation of ketones involves carbon-carbon bond cleavage. Name the products formed on oxidation of 2, 5-dimethylhexan-3-one. Solution: Solution: The products formed on oxidation of 2, 5-dimethylhexan-3-one are the mixtures of ketone and carboxylic acids. Ketone is then further oxidized to carboxylic acids. Overall the... Name the electrophile produced in the reaction of benzene with benzoyl chloride in the presence of anhydrous AlCl3. Name the reaction also. Solution: The electrophile produced in the reaction of benzene with benzoyl chloride in the presence of anhydrous AlCl3 is benzoylinium cation. The product formed in this reaction is benzophenone.... Benzaldehyde can be obtained from benzal chloride. Write reactions for obtaining benzyl chloride and then benzaldehyde from it. SOLUTION: Toluene is first converted to benzal chloride by side-chain chlorination, in presence of Chlorine gas and light. Benzal chloride on hydrolysis at 373K gives benzaldehyde. Write IUPAC names of the following structures. Solution: (i) Ethane-1,2-dial. (ii) Benzene-1, 4-dicarbaldehyde. (iii) 3-Bromobenzaldehyde. Give the structure of the following compounds. (i) 4-Nitropropiophenone (ii) 2-Hydroxycyclopentanecarbaldehyde (iii) Phenyl acetaldehyde Give the IUPAC names of the following compounds Solution: (i) 3-Phenylprop-2-ene-1-al. (ii) Cyclohexanecarbaldehyde (iii) 3-Oxopentan-1-al (iv) IUPAC name: But-2-enal Power consumed by the transmission lines is given as P = i2Rc. Where Rc is the resistance of connecting cables. Power is given as P = VI. The transmission of power takes places either during low... Arrange the bonds in order of increasing ionic character in the molecules: LiF, , and . CBSE, Chemical Bonding and Molecular Structure, Chemistry, Class 12, NCERT Exemplar Solution: The difference in electronegativity between constituent atoms determines the ionic character of a molecule. As a result, the greater the difference, the greater the ionic character of a... Why are alloys used for making standard resistance coils? Alloys are used in the making of the standard resistance coils because they have less temperature coefficient of resistance and the temperature sensitivity is also less. Explain with the help of suitable example polar covalent bond. Solution: The bond pair of electrons are not shared equally when two unique atoms with different electronegativities join to form a covalent bond. The bond pair is attracted to the nucleus of an... Define electronegativity. How does it differ from electron gain enthalpy? Solution: "Electronegativity refers to an atom's ability to attract a bond pair of electrons towards itself in a chemical compound." Sr. No Electronegativity Electron affinity 1 A tendency to... For wiring in the home, one uses Cu wires or Al wires. What considerations are involved in this? The main considerations in the selection of the wires is the conductivity of the metal, cost of metal, and their availability. What is the advantage of using thick metallic strips to join wires in a potentiometer?strips to join wires in a potentiometer? 
The advantage of using thick metallic strips is that the resistance of these strips is negligible. Write the significance/applications of dipole moment. Solution: There is a difference in electro-negativities of constituents of the atom in a heteronuclear molecule, which causes polarisation. As a result, one end gains a positive charge, while the... Although both and are triatomic molecules, the shape of the molecule is bent while that of is linear. Explain this on the basis of dipole moment. CBSE, Chemical Bonding and Molecular Structure, Chemistry, Chemistry, Class 12, NCERT Exemplar Solution: $CO_2$ has a dipole moment of 0 according to experimental results. And it's only possible if the molecule's shape is linear, because the dipole moments of the C-O bond are equal and... What are the advantages of the null-point method in a Wheatstone bridge? What additional measurements would be required to calculate R unknown by any other? The advantage of a null-point in the Wheatstone bridge is that the resistance of the galvanometer is not affected by the balance point. The R unknown is calculated by using Kirchhoff's rule. The relaxation time τ is nearly independent of applied E field whereas it changes significantly with temperature T. First fact is responsible for Ohm's law whereas the second fact leads to a variation of ρ with temperature. Elaborate why? Relaxation time is the time interval between two successive collisions of the electrons.It is defined asτ = mean free path/rms velocity of electrons usually, the drift velocity of the electrons is... Use Lewis symbols to show electron transfer between the following atoms to form cations and anions :(iii) Al and N. Solution: Below is a list of Lewis symbols. To form a cation, a metal atom loses one or more electrons, while a nonmetal atom gains one or more electrons. Ionic bonds are formed between cations and... Use Lewis symbols to show electron transfer between the following atoms to form cations and anions : (i) K and S (ii) Ca and O Temperature dependence of resistivity ρ(T) of semiconductors, insulators, and metals is significantly based on the following factors: a) number of charge carriers can change with temperature T b) time interval between two successive collisions can depend on T c) length of material can be a function of T d) mass of carriers is a function of T CBSE, Class 12, Current Electricity, NCERT Exemplar, Periodic Classification of Elements, Physics, Some Special Series The correct answer is a) number of charge carriers can change with temperature T b) time interval between two successive collisions can depend on T In a meter bridge, the point D is a neutral point. a) the meter bridge can have no other neutral point for this set of resistances b) when the jockey contacts a point on meter wire left of D, current flows to B from the wire c) when the jockey contacts a point on a meter wire to the right of D, current flows from B to the wire through the galvanometer d) when R is increased, the neutral point shifts to left The correct answer is a) the meter bridge can have no other neutral point for this set of resistances c) when the jockey contacts a point on a meter wire to the right of D, current flows from B to... Write the resonance structures for , and Solution: Resonance is the phenomenon that allows a molecule to be expressed in multiple ways, none of which fully explain the molecule's properties. The molecule's structure is called a resonance... can be represented by structures 1 and 2 shown below. 
Can these two structures be taken as the canonical forms of the resonance hybrid representing ? If not, give reasons for the same. Solution: The positions of the atoms remain constant in canonical forms, but the positions of the electrons change. The positions of atoms change in the given canonical forms. As a result, they... Explain the important aspects of resonance with reference to the ion. Solution: However, while the carbonate ion cannot be represented by a single structure, the properties of the ion can be described by two or more different resonance structures. The actual structure... Define Bond length. Solution: Bond length is defined as the equilibrium distance between the nuclei of two bonded atoms in a molecule. How do you express the bond strength in terms of bond order? Solution: During the formation of a molecule, the extent of bonding that occurs between two atoms is represented by the bond strength of the molecule. As the bond strength increases, the bond... Although geometries of and molecules are distorted tetrahedral, bond angle in water is less than that of Ammonia. Discuss. Solution: Ammonia's central atom (N) has one lone pair and three bond pairs. In water, the central atom (O) has two lone pairs and two bond pairs. As a result, the two bond pairs repel the two lone... Discuss the shape of the following molecules using the VSEPR model: Solution: $BeCl_2$ The central atom does not have a lone pair, but it does have two bond pairs. As a result, its shape is AB2, or linear. $BCl_3$ The central atom has three bond pairs but no lone... Write the favourable factors for the formation of an ionic bond. Solution: Ionic bonds are formed when one or more electrons are transferred from one atom to another. As a result, the ability of neutral atoms to lose or gain electrons is required for the... Define the octet rule. Write its significance and limitations Solution: "Atoms can combine either by transferring valence electrons from one atom to another or by sharing their valence electrons in order to achieve the closest inert gas configuration by having... Draw the Lewis structures for the following molecules and ions : Solution: The lewis dot structures are: Write Lewis symbols for the following atoms and ions: Sand and and Solution: For S and S2- A sulphur atom has only 6 valence electrons, which is a very small number. As a result, the Lewis dot symbol for the letter S is The presence of a... The measurement of an unknown resistance R is to be carried out using Wheatstone bridge. Two students perform an experiment in two ways. The first student take R2 = 10Ω and R1 = 5Ω. The other student takes R2 = 1000 Ω and R1 = 500 Ω. In the standard arm, both take R3 = 5 Ω. Both find R = R2/R1 R3 = 10 Ω within errors. a) the errors of measurement of the two students are the same b) errors of measurement do depend on the accuracy with which R2 and R1 can be measured c) if the student uses large values of R2 and R1, the currents through the arms will be feeble. This will make determination of null point accurately more difficult d) Wheatstone bridge is a very accurate instrument and has no errors of measurement CBSE, Class 12, NCERT Exemplar, Physics The correct answer is b) errors of measurement do depend on the accuracy with which R2 and R1 can be measured c) if the student uses large values of R2 and R1, the currents through the arms will be... Write Lewis dot symbols for atoms of the following elements :e) N f) Br Solution: Nitrogen atoms have only five valence electrons in total. 
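The Wheatstone-bridge question above can be made concrete with a short sketch: both students obtain the same nominal value R = R3·R2/R1, and to first order the relative error in R is the sum of the relative errors of the three known resistors, so the accuracy hinges on how well R1 and R2 are known. The 1% tolerance used below is an assumed value for illustration only.

```python
# Wheatstone bridge at balance: R = R3 * R2 / R1, with first-order error
# propagation dR/R = dR1/R1 + dR2/R2 + dR3/R3. The 1% tolerance is assumed.
def unknown_resistance(R1, R2, R3, rel_tol=0.01):
    R = R3 * R2 / R1
    rel_err = 3 * rel_tol              # equal relative tolerance on each arm
    return R, R * rel_err

for R1, R2 in [(5.0, 10.0), (500.0, 1000.0)]:
    R, dR = unknown_resistance(R1, R2, R3=5.0)
    print(f"R1={R1:g}, R2={R2:g}:  R = {R:g} ohm, about +/- {dR:.2f} ohm")
# Both choices give R = 10 ohm; with very large R1 and R2 the bridge currents
# become feeble, which in practice makes the null point harder to locate.
```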
As a result, the Lewis dot symbol for N is Bromine, because the atom has only seven valence electrons. As a result,... Write Lewis dot symbols for atoms of the following elements :c) B d) O Solution: Boron atoms have only three valence electrons, which is a very small number. As a result, the Lewis dot symbols for B are as follows: The oxygen atom has only six valence... solution:The correct answer is a) number of charge carriers can change with temperature T b) time interval between two successive collisions can depend on T Consider a simple circuit in the figure.stands for a variable resistance R'. R' can vary from R0 to infinity. r is internal resistance of the battery, a) potential drop across AB is nearly constant as R' is varied b) current through R' is nearly a constant as R' is varied c) current I depends sensitively on R' d) I ≥V/r+R always solution: The correct answer is a) potential drop across AB is nearly constant as R' is varied d) I ≥V/r+R always Kirchhoff's junction rule is a reflection of a) conservation of current density vector b) conservation of charge c) the fact that the momentum with which a charged particle approaches a junction is unchanged as the charged particle leaves the junction d) the fact that there is no accumulation of charges at a junction solution: The correct answer is b) conservation of charge d) the fact that there is no accumulation of charges at a junction Which of the following characteristics of electrons determines the current in a conductor? v a) drift velocity alone b) thermal velocity alone c) both drift velocity and thermal velocity d) neither drift nor thermal velocity solution: The correct answer is a) drift velocity alone A metal rod of length 10 cm and a rectangular cross-section of 1 cm × 1/2 cm is connected to battery across opposite faces. The resistance will be a) maximum when the battery is connected across 1 cm × 1/2 cm faces b) maximum when the battery is connected across 10 cm × 1 cm faces c) maximum when the battery is connected across 10 cm × 1/2 cm faces d) same irrespective of the three faces solution:The correct solution is a) maximum when the battery is connected across 1 cm × 1/2 cm faces Two cells of emf's approximately 5V and 10V are to be accurately compared using a potentiometer of length 400 cm. a) the battery that runs the potentiometer should have voltage of 8V b) the battery of potentiometer can have a voltage of 15V and R adjusted so that the potential drop across the wire slightly exceeds 10V c) the first portion of 50 cm of wire itself should have a potential drop of 10V d) potentiometer is usually used for comparing resistances and not voltages Solution: The correct solution is b) the potentiometer's battery can be set to 15V and R adjusted so that the potential drop across the wire is a little higher than 10V. A resistance R is to be measured using a meter bridge. Student chooses the standard resistance S to be 100Ω. He finds the null point at l1 = 2.9 cm . He is told to attempt to improve the accuracy. Which of the following is a useful way? 
a) he should measure l1 more accurately b) he should change S to 1000 Ω and repeat the experiment c) he should change S to 3 Ω and repeat the experiment d) he should give up hope of a more accurate measurement with a meter bridge solution:The correct answer is c) he should change S to 3 Ω and repeat the experiment Two batteries of emf ε1 and ε2 and internal resistances r1 and r2 respectively are connected in parallel as shown in the figure.a) the equivalent emf εeq of the two cells is between ε1 and ε2 that is ε1 < εeq < ε2 b) the equivalent emf εeq is smaller than ε1 c) the εeq is given by εeq = ε1 + ε2 always d) εeq is independent of internal resistances r1 and r2 solution: The correct answer is a) the equivalent emf εeq of the two cells is between ε1 and ε2 that is ε1 < εeq < ε2 Consider a current-carrying wire in the shape of a circle. Note that as the current progresses along the wire, the direction of j changes in an exact manner, while the current I remain unaffected. The agent that is essentially responsible for is a) source of emf b) electric field produced by charges accumulated on the surface of wire c) the charges just behind a given segment of wire which push them just the right way by repulsion d) the charges ahead solution: The correct answer is b) electric field produced by charges accumulated on the surface of wire Write a test to differentiate between pentan-2-one and pentan-3-one. Solution: One can differentiate between pentan-2-one and pentan-3-one by iodoform test. Pentan-2-one have a –CO-CH3 group and therefore forms a yellow precipitate of Iodoform. Pentan-2-one gives a... Why is there a large difference in the boiling points of butanal and butane-1-ol? Solution: Butanal has no intermolecular hydrogen bonding but butan-1-ol has intermolecular hydrogen bonding. This bonding in butan-1-ol makes it more stable at a higher temperature than butanal. Which of the following is the correct representation for intermediate of nucleophilic addition reaction to the given carbonyl compound (A) : Solution: Option (A) and (B) are the answers. Reason: Benzophenone can be obtained by ____________. (i) Benzoyl chloride + Benzene + AlCl3 (ii) Benzoyl chloride + Diphenyl cadmium (iii) Benzoyl chloride + Phenyl magnesium chloride (iv) Benzene + Carbon monoxide + ZnCl2 Solution: Option (i) and (ii) are the answers Reason: Benzophenone can be obtained by the Friedel-Craft acylation reaction. The reaction is shown as Through which of the number of the following reactions of carbon atoms can be increased in the chain? (i) Grignard reaction (ii) Cannizaro's reaction (iii) Aldol condensation (iv) HVZ reaction Solution: Option (i) and (iii) are the answers. Reason: Grigned reaction and aldol condensation is used to increase the number of carbon attom in the chain as follows: Write Lewis dot symbols for atoms of the following elements : a) Mg b) Na Solution: Only two valence electrons exist in the magnesium atom. As a result, the Lewis dot symbols for Mg are as follows: Only one valence electron exists in the sodium atom. As a... Which of the following conversions can be carried out by Clemmensen Reduction? (i) Benzaldehyde into benzyl alcohol (ii) Cyclohexanone into cyclohexane (iii) Benzoyl chloride into benzaldehyde (iv) Benzophenone into diphenylmethane Solution: Option (ii) and (iv) are the answers. Reason: The carbonyl group of aldehydes and ketones is reduced to CH2​ group on treatment with zinc amalgam and concentrated hydrochloric acid... Explain the formation of a chemical bond. 
Answer: "A chemical bond is an attractive force that holds a chemical species' constituents together." For chemical bond formation, many theories have been proposed, including valence shell electron... Treatment of compound with NaOH solution yields(i) Phenol (ii) Sodium phenoxide (iii) Sodium benzoate (iv) Benzophenone Solution: Option (ii) and (iii) are the answers. Reason: Treatment of compound with NaOH yields sodium phenoxide and sodium by means of nucleophilic substitution reaction as follows 13. Which of the following compounds do not undergo aldol condensation? Solution: Option (ii) and (iv) are the answers. reason: Aldehydes and ketones and having at least one alpha-hydrogen undergo a reaction in the presence of dilute alkali as catalyst to beta-hydroxy... In Clemmensen Reduction carbonyl compound is treated with _____________. (i) Zinc amalgam + HCl (ii) Sodium amalgam + HCl (iii) Zinc amalgam + nitric acid (iv) Sodium amalgam + HNO3 Solution: Option (i) is the answer. Reason: From the above reaction carbonyl group is treated with Zn−Hg(Zinc Amalgum) and HCl Which of the following compounds will give butanone on oxidation with alkaline KMnO4 solution? (i) Butan-1-ol (ii) Butan-2-ol (iii) Both of these (iv) None of these Solution: Option (ii) is the answer. Which is the most suitable reagent for the following conversion?(i) Tollen's reagent (ii) Benzoyl peroxide (iii) I2 and NaOH solution (iv) Sn and NaOH solution Solution: Option (iii) is the answer. Reason: This reaction is called as lodoform reaction. Compound A and C in the following reaction are :_____________ Solution: Option (ii) is the answer. Reason: Structure of 'A' and type of isomerism in the above reaction are respectively. (i) Prop–1–en–2–ol, metamerism (ii) Prop-1-en-1-ol, tautomerism (iii) Prop-2-en-2-ol, geometrical isomerism (iv) Prop-1-en-2-ol, tautomerism Solution: Option (iv) is the answer. reason: Structure of A and the type of isomerism in the above reaction are Prop-1-en-2-ol, tautomerism respectively. Enol form tautomerises into keto... Which product is formed when the compoundis treated with concentrated aqueous KOH solution? Solution: Option (ii) is the answer. Reason: Benzaldhyde C6​H5​CHO on treatment with KOH yields the corresponding alcohol and acid. In this reaction, there is no alpha hydrogen atom present in... Cannizaro's reaction is not given by _____________. Solution: Option (iv) is the answer. Reason: CH3CHO will not give Cannizzaro's reaction because it contains a-hydrogen while other three compounds have no a-hydrogen. Hence, they will give... The reagent which does not react with both, acetone and benzaldehyde. (i) Sodium hydrogen sulphite (ii) Phenyl hydrazine (iii) Fehling's solution (iv) Grignard reagent Solution: Option (iii) is the answer. Reason: Aromatic aldehydes and ketones does not respond to Fehling's test. Sodium hydrogen sulphite,phenyl hydrazine, grignard reaction are common for carbonyl... Compound can be prepared by the reaction of _____________. The correct order of increasing acidic strength is _____________. (i) Phenol < Ethanol < Chloroacetic acid < Acetic acid (ii) Ethanol < Phenol < Chloroacetic acid < Acetic acid (iii) Ethanol < Phenol < Acetic acid < Chloroacetic acid (iv) Chloroacetic acid < Acetic acid < Phenol < Ethanol Solution: Option (iii) is the answer. Reason: The correct order of increasing acidic strength is Ethanol < Phenol < Acetic acid < Chloroacetic acid. Phenol is more acidic than ethanol... 
Which of the following compounds is most reactive towards nucleophilic addition reactions? Solution: Option (i) is the answer. Addition of water to alkynes occurs in acidic medium and the presence of Hg2+ ions as a catalyst. Which of the following products will be formed on addition of water to but-1-one under these conditions. Solution: Option (ii) is the answer. Reason: Addition of water to but-1-yne in the presence of H2​SO4​ and HgSO4​ gives 2-butaone. The addition takes place by markovnikoff's rule.... Arrange the following compounds in increasing order of dipole moment. CH3CH2CH3, CH3CH2NH2, CH3CH2OH Solution: CH3CH2CH3 < CH3CH2NH2 < CH3CH2OH The dipole moment of CH3CH2OH is greater than that of CH3CH2NH2. CH3CH2CH3 has the least dipole moment among the three given compounds because it is... Predict the product of the reaction of aniline with bromine in a non-polar solvent such as CS2. Solution: The products formed in the reaction of aniline with bromine in a non-polar solvent such as CS2 are 4-Bromoaniline and 2-Bromoaniline where 4-Bromoaniline is the major product. Under what reaction conditions (acidic/basic), the coupling reaction of aryldiazonium chloride with aniline is carried out? Solution: This reaction is carried out in a mild basic medium. This is an electrophilic substitution reaction. Aryldiazonium chloride reacts with aniline to form a yellow dye of p-Aminoazobenzene. Explain why MeNH2 is a stronger base than MeOH? Solution: MeNH2 is a stronger base than MeOH because of the lower electronegativity and the presence of the lone pair of electrons on the nitrogen atom in MeNH2. Why does the acetylation of —NH2 group of aniline reduce its activating effect? Class 12, NCERT Exemplar, Selina Solution: The acetylation of —NH2 group of aniline reduces its activating effect because the lone pair of electrons on the nitrogen of acetanilide interacts with oxygen atom due to resonance. Why is benzene diazonium chloride not stored and is used immediately after its preparation? Solution: At high temperatures, benzene diazonium chloride is highly soluble in water, and at low temperatures, it is a very stable compound in water. Because it is unstable, it should be used as... What is Hinsberg reagent? Solution: Hinsberg's reagent is benzene sulphonyl chloride, also known as $C_6H_5SOCl$. To distinguish between primary, secondary, and tertiary amines, Hinsberg's reagent is used. What is the best reagent to convert nitrile to primary amine? Solution: LiAlH4 and Sodium/Alcohol are the best reagents for converting nitrile to primary amine. The nitriles can be converted into a corresponding primary amine through reduction. What is the product when C6H5CH2NH2 reacts with HNO2? Solution: The main product is $C_6H_5CH_2OH$ When $C_6H_5CH_2NH_2$ reacts with HNO2, it produces an unstable diazonium salt, which is then converted to alcohol. When $C_6H_5CH_2NH_2$ reacts with... Why is NH2 group of aniline acetylated before carrying out nitration? Solution: The NH2 group of aniline is acetylated before nitration to control the nitration reaction and the formation of tarry oxidation products and nitro derivatives. P-nitroaniline is the main... What is the role of HNO3 in the nitrating mixture used for nitration of benzene? Solution: The nitration of organic compounds is done with a nitration mixture, which is a 1:1 solution of HNO3 and H2SO4. In the nitration of benzene, it acts as a base and provides electrophile. Which of the following reactions belong to electrophilic aromatic substitution? 
(i) Bromination of acetanilide (ii) Coupling reaction of aryldiazonium salts (iii) Diazotisation of aniline (iv) Acylation of aniline Solution: Option (i) and (ii) are the answers. Reason:...
Refer to the graphs below and match the following: CBSE, Motion in a straight line, NCERT Exemplar, Physics Graph Characteristics a) i) has v > 0 and a < 0 throughout b) ii) has x > 0 throughout and has a point with v = 0 and a point with a = 0 c) iii) has a point with zero displacement for t...
Under which of the following reaction conditions does aniline give the p-nitro derivative as the major product? (i) Acetyl chloride/pyridine followed by reaction with conc. H2SO4 + conc. HNO3 (ii) Acetic anhydride/pyridine followed by conc. H2SO4 + conc. HNO3 (iii) Dil. HCl followed by reaction with conc. H2SO4 + conc. HNO3 (iv) Reaction with conc. HNO3 + conc. H2SO4 Solution: Option (i) and (ii) are the answers. Reason: In addition to the nitro derivatives, direct nitration of aniline produces tarry oxidation products. Furthermore, in a strongly acidic...
Which of the following reactions are correct? Solution: Option (i) and (iii) are the answers. Reason:
Which of the following amines can be prepared by Gabriel synthesis? (i) Isobutyl amine (ii) 2-Phenylethylamine (iii) N-methyl benzylamine (iv) Aniline Solution: Option (i) and (ii) are the answers. Reason: Gabriel synthesis is used for the preparation of primary amines. Phthalimide on treatment with ethanolic potassium hydroxide forms potassium...
Arenium ion involved in the bromination of aniline is __________. Solution: Option (i), (ii) and (iii) are the answers. Reason: The arenium ions involved in the bromination of aniline are as follows:
The product of the following reaction is __________. Solution: Option (A) and (B) are the answers. Reason:
The reagents that can be used to convert benzene diazonium chloride to benzene are __________. Solution: Option (ii) and (iii) are the answers. Reason: Hypophosphorous acid (phosphinic acid) and ethanol, for example, reduce diazonium salts to arenes and are themselves oxidised to phosphorous...
Which of the following species are involved in the carbylamine test? Solution: Option (i) and (ii) are the answers. Reason: Only RNC and CHCl3 are involved in the carbylamine reaction.
Reduction of nitrobenzene by which of the following reagents gives aniline? (i) Sn/HCl (ii) Fe/HCl (iii) H2-Pd (iv) Sn/NH4OH Solution: Option (i), (ii), and (iii) are the answers. Reason: They are reducing agents.
Which of the following cannot be prepared by Sandmeyer's reaction? (i) Chlorobenzene (ii) Bromobenzene (iii) Iodobenzene (iv) Fluorobenzene Solution: Option (iii) and (iv) are the answers. Reason: Sandmeyer's reaction is used for the preparation of chlorobenzene and bromobenzene.
Which of the following methods of preparation of amines will not give the same number of carbon atoms in the chain of amines as in the reactant? (i) The reaction of nitrite with LiAlH4. (ii) The reaction of the amide with LiAlH4 followed by treatment with water. (iii) Heating alkyl halide with potassium salt of phthalimide followed by hydrolysis. (iv) Treatment of amide with bromine in the aqueous solution of sodium hydroxide. Solution: Option (iv) is the answer. Reason: In Hoffmann bromamide degradation, as the name suggests, the amide is converted to an amine with one carbon less, so this is the method in which we don't get...
Which of the following should be most volatile? Solution: Option (ii) is the answer.
Reason: The order of boiling points of isomeric amines is 1° amine > 2° amine > 3° amine. 3° amines have no intermolecular association because there are no H...
Among the following amines, the strongest Brönsted base is __________. Solution: Option (iv) is the answer. Reason: Option (iv) is the strongest Brönsted base as there is no delocalisation of the lone pair of electrons on the N atom, which is not possible in aniline and in...
The correct decreasing order of basic strength of the following species is _______. H2O, NH3, OH–, NH2– (i) NH2– > OH– > NH3 > H2O (ii) OH– > NH2– > H2O > NH3 (iii) NH3 > H2O > NH2– > OH– (iv) H2O > NH3 > OH– > NH2– Solution: Option (i) is the answer. Reason: NH3 is more basic than H2O; therefore NH2– (the conjugate base of the weaker acid NH3) is a stronger base than OH–.
Which of the following compounds is the weakest Brönsted base? Solution: Option (iii) is the answer. Reason: A Brönsted–Lowry base is a proton acceptor or hydrogen ion acceptor. Amines have a stronger tendency to accept protons and are strong Brönsted bases....
Which of the following compounds will not undergo an azo coupling reaction with benzene diazonium chloride? (i) Aniline (ii) Phenol (iii) Anisole (iv) Nitrobenzene Solution: Option (iv) is the answer. Reason: The diazonium cation is a weak electrophile and hence it reacts with electron-rich compounds containing electron-donating groups such as −OH, -$NH_2$ and...
The best method for preparing primary amines from alkyl halides without changing the number of carbon atoms in the chain is (i) Hoffmann Bromamide reaction (ii) Gabriel phthalimide synthesis (iii) Sandmeyer reaction (iv) Reaction with Solution: Option (ii) is the answer. Reason: The best method for preparing primary amines from alkyl halides without changing the number of carbon atoms in the chain is Gabriel synthesis. Because this...
The reaction ArN2+Cl– →(Cu/HCl) ArCl + N2 + CuCl is named as _________. (i) Sandmeyer reaction (ii) Gatterman reaction (iii) Claisen reaction (iv) Carbylamine reaction Solution: Option (ii) is the answer. Reason: Diazonium salts in the presence of copper powder and halogen acid give the aryl halide. The Gattermann reaction is a variation of the Sandmeyer reaction in which...
Acid anhydrides on reaction with primary amines give ____________. (i) amide (ii) imide (iii) secondary amine (iv) imine Solution: Option (i) is the answer. Reason: When acid anhydrides react with primary amines, they produce amides. The H atom of the amino group is replaced with an acyl group in this nucleophilic...
The most reactive amine towards dilute hydrochloric acid is ___________. Solution: Option (ii) is the answer. Reason: The reactivity of amines is proportional to their basicity. If the R group is, the order of basicity is secondary amine ...
Reduction of aromatic nitro compounds using Fe and HCl gives __________. (i) aromatic oxime (ii) aromatic hydrocarbon (iii) aromatic primary amine (iv) aromatic amide Solution: Option (iii) is the answer. Reason: Reduction of nitro aryl compounds in the presence of Fe and HCl gives aromatic primary amines.
In the nitration of benzene using a mixture of conc. HNO3 and conc. H2SO4, the species which initiates the reaction is __________. Solution: Option (iii) is the answer.
The gas evolved when methylamine reacts with nitrous acid is __________.
(i) (ii) (iii) (iv)
Methylamine reacts with HNO2 to form _________.
The correct increasing order of basic strength for the following compounds is _________. (i) II < III < I (ii) III < I < II (iii) III < II < I (iv) II < I < III Solution: Option (iv) is the answer. Reason: An electron-donating group increases the basicity while an electron-withdrawing group decreases the basicity of...
Hoffmann Bromamide Degradation reaction is shown by __________. Solution: Option (ii) is the answer. Reason: The aryl amide is converted to an arylamine in the presence of $Br_2$ and $NaOH$.
The best reagent for converting 2-phenylpropanamide into 2-phenylpropanamine is _____. (i) excess H2 (ii) Br2 in aqueous NaOH (iii) iodine in the presence of red phosphorus (iv) LiAlH4 in ether Solution: Option (iv) is the answer. Reason:
Amongst the given set of reactants, the most appropriate for preparing a 2° amine is _____. (i) 2° R—Br + NH3 (ii) 2° R—Br + NaCN followed by H2/Pt (iii) 1° R—NH2 + RCHO followed by H2/Pt (iv) 1° R—Br + potassium phthalimide followed by H3O+/heat
The source of nitrogen in Gabriel synthesis of amines is _____________. (i) Sodium azide, NaN3 (ii) Sodium nitrite, NaNO2 (iii) Potassium cyanide, KCN (iv) Potassium phthalimide Solution: Option (iv) is the answer. Reason: Gabriel synthesis: the reaction is given in the image. The source of the nitrogen atom in Gabriel synthesis is potassium phthalimide.
To prepare a 1° amine from an alkyl halide with simultaneous addition of one CH2 group in the carbon chain, the reagent used as a source of nitrogen is ___________. (i) Sodium amide, NaNH2 (ii) Sodium azide, NaN3
Which of the following reagents would not be a good choice for reducing an aryl nitro compound to an amine? (i) H2 (excess)/Pt (ii) LiAlH4 in ether (iii) Fe and HCl (iv) Sn and HCl Solution: Option (ii) is the answer. Reason: LiAlH4/ether reduces aryl nitro compounds to azo compounds: 2C6H5NO2 →(LiAlH4) C6H5N=N-C6H5
Benzylamine may be alkylated as shown in the following equation: C6H5CH2NH2 + R—X → C6H5CH2NHR Which of the following alkyl halides is best suited for this reaction through the SN1 mechanism? (i) CH3Br (ii) C6H5Br (iii) C6H5CH2Br (iv) C2H5Br Solution: Option (iii) is the answer. Reason: C6H5CH2Br is best suited for this reaction through the SN1 mechanism as the carbocation (C6H5CH2+) formed is resonance...
Which of the following is the weakest Brönsted base? Solution: Option (A) is the answer. Reason: Aniline is the weakest Brönsted base due to delocalisation of its lone pair of electrons...
Amongst the following, the strongest base in aqueous medium is ____________. Solution: Option (iii) is the answer. Reason: Due to the electron-releasing nature of the alkyl group, it (R) pushes electrons towards nitrogen and thus makes the unshared electron pair available...
The correct IUPAC name for CH2=CH–CH2–NH–CH3 is (i) Allylmethylamine (ii) 2-amino-4-pentene (iii) 4-aminopent-1-ene (iv) N-methylprop-2-en-1-amine Solution: Option (iv) is the answer. Reason: $CH_2=CHCH_2-NHCH_3$ is N-methylprop-2-en-1-amine.
Which of the following is a 3° amine? (i) 1-methylcyclohexylamine (ii) Triethylamine (iii) tert-butylamine (iv) N-methyl aniline Solution: Option (ii) is the answer. Reason: Triethylamine is a 3° amine because it is derived from ammonia by replacing each hydrogen atom with an ethyl group.
Show that $x=-3$ is a solution of $x^{2}+6x+9=0$. CBSE, Class 10, Exercise 10f, Maths, Quadratic Equations, RS Aggarwal The given equation is $x^{2}+6 x+9=0$ Putting $x=-3$ in the given equation, we get $L H S=(-3)^{2}+6 \times(-3)+9=9-18+9=0=R H S$ $\therefore x=-3$ is a solution of the given equation.
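A quick numerical cross-check of the verification above (a minimal sketch, assuming SymPy is available; this is not part of the RS Aggarwal solution):

from sympy import symbols, solve
x = symbols("x")
print(solve(x**2 + 6*x + 9, x))    # [-3]: the repeated root, since x^2 + 6x + 9 = (x + 3)^2
assert (-3)**2 + 6*(-3) + 9 == 0   # direct substitution, exactly as in the solution above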
Find dy/dx in each of the following: CBSE, Class 12, Exercise 11.4, Maths, RD Sharma Differentiating the equation on both sides with respect to x, we get,
A train covers a distance of 480 km at a uniform speed. If the speed had been 8 km/h less, then it would have taken 3 hours more to cover the same distance. Find the usual speed of the train. CBSE, Class 10, Exercise 10e, Maths, Quadratic Equations, RS Aggarwal Let the usual speed of the train be $x \mathrm{~km} / \mathrm{h}$. $\therefore$ Reduced speed of the train $=(x-8) \mathrm{km} / \mathrm{h}$ Total distance to be covered $=480 \mathrm{~km}$ Time...
(A) (B) (C) (D) The correct option is option (D) $\therefore \mathrm{A}^{2}=\mathrm{A} \cdot \mathrm{A}=\left[\begin{array}{cc}2 & -1 \\ -1 & 2\end{array}\right] \cdot\left[\begin{array}{cc}2 & -1 \\ -1...
A teacher on attempting to arrange the students for mass drill in the form of a solid square found that 24 students were left. When he increased the size of the square by one student, he found that he was short of 25 students. Find the number of students. Let there be $x$ rows. Then, the number of students in each row will also be $x$. $\therefore$ Total number of students $=\left(x^{2}+24\right)$ According to the question: $\begin{array}{l}...
The sum of a natural number and its square is 156. Find the number. Let the required natural number be $x$. According to the given condition, $x+x^{2}=156$ $\Rightarrow x^{2}+x-156=0$ $\Rightarrow x^{2}+13 x-12 x-156=0$ $\Rightarrow x(x+13)-12(x+13)=0$...
A manufacturer produces two Models of bikes – Model X and Model Y. Model X takes 6 man-hours to make per unit, while Model Y takes 10 man-hours per unit. There is a total of 450 man-hours available per week. Handling and Marketing costs are Rs 2000 and Rs 1000 per unit for Models X and Y respectively. The total funds available for these purposes are Rs 80,000 per week. Profits per unit for Models X and Y are Rs 1000 and Rs 500, respectively. How many bikes of each model should the manufacturer produce so as to yield a maximum profit? Find the maximum profit. CBSE, Class 12, Linear Programming, Maths, NCERT Exemplar Solution: Let us take $\mathrm{x}$ and $\mathrm{y}$ to be the number of bikes of Models X and Y produced by the manufacturer. From the question we have, Model $x$ takes 6 man-hours to make per unit Model $y$...
Maximize $Z=x+y$ subject to $x+4 y \leq 8$, $2 x+3 y \leq 12$, $3 x+y \leq 9$, $x \geq 0$, $y \geq 0$. Solution: It is given that: $Z=x+y$ subject to constraints, $x+4 y \leq 8$ $2 x+3 y \leq 12,3 x+y \leq 9, x \geq 0, y \geq 0$ Now construct a constraint table for the above, we have Here, it can be...
Refer to Exercise 15. Determine the maximum distance that the man can travel. Solution: According to the solution of exercise 15, we have Maximize $Z=x+y$, subject to the constraints $2 x+3 y \leq 120 \ldots$ (i) $8 x+5 y \leq 400 \ldots$ (ii) $x \geq 0, y \geq 0$ Let's...
Refer to Exercise 14. How many sweaters of each type should the company make in a day to get a maximum profit? What is the maximum profit? Solution: According to the solution of exercise 14, we have Maximize $Z=200 x+120 y$ subject to constraints $\begin{array}{l} 3 x+y \leq 600 \ldots \text { (i) } \\ x+y \leq 300 \ldots \text { (ii) }...
Refer to Exercise 13. Solve the linear programming problem and determine the maximum profit to the manufacturer. Solution: According to the solution of exercise 13, we have The objective function for maximum profit $\mathrm{Z}=100 \mathrm{x}+170 \mathrm{y}$ Subject to constraints, $x+4 y \leq 1800 \ldots(i)$...
Refer to Exercise 12. What will be the minimum cost?
Solution: According to the solution of exercise 12, we have The objective function for minimum cost is $\mathrm{Z}=400 \mathrm{x}+200 \mathrm{y}$ Subject to the constraints; $5 x+2 y \geq 30 \ldots...
Refer to Exercise 11. How many circuits of Type A and of Type B should be produced by the manufacturer so as to maximize his profit? Determine the maximum profit. Solution: According to the solution of exercise 11, we have Maximize $Z=50 x+60 y$ subject to the constraints $20 x+10 y \leq 200$, i.e. $2 x+y \leq 20 \ldots$ (i) $10 x+20 y \leq 120$, i.e. $x+2 y \leq 12 \ldots$...
A man rides his motorcycle at the speed of 50 km per hour. He has to spend Rs 2 per km on petrol. If he rides it at a faster speed, the petrol cost increases to Rs 3 per km. He has at most Rs 120 to spend on petrol and one hour's time. He wishes to find the maximum distance that he can travel. Express this problem as a linear programming problem. CBSE, Class 12, Linear Programming, NCERT Exemplar Solution: Suppose the man covers $\mathrm{x} \mathrm{km}$ on his motorcycle at the speed of $50 \mathrm{~km} / \mathrm{hr}$ and covers $\mathrm{y} \mathrm{km}$ at the faster speed of...
A company manufactures two types of sweaters: type A and type B. It costs Rs 360 to make a type A sweater and Rs 120 to make a type B sweater. The company can make at most 300 sweaters and spend at most Rs 72000 a day. The number of sweaters of type B cannot exceed the number of sweaters of type A by more than 100. The company makes a profit of Rs 200 for each sweater of type A and Rs 120 for every sweater of type B. Formulate this problem as a LPP to maximize the profit to the company. Solution: Suppose $\mathrm{x}$ and $\mathrm{y}$ to be the number of sweaters of type $\mathrm{A}$ and type $\mathrm{B}$ respectively. The following constraints are: $360 x+120 y \leq 72000...
A company manufactures two types of screws A and B. All the screws have to pass through a threading machine and a slotting machine. A box of Type A screws requires 2 minutes on the threading machine and 3 minutes on the slotting machine. A box of type B screws requires 8 minutes of threading on the threading machine and 2 minutes on the slotting machine. In a week, each machine is available for 60 hours. On selling these screws, the company gets a profit of Rs 100 per box on type A screws and Rs 170 per box on type B screws. Formulate this problem as a LPP given that the objective is to maximize profit. Solution: Suppose that the company manufactures $\mathrm{x}$ boxes of type A screws and $y$ boxes of type B screws. The below table is constructed from the information provided:...
A firm has to transport 1200 packages using large vans which can carry 200 packages each and small vans which can take 80 packages each. The cost for engaging each large van is Rs 400 and each small van is Rs 200. Not more than Rs 3000 is to be spent on the job and the number of large vans cannot exceed the number of small vans. Formulate this problem as a LPP given that the objective is to minimize cost. Solution: Suppose $\mathrm{x}$ and $\mathrm{y}$ to be the number of large and small vans respectively. The below constraints table is constructed from the information provided:...
A manufacturer of electronic circuits has a stock of 200 resistors, 120 transistors and 150 capacitors and is required to produce two types of circuits A and B. Type A requires 20 resistors, 10 transistors and 10 capacitors. Type B requires 10 resistors, 20 transistors and 30 capacitors.
If the profit on type A circuit is Rs 50 and that on type B circuit is Rs 60, formulate this problem as a LPP so that the manufacturer can maximize his profit. Solution: Suppose $\mathrm{x}$ units of type A and $y$ units of type $\mathrm{B}$ electric circuits be produced by the manufacturer. The table is constructed from the information provided:...
In Fig. 12.11, the feasible region (shaded) for a LPP is shown. Determine the maximum and minimum value of Z. Solution: It is seen from the given figure, that the corner points are as follows: $\mathrm{R}(7 / 2,3 / 4), \mathrm{Q}(3 / 2,15 / 4), \mathrm{P}(3 / 13,24 / 13)$ and $\mathrm{S}(18 / 7,2 / 7)$ On...
The feasible region for a LPP is shown in Fig. 12.10. Evaluate $Z=4x+y$ at each of the corner points of this region. Find the minimum value of $Z$, if it exists. Solution: It is given that: $Z=4 x+y$ In the figure given, $\mathrm{ABC}$ is the feasible region which is open unbounded. Here, we get $x+y=3\dots \dots(i)$ and $\quad x+2 y=4 \quad \ldots$ (ii) On...
Refer to Exercise 7 above. Find the maximum value of $Z$. Solution: It is clearly seen from the evaluation table for the values of $Z$ that the maximum value of $Z$ is 47 at $(3,2)$
The feasible region for a LPP is shown in Fig. 12.9. Find the minimum value of $Z$. Solution: It is seen from the given figure, that the feasible region is $\mathrm{ABCA}$. Corner points are $\mathrm{C}(0,3), \mathrm{B}(0,5)$ and for A, we have to solve equations $x+3 y=9$ and...
Feasible region (shaded) for a LPP is shown in Fig. 12.8. Maximize $Z=5x+7y$. Solution: It is given that: $\mathrm{Z}=5 \mathrm{x}+7 \mathrm{y}$ and feasible region $\mathrm{OABC}$. Corner points of the feasible region are $\mathrm{O}(0,0), \mathrm{A}(7,0), \mathrm{B}(3,4)$...
Determine the maximum value of $Z$ if the feasible region (shaded) for a LPP is shown in Fig. 12.7. Solution: OAED is the feasible region, as shown in the figure. At $A, y=0$ in eq. $2 x+y=104$ we obtain $\mathrm{x}=52$. This is a corner point $A=(52,0)$. At $D, x=0$ in eq. $x+2 y=76$ we obtain,...
Minimize $Z=13x-15y$ subject to the constraints: $x+y \leq 7$, $2x-3y+6 \geq 0$, $x \geq 0$, $y \geq 0$. Solution: It is given that: $\mathrm{Z}=13 \mathrm{x}-15 \mathrm{y}$ and the constraints $\mathrm{x}+\mathrm{y} \leq 7$, $2 x-3 y+6 \geq 0, x \geq 0, y \geq 0$ Taking $x+y=7$, we have...
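As a numerical cross-check of the last problem above (a minimal sketch, assuming SciPy is available; this is not part of the NCERT solution), the constraint $2x-3y+6 \geq 0$ is rewritten as $-2x+3y \leq 6$:

from scipy.optimize import linprog
c = [13, -15]                      # minimize Z = 13x - 15y
A_ub = [[1, 1],                    # x + y <= 7
        [-2, 3]]                   # -2x + 3y <= 6
b_ub = [7, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)              # optimum at x = 0, y = 2 with minimum Z = -30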
CommonCrawl
\begin{document} \title{\textbf{Stringy $E$-functions of varieties with $A$-$D$-$E$ singularities}\\ \date{}} \author{Jan Schepers\footnote{Research Assistant of the Fund for Scientific Research - Flanders (Belgium) (F.W.O.), \textsc{Katholieke Universiteit Leuven, Departement Wiskunde, Celestijnenlaan 200B, B-3001 Leuven, Belgium}, \emph{E-mail address}: [email protected].}} \maketitle \begin{center} \footnotesize{\textbf{Abstract}} \end{center} {\footnotesize The stringy $E$-function for normal irreducible complex varieties with at worst log terminal singularities was introduced by Batyrev. It is defined by data from a log resolution. If the variety is projective and Gorenstein and the stringy $E$-function is a polynomial, Batyrev also defined the stringy Hodge numbers as a generalization of the Hodge numbers of nonsingular projective varieties, and conjectured that they are nonnegative. We compute explicit formulae for the contribution of an $A$-$D$-$E$ singularity to the stringy $E$-function in arbitrary dimension. With these results we can say when the stringy $E$-function of a variety with such singularities is a polynomial and in that case we prove that the stringy Hodge numbers are nonnegative.} \\ \section{Introduction} \textbf{1.1.} In \cite{Batyrev}, Batyrev defined the stringy $E$-function for normal irreducible complex algebraic varieties, with at worst log terminal singularities. With this function he was able to formulate a topological mirror symmetry test for Calabi-Yau varieties with singularities. Before stating the definition of the stringy $E$-function, we recall some other definitions. Let $X$ be a complex algebraic variety. One defines the Hodge-Deligne polynomial $H(X;u,v)\in \mathbb{Z}[u,v]$ by \[H(X;u,v) = \sum_{i=0}^{2d} (-1)^i \sum_{p,q} h^{p,q}(H_c^i(X,\mathbb{C}))u^pv^q,\] where $h^{p,q}$ denotes the dimension of the $(p,q)$-component of the mixed Hodge structure on $H^i_c(X,\mathbb{C})$. A nice introduction to Deligne's mixed Hodge theory and to this definition can be found in \cite{Srinivas} (pay attention to the extra factor $(-1)^{p+q}$ that the author has inserted there). The Hodge-Deligne polynomial is a generalized Euler characteristic, that is, it satisfies: \begin{itemize} \item[(1)]$H(X)=H(X\setminus Y)+H(Y)$ where $Y$ is Zariski-closed in $X$, \item[(2)]$H(X\times X')=H(X)\cdot H(X')$. \end{itemize} Note that $H(X;1,1)=\chi(X)$, the topological Euler characteristic of $X$.\\ \noindent \textbf{1.2.} A normal irreducible complex variety $X$ is called $\mathbb{Q}$-Gorenstein if $rK_X$ is Cartier for some $r\in\mathbb{Z}_{>0}$. Take a log resolution $\varphi: \widetilde{X}\to X$ (i.e. a proper birational morphism from a nonsingular variety $\widetilde{X}$ such that the exceptional locus of $\varphi$ is a divisor whose components $D_1,\ldots,D_s$ are smooth and have normal crossings). Then we have $rK_{\widetilde{X}}-\varphi^*(rK_X)=\sum_i b_i D_i$, with $b_i\in \mathbb{Z}$. This is also formally written as $K_{\widetilde{X}}-\varphi^*(K_X)=\sum_i a_iD_i$, where $a_i=\frac{b_i}{r}$. The variety $X$ is called terminal, canonical, log terminal and log canonical if $a_i>0,a_i \geq 0,a_i > -1,a_i\geq -1,$ respectively, for all $i$ (this is independent of the chosen log resolution). The difference $K_{\widetilde{X}}-\varphi^*(K_X)$ is called the \emph{discrepancy}.\\ \noindent \textbf{1.3.} Now we are ready to define the stringy $E$-function. 
We discuss its properties and give the additional definitions of the stringy Euler number and the stringy Hodge numbers. All of this goes back to Batyrev \cite{Batyrev}.\\ \noindent \textbf{Definition.} Let $X$ be a normal irreducible complex variety with at most log terminal singularities and let $\varphi:\widetilde{X}\to X$ be a log resolution. Denote the irreducible components of the exceptional locus by $D_i$, $i\in I$, and write $D_J$ for $\cap_{j\in J} D_j$ and $D_J^{\circ}$ for $D_J\setminus \cup_{j\in I\setminus J} D_j$, where $J$ is any subset of $I$ ($D_{\emptyset}$ is taken to be $\widetilde{X}$). The stringy $E$-function of $X$ is \[E_{st}(X;u,v):= \sum_{J\subseteq I} H(D_J^{\circ};u,v) \prod_{j\in J} \frac{uv-1}{(uv)^{a_j+1}-1}, \] where $a_j$ is the discrepancy coefficient of $D_j$ and where the product $\prod_{j\in J}$ is 1 if $J=\emptyset$.\\ Batyrev proved that this definition is independent of the chosen log resolution. His proof uses motivic integration. An overview of this theory is provided in \cite{Veys}.\\ \noindent \textbf{Remark.} \begin{itemize} \item[(1)] If $X$ is smooth, then $E_{st}(X)=H(X)$ and if $X$ admits a crepant resolution $\varphi:\widetilde{X} \to X$ (i.e. such that the discrepancy is 0), then $E_{st}(X)=H(\widetilde{X})$. \item[(2)] If $X$ is Gorenstein (i.e. $K_X$ is Cartier), then all $a_i\in \mathbb{Z}_{\geq 0}$ and $E_{st}(X)$ becomes a rational function in $u$ and $v$. It is then an element of $\mathbb{Z}[[u,v]]\cap \mathbb{Q}(u,v)$. \item[(3)] The \emph{stringy Euler number} of $X$ is defined as \[\lim_{u,v \to 1} E_{st}(X;u,v)=\sum_{J\subseteq I} \chi(D_J^{\circ})\prod_{j\in J} \frac{1}{a_j+1}.\] \end{itemize} \noindent \textbf{1.4.} Assume moreover that $X$ is projective of dimension $d$. Then Batyrev proved the following instance of Poincar\'e and Serre duality: \begin{itemize} \item[(i)] $E_{st}(X;u,v)=(uv)^dE_{st}(X;u^{-1},v^{-1}),$ \item[(ii)]$E_{st}(X;0,0)=1$. \end{itemize} If $X$ has at worst Gorenstein canonical singularities and if $E_{st}(X;u,v)$ is a polynomial $\sum_{p,q} a_{p,q}u^pv^q$, he defined the \emph{stringy Hodge numbers} of $X$ as $h_{st}^{p,q}(X):=(-1)^{p+q}a_{p,q}$. It is clear that \begin{itemize} \item[(1)] they can only be nonzero for $0\leq p,q\leq d$, \item[(2)] $h_{st}^{0,0}=h_{st}^{d,d}=1$, \item[(3)] $h_{st}^{p,q}=h_{st}^{q,p}=h_{st}^{d-p,d-q}=h_{st}^{d-q,d-p}$, \item[(4)] if $X$ is smooth, the stringy Hodge numbers are equal to the usual Hodge numbers. \end{itemize} \noindent \textbf{Conjecture (Batyrev).} \textsl{The stringy Hodge numbers are nonnegative.}\\ \noindent \textbf{Example.} The conjecture is true for varieties that admit a crepant resolution. This is the case for all canonical surface singularities, which are exactly the two-dimensional $A$-$D$-$E$ singularities \cite[p.375]{Reid} (see also Theorem 5.1 for $m=3$).\\ \noindent \textbf{Remark.} For a complete surface $X$ with at most log terminal singularities, Veys showed that \[E_{st}(X)=\sum_{p,q\in \mathbb{Z}}(-1)^{p+q} h_{st}^{p,q} u^pv^q +\sum_{r\notin \mathbb{Z}} h_{st}^{r,r}(uv)^r, \] with all $h_{st}^{p,q}$ and $h_{st}^{r,r}$ nonnegative \cite[p.138]{Veys2}.\\ \noindent \textbf{1.5.} In this paper, we will compute in arbitrary dimension the contribution of an $A$-$D$-$E$ singularity to the stringy $E$-function. 
This has already been done by Dais and Roczen in the three-dimensional case (see \cite{DaisRoczen}), but their computation of some discrepancy coefficients in the $D$ and $E$ cases is inaccurate and this leads to incorrect formulae in these cases. We correct and considerably simplify their formulae (also for type $A$). We construct a log resolution for all higher dimensional $A$-$D$-$E$ singularities (based on the calculation by Dais and Roczen of a log resolution for the three-dimensional $A$-$D$-$E$'s), and again we are always able to obtain a fairly simple formula for their stringy $E$-function. For the contribution of an ($m-1$)-dimensional singularity of type $D_n$ (where $m$ is odd and $n=2k$ is even) we find for example \[ 1+\frac{(uv-1)}{((uv)^{(2k-1)(m-3)+1}-1)}\left(\sum_{i=1}^{2k-1}(uv)^{i(m-3)+1} +(uv)^{k(m-3)+1}\right). \] Then using our concrete formulae, we can prove the following theorem.\\ \noindent \textbf{Theorem.} \textsl{Let $X$ be a projective complex variety of dimension at least 3 with at most $A$-$D$-$E$ singularities. The stringy $E$-function of $X$ is a polynomial if and only if $X$ has dimension 3 and all singularities are of type $A_n$ ($n$ odd) and/or $D_n$ ($n$ even). In that case, the stringy Hodge numbers of $X$ are positive.}\\ In the next section we recall the definition of the $A$-$D$-$E$ singularities and we construct a log resolution for them. In section 3 and 4, we compute the Hodge-Deligne polynomials and the discrepancy coefficients that we need, respectively. In section 5 we give the resulting formulae and prove the theorem.\\ \section{$A$-$D$-$E$ singularities and their desingularization} \noindent \textbf{2.1. Definition.} By a $d$-dimensional ($d\geq 2$) $A$-$D$-$E$ singularity we mean a singularity that is analytically isomorphic to the germ at the origin of one of the following hypersurfaces in $\mathbb{A}^{d+1}_{\mathbb{C}}$ (with coordinates $(x_1,\ldots,x_{d+1})$): \[\begin{array}{clccl} (1)& x_1^{n+1}+x_2^2+x_3^2+\cdots+x_{d+1}^2=0 & & & (\text{type } A_n,n\geq 1),\\ & & & & \\ (2)& x_1^{n-1}+x_1x_2^2+x_3^2+\cdots+x_{d+1}^2=0 & & & (\text{type } D_n,n\geq 4),\\ & & & & \\ (3)& x_1^3+x_2^4+x_3^2+\cdots+x_{d+1}^2=0 & & & (\text{type } E_6),\\ & & & & \\ (4)& x_1^3+x_1x_2^3+x_3^2+\cdots+x_{d+1}^2=0 & & & (\text{type } E_7),\\ & & & & \\ (5)& x_1^3+x_2^5+x_3^2+\cdots+x_{d+1}^2=0 & & & (\text{type } E_8). \end{array} \] \noindent Some of their properties are listed in \cite[Remark 1.10]{DaisRoczen}.\\ \noindent \textbf{2.2.} We will now construct a log resolution for these singularities by performing successive blow-ups, but we will only do this for $d\geq 4$. The case $d=2$ is well known and the construction in the three-dimensional case can be found in detail in \cite[Section 2]{DaisRoczen}; in fact, our procedure is quite analogous. The main differences are: \begin{itemize} \item[(1)] For $d\geq 4$, every blow-up adds just one component to the exceptional locus, whereas you can get two planes intersecting in a line as new exceptional divisors after a single blow-up in the three-dimensional case (e.g. after the first blow-up in cases $D$ and $E$). \item[(2)] In the higher dimensional case, the analogue of this line will be a singular line on the exceptional divisor, thus in order to get a smooth normal crossings divisor one has to blow up in such lines, which is not necessary for $d=3$. \end{itemize} An example will make this clear: blow up in the singular point of the defining hypersurface in the $E_6$ case. 
For a suitable choice of coordinates one finds $\{z_3^2+z_4^2=0\} \subset \mathbb{P}^3_{\mathbb{C}}$ as equation of the exceptional locus for $d=3$, and for $d\geq 4$, one finds $\{z_3^2+z_4^2+\cdots+z_{d+1}^2=0\} \subset \mathbb{P}^d_{\mathbb{C}}$ (this is irreducible, but the line $\{z_3=\cdots=z_{d+1}=0\}$ is singular). In what follows we use the same name for a divisor $D$ at the moment of its creation as at all later stages (instead of speaking of the strict transform of $D$). We work out the details for the case of a $D_n$ singularity with even $n$ and we discuss the results shortly in the other cases. We write $m$ for the number of variables ($m\geq 5$) and use coordinates $(x_1,\ldots,x_m)$ on $\mathbb{A}^m$.\\ \noindent \textbf{2.3.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{A}$ \\ \hline \end{tabular}\\ \noindent Consider the hypersurface $X=\{x_1^{n+1}+x_2^2+\cdots+x_m^2=0\}\subset \mathbb{A}^m$ for $m\geq 5$.\\ \noindent \underline{(1) $n$ odd, $n=2k-1$, with $k\geq 1$.}\\ Blowing up an $A_n$ singularity yields an $A_{n-2}$ singularity (that lies on the exceptional locus) and nothing else happens. Thus after $k$ point blow-ups we already have a log resolution. The intersection diagram looks like \begin{center} \begin{picture}(70,10) \put(2,2){\circle*{2}} \put(12,2){\circle*{2}} \put(22,2){\circle*{2}} \put(52,2){\circle*{2}} \put(62,2){\circle*{2}} \put(2,2){\line(1,0){26}} \put(46,2){\line(1,0){16}} \put(0,6){$D_1$} \put(10,6){$D_2$} \put(20,6){$D_3$} \put(49,6){$D_{k-1}$} \put(61,6){$D_k$} \put(33,2){$\ldots$} \end{picture} \end{center} where $D_i$ is created after the $i$-th blow-up. At the moment of its creation, $D_i$ (for $i\in\{1,\ldots,k-1\}$) is isomorphic to the singular quadric $\{x_2^2+\cdots+x_m^2=0\}$ in $\mathbb{P}^{m-1}$, and its singular point is the center of the next blow-up. The last divisor $D_k$ is isomorphic to the nonsingular quadric in $\mathbb{P}^{m-1}$. In the end the intersection of two exceptional divisors is isomorphic to a nonsingular quadric in $\mathbb{P}^{m-2}$.\\ \noindent \underline{(2) $n$ even, $n=2k$, with $k\geq 1$.}\\ After $k$ point blow-ups the strict transform of $X$ is nonsingular, but the last created divisor $D_k$ still has a singular point, so we have to perform an extra blow-up (with exceptional divisor $D_{k+1}$ isomorphic to $\mathbb{P}^{m-2}$). As intersection diagram we find \begin{center} \begin{picture}(81,10) \put(3,2){\circle*{2}} \put(13,2){\circle*{2}} \put(23,2){\circle*{2}} \put(53,2){\circle*{2}} \put(63,2){\circle*{2}} \put(73,2){\circle*{2}} \put(3,2){\line(1,0){26}} \put(47,2){\line(1,0){26}} \put(1,6){$D_1$} \put(11,6){$D_2$} \put(21,6){$D_3$} \put(50,6){$D_{k-1}$} \put(62,6){$D_k$} \put(71,6){$D_{k+1}$} \put(34,2){$\ldots$} \end{picture} \end{center} with all $D_i$ ($i\in \{1,\ldots,k\}$) isomorphic to the singular quadric $\{x_2^2+\cdots+x_m^2=0\}$ in $\mathbb{P}^{m-1}$ at the moment of their creation. Again, all intersections are isomorphic to the nonsingular quadric in $\mathbb{P}^{m-2}$.\\ \noindent \textbf{2.4.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{D}$ \\ \hline \end{tabular}\\ \noindent Now we study $X=\{x_1^{n-1}+x_1x_2^2+x_3^2+\cdots+x_m^2=0\} \subset \mathbb{A}^m$ for $m\geq 5$ and $n\geq 4$. Notice that you also find singularities for $n=2$ and $n=3$, but they are analytically isomorphic to two $A_1$ and one $A_3$ singularity respectively.\\ \noindent \underline{(1) $n$ even, $n=2k$, with $k\geq 2$.}\\ \emph{Step $1$}: We blow up $X$ in the origin. 
Take $(x_1,\ldots,x_m)\times (z_1,\ldots,z_m)$ as coordinates on $\mathbb{A}^m\times \mathbb{P}^{m-1}$. Consider the reducible variety $X'$ in $\mathbb{A}^m\times \mathbb{P}^{m-1}$ given by the equations \[\left\{\begin{array}{lcc} x_1^{2k-1}+x_1x_2^2+x_3^2+\cdots+x_m^2=0 & & \\ x_iz_j=x_jz_i & & \forall\, i,j\in \{1,\ldots,m\}. \end{array}\right.\] In the open set $z_1\neq 0$, $X'$ is isomorphic to $\{x_1^2(x_1^{2k-3}+x_1x_2^2+x_3^2+\cdots+x_m^2)=0\} \subset \mathbb{A}^m$ by replacing $x_j$ by $x_1\frac{z_j}{z_1}$ and renaming the affine coordinate $\frac{z_j}{z_1}$ as $x_j$ for $j=2,\ldots,m$. The equation $x_1=0$ describes here the exceptional locus, while the other equation gives us the strict transform of $X$, in which we are interested. Their intersection is the first exceptional divisor, we call it $D_1$. We can do the same thing for any open set $z_i\neq 0$ and thus we can describe $X'$ by the following set of equations: \[\left\{\begin{array}{lcc} x_1^2(x_1^{2k-3}+x_1x_2^2+x_3^2+\cdots+x_m^2)=0 & & (1)\\ x_2^2(x_1^{2k-1}x_2^{2k-3}+x_1x_2+x_3^2+\cdots+x_m^2)=0 & & (2)\\ x_3^2(x_1^{2k-1}x_3^{2k-3}+x_1x_2^2x_3+1+x_4^2+\cdots+x_m^2)=0 & & (3)\\ \qquad\qquad \vdots & & \vdots\\ x_m^2(x_1^{2k-1}x_m^{2k-3}+x_1x_2^2x_m+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (m) \end{array}\right.\] One sees from this that globally $D_1 \cong \{x_3^2+\cdots+x_m^2=0\}\subset \mathbb{P}^{m-1}$, which has a singular line $\{x_3=\cdots=x_m=0\}$ (located in charts (1) and (2)). Notice that for $k\geq 3$, we have a $D_{n-2}$ singularity in chart (1) and a singularity that is analytically isomorphic to an $A_1$ in the origin of chart (2). In the other charts both $D_1$ and the strict transform of $X$ are nonsingular, so we have no problems there. We will assume now that $k\geq 4$ and we will see later what happens if $k=2,3$.\\ \emph{Step $2$}: Let us first get rid of the $A_1$ singularity. Thus we blow up in the origin of chart (2). Since this blow-up is an isomorphism outside this point, we preserve the other coordinate charts and we replace chart (2) by the following charts: \[\left\{\begin{array}{lcc} x_1^4x_2^2(x_1^{4k-6}x_2^{2k-3}+x_2+x_3^2+\cdots+x_m^2)=0 & & (2.1)\\ x_2^4(x_1^{2k-1}x_2^{4k-6}+x_1+x_3^2+\cdots+x_m^2)=0 & & (2.2)\\ x_2^2x_3^4(x_1^{2k-1}x_2^{2k-3}x_3^{4k-6}+x_1x_2+1+x_4^2+\cdots+x_m^2)=0 & & (2.3)\\ \qquad\qquad \vdots & & \vdots\\ x_2^2x_m^4(x_1^{2k-1}x_2^{2k-3}x_m^{4k-6}+x_1x_2+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (2.m) \end{array}\right.\] Now we see that the strict transform $\widetilde{X}$ of $X$ is nonsingular in this part, but we still have the singular line on $D_1$ (in charts (1) and (2.1) now). Our new exceptional divisor, we call it $E_1$, is globally a nonsingular quadric in $\mathbb{P}^{m-1}$. We check immediately that $D_1$ and $E_1$ intersect transversally outside the singular line of $D_1$: take a point $P=(0,0,\alpha_3,\ldots,\alpha_m)$ on their intersection in chart (2.1) for example (thus $\alpha_3^2+\cdots+\alpha_m^2=0$). We assume that $P$ does not lie on the singular line on $D_1$ (so at least one of the $\alpha_i$ is nonzero), since we will blow it up later. The local ring $\mathcal{O}_{P,\widetilde{X}}$ is isomorphic to $\bigl(\frac{\mathbb{C}[x_1,\ldots,x_m]}{I}\bigr)_{m_P}$ with $I=(x_1^{4k-6}x_2^{2k-3}+x_2+x_3^2+\cdots+x_m^2)$ and $m_P=\frac{(x_1,x_2,x_3-\alpha_3,\ldots,x_m-\alpha_m)}{I}$. 
As a $\mathbb{C}$-vector space, $\frac{m_P}{m_P^2}$ has dimension $m-1$ and is isomorphic to $\frac{(x_1,x_2,x_3-\alpha_3,\ldots,x_m-\alpha_m)}{(x_1^2,x_1x_2,x_2^2,x_3^2- 2\alpha_3x_3+\alpha_3^2,\ldots)+I}$. It is generated by the set $\{x_1,x_2,x_3-\alpha_3,\ldots,x_m-\alpha_m\}$ and the last $m-1$ generators are linearly dependent, since \begin{center} \begin{tabular}[t]{cccl} \multicolumn{4}{l}{$x_2+2\alpha_3(x_3-\alpha_3)+\cdots+2\alpha_m(x_m-\alpha_m)$}\\ & &=& $x_2+2\alpha_3x_3+\cdots+2\alpha_mx_m$\\ & &=& $x_1^{4k-6}x_2^{2k-3}+x_2+x_3^2+\cdots+x_m^2-(x_1^{4k-7}x_2^{2k-4})x_1x_2$\\ & & & $-(x_3^2- 2\alpha_3x_3+\alpha_3^2)-\cdots-(x_m^2- 2\alpha_mx_m+\alpha_m^2)$\\ & &=&0, \end{tabular} \end{center} and thus $x_1$ and $x_2$ must be linearly independent. Hence $D_1$ and $E_1$ have normal crossings at $(0,0,\alpha_3,\ldots,\alpha_m)$. Later on, we will not check the normal crossings condition any more, it will be satisfied for all divisors in the end.\\ \emph{Step $3$}: We tackle the $D_{n-2}$ singularity in chart (1) now. We blow up in its origin: \[\left\{\begin{array}{lcc} x_1^4(x_1^{2k-5}+x_1x_2^2+x_3^2+\cdots+x_m^2)=0 & & (1.1)\\ x_1^2x_2^4(x_1^{2k-3}x_2^{2k-5}+x_1x_2+x_3^2+\cdots+x_m^2)=0 & & (1.2)\\ x_1^2x_3^4(x_1^{2k-3}x_3^{2k-5}+x_1x_2^2x_3+1+x_4^2+\cdots+x_m^2)=0 & & (1.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^2x_m^4(x_1^{2k-3}x_m^{2k-5}+x_1x_2^2x_m+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (1.m) \end{array}\right.\] It is no surprise that we find a $D_{n-4}$ singularity in the origin of chart (1.1) and an $A_1$ in the origin of chart (1.2). The newly created divisor, called $D_2$, intersects $D_1$ and has a singular line in charts (1.1) and (1.2); the singular line of $D_1$ from chart (1) is transferred to chart (1.2).\\ \emph{Step $4$}: We blow up in the origin of chart (1.2). The singularity is resolved and the new divisor $E_2$ intersects both $D_1$ and $D_2$: \[\left\{\begin{array}{lcc} x_1^8x_2^4(x_1^{4k-10}x_2^{2k-5}+x_2+x_3^2+\cdots+x_m^2)=0 & & (1.2.1)\\ x_1^2x_2^8(x_1^{2k-3}x_2^{4k-10}+x_1+x_3^2+\cdots+x_m^2)=0 & & (1.2.2)\\ x_1^2x_2^4x_3^8(x_1^{2k-3}x_2^{2k-5}x_3^{4k-10}+x_1x_2+1+x_4^2+\cdots+x_m^2)=0 & & (1.2.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^2x_2^4x_m^8(x_1^{2k-3}x_2^{2k-5}x_m^{4k-10}+x_1x_2+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (1.2.m) \end{array}\right.\] The singular lines on $D_1$ and $D_2$ are separated and go to charts (1.2.2) and (1.2.1) respectively.\\ We continue in this way, performing alternate blow-ups in a $D_i$ and an $A_1$, until we have to blow up in a $D_4$ singularity.\\ \emph{Step $n-3$}: We blow up in the origin of the chart $x_1^{2k-4}(x_1^3+x_1x_2^2+x_3^2+\cdots+x_m^2)=0$. \[\left\{\begin{array}{lcc} x_1^{2k-2}(x_1+x_1x_2^2+x_3^2+\cdots+x_m^2)=0 & & (1')\\ x_1^{2k-4}x_2^{2k-2}(x_1^3x_2+x_1x_2+x_3^2+\cdots+x_m^2)=0 & & (2')\\ x_1^{2k-4}x_3^{2k-2}(x_1^3x_3+x_1x_2^2x_3+1+x_4^2+\cdots+x_m^2)=0 & & (3')\\ \qquad\qquad \vdots & & \vdots\\ x_1^{2k-4}x_m^{2k-2}(x_1^3x_m+x_1x_2^2x_m+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (m') \end{array}\right.\] In fact $(j')$ stands here for\begin{tabular}[t]{c}($\underbrace{1.1\ldots 1}.j$)\\ $k-2$ times \end{tabular}. We get three singular points, all an\-a\-ly\-ti\-cal\-ly isomorphic to an $A_1$ singularity. Both present divisors (we call them of course $D_{k-2}$ and $D_{k-1}$) have a singular line and in fact all the singular points lie on the singular line of $D_{k-1}$. One of the singular points, the origin of chart $(2')$, lies on the intersection of $D_{k-2}$ and $D_{k-1}$. 
Note that the singular points $(0,i,0,\ldots,0)$ and $(0,-i,0,\ldots,0)$ of chart $(1')$ correspond to the points $(-i,0,\ldots,0)$ and $(i,0,\ldots,0)$ of chart $(2')$ respectively. \\ \emph{Step $n-2$}: We deal with the origin of chart $(2')$ first. Blowing it up yields a divisor $E_{k-1}$ that intersects $D_{k-1}$ and $D_{k-2}$: \[\left\{\begin{array}{lcc} x_1^{4k-4}x_2^{2k-2}(x_1^2x_2+x_2+x_3^2+\cdots+x_m^2)=0 & & (2'.1)\\ x_1^{2k-4}x_2^{4k-4}(x_1^3x_2^2+x_1+x_3^2+\cdots+x_m^2)=0 & & (2'.2)\\ x_1^{2k-4}x_2^{2k-2}x_3^{4k-4}(x_1^3x_2x_3^2+x_1x_2+1+x_4^2+\cdots+x_m^2)=0 & & (2'.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^{2k-4}x_2^{2k-2}x_m^{4k-4}(x_1^3x_2x_m^2+x_1x_2+x_3^2+\cdots+x_{m-1}^2+1)=0. & & (2'.m) \end{array}\right.\] The other two singularities lie in charts $(1')$ and $(2'.1)$. The singular lines on $D_{k-2}$ and $D_{k-1}$ get separated and go to charts $(2'.2)$ and $(2'.1)$, respectively.\\ \emph{Step $n-1$}: After a coordinate transformation the equation of chart $(1')$ becomes $x_1^{2k-2}(x_1x_2(x_2+2i)+x_3^2+\cdots+x_m^2=0$. To put the same point in the origin, we have to change the equation of chart $(2'.1)$ to $(x_1-i)^{4k-4}x_2^{2k-2}(x_1x_2(x_1-2i)+x_3^2+\cdots+x_m^2)=0$ for example. In this step we blow up both charts in the origin and we call the new divisor $F_1$: \begin{equation*} \begin{split} &\left\{\begin{array}{lcc} x_1^{2k}(x_2(x_1x_2+2i)+x_3^2+\cdots+x_m^2)=0 & & (1'.1)\\ x_1^{2k-2}x_2^{2k}(x_1(x_2+2i)+x_3^2+\cdots+x_m^2)=0 & & (1'.2)\\ x_1^{2k-2}x_3^{2k}(x_1x_2(x_2x_3+2i)+1+x_4^2+\cdots+x_m^2)=0 & & (1'.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^{2k-2}x_m^{2k}(x_1x_2(x_2x_m+2i)+x_3^2+\cdots+x_{m-1}^2+1)=0 & & (1'.m) \end{array}\right. \quad \text{ and } \\ &\quad \left\{\begin{array}{lcc} x_1^{2k}(x_1-i)^{4k-4}x_2^{2k-2}(x_2(x_1-2i)+x_3^2+\cdots+x_m^2)=0 & & (2'.1.1)\\ (x_1x_2-i)^{4k-4}x_2^{2k}(x_1(x_1x_2-2i)+x_3^2+\cdots+x_m^2)=0 & & (2'.1.2)\\ (x_1x_3-i)^{4k-4}x_2^{2k-2}x_3^{2k}(x_1x_2(x_1x_3-2i)+1+\cdots+x_m^2)=0 & & (2'.1.3)\\ \qquad\qquad \vdots & & \vdots\\ (x_1x_m-i)^{4k-4}x_2^{2k-2}x_m^{2k}(x_1x_2(x_1x_m-2i)+x_3^2+\cdots+1)=0. & & (2'.1.m) \end{array}\right. \end{split} \end{equation*} The last singular point and the singular line on $D_{k-1}$ are now in charts $(1'.2)$ and $(2'.1.1)$.\\ \emph{Step $n$}: Before blowing up the final singular point, we first do a coordinate transformation in chart $(1'.2)$ to get the equation $x_1^{2k-2}(x_2-2i)^{2k}(x_1x_2+x_3^2+\cdots+x_m^2)=0$ and in chart $(2'.1.1)$ to get $(x_1+2i)^{2k}(x_1+i)^{4k-4}x_2^{2k-2}(x_1x_2+x_3^2+\cdots+x_m^2)=0$. The new exceptional divisor is called $F_2$. \begin{equation*} \begin{split} &\left\{\begin{array}{lcc} x_1^{2k}(x_1x_2-2i)^{2k}(x_2+x_3^2+\cdots+x_m^2)=0 & & (1'.2.1)\\ x_1^{2k-2}(x_2-2i)^{2k}x_2^{2k}(x_1+x_3^2+\cdots+x_m^2)=0 & & (1'.2.2)\\ x_1^{2k-2}(x_2x_3-2i)^{2k}x_3^{2k}(x_1x_2+1+x_4^2+\cdots+x_m^2)=0 & & (1'.2.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^{2k-2}(x_2x_m-2i)^{2k}x_m^{2k}(x_1x_2+x_3^2+\cdots+x_{m-1}^2+1)=0 & & (1'.2.m) \end{array}\right. \quad \text{ and } \\ &\ \ \left\{\begin{array}{lcc} x_1^{2k}(x_1+2i)^{2k}(x_1+i)^{4k-4}x_2^{2k-2}(x_2+x_3^2+\cdots+x_m^2)=0 & & (2'.1.1.1)\\ (x_1x_2+2i)^{2k}(x_1x_2+i)^{4k-4}x_2^{2k}(x_1+x_3^2+\cdots+x_m^2)=0 & & (2'.1.1.2)\\ (x_1x_3+2i)^{2k}(x_1x_3+i)^{4k-4}x_2^{2k-2}x_3^{2k}(x_1x_2+1+\cdots+x_m^2)=0 & & (2'.1.1.3)\\ \qquad\qquad \vdots & & \vdots\\ (x_1x_m+2i)^{2k}(x_1x_m+i)^{4k-4}x_2^{2k-2}x_m^{2k}(x_1x_2+x_3^2+\cdots+1)=0. & & (2'.1.1.m) \end{array}\right. 
\end{split} \end{equation*} The singular line on $D_{k-1}$ is moved to charts $(1'.2.2)$ and $(2'.1.1.1)$.\\ In the next $k-1$ steps we blow up in the singular lines on the divisors $D_i$. This gives rise to new exceptional divisors which will be denoted by $G_i$. After $k-1$ steps we finally have a log resolution; we will perform steps $n+1$ and $n+k-1$ explicitly.\\ \emph{Step $n+1$}: To cover the singular line on $D_1$ completely, we have to perform the blow-up in charts (2.1) and (1.2.2). In chart (2.1) we have to blow up the variety $Y=\{x_1^4x_2^2(x_1^{4k-6}x_2^{2k-3}+x_2+x_3^2+\cdots+x_m^2)=0\}\subset \mathbb{A}^m$ in the line $\{x_2=\cdots=x_m=0\}$. The strict transform of $Y$ and the exceptional locus form a reducible variety in $\mathbb{A}^m\times \mathbb{P}^{m-2}$, given by the equations \[\left\{\begin{array}{lcc} x_1^4x_2^2(x_1^{4k-6}x_2^{2k-3}+x_2+x_3^2+\cdots+x_m^2)=0 & & \\ x_iz_j=x_jz_i & & \forall\, i,j\in \{2,\ldots,m\}, \end{array}\right.\] where $(z_2,\ldots,z_m)$ are homogenous coordinates on $\mathbb{P}^{m-2}$. As for a point blow-up, we can replace $x_j$ by $x_i\frac{z_j}{z_i}$ in the open set $z_i\neq 0$ and rename $\frac{z_j}{z_i}$ as $x_j$. Hence we get the following equations for $Y'$: \[\left\{\begin{array}{lcc} x_1^4x_2^3(x_1^{4k-6}x_2^{2k-4}+1+x_2x_3^2+\cdots+x_2x_m^2)=0 & & (2.1.2)\\ x_1^4x_2^2x_3^3(x_1^{4k-6}x_2^{2k-3}x_3^{2k-4}+x_2+x_3+x_3x_4^2+\cdots+x_3x_m^2)=0 & & (2.1.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^4x_2^2x_m^3(x_1^{4k-6}x_2^{2k-3}x_m^{2k-4}+x_2+x_3^2x_m+\cdots+x_{m-1}^2x_m+x_m)=0. & & (2.1.m) \end{array}\right.\] The equations after blowing up in $\{x_1=x_3=\cdots=x_m=0\}$ in chart (1.2.2) are: \[\left\{\begin{array}{lcc} x_1^3x_2^8(x_1^{2k-4}x_2^{4k-10}+1+x_1x_3^2+\cdots+x_1x_m^2)=0 & & (1.2.2.1)\\ x_1^2x_2^8x_3^3(x_1^{2k-3}x_2^{4k-10}x_3^{2k-4}+x_1+x_3+x_3x_4^2+\cdots+x_3x_m^2)=0 & & (1.2.2.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^2x_2^8x_m^3(x_1^{2k-3}x_2^{4k-10}x_m^{2k-4}+x_1+x_3^2x_m+\cdots+x_{m-1}^2x_m+x_m)=0. & & (1.2.2.m) \end{array}\right.\] \emph{Step $n+k-1$}: Here we have to consider charts $(1'.2.2)$ and $(2'.1.1.1)$ in which $D_{k-1}$ still has a singular line with equations $\{x_1=x_3=\cdots =x_m=0\}$ and $\{x_2=x_3=\cdots =x_m=0\}$, respectively. Blowing it up yields \begin{equation*} \begin{split} &\left\{\begin{array}{lcc} x_1^{2k-1}(x_2-2i)^{2k}x_2^{2k}(1+x_1x_3^2+\cdots+x_1x_m^2)=0 & & (1'.2.2.1)\\ x_1^{2k-2}(x_2-2i)^{2k}x_2^{2k}x_3^{2k-1}(x_1+x_3+\cdots+x_3x_m^2)=0 & & (1'.2.2.3)\\ \qquad\qquad \vdots & & \vdots\\ x_1^{2k-2}(x_2-2i)^{2k}x_2^{2k}x_m^{2k-1}(x_1+x_3^2x_m+\cdots+x_m)=0 & & (1'.2.2.m) \end{array}\right. \ \ \text{ and } \\ &\ \ \left\{\begin{array}{lc} x_1^{2k}(x_1+2i)^{2k}(x_1+i)^{4k-4}x_2^{2k-1}(1+x_2x_3^2+\cdots+x_2x_m^2)=0 & (2'.1.1.1.2)\\ x_1^{2k}(x_1+2i)^{2k}(x_1+i)^{4k-4}x_2^{2k-2}x_3^{2k-1}(x_2+x_3+\cdots+x_3x_m^2)=0 & (2'.1.1.1.3)\\ \qquad\qquad \vdots & \vdots\\ x_1^{2k}(x_1+2i)^{2k}(x_1+i)^{4k-4}x_2^{2k-2}x_m^{2k-1}(x_2+x_3^2x_m+\cdots +x_m)=0. & (2'.1.1.1.m) \end{array}\right. \end{split} \end{equation*} From these calculations, we can deduce the intersection diagram. We leave it to the reader to check the details. It can be easily seen that the same diagram is valid for $k=2,3$. 
\begin{center} \begin{picture}(160,45) \put(27,7){\circle*{2}} \put(57,7){\circle*{2}} \put(101,7){\circle*{2}} \put(131,7){\circle*{2}} \put(27,37){\circle*{2}} \put(57,37){\circle*{2}} \put(101,37){\circle*{2}} \put(131,37){\circle*{2}} \put(42,22){\circle*{2}} \put(72,22){\circle*{2}} \put(116,22){\circle*{2}} \put(143,22){\circle*{2}} \put(7,22){\circle*{2}} \put(17,22){\circle*{2}} \put(27,7){\line(1,0){50}} \put(91,7){\line(1,0){40}} \put(27,7){\line(0,1){30}} \put(57,7){\line(0,1){30}} \put(101,7){\line(0,1){30}} \put(131,7){\line(0,1){30}} \put(27,7){\line(1,1){30}} \put(57,7){\line(1,1){20}} \put(101,7){\line(1,1){30}} \put(131,7){\line(4,5){12}} \put(101,37){\line(1,-1){30}} \put(131,37){\line(4,-5){12}} \put(27,37){\line(1,-1){30}} \put(57,37){\line(1,-1){20}} \put(101,7){\line(-1,1){10}} \put(101,37){\line(-1,-1){10}} \put(27,7){\line(-2,3){10}} \put(27,7){\line(-4,3){20}} \put(17,22){\line(2,3){10}} \put(7,22){\line(4,3){20}} \put(25,40){$G_{k-1}$} \put(55,40){$G_{k-2}$} \put(99,40){$G_2$} \put(129,40){$G_{1}$} \put(25,1){$D_{k-1}$} \put(55,1){$D_{k-2}$} \put(99,1){$D_{2}$} \put(129,1){$D_{1}$} \put(1,21){$F_1$} \put(20,21){$F_2$} \put(45,21){$E_{k-1}$} \put(61,21){$E_{k-2}$} \put(119,21){$E_2$} \put(146,21){$E_1$} \put(81,22){$\ldots$} \put(81,7){$\ldots$} \end{picture} \end{center} $\phantom{some place}$ \noindent \underline{(2) $n$ odd, $n=2k+1$, with $k\geq 2$.}\\ The first $2k-4$ steps are completely analogous to the case where $n$ is even. Now we end up with the equation $x_1^{2k-4}(x_1^4+x_1x_2^2+x_3^2+\cdots+x_m^2)$ which has a $D_5$ singularity in the origin. Blowing this up gives one $A_3$ singularity on the new divisor $D_{k-1}$ (the equation of the first chart is $x_1^{2k-2}(x_1^2+x_1x_2^2+x_3^2+\cdots+x_m^2=0)$). We already know that this can be resolved by two consecutive blow-ups, creating divisors $F_1$ and $F_2$. Afterwards, the singular lines on the $D_i$ must be blown up. 
Explicit calculations will lead to the following intersection diagram: \begin{center} \begin{picture}(160,45) \put(19,7){\circle*{2}} \put(49,7){\circle*{2}} \put(93,7){\circle*{2}} \put(123,7){\circle*{2}} \put(19,37){\circle*{2}} \put(49,37){\circle*{2}} \put(93,37){\circle*{2}} \put(123,37){\circle*{2}} \put(34,22){\circle*{2}} \put(64,22){\circle*{2}} \put(108,22){\circle*{2}} \put(135,22){\circle*{2}} \put(7,17){\circle*{2}} \put(7,27){\circle*{2}} \put(19,7){\line(1,0){50}} \put(83,7){\line(1,0){40}} \put(19,7){\line(0,1){30}} \put(49,7){\line(0,1){30}} \put(93,7){\line(0,1){30}} \put(123,7){\line(0,1){30}} \put(19,7){\line(1,1){30}} \put(49,7){\line(1,1){20}} \put(93,7){\line(1,1){30}} \put(123,7){\line(4,5){12}} \put(93,37){\line(1,-1){30}} \put(123,37){\line(4,-5){12}} \put(19,37){\line(1,-1){30}} \put(49,37){\line(1,-1){20}} \put(93,7){\line(-1,1){10}} \put(93,37){\line(-1,-1){10}} \put(19,7){\line(-6,5){12}} \put(19,7){\line(-3,5){12}} \put(7,27){\line(6,5){12}} \put(7,17){\line(0,1){10}} \put(17,40){$G_{k-1}$} \put(47,40){$G_{k-2}$} \put(91,40){$G_2$} \put(121,40){$G_{1}$} \put(17,1){$D_{k-1}$} \put(47,1){$D_{k-2}$} \put(91,1){$D_{2}$} \put(121,1){$D_{1}$} \put(1,16){$F_1$} \put(1,26){$F_2$} \put(37,21){$E_{k-1}$} \put(53,21){$E_{k-2}$} \put(111,21){$E_2$} \put(138,21){$E_1$} \put(73,22){$\ldots$} \put(73,7){$\ldots$} \end{picture} \end{center} $\phantom{some place}$ \noindent \textbf{2.5.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{E_6}$ \\ \hline \end{tabular}\\ \noindent After blowing up in the origin we get an $A_5$ singularity and a singular line on the first exceptional divisor $D_1$. To resolve the $A_5$ singularity we need three more point blow-ups (creating $D_2,D_3$ and $D_4$) and in the end we blow up in the singular line (giving rise to a divisor $D_5$). 
We find as intersection graph:\\ \begin{center} \begin{picture}(60,38) \put(10,20){\circle*{2}} \put(40,2){\circle*{2}} \put(40,38){\circle*{2}} \put(50,12){\circle*{2}} \put(50,28){\circle*{2}} \put(10,20){\line(5,-3){30}} \put(10,20){\line(5,3){30}} \put(10,20){\line(5,1){40}} \put(10,20){\line(5,-1){40}} \put(40,2){\line(1,1){10}} \put(40,38){\line(1,-1){10}} \put(50,12){\line(0,1){16}} \put(3,19){$D_1$} \put(43,0){$D_2$} \put(53,11){$D_3$} \put(53,27){$D_4$} \put(43,38){$D_5$} \end{picture} \end{center} \noindent \textbf{2.6.} \begin{tabular}{|c|} \hline \textbf{Cases} $\mathbf{E_7}$ \textbf{and} $\mathbf{E_8}$ \\ \hline \end{tabular}\\ \noindent An $E_7$ becomes a $D_6$ after one step and calculating the intersections gives the following diagram \begin{center} \begin{picture}(80,57) \put(10,32){\circle*{2}} \put(21,34){\circle*{2}} \put(30,22){\circle*{2}} \put(30,52){\circle*{2}} \put(45,37){\circle*{2}} \put(60,22){\circle*{2}} \put(60,52){\circle*{2}} \put(72,37){\circle*{2}} \put(30,2){\circle*{2}} \put(10,12){\circle*{2}} \put(30,2){\line(-2,1){20}} \put(30,2){\line(-2,3){20}} \put(30,2){\line(0,1){50}} \put(30,2){\line(3,2){30}} \put(10,12){\line(0,1){20}} \put(10,32){\line(2,-1){20}} \put(10,32){\line(1,1){20}} \put(21,34){\line(1,2){9}} \put(21,34){\line(3,-4){9}} \put(30,22){\line(1,1){30}} \put(30,52){\line(1,-1){30}} \put(30,22){\line(1,0){30}} \put(60,22){\line(0,1){30}} \put(60,22){\line(4,5){12}} \put(60,52){\line(4,-5){12}} \put(33,0){$C_1$} \put(63,20){$D_1$} \put(32,17){$D_2$} \put(75,36){$E_1$} \put(48,36){$E_2$} \put(4,31){$F_1$} \put(24,33){$F_2$} \put(28,55){$G_2$} \put(58,55){$G_1$} \put(3,11){$H_1$} \end{picture} \end{center} \noindent where $C_1$ is the very first exceptional divisor and where $H_1$ arises after blowing up the singular line on $C_1$. The other divisors come from the $D_6$ singularity. Notice the difference between $F_1$ and $F_2$. It is easy to see that an $E_8$ singularity passes to an $E_7$ after one blow-up, with again a singular line on the first exceptional divisor $B_1$. 
We denote the divisor that appears after blowing up in this singular line by $I_1$ and we find the following intersection graph: \begin{center} \begin{picture}(83,64) \put(10,37){\circle*{2}} \put(21,39){\circle*{2}} \put(30,27){\circle*{2}} \put(30,57){\circle*{2}} \put(45,42){\circle*{2}} \put(60,27){\circle*{2}} \put(60,57){\circle*{2}} \put(75,37){\circle*{2}} \put(30,7){\circle*{2}} \put(10,17){\circle*{2}} \put(60,7){\circle*{2}} \put(75,17){\circle*{2}} \put(30,7){\line(-2,1){20}} \put(30,7){\line(-2,3){20}} \put(30,7){\line(0,1){50}} \put(30,7){\line(3,2){30}} \put(10,17){\line(0,1){20}} \put(10,37){\line(2,-1){20}} \put(10,37){\line(1,1){20}} \put(21,39){\line(1,2){9}} \put(21,39){\line(3,-4){9}} \put(30,27){\line(1,1){30}} \put(30,57){\line(1,-1){30}} \put(30,27){\line(1,0){30}} \put(60,27){\line(0,1){30}} \put(60,27){\line(3,2){15}} \put(60,57){\line(3,-4){15}} \put(30,7){\line(1,0){30}} \put(60,7){\line(0,1){20}} \put(60,7){\line(1,2){15}} \put(60,7){\line(3,2){15}} \put(75,17){\line(0,1){20}} \put(28,1){$C_1$} \put(63,25){$D_1$} \put(32,22){$D_2$} \put(78,36){$E_1$} \put(48,41){$E_2$} \put(4,36){$F_1$} \put(24,38){$F_2$} \put(28,60){$G_2$} \put(58,60){$G_1$} \put(3,16){$H_1$} \put(78,16){$I_1$} \put(58,1){$B_1$} \end{picture} \end{center} $\phantom{some place}$ \section{The Hodge-Deligne polynomials of the pieces of the exceptional locus} \noindent \textbf{3.1.} Denote by $a_r,b_r,c_r$ ($r\geq 2$) the Hodge-Deligne polynomials of \begin{itemize} \item $\{x_1^2+\cdots+x_r^2=0\}\subset \mathbb{P}^{r+1}_{\mathbb{C}}$, \item $\{x_1^2+\cdots+x_r^2=0\}\subset \mathbb{P}^r_{\mathbb{C}}$, \item $\{x_1^2+\cdots+x_r^2=0\}\subset \mathbb{P}^{r-1}_{\mathbb{C}}$, \end{itemize} respectively, where $\mathbb{P}^s$ gets coordinates $(x_1,\ldots,x_{s+1})$. We will be able to express all the needed Hodge-Deligne polynomials in terms of $a_r,b_r$ and $c_r$, and these last expressions are well known. For completeness we include their computation in the following lemma. \emph{From now on, we will write $w$ as abbreviation of $uv$}. \noindent \textbf{Lemma.} \textsl{The formulae for $a_r,b_r$ and $c_r$ are given in the following table}: \begin{center} \begin{tabular}[t]{c|c|c} & $r$ \textsl{even} & $r$ \textsl{odd} \\ \hline & & \\ $a_r$ & $\frac{w^{r+1}-1}{w-1}+w^{\frac{r}{2}+1} $ & $\frac{w^{r+1}-1}{w-1} $\\ & & \\ \hline & & \\ $b_r$ & $\frac{w^r-1}{w-1}+w^{\frac{r}{2}} $ & $\frac{w^r-1}{w-1} $\\ & & \\ \hline & & \\ $c_r$ & $\frac{w^{r-1}-1}{w-1}+w^{\frac{r}{2}-1} $ & $\frac{w^{r-1}-1}{w-1} $\\ & & \end{tabular} \end{center} \noindent \textbf{Proof:} Denote by $d_r$ the Hodge-Deligne polynomial of $\{x_1^2+\cdots+x_r^2+1=0\}\subset \mathbb{A}^r$. First we compute $d_r$ by induction on $r$. Since $d_2$ is the Hodge-Deligne polynomial of a conic with two points at infinity, it equals $w-1$. The variety $\{x_1^2+x_2^2+x_3^2+1=0\} \subset \mathbb{A}^3$ can be regarded as $\mathbb{P}^1\times \mathbb{P}^1$ minus a conic and thus $d_3=(w+1)^2-(w+1)=w^2+w$. For $r\geq 4$ we use the isomorphism $\{x_1^2+\cdots +x_r^2+1=0\}\cong \{x_1x_2+x_3^2+\cdots+x_r^2+1=0\}$. If $x_1=0$ in this last equation, then the contribution to $d_r$ is $wd_{r-2}$ and if $x_1\neq 0$, then it is $(w-1)w^{r-2}$, so we have the recursion formula $d_r=wd_{r-2}+(w-1)w^{r-2}$. From this it follows that $d_r=w^{r-1}-w^{\frac{r}{2}-1}$ if $r$ is even and $d_r=w^{r-1}+w^{\frac{r-1}{2}}$ if $r$ is odd. For $a_2$ we find $2w^2+w+1$ and we have the recursion formula $a_r=a_{r-1}+w^2d_{r-1}$ for $r\geq 3$. 
The formulae for $b_r$ and $c_r$ can be deduced similarly. $\blacksquare$\\ \noindent \textbf{3.2.} For the remainder of this section, we will calculate the Hodge-Deligne polynomials of the pieces $D_{J}^{\circ}$ (see the definition of the stringy $E$-function). Since we are mainly interested in the contribution of the singular point (by which we mean $E_{st}(X) - H(D_{\emptyset}^{\circ})=E_{st}(X)-H(X\setminus \{0\})$, where $X$ is a defining variety of an $A$-$D$-$E$ singularity), we will do this for $J\neq \emptyset$. We remark here the following. In the defining formula of the stringy $E$-function we need the Hodge-Deligne polynomials of the $D_J^{\circ}$ at the end of the resolution process. Notice however that we can compute them immediately after they are created, since a blow-up is an isomorphism outside its center. So we just have to subtract contributions of intersections with previously created divisors and already present centers of future blow-ups from the global Hodge-Deligne polynomial in the right way. The case of an $A$-$D$-$E$ surface singularity is well known and for threefold singularities we refer again to \cite{DaisRoczen}, so we consider here the higher dimensional case. Parallel to the previous section, we will work out the details for the case $D_n$, $n$ even, and state the results in the other cases. We use the same notations as in the previous section.\\ \noindent \textbf{3.3.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{A}$ \\ \hline \end{tabular}\\ \noindent From the description in (2.3), one gets the following:\\ \noindent \underline{(1) $n$ odd} \[\begin{array}{lccr} H(D_1^{\circ})=b_{m-1}-1 & & & \\ H(D_i^{\circ})=b_{m-1}-c_{m-1}-1 & & &(i=2,\ldots,k-1)\\ H(D_k^{\circ})=c_m-c_{m-1} & & &\\ H(D_i\cap D_{i+1})=c_{m-1} & & &(i=1,\ldots,k-1)\\ \end{array} \] \noindent \underline{(2) $n$ even} \[\begin{array}{lccr} H(D_1^{\circ})=b_{m-1}-1 & & & \\ H(D_i^{\circ})=b_{m-1}-c_{m-1}-1 & & &(i=2,\ldots,k)\\ H(D_{k+1}^{\circ})=w^{m-2}+\cdots+1-c_{m-1} & & &\\ H(D_i\cap D_{i+1})=c_{m-1} & & &(i=1,\ldots,k)\\ \end{array} \] $\phantom{some place}$ \noindent \textbf{3.4.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{D}$ \\ \hline \end{tabular}\\ \noindent \underline{(1) $n$ even}\\ All the needed information can be read off from the equations in (2.4). We follow the same steps.\\ \emph{Step $1$}: The first exceptional divisor is globally isomorphic to $\{x_3^2+\cdots+x_m^2=0\}\subset \mathbb{P}^{m-1}$, which has a singular line that contains the two singular points of the surrounding variety. Hence $H(D_1^{\circ})=a_{m-2}-(w+1)$.\\ \emph{Step $2$}: One sees that $E_1$ is a nonsingular quadric in $\mathbb{P}^{m-1}$ that intersects $D_1$ in $\{x_3^2+\cdots+x_m^2=0\}\subset\mathbb{P}^{m-2}$, for a suitable choice of coordinates. Thus $H(E_1^{\circ})=c_{m}-b_{m-2}$. The intersection of $D_1$ and $E_1$ contains one point of the singular line on $D_1$ and hence $H((D_1\cap E_1)^{\circ})=b_{m-2}-1$.\\ \emph{Step $3$}: Analogous to step 1 one finds that $D_2$ is isomorphic to $\{x_3^2+\cdots+x_m^2=0\}\subset \mathbb{P}^{m-1}$, with a singular line that contains two singular points of the surrounding variety. Now $D_2$ intersects $D_1$ in $\{x_3^2+\cdots+x_m^2=0\}\subset \mathbb{P}^{m-2}$. This intersection has exactly one point (the origin of coordinate chart (1.2)) in common with the singular lines on $D_2$ and $D_1$. 
The conclusion is that $H(D_2^{\circ})=a_{m-2}-(w+1)-b_{m-2}+1$ and $H((D_1\cap D_2)^{\circ})=b_{m-2}-1$.\\ \emph{Step $4$}: For $H(E_2^{\circ})$ we find $c_m-2b_{m-2}+c_{m-2}$, where $2b_{m-2}$ comes from the intersections with $D_1$ and $D_2$ and $c_{m-2}$ from the intersection with $D_1\cap D_2$. We also have that $H((D_1\cap E_2)^{\circ})=H((D_2\cap E_2)^{\circ})=b_{m-2}-c_{m-2}-1$, where the $-1$ comes from a point on the singular lines on the $D_i$. Finally $H(D_1\cap D_2\cap E_2)=c_{m-2}$.\\ Analogously, for all $i$ from 3 to $k-2$, we have $H(D_i^{\circ})=a_{m-2}-(w+1)-b_{m-2}+1$, $H((D_{i-1}\cap D_{i})^{\circ})=b_{m-2}-1$, $H(E_i^{\circ})=c_m-2b_{m-2}+c_{m-2}$, $H((D_{i-1}\cap E_{i})^{\circ})=H((D_i\cap E_{i})^{\circ})=b_{m-2}-c_{m-2}-1$ and $H(D_{i-1}\cap D_{i}\cap E_i)=c_{m-2}$.\\ \emph{Step $n-3$}: In this step three singular points are created, but since they are all on the singular line on $D_{k-1}$, we still find $H(D_{k-1}^{\circ})=a_{m-2}-(w+1)-b_{m-2}+1$ and $H((D_{k-2}\cap D_{k-1})^{\circ})=b_{m-2}-1$.\\ \emph{Step $n-2$}: Again nothing special happens: $H(E_{k-1}^{\circ})=c_m-2b_{m-2}+c_{m-2}$, $H((D_{k-2}\cap E_{k-1})^{\circ})=H((D_{k-1}\cap E_{k-1})^{\circ})=b_{m-2}-c_{m-2}-1$ and $H(D_{k-2}\cap D_{k-1}\cap E_{k-1})=c_{m-2}$.\\ \emph{Step $n-1$ and step $n$}: Both $F_1$ and $F_2$ are nonsingular quadrics in $\mathbb{P}^{m-1}$ and their intersection with $D_{k-1}$ is $\{x_3^2+\cdots+x_m^2=0\}\subset \mathbb{P}^{m-2}$, which has one point in common with the singular line on $D_{k-1}$. Thus $H(F_1^{\circ})=H(F_2^{\circ})=c_m-b_{m-2}$ and $H((D_{k-1}\cap F_1)^{\circ})=H((D_{k-1}\cap F_2)^{\circ})=b_{m-2}-1$.\\ \emph{Step $n+1$}: The singular line on $D_1$ is except for the origin of coordinate chart (2.1) covered by chart (1.2.2). But after the blow-up, exactly the intersection of $E_1$ and $G_1$ lies above the origin of chart (2.1). Thus to calculate $H(G_1^{\circ})$, it suffices to consider only charts (1.2.2.1) to $(1.2.2.m)$. In chart (1.2.2.3) $G_1$ is just isomorphic to $\mathbb{A}^{m-2}$. The piece of $G_1$ that is covered by chart (1.2.2.4) but not by (1.2.2.3) is isomorphic to $\mathbb{A}^{m-3}$ and so on, until we add an affine line to $G_1$ in chart $(1.2.2.m)$. The intersection of $G_1$ with $E_2$ is isomorphic to $\mathbb{P}^{m-3}$. It is not so hard to see that $H(D_1\cap E_2\cap G_1)=c_{m-2}$ (notice that the equations of (the strict transform of) $D_1$ in chart (1.2.2.3) for instance are $x_1=0$ and $1+x_4^2+\cdots+x_m^2=0$), and from this it follows that $H((D_1\cap G_1)^{\circ})=(w-1)c_{m-2}$ (the $w$ comes from the $x_2$-coordinate that can be chosen freely in every chart). Now we also have $H((E_2\cap G_1)^{\circ})=w^{m-3}+\cdots +1-c_{m-2}$ and $H(G_1^{\circ})=w^{m-2}+\cdots +w-(w^{m-3}+\cdots+1)-wc_{m-2}+c_{m-2}=w^{m-2}-1-(w-1)c_{m-2}$. One gets from charts (2.1.2) to $(2.1.m)$ that $H((E_1\cap G_1)^{\circ})=w^{m-3}+\cdots +1 -c_{m-2}$ and that $H(D_1\cap E_1\cap G_1)=c_{m-2}$. More conceptually, $G_1$ is a locally trivial $\mathbb{P}^{m-3}$-bundle over the singular line on $D_1$ and $E_1\cap G_1$ and $E_2\cap G_1$ are two fibers. Thus $H(G_1)=(w+1)(w^{m-3}+\cdots+1)$ and $H(E_i\cap G_1)=w^{m-3}+\cdots+1$. Furthermore, we can consider the singular line on $D_1$ as a family of $A_1$ singularities and thus $D_1\cap G_1$ is a family of nonsingular quadrics in $\mathbb{P}^{m-3}$. This implies that $H(D_1\cap G_1)=(w+1)c_{m-2}$ and $H(D_1\cap E_i\cap G_1)=c_{m-2}$. 
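As a small cross-check (not needed in the sequel), note that the pieces of $G_1$ computed above add up to $H(G_1)$: \[H(G_1^{\circ})+H((D_1\cap G_1)^{\circ})+H((E_1\cap G_1)^{\circ})+H((E_2\cap G_1)^{\circ})+H(D_1\cap E_1\cap G_1)+H(D_1\cap E_2\cap G_1)\] \[=(w^{m-2}-1)+2(w^{m-3}+\cdots +1)=(w+1)(w^{m-3}+\cdots +1)=H(G_1).\]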
\\ In exactly the same way one finds that (for $i\in \{2,\ldots,k-2\}$) $H(G_i^{\circ})=w^{m-2}-1-(w-1)c_{m-2}$, $H((D_i\cap G_i)^{\circ})=(w-1)c_{m-2}$, $H((E_i\cap G_i)^{\circ})=H((E_{i+1}\cap G_i)^{\circ})=w^{m-3}+\cdots+1-c_{m-2}$ and $H(D_i\cap E_i\cap G_i)=H(D_i\cap E_{i+1}\cap G_i)=c_{m-2}$.\\ \emph{Step $n+k-1$}: This step looks very much like step $n+1$. It suffices to consider charts $(1'.2.2.1)$ to $(1'.2.2.m)$ to compute $H(G_{k-1}^{\circ})$. One checks that $H(D_{k-1}\cap F_1\cap G_{k-1})=H(D_{k-1}\cap F_2\cap G_{k-1})=c_{m-2}$, $H((F_1\cap G_{k-1})^{\circ})=H((F_2\cap G_{k-1})^{\circ})=w^{m-3}+\cdots+1-c_{m-2}$, $H((D_{k-1}\cap G_{k-1})^{\circ})=(w-2)c_{m-2}$ and thus $H(G_{k-1}^{\circ})=w^{m-2}+\cdots +w-2(w^{m-3}+\cdots+1)-(w-2)c_{m-2}$. From charts $(2'.1.1.1.2)$ to $(2'.1.1.1.m)$ we get $H(D_{k-1}\cap E_{k-1} \cap G_{k-1})=c_{m-2}$ and $H((E_{k-1}\cap G_{k-1})^{\circ})=w^{m-3}+\cdots +1-c_{m-2}$. A conceptual explanation like in step $n+1$ can be given here too.\\ \noindent \underline{(2) $n$ odd}\\ There are only 7 changes in comparison with the case where $n$ is even. First remark that $ F_1\cap G_{k-1}$ and $D_{k-1}\cap F_1\cap G_{k-1}$ are empty, but instead $H((F_1\cap F_2)^{\circ})=c_{m-1}-c_{m-2}$ and $H(D_{k-1}\cap F_1\cap F_2)=c_{m-2}$. The other 5 changes are the following: \[\begin{array}{l} H(F_1^{\circ})=b_{m-1}-b_{m-2}\\ H(F_2^{\circ})=c_m-c_{m-1}-b_{m-2}+c_{m-2}\\ H(G_{k-1}^{\circ})=w^{m-2}-1-(w-1)c_{m-2}\\ H((D_{k-1}\cap F_2)^{\circ})=b_{m-2}-c_{m-2}-1\\ H((D_{k-1}\cap G_{k-1})^{\circ})=(w-1)c_{m-2} \end{array}\] \noindent \textbf{3.5.} \begin{tabular}{|c|} \hline \textbf{Case} $\mathbf{E_6}$ \\ \hline \end{tabular}\\ \noindent We just list the results. \setlongtables \begin{longtable}{l} $H(D_1^{\circ})= a_{m-2}-w-1$\\ $H(D_2^{\circ})= b_{m-1}-b_{m-2}$\\ $H(D_3^{\circ})= b_{m-1}-b_{m-2}-c_{m-1}+c_{m-2}$\\ $H(D_4^{\circ})= c_m-b_{m-2}-c_{m-1}+c_{m-2}$\\ $H(D_5^{\circ})= w^{m-2}+\cdots +w-wc_{m-2}$\\ $H((D_1\cap D_2)^{\circ})=b_{m-2}-1$\\ $H((D_1\cap D_3)^{\circ})=H((D_1\cap D_4)^{\circ})=b_{m-2}-c_{m-2}-1$\\ $H((D_1\cap D_5)^{\circ})=wc_{m-2}$\\ $H((D_2\cap D_3)^{\circ})=H((D_3\cap D_4)^{\circ})=c_{m-1}-c_{m-2}$\\ $H((D_4\cap D_5)^{\circ})=w^{m-3}+\cdots +1-c_{m-2}$\\ $H(D_1\cap D_2\cap D_3)=H(D_1\cap D_3\cap D_4)=H(D_1\cap D_4\cap D_5)=c_{m-2}$ \end{longtable} \noindent \textbf{3.6.} \begin{tabular}{|c|} \hline \textbf{Cases} $\mathbf{E_7}$ \textbf{and} $\mathbf{E_8}$ \\ \hline \end{tabular}\\ \noindent Let us first treat the $E_8$ case. From the intersection diagram it follows that we have to compute 47 Hodge-Deligne polynomials (there are 12 divisors, 23 intersections of 2 divisors and 12 intersections of 3 divisors). But there are 20 polynomials coming from the `$D_6$ part' of the diagram that are left unchanged here. So we will only write down the other 27. 
\[\begin{array}{l} H(B_1^{\circ})= a_{m-2}-w-1\\ H(C_1^{\circ})= a_{m-2}-b_{m-2}-w\\ H(D_1^{\circ})=H(D_2^{\circ})= a_{m-2}-2b_{m-2}+c_{m-2}-w+1\\ H(E_1^{\circ})=H(F_1^{\circ})= c_m-2b_{m-2}+c_{m-2}\\ H(H_1^{\circ})=H(I_1^{\circ})= w^{m-2}+\cdots +w-wc_{m-2}\\ H((B_1\cap C_1)^{\circ})=H((B_1\cap I_1)^{\circ})=H((C_1\cap H_1)^{\circ})=wc_{m-2}\\ H((B_1\cap D_1)^{\circ})=H((B_1\cap E_1)^{\circ})= H((C_1\cap D_1)^{\circ})\\ \quad =H((C_1\cap D_2)^{\circ})= H((C_1\cap F_1)^{\circ})=H((D_1\cap D_2)^{\circ}) \\ \quad = H((D_1\cap E_1)^{\circ})=H((D_2\cap F_1)^{\circ})=b_{m-2}-c_{m-2}-1\\ H((E_1\cap I_1)^{\circ})=H((F_1\cap H_1)^{\circ})=w^{m-3}+\cdots +1-c_{m-2}\\ H(B_1\cap C_1\cap D_1)=H(B_1\cap D_1\cap E_1)=H(B_1\cap E_1\cap I_1)\\ \quad =H(C_1\cap D_1\cap D_2)=H(C_1\cap D_2\cap F_1)=H(C_1\cap F_1\cap H_1)=c_{m-2} \end{array}\] For the $E_7$ case, we can skip all expressions involving the divisors $B_1$ and/or $I_1$. This leaves us with 37 polynomials and apart from the following 5, they are all the same as in the $E_8$ case. \[\begin{array}{l} H(C_1^{\circ})=a_{m-2}-w-1\\ H(D_1^{\circ})=a_{m-2}-b_{m-2}-w\\ H(E_1^{\circ})=c_{m}-b_{m-2}\\ H((C_1\cap D_1)^{\circ})=H((D_1\cap E_1)^{\circ})=b_{m-2}-1 \\ \end{array}\] $\phantom{some place}$ \section{Computation of the discrepancy coefficients} \noindent \textbf{4.1.} In this section we compute the last data that we need: the discrepancy coefficients. As already mentioned in (1.4), all the two dimensional $A$-$D$-$E$'s admit a crepant resolution, this means that all the discrepancies are 0. For the three-dimensional case, the computations are done in \cite{DaisRoczen}, but the authors are a bit inaccurate. Let us again consider the case $D_n$, $n$ even, with $k=\frac{n}{2}$. The intersection diagram is as follows:\\[-10mm] \begin{center} \begin{picture}(160,45) \put(27,7){\circle*{2}} \put(57,7){\circle*{2}} \put(101,7){\circle*{2}} \put(131,7){\circle*{2}} \put(27,37){\circle*{2}} \put(57,37){\circle*{2}} \put(101,37){\circle*{2}} \put(131,37){\circle*{2}} \put(42,22){\circle*{2}} \put(72,22){\circle*{2}} \put(116,22){\circle*{2}} \put(143,22){\circle*{2}} \put(7,22){\circle*{2}} \put(17,22){\circle*{2}} \put(27,7){\line(1,0){50}} \put(91,7){\line(1,0){40}} \put(27,7){\line(0,1){30}} \put(57,7){\line(0,1){30}} \put(101,7){\line(0,1){30}} \put(131,7){\line(0,1){30}} \put(27,7){\line(1,1){30}} \put(57,7){\line(1,1){20}} \put(101,7){\line(1,1){30}} \put(131,7){\line(4,5){12}} \put(101,37){\line(1,-1){30}} \put(131,37){\line(4,-5){12}} \put(27,37){\line(1,-1){30}} \put(57,37){\line(1,-1){20}} \put(101,7){\line(-1,1){10}} \put(101,37){\line(-1,-1){10}} \put(27,7){\line(-2,3){10}} \put(27,7){\line(-4,3){20}} \put(17,22){\line(2,3){10}} \put(7,22){\line(4,3){20}} \put(27,37){\line(1,0){50}} \put(91,37){\line(1,0){40}} \put(25,40){$D''_{k-1}$} \put(55,40){$D''_{k-2}$} \put(99,40){$D''_2$} \put(129,40){$D''_{1}$} \put(25,1){$D_{k-1}'$} \put(55,1){$D_{k-2}'$} \put(99,1){$D_{2}'$} \put(129,1){$D_{1}'$} \put(1,21){$F_1$} \put(20,21){$F_2$} \put(45,21){$E_{k-1}$} \put(61,21){$E_{k-2}$} \put(119,21){$E_2$} \put(146,21){$E_1$} \put(81,22){$\ldots$} \put(81,7){$\ldots$} \end{picture} \end{center} Compared to the higher dimensional cases, the $D_i$ fall apart into two components $D_i'$ and $D''_{i}$, and there are no divisors $G_i$ needed. 
If we denote by $\varphi:\widetilde{X}\to X$ the log resolution, with $X$ the defining variety of the $D_n$ singularity and $\widetilde{X}$ the strict transform of $X$, then $\varphi$ can be decomposed into $k$ birational morphisms \[ \begin{array}{ccccccccccc} &\varphi_k & & & & & & \varphi_2& & \varphi_1& \\ \widetilde{X}=X_k & \longrightarrow& X_{k-1}&\longrightarrow & \cdots& \longrightarrow & X_2&\longrightarrow &X_1 & \longrightarrow &X_0=X, \end{array} \] where the exceptional locus of $\varphi_1$ is $\{D'_1,D''_1\}$, of $\varphi_i$ ($2\leq i\leq k-1$) is $\{D'_i,D''_i,E_{i-1}\}$ and of $\varphi_k$ is $\{F_1,F_2,E_{k-1}\}$, again using the same name for the divisors at any stage of the decomposition of $\varphi$. We can also decompose $K_{\widetilde{X}}-\varphi^*(K_X)$ as \[\left[\sum_{i=1}^{k-1} \varphi_k^*(\varphi_{k-1}^* \cdots (\varphi_{i+1}^*(K_{X_i}-\varphi_i^*(K_{X_{i-1}})))\cdots)\right] + K_{X_k}-\varphi_k^*(K_{X_{k-1}}).\] Dais and Roczen calculated that for instance $\varphi_2^*(D'_1)=D'_1+D'_2+E_1$ and $\varphi_2^*(D''_1)=D''_1+D''_2+E_1$, but $D'_1$ and $D''_1$ are not Cartier. Their sum $D'_1+D''_1$ is Cartier and it turns out that $\varphi_2^*(D'_1+D''_1)=D'_1+D''_1+D'_2+D''_2+E_1$ instead of $\cdots +2E_1$. This kind of error occurs also in the following stages for this type of singularity and also for type $D_n$, $n$ odd, and for types $E_6,E_7$ and $E_8$. In the next table, we list the discrepancies. We use notations analogous to our notations from section 2, but they differ from the notations in \cite{DaisRoczen}. The coefficients that we have corrected are in boldface.\\ \setlongtables \begin{longtable}{l|l|c} \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{Type of singularity} & \multicolumn{1}{c}{Discrepancy} \\ \multicolumn{2}{c|}{} & \\ \hline & & \\ $A_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 1 \end{array}$ & $\displaystyle{\sum_{i=1}^k i D_i + (n+2)D_{k+1}}$ \\ & & \\ \cline{2-3} \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k-1 \\ k\geq 1 \end{array}$ & $\displaystyle{\sum_{i=1}^k i D_i} $ \\ & & \\ \hline & & \\ $D_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 2 \end{array}$ & $\displaystyle{\sum_{i=1}^{k-1} \bigl(iD'_i+iD''_i+\mathbf{2i}E_i \bigr)+ \mathbf{k}F_1+\mathbf{k}F_2}$ \\ & & \\ \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k+1 \\ k\geq 2 \end{array}$ & $\displaystyle{\sum_{i=1}^{k-1} \bigl(iD'_i+iD''_i+\mathbf{2i}E_i \bigr)+ \mathbf{k}F_1+\mathbf{2k}F_2}$ \\ & & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_6$} & $D'_1+D''_1+\mathbf{2}D_2+\mathbf{4}D_3+\mathbf{6}D_4 $ \\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_7$} & $C'_1+C''_1+2D'_1+2D''_1+4D'_2+4D''_2$ \\ \multicolumn{2}{c|}{} & $+\mathbf{3}E_1+\mathbf{7}E_2+\mathbf{6}F_1+ \mathbf{5}F_2$ \\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_8$} & $B'_1+B''_1+2C'_1+2C''_1+4D'_1+4D''_1+7D'_2+7D''_2$\\ \multicolumn{2}{c|}{} & $+\mathbf{6}E_1+\mathbf{12}E_2 +\mathbf{10}F_1+ \mathbf{8}F_2$ \\ \multicolumn{2}{c|}{} & \\ \end{longtable} \noindent \textbf{Remark.} Dais and Roczen used their results to contradict a conjecture of Batyrev about the range of the string-theoretic index (see \cite[Conjecture 5.9]{Batyrev}, \cite[Remark 1.9]{DaisRoczen}). Luckily, this follows already from the formulae for the $A$ case, to which we do not correct anything. We will only simplify their formulae in this case. 
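\noindent To illustrate the table, in the smallest even case $D_4$ (i.e.\ $n=4$, $k=2$) the corrected discrepancy reads $D'_1+D''_1+2E_1+2F_1+2F_2$; here the coefficients of $E_1$, $F_1$ and $F_2$ are the boldface (corrected) ones.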
\noindent \textbf{4.2.} Now we consider the higher dimensional case. As an example, we will calculate the discrepancy coefficient of the divisor $E_i$ for an $(m-1)$-dimensional $D_n$ singularity, where $n$ is even, $i\in \{1,\ldots,k-1\}$ and $m\geq 5$. Let $X$ be the defining variety $\{x_1^{n-1}+x_1x_2^2+x_3^2+\cdots + x_m^2 = 0\}\subset\mathbb{A}^m$, and let $\varphi: \widetilde{X}\to X$ be the log resolution constructed in section 2. We take a coordinate chart that covers a piece of $E_i$; in the notation of section 2, this could be for example chart\begin{tabular}[t]{c}($\underbrace{1.1\ldots 1}$.2.3)\\ $i-1$ times \end{tabular} describing an open set $U\subset \widetilde{X}$: \[ y_1^{2k-2i+1}y_2^{2k-2i-1}y_3^{4k-4i-2}+y_1y_2+1+y_4^2+\cdots +y_m^2=0.\] In this chart, $y_1=0$ gives a local equation for divisor $D_{i-1}$, $y_2=0$ for $D_i$ and $y_3=0$ for our divisor $E_i$. The map $\varphi:U\to X$ can be found from the resolution process. Here it will be \[\varphi(y_1,\ldots,y_m)=(y_1y_2y_3^2,y_1^{i-1}y_2^{i}y_3^{2i-1}, y_1^{i-1}y_2^{i}y_3^{2i},y_1^{i-1}y_2^{i}y_3^{2i}y_4, \ldots, y_1^{i-1}y_2^{i}y_3^{2i}y_m).\] The section $\frac{dx_1\wedge \ldots \wedge dx_{m-1}}{2x_m}$ is locally a generator of the sheaf $\mathcal{O}_X(K_X)$ ($2x_m=\frac{\partial\, f}{\partial\, x_m}$, where $f$ is the equation of $X$) and we have to compare its pull-back under $\varphi$ with the generator $\frac{dy_1\wedge \ldots \wedge dy_{m-1}}{2y_m}$ of $\mathcal{O}_{\widetilde{X}}(K_{\widetilde{X}})|_U$. We have \[ \varphi^*(\frac{dx_1\wedge \ldots \wedge dx_{m-1}}{2x_m})=y_1^{(i-1)(m-3)}y_2^{i(m-3)}y_3^{2i(m-3)} \frac{dy_1\wedge \ldots \wedge dy_{m-1}}{2y_m},\] which tells us that the discrepancy coefficient of $E_i$ is $2i(m-3)$. We also get the discrepancy coefficient of $D_i$ for free: it is $i(m-3)$. In general, the following can be proven by this kind of calculation.\\ \noindent \textbf{Proposition.} \textsl{For all divisors that are created after a point blow-up, except for divisor $D_{\frac{n}{2}+1}$ in the $A_n$ ($n$ even) case, the discrepancy coefficient is ($m-3$) times the coefficient of the corresponding divisor(s) in the three-dimensional case (see the table in (4.1)).}\\ What about the other divisors$\,$? They are all created after blowing up a nonsingular surrounding variety in a point (case $A_n$, $n$ even) or a line (other cases). We consider again the case of a $D_n$ singularity, with $n$ even. Denote by $X^{(i)}$ the variety obtained after $n+i$ steps in the resolution process of section 2 ($i\in \{0,\ldots, k-2\}$). The log resolution $\varphi:\widetilde{X}\to X $ can be decomposed as follows: \[ \begin{array}{ccccccc} &\chi^{(i+1)} & & \varphi^{(i+1)}& & \psi^{(i)}& \\ \widetilde{X} & \longrightarrow& X^{(i+1)}&\longrightarrow & X^{(i)}&\longrightarrow &X, \end{array} \] where $\varphi^{(i+1)}$ is the blow-up of the singular line on the divisor $D_{i+1}\subset X^{(i)}$ and where $\chi^{(i+1)}$ and $\psi^{(i)}$ are compositions of other blow-ups. Notice that all the singular lines on $X^{(0)}$ are disjoint. Thus, to compute the discrepancy coefficient of $G_{i+1}$, it suffices to look at its coefficient in $K_{X^{(i+1)}}-(\psi^{(i)}\circ\varphi^{(i+1)})^*(K_X)$. This is equal to \[ K_{X^{(i+1)}}- (\varphi^{(i+1)})^*((\psi^{(i)})^*(K_X)-K_{X^{(i)}})- (\varphi^{(i+1)})^*(K_{X^{(i)}}). \] It follows from \cite[p.608]{GriffithsHarris} that the last term is $-K_{X^{(i+1)}}+(m-3)G_{i+1}$ ($X^{(i)}$ is nonsingular$\,$!).
And in the second term we only get a nonzero coefficient for $G_{i+1}$ from $-(\varphi^{(i+1)})^*(-(i+1)(m-3)D_{i+1})$ (this follows from \cite[p.605]{GriffithsHarris}, and the exact coefficient is $2(i+1)(m-3)$ because the multiplicity of a generic point of the singular line on $D_{i+1}$ is 2). This gives us $2(i+1)(m-3)+(m-3)=(2i+3)(m-3)$ as discrepancy coefficient for $G_{i+1}$. In all other cases where we blow up in a line, the multiplicity of a generic point of the singular line will also be 2 and thus we have the following proposition.\\ \noindent \textbf{Proposition.} \textsl{For all divisors that are created after a blow-up in a singular line of another divisor $D$, the discrepancy coefficient is \[2(\text{discrepancy coefficient of }D) + (m-3).\]}\indent The reader may check that the same arguments give $(n+1)(m-3)+1$ as coefficient for $D_{\frac{n}{2}+1}$ in the case $A_n$, $n$ even.\\ \section{Formulae for the contribution of an $A$-$D$-$E$ singularity to the stringy $E$-function and application to Batyrev's conjecture} \noindent \textbf{5.1.} Let $X$ be a defining variety of an $A$-$D$-$E$ singularity; hence $X$ is a hypersurface in $\mathbb{A}^m$ ($m\geq 3$) with a singular point in the origin. By the contribution of the singular point to the stringy $E$-function, we mean $E_{st}(X)-H(X\setminus\{0\})$ (see (3.2)). Before stating the formulae, we first remark that we have to make a distinction between $m$ even and $m$ odd, because the required Hodge-Deligne polynomials depend on the parity of the dimension.\\ \noindent \textbf{Theorem.} \textsl{The contributions of the ($m-1$)-dimensional $A$-$D$-$E$ singularities ($m\geq 3$) are given in the following tables (where sums like $\sum_{i=2}^k$ must be interpreted as $0$ for $k=1$).} \setlongtables \begin{longtable}{l|l|c} \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{Type of singularity} & \multicolumn{1}{c}{Contribution of singular point for odd $m$} \\ \multicolumn{2}{c|}{} & \\ \hline & & \\ $A_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 1 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{(2k+1)(m-3)+2}-1)}\left( \sum_{i=2}^{k+1} w^{(k+i)(m-3)+2} \right.}$\\ & & $\displaystyle{\left. 
+\sum_{i=1}^k w^{(k+i)(m-3)+\frac{m+1}{2}}+\sum_{i=1}^k w^{i(m-3)+\frac{m-1}{2}}+ \sum_{i=1}^k w^{i(m-3)+1} \right)}$ \\ & & \\ \cline{2-3} \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k-1 \\ k\geq 1 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{k(m-3)+1}-1)}\left( \sum_{i=1}^{k} w^{i(m-3)+1}+ \sum_{i=1}^{k-1} w^{i(m-3)+\frac{m-1}{2}} \right)} $ \\ & & \\ \hline & & \\ $D_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 2 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{(2k-1)(m-3)+1}-1)}\left( \sum_{i=1}^{2k-1} w^{i(m-3)+1}+ w^{k(m-3)+1} \right)}$ \\ & & \\ \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k+1 \\ k\geq 2 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{2k(m-3)+1}-1)}\biggl( \sum_{i=1}^{2k} w^{i(m-3)+1} + w^{k(m-3)+\frac{m-1}{2}} \biggr)}$ \\ & & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_6$} & $\displaystyle{1+\frac{(w-1)}{(w^{6m-17}-1)}\Bigl(w^{6m-17}+w^{4m-11}+w^{3m-8} \Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl.+w^{m-2}+w^{\frac{9m-25}{2}}+w^{\frac{5m-13}{2}} \Bigr) }$\\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_7$} & $\displaystyle{1+\frac{(w-1)}{(w^{9m-26}-1)}\left(w^{9m-26}+ w^{7m-20}+w^{6m-17}+w^{5m-14}\right.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\left.+w^{4m-11} +w^{3m-8}+w^{m-2} \right)}$ \\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_8$} & $\displaystyle{1+\frac{(w-1)}{(w^{15m-44}-1)}\left(w^{15m-44}+w^{12m-35}+w^{10m-29}+ w^{9m-26}\right.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\left.+ w^{7m-20}+w^{6m-17}+w^{4m-11}+w^{m-2} \right)}$ \\ \multicolumn{2}{c|}{} & \\ \end{longtable} $\phantom{some place}$ \setlongtables \begin{longtable}{l|l|c} \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{Type of singularity} & \multicolumn{1}{c}{Contribution of singular point for even $m$} \\ \multicolumn{2}{c|}{} & \\ \hline & & \\ $A_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 1 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{(2k+1)(m-3)+2}-1)}\left( \sum_{i=2}^{k+1} w^{(k+i)(m-3)+2} + \sum_{i=1}^k w^{i(m-3)+1} \right)}$ \\ & & \\ \cline{2-3} \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k-1 \\ k\geq 1 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{k(m-3)+1}-1)}\left( \sum_{i=1}^{k} w^{i(m-3)+1}+ w^{\frac{m}{2}-1} \right)} $ \\ & & \\ \hline & & \\ $D_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \\ k\geq 2 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{(2k-1)(m-3)+1}-1)}\left( \sum_{i=1}^{2k-1} w^{i(m-3)+1}+ w^{k(m-3)+1}\right.}$\\ & & $\displaystyle{\left.+\sum_{i=0}^{k-2} w^{(k+i)(m-3)+\frac{m}{2}} + \sum_{i=0}^{k-1} w^{i(m-3)+\frac{m}{2}-1}+w^{\frac{m}{2}-1} \right)}$ \\ & & \\ \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k+1 \\ k\geq 2 \end{array}$ & $\displaystyle{1+\frac{(w-1)}{(w^{2k(m-3)+1}-1)}\biggl( \sum_{i=1}^{2k} w^{i(m-3)+1}+ \sum_{i=1}^{k-1} w^{(k+i)(m-3)+\frac{m}{2}} \biggr.}$\\ & & $\displaystyle{\biggl. +\sum_{i=0}^{k-1} w^{i(m-3)+\frac{m}{2}-1} \biggr)}$ \\ & & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_6$} & $\displaystyle{1+\frac{(w-1)}{(w^{6m-17}-1)}\Bigl(w^{6m-17}+w^{4m-11}+w^{3m-8} \Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl.+w^{m-2}+w^{\frac{11m-30}{2}}+w^{\frac{3m-8}{2}} \Bigr) }$\\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_7$} & $\displaystyle{1+\frac{(w-1)}{(w^{9m-26}-1)}\Bigl(w^{9m-26}+ w^{7m-20}+w^{6m-17}+w^{5m-14}\Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl. 
+w^{4m-11}+w^{3m-8}+w^{m-2} +w^{\frac{17m-48}{2}}+w^{\frac{15m-42}{2}}+w^{\frac{11m-30}{2}}\Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl.+w^{\frac{9m-26}{2}} +w^{\frac{5m-14}{2}}+w^{\frac{3m-8}{2}}+w^{\frac{m-2}{2}}\Bigr)}$ \\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_8$} & $\displaystyle{1+\frac{(w-1)}{(w^{15m-44}-1)}\Bigl(w^{15m-44}+w^{12m-35}+w^{10m-29}+ w^{9m-26}\Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl.+ w^{7m-20}+w^{6m-17}+w^{4m-11}+w^{m-2}+w^{\frac{29m-84}{2}}+w^{\frac{27m-78}{2}} \Bigr.}$\\ \multicolumn{2}{c|}{} & $\displaystyle{\Bigl. +w^{\frac{23m-66}{2}}+w^{\frac{17m-48}{2}}+ w^{\frac{15m-44}{2}}+w^{\frac{9m-26}{2}}+w^{\frac{5m-14}{2}}+w^{\frac{3m-8}{2}} \Bigr)}$ \\ \multicolumn{2}{c|}{} & \\ \end{longtable} \noindent \textbf{Proof:} \begin{itemize} \item Let us first consider the case where $m\geq 5$. We will focus again on the singularity of type $D_n$ for $n=2k$ and also for even $m$. All the other cases are completely analogous. We just insert the data from sections 2, 3 and 4 in the defining formula of the stringy $E$-function and we find the following formula for the contribution of the singularity: \begin{equation*} \scriptsize{ \begin{split} & \frac{(w^{m-1}-w^2+w^{\frac{m+2}{2}}-w^{\frac{m}{2}})}{(w^{m-2}-1)} + \sum_{i=2}^{k-1} \frac{(w^{m-2}-w+w^{\frac{m}{2}}-w^{\frac{m-2}{2}} )(w-1)}{(w^{i(m-3)+1}-1)} + \frac{(w^{m-2})(w-1)}{(w^{2m-5}-1)} \\ & \ + \sum_{i=2}^{k-1} \frac{(w^{m-2}-w^{m-3}-w^{\frac{m-2}{2}}+w^{\frac{m-4}{2}} )(w-1)}{(w^{2i(m-3)+1}-1)} + \frac{2w^{m-2}(w-1)}{(w^{k(m-3)+1}-1)}\\ & \ + \sum_{i=1}^{k-2} \frac{(w^{m-2}-w^{m-3}-w^{\frac{m-2}{2}}+ w^{\frac{m-4}{2}})(w-1)}{(w^{(2i+1)(m-3)+1}-1)} + \frac{(w^{m-2}-2w^{m-3}-w^{\frac{m-2}{2}}+2w^{\frac{m-4}{2}}) (w-1)}{(w^{(2k-1)(m-3)+1}-1)}\\ & \ + \sum_{i=1}^{k-2} \frac{(w^{m-2}-w+w^{\frac{m}{2}}-w^{\frac{m-2}{2}} )(w-1)}{(w^{i(m-3)+1}-1)(w^{(i+1)(m-3)+1}-1)} + \frac{(w^{m-2}-w+w^{\frac{m}{2}}-w^{\frac{m-2}{2}} )(w-1)}{(w^{m-2}-1)(w^{2m-5}-1)}\\ & \ + \sum_{i=2}^{k-1} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{i(m-3)+1}-1)(w^{2i(m-3)+1}-1)}+ \sum_{i=1}^{k-2} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{i(m-3)+1}-1)(w^{(2i+2)(m-3)+1}-1)}\\ &\ + \frac{2(w^{m-2}-w+w^{\frac{m}{2}}-w^{\frac{m-2}{2}} )(w-1)}{(w^{(k-1)(m-3)+1}-1)(w^{k(m-3)+1}-1)} + \sum_{i=1}^{k-2} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{i(m-3)+1}-1)(w^{(2i+1)(m-3)+1}-1)}\\ & \ + \frac{(w^{m-2}-2w^{m-3}-w+2+w^{\frac{m}{2}}-3w^{\frac{m-2}{2}}+2w^{\frac{m-4}{2}}) (w-1)}{(w^{(k-1)(m-3)+1}-1)(w^{(2k-1)(m-3)+1}-1)}\\ & \ + \sum_{i=1}^{k-1} \frac{(w^{m-3}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{2i(m-3)+1}-1)(w^{(2i+1)(m-3)+1}-1)} + \sum_{i=1}^{k-2} \frac{(w^{m-3}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{(2i+2)(m-3)+1}-1)(w^{(2i+1)(m-3)+1}-1)}\\ & \ + \frac{2(w^{m-3}-w^{\frac{m-4}{2}}) (w-1)^2}{(w^{k(m-3)+1}-1)(w^{(2k-1)(m-3)+1}-1)}\\& \ + \sum_{i=1}^{k-2} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}})(w-1)^2}{(w^{i(m-3)+1}-1) (w^{(i+1)(m-3)+1}-1)(w^{(2i+2)(m-3)+1}-1)} \\ & \ + \sum_{i=1}^{k-1} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}})(w-1)^2}{(w^{i(m-3)+1}-1) (w^{2i(m-3)+1}-1)(w^{(2i+1)(m-3)+1}-1)} \\ &\ + \sum_{i=1}^{k-2} \frac{(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}})(w-1)^2}{(w^{i(m-3)+1}-1) (w^{(2i+2)(m-3)+1}-1)(w^{(2i+1)(m-3)+1}-1)} \\ &\ + \frac{2(w^{m-3}-1+w^{\frac{m-2}{2}}-w^{\frac{m-4}{2}})(w-1)^2}{ (w^{(k-1)(m-3)+1}-1) (w^{k(m-3)+1}-1)(w^{(2k-1)(m-3)+1}-1)}. 
\end{split}} \end{equation*} The terms correspond to the following pieces of the exceptional locus (in that order): \begin{equation*} \begin{split} & D_1^{\circ},D_i^{\circ},E_1^{\circ},E_i^{\circ},F_i^{\circ},G_i^{\circ}, G_{k-1}^{\circ}, (D_i\cap D_{i+1})^{\circ},(D_1\cap E_1)^{\circ}, (D_i\cap E_i)^{\circ},\\ & \ (D_i\cap E_{i+1})^{\circ},(D_{k-1}\cap F_i)^{\circ},(D_i\cap G_i)^{\circ},(D_{k-1}\cap G_{k-1})^{\circ},(E_i\cap G_i)^{\circ}, \\ & \ (E_{i+1}\cap G_i)^{\circ},(F_i\cap G_{k-1})^{\circ},D_i\cap D_{i+1}\cap E_{i+1},D_i\cap E_i\cap G_i,\\ & \ D_i\cap E_{i+1}\cap G_i,D_{k-1}\cap F_i\cap G_{k-1}. \end{split} \end{equation*} By a very long but easy calculation, it can be proved by induction on $k$ that we indeed get the requested formula. We remark here that we have done the computations for $m\geq 5$, for $m=4$ and for $m=3$ separately, and then noticed that the formulae for $m\geq 5$ are correct in the other cases too. \item We can now explain why these formulae are also valid for $m=4$. For the $A_n$ case, this is not a surprise, since the intersection diagram for $m=4$ is the same as for $m\geq 5$. For the other cases, consider for example a singularity of type $D_n$, $n$ even. The blow-ups in the singular lines on the divisors $D_i$ in the higher dimensional case correspond here to blow-ups in the intersections $D'_i \cap D''_i$. Performing these unnecessary extra blow-ups yields just another log resolution, and the formula for the contribution of the singularity for that log resolution will be exactly the evaluation of the formula from the first part of the proof for $m=4$ (notice for instance that the Hodge-Deligne polynomial for $D_i^{\circ}$ becomes $2w^2-2w$ for $m=4$ and the Hodge-Deligne polynomials for $(D'_i)^{\circ}$ and $(D''_i)^{\circ}$ will both be $w^2-w$). \item For $m=3$ it can be checked easily that the formulae are correct but again we give a more conceptual explanation. Compared with the higher dimensional case, all divisors except the last one split into two (distinct) components in the $A_n$ case, for odd $n$. This is consistent with the Hodge-Deligne polynomials from (3.3), evaluated for $m=3$. For even $n$, we must notice that the last blow-up is unnecessary for surfaces; performing it anyway does not yield a crepant resolution any more (the last divisor has discrepancy coefficient 1, as it should be, according to (4.2)). This last divisor is irreducible and the first $\frac{n}{2}$ blow-ups each add two components to the exceptional locus (compare this with (3.3) again). For the $D_n$ case, the analogue of blowing up in a singular line on a divisor $D_i$ would be to blow up in $D_i$ itself, because it is just a line for $m=3$. Such a blow-up is an isomorphism, and the result is that the divisors $D_i$ are renamed as $G_i$. As intersection diagram one finds the same as in the higher dimensional case, but without the divisors $D_i$. To be able to compare this to (3.4), we must notice that it is logical to set $a_1=w+1$, $c_1=0$ and $b_1=1$ in (3.1). Then indeed all Hodge-Deligne polynomials that describe a piece of a divisor $D_i$ are 0 in (3.4) for $m=3$. For the $E$ cases the same sort of arguments apply. $\blacksquare$ \end{itemize} \noindent \textbf{5.2.} From now on, let $X$ be a projective algebraic variety with at most (a finite number of) $A$-$D$-$E$ singularities. 
Since the next results are trivial for surfaces, we will assume that $\dim X\geq 3$.\\ \noindent \textbf{Proposition.} \textsl{The stringy $E$-function of $X$ is a polynomial if and only if $\dim X = 3$ and $X$ has singularities of type $A_n$ ($n$ odd) and/or $D_n$ ($n$ even). }\\ \noindent \textbf{Proof:} It follows from theorem (5.1) that the contributions of the singular points for $m\geq 5$ can be written in the following form: \[ 1 + \frac{w^2(w^{\alpha}+a_{\alpha-1}w^{\alpha-1}+\cdots + a_0)}{w^{\alpha+1}+w^{\alpha}+\cdots +1 },\] where $\alpha \in \mathbb{Z}_{>0}$ and all $a_i \in \mathbb{Z}_{\geq 0}$. Such expressions or finite sums of such expressions can never be polynomials. For $m=4$ the contributions are given in the following table. \setlongtables \begin{longtable}{l|l|c} \multicolumn{2}{c|}{$_{\displaystyle{\text{Type of singularity}}}$} & \multicolumn{1}{c}{$_{\displaystyle{\text{Contribution of singular point}}}$} \\ \multicolumn{2}{c|}{} & \\ \hline & & \\ $A_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \end{array}$ & $1+\frac{w^2(w^{2k+2}-w^{k+2}+w^k-1)}{w^{2k+3}-1}$ \\ & & \\ \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k-1 \end{array}$ & $w+1 $ \\ & & \\ \hline & & \\ $D_n$ & $\begin{array}{l} n \text{ even } \\ n=2k \end{array}$ & $2w+1$ \\ & & \\ \cline{2-3} & & \\ & $\begin{array}{l} n \text{ odd } \\ n=2k+1 \end{array}$ & $w+1+\frac{w^2(w^{2k}-w^{k+1}+w^{k-1}-1)}{w^{2k+1}-1}$ \\ & & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_6$} & $1+\frac{w^2(2w^6-2w^5+w^4-w^2+2w-2)}{w^7-1}$\\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_7$} & $w+1+\frac{w^2(w^4-w^3+w-1)}{w^5-1}$ \\ \multicolumn{2}{c|}{} & \\ \hline \multicolumn{2}{c|}{} & \\ \multicolumn{2}{c|}{$E_8$} & $1+\frac{w^2(2w^7-w^6-w^5+2w^4-2w^3+w^2+w-2)}{w^8-1}$ \\ \multicolumn{2}{c|}{} & \\ \end{longtable} There are exactly two contributions that are polynomials and one sees again that adding a finite number of the non-polynomial expressions never gives a polynomial. $\blacksquare$\\ \noindent \textbf{Theorem.} \textsl{Let $X$ be a three-dimensional projective variety with at most singularities of type $A_n$ ($n$ odd) and/or $D_n$ ($n$ even). Then the stringy Hodge numbers of $X$ are nonnegative.}\\ \noindent \textbf{Proof:} Let us first consider the case where $X$ has one singularity of type $A_n$ ($n$ odd). Denote by $X_{ns}$ the nonsingular part of $X$, and let $\varphi:\widetilde{X}\to X$ be the log resolution as constructed in section 2. Then the stringy $E$-function of $X$ will be $E_{st}(X)=H(X_{ns})+uv+1$ and the Hodge-Deligne polynomial of $\widetilde{X}$ is $H(\widetilde{X})=H(X_{ns})+\frac{n+1}{2}(uv)^2+\frac{n+3}{2}(uv)+1$. The exceptional locus counts $\frac{n+1}{2}$ components $D_1,\ldots,D_{\frac{n+1}{2}}$ whose classes in $H^2(\widetilde{X},\mathbb{C})$ are linearly independent. This can be seen as follows. We embed $\widetilde{X}$ in a $\mathbb{P}^N$ and we intersect with a suitable hyperplane $Y$. Thanks to Grauert's contractibility criterion the intersection matrix of the curves $D_1\cap Y,\ldots,D_{\frac{n+1}{2}}\cap Y$ is negative definite, and thus the classes of these curves are linearly independent in $H^2(\widetilde{X}\cap Y,\mathbb{C})$. The weak Lefschetz theorem implies then that the classes of $D_1,\ldots,D_{\frac{n+1}{2}}$ are linearly independent in $H^2(\widetilde{X},\mathbb{C})$. Actually, these classes are all contained in $H^{1,1}(\widetilde{X})$ (\cite[p.163]{GriffithsHarris}). 
This means that $h^{1,1}(\widetilde{X})=h^{2,2}(\widetilde{X})\geq \frac{n+1}{2}$. Thus the coefficients of $(uv)^2$ and $uv$ in $H(X_{ns})$ are $\geq 0$ and $\geq -1$ respectively. Note also that the constant term of $H(X_{ns})$ will be zero and for all other coefficients $a_{p,q}$ of $u^pv^q$ in $H(X_{ns})$, $(-1)^{p+q}a_{p,q}$ will be $\geq 0$, since this is the case in $H(\widetilde{X})$. This implies that the stringy Hodge numbers of $X$ are nonnegative. If $X$ has one singularity of type $D_n$ ($n$ even), we can choose to start from the log resolution constructed by Dais and Roczen (\cite[Section 2]{DaisRoczen}), which yields $H(\widetilde{X})=H(X_{ns})+\frac{3n-2}{2}(uv)^2+\frac{3n+2}{2}(uv)+1$ with $\frac{3n-2}{2}$ components in the exceptional locus, or we can use the log resolution analogous to section 2, which gives $H(\widetilde{X})=H(X_{ns})+(2n-2)(uv)^2+2n(uv)+1$ with $2n-2$ components in the exceptional locus, and then apply the same argument. It is clear that nothing essential changes when there is more than one singularity. $\blacksquare$\\ \noindent \textbf{Example.} Consider the variety $X=\{xyz+t^3+w^3=0\} \subset \mathbb{P}^4$, where we use coordinates $(x,y,z,t,w)$. It is clear that the points $(1,0,0,0,0),(0,1,0,0,0)$ and $(0,0,1,0,0)$ are three-dimensional $D_4$ singularities. Thus, their contribution to the stringy $E$-function of $X$ is $3(2w+1)$. To calculate the Hodge-Deligne polynomial of $X$, we divide $X$ into three locally closed pieces: \[ X = (X\cap \{x\neq 0, y\neq 0\})\sqcup (X\cap \{x\neq 0,y=0\}) \sqcup (X\cap \{x=0\}).\] The Hodge-Deligne polynomial of the first piece is just $(w-1)w^2$ since $y,z,t,w$ have become affine coordinates and $y,t,w$ can be chosen freely, with $y\neq 0$. The second piece consists of three planes in $\mathbb{A}^3$ intersecting in a line and has Hodge-Deligne polynomial $3(w^2-w)+w$, and the third piece consists of three planes in $\mathbb{P}^3$ intersecting in a line, with contribution $3w^2+w+1$. Thus $H(X)=w^3+5w^2-w+1$ and $H(X_{ns})=w^3+5w^2-w-2$. It follows that the stringy $E$-function of $X$ is equal to $w^3+5w^2+5w+1$ and that the stringy Hodge numbers of $X$ are nonnegative.\\ \noindent \textbf{Acknowledgement:} I wish to thank Professor J. Steenbrink for his contribution to the proof of Theorem 5.2.\\ \end{document}
arXiv
Consider the following expression grammar. The semantic rules for expression evaluation are stated next to each grammar production. Assume that the conflicts of this grammar are resolved using the yacc tool and that an LALR(1) parser is generated for parsing arithmetic expressions as per the given grammar. Consider the expression $3 \times 2 + 1$. What precedence and associativity properties does the generated parser realize?

The question asks: what precedence and associativity properties does the generated parser realize? According to me, + has higher precedence than *: since yacc prefers shift over reduce, it performs 2 + 1 first and then multiplies, so how can + and * have the same precedence?

@sushmita An LALR parser is a shift-reduce parser, so it uses (reverse) rightmost derivation; that is why we get right associativity, because we reduce from the right. All the productions are at the same level, therefore all operators have the same precedence. (If a grammar gives the same precedence to two different operators such as + and *, then it is an ambiguous grammar.)

Yes, I checked that. Because on reading "num" we go to the state that reduces E -> num, and there are no conflicts; is this correct?

Doubt: 3 is the lookahead, then you applied E -> 3 and reduced it to E; after that, why is it E -> 3 *? Does the stack contain only E * because E -> 3 has already been reduced? Also, is E -> 3 reduced because there is no SR conflict on reading num?

Yes, the reduction E -> 3 takes place because there is no SR conflict in the state where this reduction is done; otherwise the shift move would be taken over the reduce by yacc.

Can someone please explain how the preference of shift over reduce leads to right associativity?

I have built the parsing table from the DFA; with it the input can be parsed to check what happens when shift is favoured over reduce. Equal precedence and right associativity are observed.

@Ayush Upadhyaya I am at state 0 and looking at 3; what shall I do according to your table? There is no reduce move from E -> num.

None of the options is correct here. The answer has to be "precedence of + is higher than *" and "both * and + are right associative".

Although your answer explains that, in certain cases, when the grammar poses an RR conflict the grammar rule which comes first gets priority, it is still misleading as far as this grammar and question are concerned. Since the case that you are mentioning here (the RR-conflict priority decision) does not concern the grammar given in the question, I think the option you favour, that "precedence of + is higher than *", is simply wrong. Also, after reading your answer, I feel that you do not clearly distinguish the precedence and associativity concepts. Always remember: precedence is established first. Once the precedence rules are established, and two operators with equal precedence come one after another (the operators may be the same or different), only then do we apply the associativity rule. In the last part of your answer you are using the associativity rule to derive the precedence of + and ×, which I find to be a source of the confusion you might have regarding associativity and precedence. Nice catch, though, regarding the RR conflict.
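A sketch of how the shift preference plays out on 3 * 2 + 1, assuming the usual ambiguous grammar E -> E + E | E * E | num (the production names are an assumption here, since the original grammar is not reproduced above), and using yacc's documented default of resolving a shift/reduce conflict in favour of shift:

stack          remaining input    action
$              3 * 2 + 1 $        shift 3
$ 3            * 2 + 1 $          reduce E -> num
$ E            * 2 + 1 $          shift *
$ E *          2 + 1 $            shift 2
$ E * 2        + 1 $              reduce E -> num
$ E * E        + 1 $              shift/reduce conflict; yacc shifts +
$ E * E +      1 $                shift 1
$ E * E + 1    $                  reduce E -> num
$ E * E + E    $                  reduce E -> E + E
$ E * E        $                  reduce E -> E * E
$ E            $                  accept

Because the conflict after E * E is resolved as a shift, the + is reduced before the *, so 3 * 2 + 1 is parsed as 3 * (2 + 1). The same argument applied to 3 + 2 * 1 gives 3 + (2 * 1). Taken together, this behaves like equal precedence for + and * with right associativity, which is what the parsing-table check above observed.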
CommonCrawl
Combinations of non-invasive indicators to detect dairy cows submitted to high-starch-diet challenge C. Villot, C. Martin, J. Bodin, D. Durand, B. Graulet, A. Ferlay, M.M. Mialon, E. Trevisi, M. Silberberg Journal: animal / Volume 14 / Issue 2 / February 2020 Published online by Cambridge University Press: 16 July 2019, pp. 388-398 Print publication: February 2020 High-starch diets (HSDs) fed to high-producing ruminants are often responsible for rumen dysfunction and could impair animal health and production. Feeding HSDs are often characterized by transient rumen pH depression, accurate monitoring of which requires costly or invasive methods. Numerous clinical signs can be followed to monitor such diet changes but no specific indicator is able to make a statement at animal level on-farm. The aim of this pilot study was to assess a combination of non-invasive indicators in dairy cows able to monitor a HSD in experimental conditions. A longitudinal study was conducted in 11 primiparous dairy cows fed with two different diets during three successive periods: a 4-week control period (P1) with a low-starch diet (LSD; 13% starch), a 4-week period with an HSD (P2, 35% starch) and a 3-week recovery period (P3) again with the LSD. Animal behaviour was monitored throughout the experiment, and faeces, urine, saliva, milk and blood were sampled simultaneously in each animal at least once a week for analysis. A total of 136 variables were screened by successive statistical approaches including: partial least squares-discriminant analysis, multivariate analysis and mixed-effect models. Finally, 16 indicators were selected as the most representative of a HSD challenge. A generalized linear mixed model analysis was applied to highlight parsimonious combinations of indicators able to identify animals under our experimental conditions. Eighteen models were established and the combination of milk urea nitrogen, blood bicarbonate and feed intake was the best to detect the different periods of the challenge with both 100% of specificity and sensitivity.
Other indicators such as the number of drinking acts, fat:protein ratio in milk, urine, and faecal pH, were the most frequently used in the proposed models. Finally, the established models highlight the necessity for animals to have more than 1 week of recovery diet to return to their initial control state after a HSD challenge. This pilot study demonstrates the interest of using combinations of non-invasive indicators to monitor feed changes from a LSD to a HSD to dairy cows in order to improve prevention of rumen dysfunction on-farm. However, the adjustment and robustness of the proposed combinations of indicators need to be challenged using a greater number of animals as well as different acidogenic conditions before being applied on-farm. 107 - Cardiovascular Disease from Section 2 - Medical Conditions and Symptoms By Gerard J. Molloy, Hannah Durand, Eimear C. Morrissey Edited by Carrie D. Llewellyn, University of Sussex, Susan Ayers, City, University of London, Chris McManus, University College London, Stanton Newman, City, University of London, Keith J. Petrie, University of Auckland, Tracey A. Revenson, City University of New York, John Weinman, King's College London Book: Cambridge Handbook of Psychology, Health and Medicine Published online: 05 June 2019 Print publication: 16 May 2019, pp 451-454 Optimal location of set-aside areas to reduce nitrogen pollution: a modelling study L. Casal, P. Durand, N. Akkal-Corfini, C. Benhamou, F. Laurent, J. Salmon-Monviola, F. Vertès Journal: The Journal of Agricultural Science / Volume 156 / Issue 9 / November 2018 Published online by Cambridge University Press: 18 January 2019, pp. 1090-1102 Print publication: November 2018 Distributed models and a good knowledge of the catchment studied are required to assess mitigation measures for nitrogen (N) pollution. A set of alternative scenarios (change of crop management practices and different strategies of landscape management, especially different sizes and distribution of set-aside areas) were simulated with a fully distributed model in a small agricultural catchment. The results show that current practices are close to complying with current regulations, which results in a limited effect of the implementation of best crop management practices. The location of set-aside zones is more important than their size in decreasing nitrate fluxes in stream water. The most efficient location is the lower parts of hillslopes, combining the dilution effect due to the decrease of N input per unit of land and the interception of nitrate transferred by sub-surface flows. The main process responsible for the interception effect is probably uptake by grassland and retention in soils since the denitrification load tends to decrease proportionally to N input and, for the scenarios considered, is lower in the interception scenarios than in the corresponding dilution zones. The Neolithic Transition in the Western Mediterranean: a Complex and Non-Linear Diffusion Process—The Radiocarbon Record Revisited C Manen, T Perrin, J Guilaine, L Bouby, S Bréhard, F Briois, F Durand, P Marinval, J-D Vigne Journal: Radiocarbon / Volume 61 / Issue 2 / April 2019 Published online by Cambridge University Press: 31 October 2018, pp. 531-571 Print publication: April 2019 The Neolithic transition is a particularly favorable field of research for the study of the emergence and evolution of cultures and cultural phenomena. 
In this framework, high-precision chronologies are essential for decrypting the rhythms of emergence of new techno-economic traits. As part of a project exploring the conditions underlying the emergence and dynamics of the development of the first agro-pastoral societies in the Western Mediterranean, this paper proposes a new chronological modeling. Based on 45 new radiocarbon (14C) dates and on a Bayesian statistical framework, this work examines the rhythms and dispersal paths of the Neolithic economy both on coastal and continental areas. These new data highlight a complex and far less unidirectional dissemination process than that envisaged so far. What Do Clinical Supervisors Require to Teach Residents in Family Medicine How to Care for Seniors? Anik M. C. Giguere, Paule Lebel, Michèle Morin, Françoise Proust, Charo Rodríguez, Valerie Carnovale, Louise Champagne, France Légaré, Pierre-Hugues Carmichael, Bernard Martineau, Philippe Karazivan, Pierre J. Durand Journal: Canadian Journal on Aging / La Revue canadienne du vieillissement / Volume 37 / Issue 1 / March 2018 Published online by Cambridge University Press: 09 January 2018, pp. 32-49 Print publication: March 2018 We assessed clinicians' continuing professional development (CPD) needs at family practice teaching clinics in the province of Quebec. Our mixed methodology design comprised an environmental scan of training programs at four family medicine departments, an expert panel to determine priority clinical situations for senior care, a supervisors survey to assess their perceived CPD needs, and interviews to help understand the rationale behind their needs. From the environmental scan, the expert panel selected 13 priority situations. Key needs expressed by the 352 survey respondents (36% response rate) included behavioral and psychological symptoms of dementia, polypharmacy, depression, and cognitive disorders. Supervisors explained that these situations were sometimes complex to diagnose and manage because of psychosocial aspects, challenges of communicating with patients and families, and coordination of interprofessional teams. Supervisors also reported more CPD needs in long-term and home care, given the presence of caregivers and complexity of senior care in these settings. Strengthened Ebola surveillance in France during a major outbreak in West Africa: March 2014–January 2016 A. MAILLES, H. NOEL, D. PANNETIER, C. RAPP, Y. YAZDANPANAH, S. VANDENTORREN, P. CHAUD, J. M. PHILIPPE, B. WORMS, M. BRUYAND, M. TOURDJMAN, M. NAHON, E. BELCHIOR, E. LUCAS, J. DURAND, M. ZURBARAN, S. VAUX, B. COIGNARD, H. DE VALK, S. BAIZE, S. QUELET, F. BOURDILLON Journal: Epidemiology & Infection / Volume 145 / Issue 16 / December 2017 Published online by Cambridge University Press: 23 November 2017, pp. 3455-3467 An unprecedented outbreak of Ebola virus diseases (EVD) occurred in West Africa from March 2014 to January 2016. The French Institute for Public Health implemented strengthened surveillance to early identify any imported case and avoid secondary cases. Febrile travellers returning from an affected country had to report to the national emergency healthcare hotline. Patients reporting at-risk exposures and fever during the 21st following day from the last at-risk exposure were defined as possible cases, hospitalised in isolation and tested by real-time polymerase chain reaction. 
Asymptomatic travellers reporting at-risk exposures were considered as contact and included in a follow-up protocol until the 21st day after the last at-risk exposure. From March 2014 to January 2016, 1087 patients were notified: 1053 were immediately excluded because they did not match the notification criteria or did not have at-risk exposures; 34 possible cases were tested and excluded following a reliable negative result. Two confirmed cases diagnosed in West Africa were evacuated to France under stringent isolation conditions. Patients returning from Guinea (n = 531; 49%) and Mali (n = 113; 10%) accounted for the highest number of notifications. No imported case of EVD was detected in France. We are confident that our surveillance system was able to classify patients properly during the outbreak period. Evidence of positive selection towards Zebuine haplotypes in the BoLA region of Brangus cattle D. E. Goszczynski, C. M. Corbi-Botto, H. M. Durand, A. Rogberg-Muñoz, S. Munilla, P. Peral-Garcia, R. J. C. Cantet, G. Giovambattista The Brangus breed was developed to combine the superior characteristics of both of its founder breeds, Angus and Brahman. It combines the high adaptability to tropical and subtropical environments, disease resistance, and overall hardiness of Zebu cattle with the reproductive potential and carcass quality of Angus. It is known that the major histocompatibility complex (MHC, also known as bovine leucocyte antigen: BoLA), located on chromosome 23, encodes several genes involved in the adaptive immune response and may be responsible for adaptation to harsh environments. The objective of this work was to evaluate whether the local breed ancestry percentages in the BoLA locus of a Brangus population diverged from the estimated genome-wide proportions and to identify signatures of positive selection in this genomic region. For this, 167 animals (100 Brangus, 45 Angus and 22 Brahman) were genotyped using a high-density single nucleotide polymorphism array. The local ancestry analysis showed that more than half of the haplotypes (55.0%) shared a Brahman origin. This value was significantly different from the global genome-wide proportion estimated by cluster analysis (34.7% Brahman), and the proportion expected by pedigree (37.5% Brahman). The analysis of selection signatures by genetic differentiation (F st ) and extended haplotype homozygosity-based methods (iHS and Rsb) revealed 10 and seven candidate regions, respectively. The analysis of the genes located within these candidate regions showed mainly genes involved in immune response-related pathway, while other genes and pathways were also observed (cell surface signalling pathways, membrane proteins and ion-binding proteins). Our results suggest that the BoLA region of Brangus cattle may have been enriched with Brahman haplotypes as a consequence of selection processes to promote adaptation to subtropical environments. Modelling the interplay between nitrogen cycling processes and mitigation options in farming catchments P. DURAND, P. MOREAU, J. SALMON-MONVIOLA, L. RUIZ, F. VERTES, C. GASCUEL-ODOUX Journal: The Journal of Agricultural Science / Volume 153 / Issue 6 / August 2015 Published online by Cambridge University Press: 09 June 2015, pp. 959-974 Quantitative assessment of mitigation measures for nitrogen (N) pollution requires adequate models, good knowledge of catchment functioning and a thorough understanding of agricultural systems and stakeholder constraints. 
The current paper analyses a set of results from simulations, with two models, of agricultural changes in two catchments in different contexts with different constraints. The results show that reducing N inputs and increasing grassland areas are the most efficient measures, not only because they reduce N fluxes in streams but also because they enhance N use by agriculture and the whole catchment system. Introducing catch crops, hedgerows and riparian buffers are interesting complementary measures but of limited impact when implemented alone. These results are sensitive to the way mitigation measures are translated into model inputs, and their operational implications are discussed. Effect of inclination on the transition scenario in the wake of fixed disks and flat cylinders M. Chrust, C. Dauteuille, T. Bobinski, J. Rokicki, S. Goujon-Durand, J. E. Wesfreid, G. Bouchet, J. Dušek Journal: Journal of Fluid Mechanics / Volume 770 / 10 May 2015 Published online by Cambridge University Press: 30 March 2015, pp. 189-209 Print publication: 10 May 2015 We take up the old problem of Calvert (J. Fluid Mech., vol. 29, 1967, pp. 691–703) concerning the wake of a cylinder inclined with respect to the flow direction, and consider it from the viewpoint of transition to turbulence. For cylinders placed perpendicular to the flow direction, we address the disagreement between numerical simulation of the ideal axisymmetric configuration and experimental observations. We demonstrate that for a disk (a cylinder of aspect ratio infinity) and a flat cylinder of aspect ratio ${\it\chi}=6$ (ratio of diameter to height), the numerically predicted transition scenario is limited to very small inclination angles and is thus difficult to test experimentally. For inclination angles of about $4^{\circ }$ and more, a joint numerical and experimental study shows that the experimentally observed scenario agrees qualitatively well with the results of numerical simulations. For the flat cylinder ${\it\chi}=6$ , we obtain satisfactory agreement with regard to dependence of the critical Reynolds number ( $\mathit{Re}$ ) of the onset of vortex shedding on the inclination angle. Both for infinitely flat disks and cylinders of aspect ratio ${\it\chi}=6$ , a small inclination tends to promote vortex shedding, that is, to lower the instability threshold, whereas for inclination angles exceeding $20^{\circ }$ the opposite effect is exhibited. The Strouhal number of oscillations is found to be only very weakly dependent on the Reynolds number, and very good agreement is obtained between values reported by Calvert (J. Fluid Mech., vol. 29, 1967, pp. 691–703) at high Reynolds numbers and our simulations at $\mathit{Re}=250$ . In contrast, we observe relatively poor agreement in Strouhal numbers when comparing the results of our numerical simulations and the data acquired from the experimental set-up described in this paper. Closer analysis shows that confidence can be placed in the numerical results because the discrepancy can be attributed to the influence of the support system of the flat cylinder. Suggestions for improvement of the experimental set-up are provided. Contribution of livestock farming systems to the nitrogen cascade and consequences for farming regions P. Cellier, P. Rochette, P. Durand, P. Faverdin, P. J. Kuikman, J.-L. Peyraud Journal: Advances in Animal Biosciences / Volume 5 / Issue s1 / October 2014 Published online by Cambridge University Press: 25 September 2014, pp. 
8-19 This article describes the nitrogen flows in the environment and points to the specificities of livestock production. Until the beginning of the 20th century, the symbiotic fixation and the recycling of animal excreta supplied the nitrogen necessary for soil fertility. In 1913, the Haber-Bosch process allowed the industrial synthesis of ammonia and made fertilisation possible without associating crop production with livestock farming. The efficiency of nitrogen in livestock farming is low, with nearly half or more of the inputs lost to the environment. These losses have diverse impacts that intervene at various spatial scales owing to the nitrogen cascade. Quantitative assessment of nitrogen flows at the scale of regions started in the early 1980s in Western Europe and North America. These studies provided estimates of the spatial variability of nitrogen discharge within a region. They confirmed the differences between areas with a high animal density such as Brittany (western region, France) and other regions. It was also found that the same nitrogenous losses could lead to different levels of environmental impacts according to the sensitivity of a given environment and its capacity to cope with nitrogen excess. Climate, soil characteristics, animal density, and proportions of agricultural land under annual and perennial crops are drivers of this sensitivity. Origin, quantities and fate of nitrogen flows associated with animal production L. Delaby, J.-Y. Dourmad, F. Béline, P. Lescoat, P. Faverdin, J.-L. Fiorelli, F. Vertès, P. Veysset, T. Morvan, V. Parnaudeau, P. Durand, P. Rochette, J.-L. Peyraud Published online by Cambridge University Press: 25 September 2014, pp. 28-48 Nitrogen efficiency is the ratio between the output of nitrogen in the animal products and the input required for livestock production. This ratio is a driver of economic profitability and can be calculated at various levels of the production system: animal, field or farm. Calculated at the scale of the animal, it is generally low, with less than half of the ingested nitrogen remaining in the milk, the eggs or the meat in the form of proteins; the major part of the nitrogen is released into the environment. Significant gains were achieved in the past via genetic improvement and the adjustment of feed supply. At the farm level, the efficiency increases to 45% to 50%, thanks to the recycling of animal excreta as fertilisers. From excretion to land application of manure, the losses of nitrogen are very variable depending on the animal species and the manure management system. Considering the risks of pollution swapping, all management and handling steps need to be considered. Collective initiatives or local rules on agricultural practices open up new opportunities to restore nitrogen balances within local territories. Nitrogen flows and livestock farming: lessons and perspectives J.-L. Peyraud, P. Cellier, F. Aarts, F. Béline, C. Bockstaller, M. Bourblanc, L. Delaby, J. Y. Dourmad, P. Dupraz, P. Durand, P. Faverdin, J. L. Fiorelli, C. Gaigné, P.J. Kuikman, A. Langlais, P. Le Goffe, P. Lescoat, T. Morvan, C. Nicourt, V. Parnaudeau, P. Rochette, F. Vertès, P. Veysset, O. Réchauchère, C. Donnars Experimental investigation of flow behind a cube for moderate Reynolds numbers L. Klotz, S. Goujon-Durand, J. Rokicki, J. E. Wesfreid Journal: Journal of Fluid Mechanics / Volume 750 / 10 July 2014 Published online by Cambridge University Press: 30 May 2014, pp.
73-98 Print publication: 10 July 2014 The wake behind a cube with a face normal to the flow was investigated experimentally in a water tunnel using laser induced fluorescence (LIF) visualisation and particle image velocimetry (PIV) techniques. Measurements were carried out for moderate Reynolds numbers between 100 and 400 and in this range a sequence of two flow bifurcations was confirmed. Values for both onsets were determined in the framework of Landau's instability model. The measured longitudinal vorticity was separated into three components corresponding to each of the identified regimes. It was shown that the vorticity associated with a basic flow regime originates from corners of the bluff body, in contrast to the two other contributions which are related to instability effects. The present experimental results are compared with numerical simulation carried out earlier by Saha (Phys. Fluids, vol. 16, 2004, pp. 1630–1646). Contributor affiliations By Frank Andrasik, Melissa R. Andrews, Ana Inés Ansaldo, Evangelos G. Antzoulatos, Lianhua Bai, Ellen Barrett, Linamara Battistella, Nicolas Bayle, Michael S. Beattie, Peter J. Beek, Serafin Beer, Heinrich Binder, Claire Bindschaedler, Sarah Blanton, Tasia Bobish, Michael L. Boninger, Joseph F. Bonner, Chadwick B. Boulay, Vanessa S. Boyce, Anna-Katharine Brem, Jacqueline C. Bresnahan, Floor E. Buma, Mary Bartlett Bunge, John H. Byrne, Jeffrey R. Capadona, Stefano F. Cappa, Diana D. Cardenas, Leeanne M. Carey, S. Thomas Carmichael, Glauco A. P. Caurin, Pablo Celnik, Kimberly M. Christian, Stephanie Clarke, Leonardo G. Cohen, Adriana B. Conforto, Rory A. Cooper, Rosemarie Cooper, Steven C. Cramer, Armin Curt, Mark D'Esposito, Matthew B. Dalva, Gavriel David, Brandon Delia, Wenbin Deng, Volker Dietz, Bruce H. Dobkin, Marco Domeniconi, Edith Durand, Tracey Vause Earland, Georg Ebersbach, Jonathan J. Evans, James W. Fawcett, Uri Feintuch, Toby A. Ferguson, Marie T. Filbin, Diasinou Fioravante, Itzhak Fischer, Agnes Floel, Herta Flor, Karim Fouad, Richard S. J. Frackowiak, Peter H. Gorman, Thomas W. Gould, Jean-Michel Gracies, Amparo Gutierrez, Kurt Haas, C.D. Hall, Hans-Peter Hartung, Zhigang He, Jordan Hecker, Susan J. Herdman, Seth Herman, Leigh R. Hochberg, Ahmet Höke, Fay B. Horak, Jared C. Horvath, Richard L. Huganir, Friedhelm C. Hummel, Beata Jarosiewicz, Frances E. Jensen, Michael Jöbges, Larry M. Jordan, Jon H. Kaas, Andres M. Kanner, Noomi Katz, Matthew S. Kayser, Annmarie Kelleher, Gerd Kempermann, Timothy E. Kennedy, Jürg Kesselring, Fary Khan, Rachel Kizony, Jeffery D. Kocsis, Boudewijn J. Kollen, Hubertus Köller, John W. Krakauer, Hermano I. Krebs, Gert Kwakkel, Bradley Lang, Catherine E. Lang, Helmar C. Lehmann, Angelo C. Lepore, Glenn S. Le Prell, Mindy F. Levin, Joel M. Levine, David A. Low, Marilyn MacKay-Lyons, Jeffrey D. Macklis, Margaret Mak, Francine Malouin, William C. Mann, Paul D. Marasco, Christopher J. Mathias, Laura McClure, Jan Mehrholz, Lorne M. Mendell, Robert H. Miller, Carol Milligan, Beth Mineo, Simon W. Moore, Jennifer Morgan, Charbel E-H. Moussa, Martin Munz, Randolph J. Nudo, Joseph J. Pancrazio, Theresa Pape, Alvaro Pascual-Leone, Kristin M. Pearson-Fuhrhop, P. Hunter Peckham, Tamara L. Pelleshi, Catherine Verrier Piersol, Thomas Platz, Marcus Pohl, Dejan B. Popović, Andrew M. Poulos, Maulik Purohit, Hui-Xin Qi, Debbie Rand, Mahendra S. Rao, Josef P. Rauschecker, Aimee Reiss, Carol L. Richards, Keith M. Robinson, Melvyn Roerdink, John C. Rosenbek, Serge Rossignol, Edward S. 
Ruthazer, Arash Sahraie, Krishnankutty Sathian, Marc H. Schieber, Brian J. Schmidt, Michael E. Selzer, Mijail D. Serruya, Himanshu Sharma, Michael Shifman, Jerry Silver, Thomas Sinkjær, George M. Smith, Young-Jin Son, Tim Spencer, John D. Steeves, Oswald Steward, Sheela Stuart, Austin J. Sumner, Chin Lik Tan, Robert W. Teasell, Gareth Thomas, Aiko K. Thompson, Richard F. Thompson, Wesley J. Thompson, Erika Timar, Ceri T. Trevethan, Christopher Trimby, Gary R. Turner, Mark H. Tuszynski, Erna A. van Niekerk, Ricardo Viana, Difei Wang, Anthony B. Ward, Nick S. Ward, Stephen G. Waxman, Patrice L. Weiss, Jörg Wissel, Steven L. Wolf, Jonathan R. Wolpaw, Sharon Wood-Dauphinee, Ross D. Zafonte, Binhai Zheng, Richard D. Zorowitz Edited by Michael Selzer, Stephanie Clarke, Leonardo Cohen, Gert Kwakkel, Robert Miller, Case Western Reserve University, Ohio Book: Textbook of Neural Repair and Rehabilitation Print publication: 24 April 2014, pp ix-xvi Edited by Michael E. Selzer, Stephanie Clarke, Leonardo G. Cohen, Gert Kwakkel, Robert H. Miller, Case Western Reserve University, Ohio First record of Osteomugil perusii (Teleostei: Mugilidae) in Indian waters M. Ashiq Ur Rahman, S. Ajmal Khan, P.S. Lyla, J.-D. Durand Journal: Marine Biodiversity Records / Volume 7 / 2014 Published online by Cambridge University Press: 03 March 2014, e15 The long-finned mullet, Osteomugil perusii was caught along the eastern and western coasts of India. The finding of 12 specimens extends the current knowledge about the distribution of this mullet in the western Indo-Pacific to the Indian coast. Increased incidence of Campylobacter jejuni-associated Guillain–Barré syndromes in the Greater Paris area V. SIVADON-TARDY, R. PORCHER, D. ORLIKOWSKI, E. RONCO, E. GAULT, J. ROUSSI, M.-C. DURAND, T. SHARSHAR, D. ANNANE, J.-C. RAPHAEL, F. MEGRAUD, J.-L. GAILLARD Journal: Epidemiology & Infection / Volume 142 / Issue 8 / August 2014 Published online by Cambridge University Press: 10 October 2013, pp. 1609-1613 The role of Campylobacter jejuni as the triggering agent of Guillain–Barré syndrome (GBS) has not been reassessed since the end of the 1990s in France. We report that the number of C. jejuni-related GBS cases increased continuously between 1996 and 2007 in the Paris region (mean annual increment: 7%, P = 0·007). The Science Case for PILOT I: Summary and Overview J. S. Lawrence, M. C. B. Ashley, J. Bailey, D. Barrado y Navascues, T. R. Bedding, J. Bland-Hawthorn, I. Bond, F. Boulanger, R. Bouwens, H. Bruntt, A. Bunker, D. Burgarella, M. G. Burton, M. Busso, D. Coward, M.-R. Cioni, G. Durand, C. Eiroa, N. Epchtein, N. Gehrels, P. Gillingham, K. Glazebrook, R. Haynes, L. Kiss, P. O. Lagage, T. Le Bertre, C. Mackay, J. P. Maillard, A. McGrath, V. Minier, A. Mora, K. Olsen, P. Persi, K. Pimbblet, R. Quimby, W. Saunders, B. Schmidt, D. Stello, J. W. V. Storey, C. Tinney, P. Tremblin, J. C. Wheeler, P. Yock Journal: Publications of the Astronomical Society of Australia / Volume 26 / Issue 4 / 2009 PILOT (the Pathfinder for an International Large Optical Telescope) is a proposed 2.5-m optical/infrared telescope to be located at Dome C on the Antarctic plateau. Conditions at Dome C are known to be exceptional for astronomy. The seeing (above ∼30 m height), coherence time, and isoplanatic angle are all twice as good as at typical mid-latitude sites, while the water-vapour column, and the atmosphere and telescope thermal emission are all an order of magnitude better. 
These conditions enable a unique scientific capability for PILOT, which is addressed in this series of papers. The current paper presents an overview of the optical and instrumentation suite for PILOT and its expected performance, a summary of the key science goals and observational approach for the facility, a discussion of the synergies between the science goals for PILOT and other telescopes, and a discussion of the future of Antarctic astronomy. Paper II and Paper III present details of the science projects divided, respectively, between the distant Universe (i.e. studies of first light, and the assembly and evolution of structure) and the nearby Universe (i.e. studies of Local Group galaxies, the Milky Way, and the Solar System). Grounding-line migration in plan-view marine ice-sheet models: results of the ice2sea MISMIP3d intercomparison Frank Pattyn, Laura Perichon, Gaël Durand, Lionel Favier, Olivier Gagliardini, Richard C.A. Hindmarsh, Thomas Zwinger, Torsten Albrecht, Stephen Cornford, David Docquier, Johannes J. Fürst, Daniel Goldberg, G. Hilmar Gudmundsson, Angelika Humbert, Moritz Hütten, Philippe Huybrechts, Guillaume Jouvet, Thomas Kleiner, Eric Larour, Daniel Martin, Mathieu Morlighem, Anthony J. Payne, David Pollard, Martin Rückamp, Oleg Rybak, Hélène Seroussi, Malte Thoma, Nina Wilkens Journal: Journal of Glaciology / Volume 59 / Issue 215 / 2013 Predictions of marine ice-sheet behaviour require models able to simulate grounding-line migration. We present results of an intercomparison experiment for plan-view marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no buttressing effects from lateral drag). Perturbation experiments specifying spatial variation in basal sliding parameters permitted the evolution of curved grounding lines, generating buttressing effects. The experiments showed regions of compression and extensional flow across the grounding line, thereby invalidating the boundary layer theory. Steady-state grounding-line positions were found to be dependent on the level of physical model approximation. Resolving grounding lines requires inclusion of membrane stresses, a sufficiently small grid size (<500 m), or subgrid interpolation of the grounding line. The latter still requires nominal grid sizes of <5 km. For larger grid spacings, appropriate parameterizations for ice flux may be imposed at the grounding line, but the short-time transient behaviour is then incorrect and different from models that do not incorporate grounding-line parameterizations. The numerical error associated with predicting grounding-line motion can be reduced significantly below the errors associated with parameter ignorance and uncertainties in future scenarios. A worldwide comparison of the best sites for submillimetre astronomy P. Tremblin, N. Schneider, V. Minier, G. Al. Durand, J. Urban Journal: Proceedings of the International Astronomical Union / Volume 8 / Issue S288 / August 2012 Over the past few years a major effort has been put into the exploration of potential sites for the deployment of submillimetre (submm) astronomical facilities. Amongst the most important sites are Dome C and Dome A on the Antarctic Plateau, and the Chajnantor area in Chile. In this context, we report on measurements of the sky opacity at 200 μm over a period of three years at the French-Italian station, Concordia, at Dome C, Antarctica. 
Based on satellite data, we present a comparison of the atmospheric transmission at 200 and 350 μm between the best potential and known sites for submillimetre astronomy around the world. The precipitable water vapour (PWV) was extracted from measurements by the Infrared Atmospheric Sounding Interferometer (IASI) on the METOP-A satellite, between 2008 and 2010. We computed the atmospheric transmission at 200 μm and 350 μm using the forward atmospheric model MOLIERE (Microwave Observation LIne Estimation and REtrieval). This method allows us to compare known sites around the world without the calibration biases of multiple in-situ instruments, and to explore the potential of new sites.
CommonCrawl
\begin{definition}[Definition:Inverse Matrix] Let $n \in \Z_{>0}$ be a (strictly) positive integer. Let $\mathbf A$ be a square matrix of order $n$. Let there exist a square matrix $\mathbf B$ of order $n$ such that: :$\mathbf A \mathbf B = \mathbf I_n = \mathbf B \mathbf A$ where $\mathbf I_n$ denotes the unit matrix of order $n$. Then $\mathbf B$ is called the '''inverse of $\mathbf A$''' and is usually denoted $\mathbf A^{-1}$. \end{definition}
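A worked example (added here for illustration; it is not part of the ProofWiki entry above): for $n = 2$, take
:$\mathbf A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad \det \mathbf A = 1 \cdot 4 - 2 \cdot 3 = -2 \ne 0$
Then:
:$\mathbf B = \dfrac 1 {-2} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$
satisfies $\mathbf A \mathbf B = \mathbf I_2 = \mathbf B \mathbf A$, so $\mathbf B$ is the inverse $\mathbf A^{-1}$.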
ProofWiki
Protein–ligand pose and affinity prediction: Lessons from D3R Grand Challenge 3 Panagiotis I. Koukos1, Li C. Xue1 & Alexandre M. J. J. Bonvin ORCID: orcid.org/0000-0001-7369-13221 Journal of Computer-Aided Molecular Design volume 33, pages 83–91 (2019)Cite this article We report the performance of HADDOCK in the 2018 iteration of the Grand Challenge organised by the D3R consortium. Building on the findings of our participation in last year's challenge, we significantly improved our pose prediction protocol which resulted in a mean RMSD for the top scoring pose of 3.04 and 2.67 Å for the cross-docking and self-docking experiments respectively, which corresponds to an overall success rate of 63% and 71% when considering the top1 and top5 models respectively. This performance ranks HADDOCK as the 6th and 3rd best performing group (excluding multiple submissions from a same group) out of a total of 44 and 47 submissions respectively. Our ligand-based binding affinity predictor is the 3rd best predictor overall, behind only the two leading structure-based implementations, and the best ligand-based one with a Kendall's Tau correlation of 0.36 for the Cathepsin challenge. It also performed well in the classification part of the Kinase challenges, with Matthews Correlation Coefficients of 0.49 (ranked 1st), 0.39 (ranked 4th) and 0.21 (ranked 4th) for the JAK2, vEGFR2 and p38a targets respectively. Through our participation in last year's competition we came to the conclusion that template selection is of critical importance for the successful outcome of the docking. This year we have made improvements in two additional areas of importance: ligand conformer selection and initial positioning, which have been key to our excellent pose prediction performance this year. The Drug Design Data Resource (D3R) Grand Challenge (GC) of 2018 is the third iteration of the major docking competition organised by the D3R consortium [1, 2] and similarly to previous years, it has two goals. The first, is the assessment of the ability of docking algorithms to accurately predict the binding poses of a protein against a diverse set of small molecules, and the second, the evaluation of the performance of binding affinity prediction algorithms. The protein which is the focus of the pose prediction assessment is Cathepsin S—a member of the Cathepsin family. Cathepsins are proteases that are classified in three groups depending on the makeup of their catalytic site, with Cathepsin S being a member of the most populated group—cysteine proteases [3]. Its involvement in MHC class II antigen presentation is well established. Given that role, it should come as no surprise that it has been implicated in many pathological conditions such as cancer and diabetes. More recently it has been investigated for its role in pain perception [4] and cardiovascular and kidney [5] disease. It has long held an interest for the pharmaceutical industry [6] as evidenced by the plethora (more than 50 at time of writing) of human Cathepsin S structures with a bound ligand, that have been deposited in the Protein Data Bank (PDB) [7] over a time period that spans 15 years. In addition to the Cathepsin S-centric assessment, which also includes a binding affinity prediction component, binding affinity prediction approaches are evaluated in four subchallenges that focus on kinases. Kinases catalyse the process of phosphorylation through which a phosphate group is covalently bound to a protein substrate. 
Their role in cell signalling has been well understood for decades and they are involved in many aspects of cell differentiation and growth [8]. They are a primary target for cancer-related drug development [9]. Through our participation in last year's GC [10] we came to the conclusion that template selection is of critical importance for the successful outcome of the docking. This year we have made improvements in two additional areas of importance: ligand conformer selection and initial positioning. The impact of this is reflected in our improved performance in GC3, the results of which are presented and discussed here. HADDOCK (High Ambiguity Driven DOCKing) is our information-driven docking platform [11, 12]. For an introduction to HADDOCK and small molecule docking please review the contribution we made to last year's special issue on the D3R GC [10]. The main conclusion from our participation in last year's competition was that protein template selection is of crucial importance for the successful outcome of the docking. We used the protocol we came up with last year to select protein templates for this year's competition as well. We made improvements to the ligand conformer selection and placement protocols. Similar to last year, all new and untested parts of the protocol were benchmarked on existing protein–ligand complexes extracted from the PDB. In a departure from previous years, this year's competition is further divided in five subchallenges. Subchallenge 1 is the equivalent of the GC of previous competitions and has a pose and binding affinity prediction component. Subchallenges 2–5 only have a binding affinity component. We participated in subchallenges 1 and 2. Subchallenge 1 This challenge focused on Cathepsin S. For the first part of the challenge—pose prediction—we had to predict the binding pose of Cathepsin S against a set of 24 small molecules that were known to bind to it. There is a cross-docking stage, during which the structures of the target proteins are not known and a self-docking stage for which the bound protein structures—but not those of the compounds—are known. The organisers provided us initially with SMILES strings for the small molecules and the FASTA sequence of the protein, and for the self-docking stage with the coordinates of the bound receptor for each ligand. Additionally, two publicly available structures of the protein with a dimethylsulfoxide (DMSO) molecule and a sulfate ion (SO4) placed in the binding pocket were circulated to the participants because the aforementioned molecules were detected in some of the crystal structures. For the binding affinity prediction component of the challenge we had to rank the binding affinities of 136 compounds against the protein. Protein template selection This part of the protocol, as well as the reasoning behind it, are described in greater detail in our previous work and so will only be covered briefly. Using the provided FASTA sequence, we identified structures of Cathepsin S that had been deposited in the PDB. We filtered the results and kept only those structures where the protein was complexed with a non-covalently bound ligand, thus identifying 36 templates. We then proceeded to compare the crystallographic ligands to the target compounds using as a similarity measure the Tanimoto distance, as implemented in the fmcsR and chemmineR packages [13, 14]. In this way, we selected one protein template for each of the 24 target compounds, by identifying the template with the highest similarity ligand. 
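To make this template-selection step concrete, the sketch below re-implements it schematically. It is not the authors' code: the paper used the R packages fmcsR and chemmineR, whereas the sketch assumes RDKit as a stand-in, and names such as `template_ligands` are illustrative placeholders.

```python
# Schematic template selection: for a target compound, pick the Cathepsin S
# template whose co-crystallised ligand is most similar, using a Tanimoto
# coefficient computed on the maximum common substructure (MCS).
from rdkit import Chem
from rdkit.Chem import rdFMCS

def mcs_tanimoto(mol_a, mol_b):
    """Tanimoto coefficient based on the number of MCS atoms."""
    mcs = rdFMCS.FindMCS([mol_a, mol_b], timeout=10)
    n_common = mcs.numAtoms
    return n_common / (mol_a.GetNumAtoms() + mol_b.GetNumAtoms() - n_common)

def best_template(target_smiles, template_ligands):
    """template_ligands: dict {pdb_id: SMILES of the ligand bound in that template}."""
    target = Chem.MolFromSmiles(target_smiles)
    scores = {}
    for pdb_id, smiles in template_ligands.items():
        template = Chem.MolFromSmiles(smiles)
        if template is not None:
            scores[pdb_id] = mcs_tanimoto(target, template)
    return max(scores, key=scores.get), scores
```

Run over all 24 target compounds, such a routine yields one protein template per target, mirroring the selection described above.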
The similarities of the crystallographic ligands to the prediction set compounds are shown in S.I. Fig. 1. For the self-docking challenge, we used the provided crystallographic structures retaining crystallographic waters and DMSO (target 14) or sulphate (targets 2, 17, 20, 22, 24 and 24) molecules. Ligand preparation Three-dimensional (3D) conformations of the ligands were generated with OpenEye OMEGA (v20170613) [15] using the SMILES strings as input. For every molecule, we sampled up to 500 conformers. We used the TanimotoCombo metric, as implemented in OpenEye ROCS [16], to compare the generated conformers to their respective crystallographic ligand in the identified templates (see "Protein template selection"). The TanimotoCombo metric combines shape and chemical similarity and allows us to select the conformers whose shape and chemical features resemble that of the crystallographic ligands. The top 10 scoring conformers were selected for ensemble docking. Each conformer was superimposed onto the crystallographic ligand in the template using the shape toolkit of the OpenEye suite. This protocol was benchmarked with existing Cathepsin S-ligand structures identified in the PDB. This allowed us to evaluate the impact our choices had on the quality of our poses. We used four Cathepsin S structures (PDBids: 3IEJ, 3KWN, 3MPE, 3MPF) [17,18,19] and their respective ligands. After selecting the protein template based on the protocol described in "Protein template selection", we selected the ligand conformers by their TanimotoCombo score and after superimposing them to the site of the crystallographic ligand, proceeded to refine them (see "Docking" below). For the self-docking challenge, we superimposed the protein template identified during the cross-docking challenge on the prediction set crystallographic structure. That allows us to superpose the generated conformers on the crystallographic ligand which is situated in the active site of the prediction set crystallographic structure because of the first superposition. We refined the ensemble of ligand conformations superimposed on their respective protein templates using the water refinement protocol of HADDOCK. All hydrogen atoms were kept (by default HADDOCK removes the non-polar hydrogens to save computing time). Since the ligand conformations were selected based on their similarity to the closest identified template (see above) and superimposed onto the ligand in the selected template, no exhaustive search was performed. Instead the initial poses were only subjected to a short energy minimization in which only interface residues were treated as flexible, followed by the explicit water refinement stage of HADDOCK. For this the system is solvated using an 8 Å shell of TIP3P [20] water molecules. The water refinement protocol consists of a first heating phase (100 MD integration steps at 100, 200, and 300 K) with weak position restraints on all atoms except those which belong to the side-chain of residues at the interface. The interface is defined as the set of residues whose atoms are within 5 Å of any atom of any binding partner. The second MD phase consists of 2500 integration steps at 300 K with positional restraints on all non-hydrogen atoms excluding the interface residues. The number of MD steps was doubled compared to HADDOCK's default value (1250) because this yielded higher quality structures during our benchmarking with the four PDB structures described in "Ligand preparation". 
The last cooling phase consists of 500 integration steps at 300, 200 and 100 K, respectively, during which positional restraints are only used for the backbone atoms of the non-interface residues. A 2 fs time-step is used throughout the protocol for the integration of the equations of motion. The number of water-refined models was set to 200. We also modified the default HADDOCK scoring function for the refinement stage by halving the weight of the electrostatic energy term: $$\mathrm{HADDOCK}_{score}=1.0 \times E_{vdw}+0.1 \times E_{elec}+1.0 \times E_{desolv}+0.1 \times E_{AIR}$$ This adjustment was motivated by internal benchmarking our group has performed on small molecule–protein complexes (data not shown). This scoring function is used to rank the generated models. The various terms are the intermolecular van der Waals (Evdw) and electrostatic (Eelec) energies calculated with the OPLS force field and an 8.5 Å non-bonded cutoff [21], an empirical desolvation potential (Edesolv) [22] and the ambiguous interaction restraints energy (EAIR) [11]. Note that in this case, since only refinement was performed without any restraints to drive the docking, EAIR is effectively 0. For the self-docking challenge, we followed the same protocol as for the cross-docking one, keeping all crystallographic waters and fixing the conformation of the protein, with the additional change of instructing HADDOCK to write PDB files containing the solvent molecules (water) present during the refinement stage. Binding affinity The binding affinity predictions are evaluated in two stages. The first stage takes place before the structures of the complexes (protein and ligand) are released by the organisers, which means that either only ligand information or models of the complexes can be used, and the second after, which allows participants to make use of the newly available structural information. For the first stage, we submitted both ligand-based and structure-based rankings and for the second only a structure-based one. Both approaches are described in detail in our previous D3R paper [10]. In short, the structure-based approach consists of the PRODIGY [23] method adapted for small molecules and trained on the 2P2I dataset [24], which makes use of the following function to score protein–ligand complexes by binding affinity: $$\Delta G_{score}=0.343794 \times E_{elec} - 0.037597 \times AC_{CC}+0.138738 \times AC_{NN}+0.160043 \times AC_{OO} - 3.088861 \times AC_{XX}+187.011384$$ where \(E_{elec}\) is the intermolecular electrostatic energy calculated by the water refinement protocol of HADDOCK (see "Docking") and \(AC_{CC}\), \(AC_{NN}\), \(AC_{OO}\) and \(AC_{XX}\) are the counts of atomic contacts between carbon–carbon, nitrogen–nitrogen, oxygen–oxygen, and all other atoms and polar hydrogens between the protein and the ligand, within a distance cut-off of 10.5 Å. We used the mean \(\Delta G_{score}\) of the top 10 models of the water refinement (see "Docking") to rank the compounds. The ligand-based approach rests on the hypothesis that similar ligands complexed to our proteins of interest should have similar binding affinities. Using the BindingDB database [25] we identified 1839 compounds bound to Cathepsin S with IC50 values.
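To make the contact-based score above concrete, the following is a minimal sketch (it is not the authors' implementation: the atom typing is simplified, `protein_atoms`/`ligand_atoms` are illustrative inputs that would have to be parsed from the refined PDB files, and \(E_{elec}\) is assumed to be read from the HADDOCK water-refinement output). The ligand-based approach introduced in the previous sentences is described in detail in the next paragraph.

```python
import numpy as np

# Coefficients and cut-off of the contact-based score defined above.
W_ELEC, W_CC, W_NN, W_OO, W_XX = 0.343794, -0.037597, 0.138738, 0.160043, -3.088861
INTERCEPT = 187.011384
CUTOFF = 10.5  # Å

def count_contacts(protein_atoms, ligand_atoms, cutoff=CUTOFF):
    """Count intermolecular contacts within `cutoff`.

    Each atom is an (element, xyz) pair with xyz a length-3 numpy array.
    Pairs are binned as C-C, N-N, O-O; every other pair counts towards XX.
    (This is a simplification of the atom typing described in the text.)
    """
    counts = {"CC": 0, "NN": 0, "OO": 0, "XX": 0}
    for elem_p, xyz_p in protein_atoms:
        for elem_l, xyz_l in ligand_atoms:
            if np.linalg.norm(xyz_p - xyz_l) <= cutoff:
                key = elem_p + elem_l
                counts[key if key in ("CC", "NN", "OO") else "XX"] += 1
    return counts

def delta_g_score(e_elec, counts):
    """Contact-based binding-affinity score for a single refined model."""
    return (W_ELEC * e_elec + W_CC * counts["CC"] + W_NN * counts["NN"]
            + W_OO * counts["OO"] + W_XX * counts["XX"] + INTERCEPT)
```

As stated in the text, compounds would then be ranked by the mean `delta_g_score` over the top 10 water-refined models of each complex.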
We calculated the similarity of the prediction set to the training set using the Atom Pair measurement as a similarity measure. The similarity matrices of the BindingDB set were used to train a Support Vector Regression model with the libSVM library for MatLab [26] that was, in turn, used to predict the binding affinities of the prediction set. Fitting and RMSD calculations for generating the figures were performed using the McLachlan algorithm [27] as implemented in the program ProFit (http://www.bioinf.org.uk/software/profit/) from the SBGrid distribution [28]. Subchallenge 2 only had a binding affinity component. The participants had to predict binding affinities for three protein targets—the kinases vEGFR2, JAK2-SC2 and p38-α—and sets of 85, 89 and 72 compounds respectively. Some of the compounds were shared between the three targets. The organisers provided SMILES strings for all compounds along with FASTA sequences of the proteins. For this challenge, we only submitted ligand-based binding affinity rankings. The method is the same as the one described in "Binding affinity" section for subchallenge 1. The only difference was the training data availability. Using BindingDB we identified 7049, 4582 and 4563 compounds with IC50 binding affinity measurements for the vEGFR2, JAK2-SC2 and p38a kinases respectively. After the binding affinity rankings were released by the organisers, it quickly became apparent that for all three targets, the compounds could be classified into binding and non-binding sets since most compounds had the maximum detectable binding affinity of 10 µM. This prompted the organisers to alter the way the challenge would be evaluated into a classification and regression problem, where the identification of the binding set (compounds with a Kd < 10 µM) would be treated as a classification problem and the ranking of the binding compounds by binding affinity as a regression problem. Pose prediction The binding pose prediction was evaluated for the cross- and self-docking experiments. Our performance in the cross-docking experiment in terms of RMSD of the five submitted poses is shown in Fig. 1. Heavy-atom RMSD values of the cross-docking models from the reference structures. Every point corresponds to one model with five models per target. The models are ranked by HADDOCK score with the highest scoring ones being on the left of every block This analysis was carried out by superposing the interface areas of the models and their respective reference structures and calculating the heavy-atom RMSD (excluding any halogen atoms) of the compounds. The mean RMSD values across all models and targets for this experiment are 3.04 ± 2.03 Å, whereas for the self-docking experiment, the values improved to 2.67 ± 1.63 Å. Figure 2 highlights some of our top predictions. Superpositions of HADDOCK models on reference structures. Left: model 5 from target 1 (1.1 Å). Right: model 1 from target 8 (1.5 Å). The reference protein structure is shown in cartoon representation in white. The compounds are shown in stick representation in white and blue for the reference and model molecules respectively. Figure created with PyMOL [29] At least one of the 5 models submitted was of acceptable quality (RMSD ≤ 2.5 Å) in 17 of the 24 targets (71% success rate top5). Our scoring function is thus able to correctly rank the near-native solutions near the top as can be seen in S.I. Fig. 2. 
If one considers only the top-ranked pose, the performance remains impressive with 15 out of 24 targets with an acceptable quality model (63% success rate top1). Figure 3 shows the difference between the top and bottom ranked models for target 7. Despite these excellent results, there is still room for improvement, especially in scoring: if we only consider the targets for which we generated at least one acceptable model (17 out of 24), the top-scoring pose corresponds to the best pose in 5 of the 17 targets (29%). For the remaining 12 targets, the average difference between the top scoring and best poses is 0.55 ± 0.71 Å and 0.45 ± 0.61 Å for the cross- and self-docking experiments respectively. Superpositions of HADDOCK models on reference structures. Left: model 1 from target 7 (1.85 Å). Right: model 5 from same target (9.31 Å). Our scoring function can distinguish the near native model from the wrong one. The difference between the two molecules is a single torsional angle that has been rotated ~ 180°. Figure created with PyMOL The performance of HADDOCK relative to the other participants for both experiments can be seen in Fig. 4. Note that if we would only consider one submission (the best) per group our rank would be 6th for the cross-docking experiment (top panel in Fig. 4). Our performance in the two experiments (cross- vs. self-docking) is broken down by target in Fig. 5, revealing that our protocol is not very sensitive to the starting template. In most cases only rather small improvements in terms of RMSD are obtained when starting from the bound receptor including water. The single target for which we observe a significant deviation in the self-docking results compared to the cross-docking ones is the first one (see Fig. 5). The average RMSD for that target is 2.54 ± 1.29 Å and 4.13 ± 3.46 Å for cross- and self-docking experiments respectively. Model 5 of the self-docking experiment submission is mostly responsible for this significant change, since its RMSD is > 10. This is a repetition of what is shown in Fig. 3, with one of the models (model 5 in both cases) which has a torsional angle that is rotated by 180° compared to the rest of the submitted models and the reference structure. Heavy-atom RMSD values averaged over all models and all targets. Top: cross-docking experiment. Bottom: self-docking experiment. Every bar corresponds to a single submission. The error bars indicate the standard deviation of the mean RMSD. HADDOCK submission is represented by the dark-grey bar in both panels Comparison between the performance of HADDOCK in the cross-docking and self-docking stages. Every set of bars corresponds to the average heavy-atom RMSD of all five models for a target, with the light- and dark-grey coloured bars corresponding to the cross- and self-docking experiments respectively. The error bars indicate the standard deviation of the mean RMSD Binding affinity prediction Binding affinity predictions were performed in two stages—one before the organisers released the poses to the participants and one after. We participated in stage 1 with both ligand-based and structure-based approaches, while for stage 2 we only submitted a structure-based ranking. Figure 6 shows our performance compared to all participants. Ranking of the binding affinity predictions for Cathepsin S by correlation. Top: stage (1). Bottom: stage (2). 
Every bar corresponds to one submission with our ligand-based submission having a medium and the structure-based one a dark grey colour in both panels These results were rather surprising: The structure-based approach which was one of the top performers in last year's competition failed to produce an accurate ranking of the compounds, while our ligand-based predictor now performs as one of the best (even if the quality of the prediction is still limited). There was also no improvement for the structure-based ranking between stages 1 and 2 in contrast to GC2 where we noticed a significant improvement when using the crystallographic poses for ranking the compounds. One explanation for this could be that, compared to last year, we already had better quality poses for most of the targets for stage 1. On the other hand, our simple machine learning-based ligand-based approach is not only the most accurate ligand-based approach with a Kendall's Tau of 0.36 but the third most accurate method for both stages, behind only the top performing structure-based approaches. This challenge revolved around kinase binding affinity prediction. As was mentioned in "Materials and methods" section, this is a regression-classification problem. The overall results can be seen in Fig. 7. Binding affinity prediction correlation coefficients. Top: JAK2-SC2. Middle: vEGFR2. Bottom: p38a. The bars and the corresponding error bars represent the Kendall's Tau correlation between the binding affinity predictions and the binding set for every target. The black circles correspond to the Matthews Correlation Coefficient which was used to assess the accuracy of the classification of the compounds into binding and non-binding. The dark grey bars and their corresponding circles represent our submissions Despite the fact that our approach wasn't trained with classification in mind, the classification performance is better than that of the regression. Specifically, the Matthews Correlation Coefficient values are 0.49, 0.39 and 0.21 respectively for JAK2-SC2, vEGFR2 and p38a (see S.I. Fig. 3 for the classification rankings). The respective Kendall's Tau correlations are 0.15, 0.38 and 0.07. As is evident from the plot the two correlation metrics are not correlated. This means that an algorithm that accurately identifies the binders and non-binders does not necessarily rank the binders accurately. The performance differences cannot be accounted for by the difference in training set size, since we identified roughly the same number of compounds for JAK2-SC2 and p38a. Additionally, vEGFR2 had the biggest training set size but that is not translated into better performance for the classification or the regression. GC3 has allowed to implement the lessons that we learned by participating in GC2 and further experiment with additional optimisations. The conclusions that we can draw with regards to the pose prediction challenges are the following: Selecting the protein templates accurately has the largest effect on the outcome of the docking. By identifying templates that already have a ligand bound to them and selecting the one that is most similar to the prediction compounds, we are ensuring a protein binding interface that is highly compatible with the prediction compound. This removes the need for extensive sampling of the protein interface or ensemble docking. Moreover, this approach seems to be robust to low similarity (see S.I. Fig. 1) compounds. The majority of template ligands identified have a Tanimoto similarity of < 0.6. 
Selecting the ligand conformations. Identifying structures with existing compounds has the additional benefit that they can be used to select the compound structures to be used during docking. Generating 3D models of compounds from 2D information entails generating hundreds of conformers. By comparing the shape and chemical similarity of the conformers to existing compound structures we can reduce the number of conformers needed during docking and ensure the starting conformations are closer to the experimental structures. Making use of the template information by positioning the conformers in the binding interface. This last observation is only relevant for molecular simulation codes that, like HADDOCK, randomise the relative orientation and position of the partners prior to docking. We can use shape similarity to position the ensemble of conformers at the binding site and bypass the first two stages of HADDOCK (rigid-body energy minimisation and flexible refinement by simulated annealing in torsion angle space) and directly refine the complexes using a longer version of our water-refinement protocol. The applicability of our approach was demonstrated by its performance, with mean RMSD values of 3.04 Å and 2.67 Å for the cross-docking and self-docking experiments respectively. Our overall success rate when considering the top1 and top5 poses is 63% and 71%, respectively. These results place us as the 6th and 3rd best performers for the two challenges respectively. The binding affinity experiments present a greater challenge to the community as whole. Despite our competitive rankings in the classification as well as the regression challenges, it appears that reliable binding affinity predictors are still not within grasp. This holds true for both ligand and structure-based approaches. However, the surprisingly good classification results (especially given that the algorithm was optimised for regression rather than classification problems) make us optimistic that this can be improved in the future. The data and code used to train the ligand-based binding affinity predictor and rank the compounds are freely available on GitHub, together with our in-house scripts developed during our participation in the last two GC competitions. These can be accessed at following URL: https://github.com/haddocking/D3R-tools. Gaieb Z, Liu S, Gathiaka S, Chiu M, Yang H, Shao C, Feher VA, Walters WP, Kuhn B, Rudolph MG, Burley SK, Gilson MK, Amaro RE (2018) D3R Grand Challenge 2: blind prediction of protein–ligand poses, affinity rankings, and relative binding free energies. J Comput Aided Mol Des 32:1–20 Gathiaka S, Liu S, Chiu M, Yang H, Stuckey JA, Kang YN, Delproposto J, Kubish G, Dunbar JB, Carlson HA, Burley SK, Walters WP, Amaro RE, Feher VA, Gilson MK (2016) D3R Grand Challenge 2015: evaluation of protein–ligand pose and affinity predictions. J Comput Aided Mol Des 30:651–668 Wilkinson RDA, Williams R, Scott CJ, Burden RE (2015) Cathepsin S: therapeutic, diagnostic, and prognostic potential. Biol Chem 396:867 Ye L, Xiao L, Yang SY, Duan JJ, Chen Y, Cui Y, Chen Y (2017) Cathepsin S in the spinal microglia contributes to remifentanil-induced hyperalgesia in rats. Neuroscience 344:265–275 Sena BF, Figueiredo JL, Aikawa E (2017) Cathepsin S as an inhibitor of cardiovascular inflammation and calcification in chronic kidney disease. Front Cardiovasc Med 4:88 Wiener JJM, Sun S, Thurmond RL (2010) Recent advances in the design of cathepsin S inhibitors. 
Curr Top Med Chem 10:717–732 Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (2000) The protein data bank. Nucleic Acids Res 28:235–242 Arkhipov A, Shan Y, Das R, Endres NF, Eastwood MP, Wemmer DE, Kuriyan J, Shaw DE (2013) Architecture and membrane interactions of the EGF receptor. Cell 152:557–569 Roskoski R (2014) The ErbB/HER family of protein-tyrosine kinases and cancer. Pharmacol Res 79:34–74 Kurkcuoglu Z, Koukos PI, Citro N, Trellet ME, Rodrigues JPGLM, Moreira IS, Roel-Touris J, Melquiond ASJ, Geng C, Schaarschmidt J, Xue LC, Vangone A, Bonvin AMJJ (2018) Performance of HADDOCK and a simple contact-based protein–ligand binding affinity predictor in the D3R Grand Challenge 2. J Comput Aided Mol Des 32:175–185 Dominguez C, Boelens R, Bonvin AMJJ (2003) HADDOCK: a protein-protein docking approach based on biochemical or biophysical information. J Am Chem Soc 125:1731–1737 van Zundert GCP, Rodrigues JPGLM, Trellet M, Schmitz C, Kastritis PL, Karaca E, Melquiond ASJ, van Dijk M, de Vries SJ, Bonvin AMJJ (2016) The HADDOCK2.2 web server: user-friendly integrative modeling of biomolecular complexes. J Mol Biol 428:720–725 Wang Y, Backman TWH, Horan K, Girke T (2013) fmcsR: mismatch tolerant maximum common substructure searching in R. Bioinformatics 29:2792–2794 Cao Y, Charisi A, Cheng L-C, Jiang T, Girke T (2008) ChemmineR: a compound mining framework for R. Bioinformatics 24:1733 Omega Toolkit 2.6.4 (2017) OpenEye Scientific Software, Santa Fe Hawkins PCD, Skillman AG, Nicholls A (2007) Comparison of shape-matching and docking as virtual screening tools. J Med Chem 50:74–82 Ameriks MK, Bembenek SD, Burdett MT, Choong IC, Edwards JP, Gebauer D, Gu Y, Karlsson L, Purkey HE, Staker BL, Sun S, Thurmond RL, Zhu J (2010) Diazinones as P2 replacements for pyrazole-based cathepsin S inhibitors. Bioorg Med Chem Lett 20:4060–4064 Ameriks MK, Axe FU, Bembenek SD, Edwards JP, Gu Y, Karlsson L, Randal M, Sun S, Thurmond RL, Zhu J (2009) Pyrazole-based cathepsin S inhibitors with arylalkynes as P1 binding elements. Bioorg Med Chem Lett 19:6131–6134 Wiener DK, Lee-Dutra A, Bembenek S, Nguyen S, Thurmond RL, Sun S, Karlsson L, Grice CA, Jones TK, Edwards JP (2010) Thioether acetamides as P3 binding elements for tetrahydropyrido-pyrazole cathepsin S inhibitors. Bioorg Med Chem Lett 20:2379–2382 Jorgensen WL, Chandrasekhar J, Madura JD, Impey RW, Klein ML, Jorgensen WL, Chandrasekhar J, Madura JD, Impey RW, Klein ML (1983) Comparison of simple potential functions for simulating liquid water comparison of simple potential functions for simulating liquid water. J Chem Phys 79:926 Jorgensen WL, Tirado-Rives J (1988) The OPLS [optimized potentials for liquid simulations] potential functions for proteins, energy minimizations for crystals of cyclic peptides and crambin. J Am Chem Soc 110:1657–1666 Fernandez-Recio J, Totrov M, Abagyan R (2004) Identification of protein-protein interaction sites from docking energy landscapes. J Mol Biol 335:843–865 Vangone A, Bonvin AMJJ (2017) PRODIGY: a contact-based predictor of binding affinity in protein-protein complexes. Bio-protocol 7:e2124 Kastritis PL, Rodrigues JPGLM, Bonvin AMJJ (2014) HADDOCK2P2I: a biophysical model for predicting the binding affinity of protein–protein interaction inhibitors. J Chem Inf Model 54:826–836 Gilson MK, Liu T, Baitaluk M, Nicola G, Hwang L, Chong J (2016) BindingDB in 2015: a public database for medicinal chemistry, computational chemistry and systems pharmacology. 
Nucleic Acids Res 44:D1045 Chang C-C, Lin C-J (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:27:1–27:27 McLachlan AD (1982) Rapid comparison of protein structures. Acta Crystallogr A 38:871–873 Morin A, Eisenbraun B, Key J, Sanschagrin PC, Timony MA, Ottaviano M, Sliz P (2013) Collaboration gets the most out of software. Elife. https://doi.org/10.7554/eLife.01456 The PyMOL Molecular Graphics System, Version 1.8, Schrödinger, LLC This work was supported by the European H2020 e-Infrastructure grants West-Life (Grant No. 675858) and BioExcel (Grant No. 675728), and by the Dutch Foundation for Scientific Research (NWO) (TOP-PUNT Grant 718.015.001, Veni Grant 722.014.005). The authors declare no competing financial interests. The authors thank Adrien Melquiond, Irina Moreira, and Mikael Trellet for useful discussions. Bijvoet Center for Biomolecular Research, Faculty of Science - Chemistry, Utrecht University, Padualaan 8, 3584 CH, Utrecht, The Netherlands Panagiotis I. Koukos, Li C. Xue & Alexandre M. J. J. Bonvin Correspondence to Alexandre M. J. J. Bonvin. Below is the link to the electronic supplementary material. Supplementary material 1 (PDF 281 KB) Koukos, P.I., Xue, L.C. & Bonvin, A.M.J.J. Protein–ligand pose and affinity prediction: Lessons from D3R Grand Challenge 3. J Comput Aided Mol Des 33, 83–91 (2019). https://doi.org/10.1007/s10822-018-0148-4
CommonCrawl
Dempwolff group In mathematical finite group theory, the Dempwolff group is a finite group of order 319979520 = $2^{15}\cdot 3^{2}\cdot 5\cdot 7\cdot 31$, that is the unique nonsplit extension $2^{5\,.}\mathrm {GL} _{5}(\mathbb {F} _{2})$ of $\mathrm {GL} _{5}(\mathbb {F} _{2})$ by its natural module of order $2^{5}$. The uniqueness of such a nonsplit extension was shown by Dempwolff (1972), and the existence by Thompson (1976), who showed using some computer calculations of Smith (1976) that the Dempwolff group is contained in the compact Lie group $E_{8}$ as the subgroup fixing a certain lattice in the Lie algebra of $E_{8}$, and is also contained in the Thompson sporadic group (the full automorphism group of this lattice) as a maximal subgroup. Huppert (1967, p.124) showed that any extension of $\mathrm {GL} _{n}(\mathbb {F} _{q})$ by its natural module $\mathbb {F} _{q}^{n}$ splits if $q>2$, and Dempwolff (1973) showed that it also splits if $n$ is not 3, 4, or 5, and in each of these three cases there is just one non-split extension. These three nonsplit extensions can be constructed as follows: • The nonsplit extension $2^{3\,.}\mathrm {GL} _{3}(\mathbb {F} _{2})$ is a maximal subgroup of the Chevalley group $G_{2}(\mathbb {F} _{3})$. • The nonsplit extension $2^{4\,.}\mathrm {GL} _{4}(\mathbb {F} _{2})$ is a maximal subgroup of the sporadic Conway group Co3. • The nonsplit extension $2^{5\,.}\mathrm {GL} _{5}(\mathbb {F} _{2})$ is a maximal subgroup of the Thompson sporadic group Th. References • Dempwolff, Ulrich (1972), "On extensions of an elementary abelian group of order 2^5 by GL(5,2)", Rendiconti del Seminario Matematico della Università di Padova. The Mathematical Journal of the University of Padova, 48: 359–364, ISSN 0041-8994, MR 0393276 • Dempwolff, Ulrich (1973), "On the second cohomology of GL(n,2)", Australian Mathematical Society. Journal. Series A. Pure Mathematics and Statistics, 16: 207–209, doi:10.1017/S1446788700014221, ISSN 0263-6115, MR 0357639 • Griess, Robert L. (1976), "On a subgroup of order 2^15·|GL(5,2)| in E8(C), the Dempwolff group and Aut(D8∘D8∘D8)" (PDF), Journal of Algebra, 40 (1): 271–279, doi:10.1016/0021-8693(76)90097-1, hdl:2027.42/21778, ISSN 0021-8693, MR 0407149 • Huppert, Bertram (1967), Endliche Gruppen (in German), Berlin, New York: Springer-Verlag, ISBN 978-3-540-03825-2, MR 0224703, OCLC 527050 • Smith, P. E. (1976), "A simple subgroup of M? and E8(3)", The Bulletin of the London Mathematical Society, 8 (2): 161–165, doi:10.1112/blms/8.2.161, ISSN 0024-6093, MR 0409630 • Thompson, John G. (1976), "A conjugacy theorem for E8", Journal of Algebra, 38 (2): 525–530, doi:10.1016/0021-8693(76)90235-0, ISSN 0021-8693, MR 0399193 External links • Dempwolff group at the atlas of groups.
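A quick consistency check of the order stated in the lead (added for illustration; it is not part of the original article): since $|\mathrm {GL} _{5}(\mathbb {F} _{2})|=(2^{5}-1)(2^{5}-2)(2^{5}-2^{2})(2^{5}-2^{3})(2^{5}-2^{4})=31\cdot 30\cdot 28\cdot 24\cdot 16=9\,999\,360$, the extension $2^{5\,.}\mathrm {GL} _{5}(\mathbb {F} _{2})$ has order $2^{5}\cdot 9\,999\,360=319\,979\,520=2^{15}\cdot 3^{2}\cdot 5\cdot 7\cdot 31$, in agreement with the value given above.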
Wikipedia
Effect of the deuterium on efficiency and type of adipogenic differentiation of human adipose-derived stem cells in vitro
The inhibitory effect of deuterium on metabolic activity and the subsequent decrease in the effectiveness of adipogenic differentiation is probably associated with mitochondrial dysfunction. Thus, deuterium could be considered as an element that affects the substance chirality. These findings may be the basis for the development of new approaches in the treatment of obesity, metabolic syndrome and diabetes through the regulation of adipose-derived stem cell differentiation and adipocyte functions. In the 21st century, non-communicable diseases (NCD) like obesity, metabolic syndrome and type 2 diabetes mellitus (T2DM) became the main medical problems of the humanity1,2,3,4. These diseases started in the Western world, but in parallel with the improving of human life standards, technological progress and the spread of the Western lifestyle also around the world. So, these diseases have become a global epidemic5. Currently, although there remains a correlation between the level of economic development and the frequency of these diseases, they have ceased to be a medical problem in high-income countries, but also have become an urgent item for the low-income and middle-income countries6. A characteristic feature of obesity, metabolic syndrome and T2DM is the defection of glucose and lipids metabolism, which is manifested in insulin resistance, impaired fasting glucose, dyslipidemia, high blood sugar, high serum triglycerides, imbalance of different types of lipoproteins in blood serum7,8. Prevention and treatment of these diseases include changing and controlling lifestyle, diet and the use of pharmaceuticals9. Despite the progress in medical science, pharmacology (the development of new substances for the correction of metabolism) and biotechnology (improving the process of insulin production), the solution to the problem of obesity, metabolic syndrome and T2DM requires new effective approaches. It should be taken into account that adipose tissue is one of the key players in the development of obesity, metabolic syndrome and T2DM10. Conversely, adipose tissue can also be considered as the main target for the prevention and treatment of these pathological conditions11. The adipose tissue in the human body can be classified by anatomical location (subcutaneous, visceral, intermuscular, yellow bone marrow and breast), as well as functions (white and brown fat)12. The main function of white adipose tissue is to preserve energy in the form of lipids, insulating organs, and endocrine function - the production of hormones, growth factors, cytokines, chemokines and other biologically active substances that regulate energy metabolism and many other body functions, which were called adipokines13. The function of brown adipose tissue is heat production during adaptive thermogenesis. In humans, unlike rodents (laboratory animals most widely used in medical experiments, including modeling of obesity, metabolic syndrome and T2DM), brown adipose tissue is present in significant amount only in newborns and infants14. Recently, the existence of active thermogenic adipose tissue in human adults was shown, but this adipose tissue differs from the classical brown adipose tissue in a number of aspects (development, morphology, gene expression, adipokine production etc)15. This adipose tissue is called "beige" or "brite" (brown in white) adipose tissue. All types of adipocytes arise from adipose-derived stem cells (ADSCs) in the process of differentiation. 
At present, a number of questions regarding the origin of beige adipocytes (from the same stem cell as white adipocytes, or from the same stem cell as brown adipocytes, or from own stem cell) remain debatable as well as the ability of white adipose tissue to transdifferentiate into brown/beige adipose tissue16. The ability to control the formation of new adipose tissue, to convert white adipose tissue into brown/beige adipose tissue, or to set the direction of differentiation of ADSCs into a specific subtype of adipocytes is an attractive target for the development of new pharmacological substances to treat obesity, metabolic syndrome and T2DM17. In addition to the search for new pharmacological substances targeted to adipose tissue functions and/or various other biochemical aspects of energy homeostasis, it is also important to study the role of water in human health, metabolism, and pathogenesis of different diseases. Water is the most common chemical substance on the Earth and constitute the largest mass part of living organisms in percentage ratio. Water is also a universal solvent in which the basic biochemical processes of living organisms occur. An important component of a healthy diet is consumption of drinking water instead of sugar-containing and carbonated beverages18,19. So, the modulation of biological and physicochemical properties of water is also a promising opportunity to improve the effectiveness of treatment for obesity, metabolic syndrome and T2DM. Deuterated water has the same chemical formula as ordinary water, but instead of two atoms of a light hydrogen isotope (protium) it contains two atoms of a heavy hydrogen isotope (deuterium). Deuterated water shows only slight cytotoxicity. In general, chemical reactions in deuterated water have slower rate than in ordinary water20,21,22,23. Deuterium can be considered as a regulator of the biological properties of normal and/or cancer cells24,25,26,27. One of the medicine trends is the development of deuterium-containing drugs28,29,30. The other direction refers to the role of isotopology D/H ratio and its change in water to be used as an adjuvant in cancer treatment31,32,33,34. Different D/H ratio is manifested in the form of kinetic isotope effect, which is characterized by a change in the biotransformation and excretion rates of the drugs35,36,37. Moreover, the methodological approaches for drugs quality control based on water isotopology, could reduce the toxic load of the body38,39. In our previous studies, we have shown that different deuterium concentration in growth medium affects the proliferation, migration and metabolic activity of cultured human adipose-derived stem cells (ADSCs)40. In present study, we consider the question whether deuterium is involved in regulation of ADSCs differentiation, different types of adipocytes development and an adipokine production by adipocytes. Adipogenic differentiation of ADSCs was chosen as a model for our in vitro pilot study, where we compared the efficiency of adipogenesis in media with different deuterium contents (natural, low and high). Types of water used in the study. Water preparing and testing (characterization) The following basic water samples with various deuterium content were used in this study: deuterium-depleted water (ddw) with D/H = 4 ± 2 ppm D (Sigma-Aldrich, USA); deuterated water (D2O) with D/H = 99 absolute at. % D (Sigma-Aldrich, USA). Growth media with various deuterium content were prepared by diluting deuterium-depleted and deuterated water. 
The following media for adipogenic differentiation were used in this study: No. 1, a medium with a D/H ratio of 30 ppm, and No. 2, the medium with the highest deuterium content, with a D/H ratio of 500,000 ppm. MilliQ water (MilliQ system, UK) with a D/H ratio of 150 ppm served as the standard (control). MilliQ, deuterium-depleted and deuterated water showed no differences in physical characteristics or trace element composition other than deuterium content. This excluded multifactorial influences in the system for all comparison groups. A detailed description of the method was presented in previous studies22,27,40,41. The deuterium content was controlled by multipass laser absorption spectroscopy on an Isotopic Water Analyzer-912-0032 (Los Gatos Research Inc., USA). A detailed description of this method was presented in previous studies22,27. Chemical analysis of water with various deuterium contents was performed by inductively coupled plasma mass spectrometry on an ICP-QMS Agilent 7500CE spectrometer (Agilent Technologies, USA). A detailed description of the method was presented in previous studies22,27. Calibration solutions covering a wide range of element concentrations (from 0.1 μg/L to 100 μg/L) were used for device calibration. The solutions were prepared from the international standard 2.74473.0100 "ICP Multi Element Standard Solution XXI CertiPUR", which contains the following elements: Ag, Al, As, Ba, Be, Bi, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, In, K, Li, Mg, Mn, Na, Ni, Pb, Rb, Se, Sr, Tl, U, V, Zn, Hg. The concentration of the above-listed elements in the milliQ, deuterium-depleted and heavy water did not exceed the detection limits (detection limit range 0.1–10 ppm). The ADSCs cultivation (in vitro live cell experiment) The experiments with human cell cultures in vitro were carried out in accordance with the ethical principles for human experimentation of the Code of Ethics of the World Medical Association (Declaration of Helsinki). All procedures related to obtaining human biopsies, cell isolation and culturing were performed with written informed donor consent and in accordance with the laws of Ukraine. The study protocol was approved by the Bioethics Committee of the State Institute of Genetic and Regenerative Medicine NAMS of Ukraine (Kyiv, Ukraine). Cell culturing was carried out in the GMP/GTP-compliant biotechnological laboratory ilaya.regeneration (License to operate Banks of human cord blood, other tissues and cells, issued by the Ministry of Health of Ukraine, AE No. 186342 from 11.07.2018). In all cases voluntary informed consent was signed by the ADSC donors. ADSC samples (n = 6) were obtained from abdominal subcutaneous fat tissue of healthy donors (3 female and 3 male) with normal somatometric and biochemical parameters and without signs of obesity or viral or microbial infection. The age of the donors was 23 ± 4.0 years. The body mass index (BMI) of the adipose tissue donors was 20 ± 1.3. The ADSCs were isolated from the lipoaspirate by enzymatic digestion in 0.1% collagenase IA and 0.1% pronase with 2% fetal bovine serum (FBS) (all Sigma-Aldrich, USA) for 1 h at 37 °C. A detailed description of the method was presented in previous studies40,41,42,43,44,45,46. 
The obtained cell suspension was transferred to a 25 cm2 cell culture flask (SPL, Korea) and cultured in the following growth medium: modified MEM-α (Sigma-Aldrich, USA) prepared from powder dissolved in milliQ water of natural isotope content and supplemented with 10% FBS (Sigma-Aldrich, USA), 2 mM L-glutamine, 100 U/ml penicillin, 100 μg/ml streptomycin and 1 ng/ml bFGF-2 (all from Sigma-Aldrich, USA). The cells were cultured in a multi-gas incubator CB210 (Binder, Germany) at +37 °C in an atmosphere of saturated humidity, 5% CO2 and 5% O2. Before use in the experiment, ADSCs were expanded until passage 3 and characterized according to the criteria of the International Society for Cellular Therapy42,43,44,45,47. Flow cytometry for the ADSCs phenotype determination The cell phenotype was assessed by flow cytometry (FACS) on a BD FACSAria fluorescence-activated cell sorter-cytometer (BD Pharmingen, BD Horizon, USA) in accordance with the monoclonal antibody manufacturer's instructions (BD Pharmingen, BD Horizon, USA). BD FACS Diva 6.1 software (BD Pharmingen, BD Horizon, USA) was used for analysis. A detailed description of the method was presented in previous studies41,46. Directed osteogenic differentiation of ADSCs Osteogenic differentiation was performed in low-glucose (1 g/L) DMEM (BioWest, France) prepared from powder dissolved in milliQ water of natural isotope content and supplemented with 10% FBS, 100 nM dexamethasone, 10 mM β-glycerophosphate and 500 μg/ml ascorbate-2-phosphate (all Sigma-Aldrich, USA). After 21 days, the cells were fixed and stained with Alizarin Red S to reveal mineral deposits. A detailed description of the method was presented in previous studies46. Directed chondrogenic differentiation of ADSCs The evaluation of chondrogenic differentiation of ADSCs was carried out using the micromass culture method. For this, 500,000 cells were centrifuged for 10 min at 400 g in 15 ml test tubes (Nunc, USA). The cell pellet was then cultured in a chondrogenic induction medium containing DMEM with 4.5 g/L glucose (BioWest, France), prepared from powder dissolved in milliQ water of natural isotope content and supplemented with 50 μg/ml ascorbate-2-phosphate, 40 μg/L L-proline, 100 μg/ml sodium pyruvate, 10 ng/ml rhTGF-β3, 10−7 M dexamethasone (all Sigma-Aldrich, USA), 1% ITS supplement (Gibco, USA) and 1.25 mg/ml bovine serum albumin (BSA; BioWest, France). A detailed description of the method was presented in previous studies46. Directed adipogenic differentiation of ADSCs The ADSCs were seeded at a density of 30,000 cells per cm2. After 24 h the growth medium was replaced with serum-free adipogenic differentiation media with low, natural or high deuterium content: (1) phase I, induction (4 days) – high-glucose (4.5 g/L) DMEM (BioWest, France) prepared from powder dissolved in milliQ water of natural isotope content and supplemented with 1 µM dexamethasone, 0.1 µM hydrocortisone, 50 µM indomethacin, 250 µM isobutylmethylxanthine, 0.2 nM triiodothyronine, 5 µg/ml insulin, 1% fatty-acid mixture, 0.01% bovine serum albumin (BSA), 1% ITS supplement, 50 µM ascorbate-2-phosphate and 2 mM L-glutamine (all Sigma-Aldrich, USA); (2) phase II, differentiation (10 days) – medium with the same composition but without dexamethasone, indomethacin and isobutylmethylxanthine. Control adipogenic induction and differentiation media had a natural deuterium content. 
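To make the timing of the two-phase protocol easier to follow, the short Python sketch below encodes the schedule (4-day induction, 10-day differentiation, assessments on days 3, 7 and 14) as a simple configuration structure. The dictionary keys and the helper function are illustrative choices made for this sketch, not part of the published protocol.

# Minimal sketch (assumed structure, illustration only) of the two-phase
# serum-free adipogenic differentiation schedule described above.
PROTOCOL = {
    "seeding_density_per_cm2": 30_000,
    "phase_I_induction_days": 4,          # full cocktail incl. dexamethasone, IBMX, indomethacin
    "phase_II_differentiation_days": 10,  # same medium without dexamethasone, IBMX, indomethacin
    "assessment_days": (3, 7, 14),        # gene expression, viability, metabolic activity
    "deuterium_conditions_ppm": {"ddw medium": 30, "control": 150, "D2O medium": 500_000},
}

def phase_on_day(day: int) -> str:
    """Return which phase a given day (counted from induction) falls into."""
    if day <= PROTOCOL["phase_I_induction_days"]:
        return "phase I (induction)"
    if day <= PROTOCOL["phase_I_induction_days"] + PROTOCOL["phase_II_differentiation_days"]:
        return "phase II (differentiation)"
    return "post-protocol"

for d in PROTOCOL["assessment_days"]:
    print(f"day {d}: {phase_on_day(d)}")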
The experimental growth media had a composition similar to that of the control but were prepared with deuterated or deuterium-depleted water. Gene expression, viability and metabolic activity of the cells were analyzed on days 3, 7 and 14 after induction to assess ADSC adipogenic differentiation. Cytochemistry, immunocytochemistry and histochemistry A detailed description of the methods was presented in previous studies46. Briefly, to confirm osteogenic and adipogenic differentiation, the cells were fixed for 20 min in 10% buffered formalin (Sigma, USA), washed with DPBS (Sigma-Aldrich, USA) and stained for 20 min with a 2% solution of Alizarin Red S (pH 4.1; for detecting calcified extracellular matrix deposits) or a 0.5% solution of Oil Red O (for staining neutral lipids) with Romanowsky-Giemsa counterstaining, respectively (all Sigma-Aldrich, USA). Oil Red O extraction was performed as described previously48. Measurements were performed on a LabSystems Multiskan PLUS spectrofluorometer (USA). To confirm chondrogenic differentiation, spheroids were removed from the incubator after 21 days of culturing, the medium was carefully aspirated, and the spheroids were washed twice with DPBS. Spheroids were fixed in 10% buffered formalin for 2 h, dehydrated with ethanol, cleared with xylene and finally embedded in paraffin. They were then sectioned at 5 µm thickness. Slides were washed twice with distilled water and stained with filtered Alcian Blue staining solution in the dark for 45 minutes at room temperature. The staining solution was removed and the slides were washed twice with PBS (all Sigma-Aldrich, USA). Cartilage stained an intense blue46. The following primary antibodies were used for immunocytochemical staining: rabbit polyclonal against UCP1 (Invitrogen, USA) and mouse monoclonal against PERILIPIN (R&D, USA). Secondary antibodies were Alexa-488-conjugated donkey anti-mouse and Alexa-647-conjugated donkey anti-rabbit (Thermo Fisher, USA). The cells were fixed for 20 minutes with cold 4% paraformaldehyde, permeabilized for intracellular staining with 0.1% Triton X-100 for 15 min, and blocked for 30 min in phosphate-buffered saline with 0.1% Tween-20, 1% BSA and 5% FBS. The slides were incubated with primary antibodies overnight at 4 °C and with secondary antibodies for 1 hour at room temperature. Cell viability For the viability/cytotoxicity assessment, the cells were stained with PI (propidium iodide) (Sigma-Aldrich, USA) and FDA (fluorescein diacetate) (Sigma-Aldrich, USA) before differentiation and after 3, 7 and 14 days of adipogenic differentiation. A detailed description of the method was presented in previous studies40,41. The numbers of dead and living ADSCs in the different groups were counted using an inverted fluorescence microscope AxioObserver A1 and ZEN 2012 software (Carl Zeiss). Cell viability was calculated as the ratio of living cells to the total number of cells, expressed as a percentage according to the formula: $$\mathrm{Viability}=(\mathrm{number\ of\ living\ cells}/\mathrm{total\ number\ of\ cells})\times 100\%$$ The ADSCs metabolic activity A detailed description of the method was presented in previous studies41,49. ADSC metabolic activity was assessed on days 3, 7 and 14 of adipogenic differentiation. Alamar Blue (10% v/v; redox indicator; Thermo Fisher, USA) was added to the culture medium and incubated for 3 h according to the manufacturer's instructions. 
Reduced Alamar Blue was detected at 540 nm vs 630 nm on a LabSystems Multiskan PLUS spectrofluorometer (USA). Cell metabolic activity was calculated according to the following formula: $$\%\ \mathrm{of\ reduction}=\frac{\left((\varepsilon_{\mathrm{ox}})_{\lambda_2}\cdot A_{\lambda_1}-(\varepsilon_{\mathrm{ox}})_{\lambda_1}\cdot A_{\lambda_2}\right)_{\mathrm{experiment}}}{\left((\varepsilon_{\mathrm{ox}})_{\lambda_2}\cdot A'_{\lambda_1}-(\varepsilon_{\mathrm{ox}})_{\lambda_1}\cdot A'_{\lambda_2}\right)_{\mathrm{control}}};$$ where λ1 = 540 nm, λ2 = 630 nm, (εox)λ2 = 34,798; Aλ1 – experimental sample absorbance at λ1 = 540 nm; Aλ2 – experimental sample absorbance at λ2 = 630 nm; A′λ1 – control sample absorbance at λ1 = 540 nm; A′λ2 – control sample absorbance at λ2 = 630 nm. The RT-qPCR assay A detailed description of the method was presented in previous studies46. Total RNA was isolated from ADSCs using NucleoZOL (MACHEREY-NAGEL GmbH & Co. KG, Germany) according to the manufacturer's protocol. RNA quality and concentration were determined with a NanoDrop 1000 spectrophotometer (Thermo Scientific, USA). 2 μg of isolated RNA were reverse transcribed to cDNA using the RevertAid First Strand cDNA Synthesis kit (Thermo Scientific, USA). RT-qPCR was performed with a 7500 Real-Time PCR System (Applied Biosystems, CA, USA) using 5× HOT FIREPol EvaGreen qPCR Mix Plus (ROX) (Solis BioDyne, Estonia). The Applied Biosystems 7500 system software (V. 1.3.1) was used for data analysis. The primer sequences are listed in Table 1. The following PCR cycling conditions were applied: 10 min at 95 °C, then 40 cycles of 10 s at 95 °C and 40 min at 60 °C. The expression level of TATA-box binding protein (TBP) was used as the internal control. Ct values for target genes were normalized against the Ct value of TBP at the same threshold level. Relative quantification (the comparative Ct (ΔΔCt) method) was used to compare the expression levels of the target genes with the internal control. Dissociation curve analysis was performed after every run to check primer specificity. Results are presented in relative units. For all conditions, the reaction was performed three times (each gene in triplicate). GraphPad Prism 4 (GraphPad Software, USA) and MS Excel were used for statistical analysis and graphic data presentation. Table 1 The list of primer sequences used in this study. Adipokine production After differentiation of ADSCs into adipocytes, the content of the following 11 adipokines, cytokines and chemokines was determined in the incubation medium by flow fluorimetry (multiplex analysis, Luminex xMAP): LEPTIN, ADIPONECTIN, ADIPSIN, VASPIN, CHEMERIN, TNF-α, IL-6, IL-8, IL-10, MCP-1, IP-10. This method involves a multiplex immune reaction that takes place on microparticles of various diameters carrying adsorbed antibodies, followed by flow fluorescence analysis and simultaneous measurement of cytokine concentrations. The procedures were conducted according to the Bio-Plex Pro Assay protocol. The results were recorded using an automatic photometer for Bio-Plex microplates (Bio-Plex 200 Systems, Bio-Rad, USA) and Bio-Plex Manager software (Bio-Rad). The concentration of each test substance was determined from the calibration curve for the corresponding cytokine (dynamic range 2–32,000 pg/mL) according to the manufacturer's recommendations. A detailed description of the methods was presented in previous studies46. 
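The three simple calculations used in these Methods (cell viability from FDA/PI counts, the Alamar Blue reduction ratio, and the comparative Ct fold change) can be summarized in a few lines of code. The Python sketch below is only an illustration: the 540 nm extinction coefficient and all numbers in the example calls are assumed placeholder values rather than data from the study, and the original analyses were carried out in the instrument software, GraphPad Prism and Excel rather than in Python.

EPS_OX_540 = 80_586   # assumed molar extinction coefficient of the oxidized dye at 540 nm
EPS_OX_630 = 34_798   # coefficient at 630 nm, as quoted in the text

def viability_percent(living: int, dead: int) -> float:
    """Viability = (number of living cells / total number of cells) x 100%."""
    return living / (living + dead) * 100.0

def alamar_blue_reduction(a540, a630, a540_ctrl, a630_ctrl):
    """Reduction of Alamar Blue in the experimental sample relative to the
    control sample, following the formula given above."""
    num = EPS_OX_630 * a540 - EPS_OX_540 * a630            # experiment
    den = EPS_OX_630 * a540_ctrl - EPS_OX_540 * a630_ctrl  # control
    return num / den

def ddct_fold_change(ct_gene, ct_tbp, ct_gene_ctrl, ct_tbp_ctrl):
    """Comparative Ct (delta-delta Ct) fold change of a target gene,
    normalized to TBP and expressed relative to the control condition."""
    ddct = (ct_gene - ct_tbp) - (ct_gene_ctrl - ct_tbp_ctrl)
    return 2.0 ** (-ddct)

# Placeholder example values only:
print(viability_percent(living=292, dead=8))            # ~97.3 %
print(alamar_blue_reduction(0.62, 0.18, 0.55, 0.20))    # ratio vs control
print(ddct_fold_change(24.1, 21.0, 25.6, 21.2))         # >1 means up-regulation vs control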
Microscopic examination of live cells, cell cultures and cytological slides was carried out with an inverted AxioObserver A1 microscope equipped with an AxioCam ERc 5s digital camera (all Carl Zeiss, Germany) and ZEN 2012 software. The data are reported as mean ± SD for each group. Statistical analyses were performed using one-way analysis of variance (ANOVA) in Origin Pro software. Differences were considered statistically significant when p < 0.05. Preparation and characterization of ADSCs Subcutaneous fat is a convenient source for obtaining ADSCs50. The possibility of converting white adipocytes of subcutaneous adipose tissue into beige/brown adipocytes in vivo was previously shown51. Thus, the in vitro study of the differentiation of ADSCs from subcutaneous adipose tissue is a valuable tool in the search for new substances targeting adipocyte function and in the development of new methods for treating obesity, metabolic syndrome and T2DM. Adherent cells obtained from human lipoaspirate were expanded in vitro until P3 and, before the experiments, were characterized according to the ISCT position papers for compliance with the minimal defined criteria for MSCs and ADSCs47. After expansion over three passages, the cells obtained from lipoaspirate presented a homogeneous population with fibroblast-like morphology (Fig. 1A). The study of the ADSC immunophenotype showed that they expressed the characteristic positive stem cell markers and did not express the negative markers (Fig. 1B). Characterization of ADSCs. (A) ADSC morphology at P3 before directed multilineage differentiation. Phase-contrast microscopy. Scale bar – 200 µm. (B) Representative FACS histograms of the ADSC immunophenotype at P3. (C) The ability of ADSCs to undergo directed three-lineage differentiation in vitro. C1 – adipogenic differentiation, Oil Red O and Romanowsky-Giemsa stain, scale bar = 50 µm; C2 – osteogenic differentiation, Alizarin Red S stain, scale bar = 100 µm; C3 – chondrogenic differentiation, Alcian Blue stain, scale bar = 100 µm. The ADSCs also demonstrated a key characteristic feature of MSCs – the capacity for directed three-lineage differentiation in vitro. After 14 days of cultivation in adipogenic differentiation medium, the fibroblast-like cells were converted into adipocytes containing lipid vacuoles (Fig. 1C1). The ADSCs also demonstrated the ability to undergo osteogenic differentiation and to produce mineralized extracellular matrix. Alizarin Red S staining showed a positive reaction for calcium deposits on day 21 after differentiation (Fig. 1C2). As for chondrogenic differentiation, a dense chondroid was obtained after 21 days of chondrogenic induction (Fig. 1C3). Thus, ADSCs isolated from lipoaspirate complied with the minimal ISCT criteria for MSCs, namely adherence to plastic under standard culture conditions, fibroblast-like morphology, a typical phenotype and the ability to undergo directed multilineage differentiation in vitro. To find out whether deuterium could be involved in the regulation of adipogenesis, we prepared adipogenic differentiation media based on deuterium-depleted and deuterated water. The adipogenic differentiation medium made with milliQ water served as the control. 
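As a concrete illustration of the group comparison scheme described above (mean ± SD for n = 6 donors, one-way ANOVA, significance at p < 0.05), the Python sketch below reproduces the same kind of test with SciPy. The numbers are invented placeholder measurements, and the original analysis was performed in Origin Pro rather than in Python.

import numpy as np
from scipy import stats

# Invented placeholder data: one measurement per donor (n = 6) in each group.
control = np.array([100.0, 104.2, 98.7, 101.5, 97.9, 102.8])    # natural D/H
ddw = np.array([118.3, 121.0, 115.6, 119.9, 117.2, 120.4])      # low D/H
d2o = np.array([72.1, 69.8, 74.5, 70.9, 73.3, 68.7])            # high D/H

for name, group in [("control", control), ("ddw", ddw), ("D2O", d2o)]:
    print(f"{name}: mean = {group.mean():.1f}, SD = {group.std(ddof=1):.1f}")

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, ddw, d2o)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")
print("significant" if p_value < 0.05 else "not significant")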
Directed adipogenic differentiation of ADSCs in media with various deuterium content Differentiation of ADSCs into adipocytes is a complex multi-step process in which the stages of induction/commitment (adipocyte progenitors, days 0–3), early differentiation (pre-adipocytes, days 3–7) and terminal differentiation (mature adipocytes, days 7–14) can be distinguished. After 7 days of cultivation in the adipogenic differentiation medium, cells began to accumulate lipid vacuoles in all groups (Fig. 2A,B). However, there were noticeable differences in the number, size and shape of the lipid vacuoles in ADSCs that underwent differentiation in the deuterated and deuterium-depleted inductive media compared to the control. A significant number of cells with lipid vacuoles was observed in the control group, whereas only single adipocytes were seen in the experimental groups. At the same time, the differentiation process showed no morphological differences between groups and followed the path of adipocyte formation with fusion of the initial small lipid vacuoles into one or two large vacuoles located centrally in the cytoplasm, which is typical of white adipocytes. Adipogenic differentiation of ADSCs in media with various deuterium concentration. (A) Phase-contrast microscopy, 7 days; (B) light microscopy, Oil Red O dye, 7 days; (C) phase-contrast microscopy, 14 days; (D) light microscopy, Oil Red O dye, 14 days. Control – differentiation medium made on the base of milliQ water; ddw – differentiation medium made on the base of deuterium-depleted water; D2O – differentiation medium made on the base of deuterated water. In each section: left column – the bar = 100 µm; right column – the bar = 50 µm. By day 14, however, there was no such striking difference, except for the deuterated differentiation medium group, where the lipid vacuoles were much smaller compared to the other groups (Fig. 2C,D). The cells in the deuterium-depleted medium had almost caught up with the control ones. At the same time, on day 14, significant morphological differences were observed between the adipocytes differentiated from ADSCs in the different groups. In the control group (natural deuterium content) and in the group with high deuterium content, predominantly unilocular adipocytes containing one or two large lipid vacuoles were formed. As indicated earlier, this morphological trait is a characteristic feature of white adipocytes. In the group with low deuterium content, single unilocular adipocytes were also found; however, most adipocytes had numerous small vacuoles located at the periphery of the cytoplasm. This morphology is characteristic of brown and beige adipocytes. The adipogenic differentiation efficiency was evaluated by Oil Red O dye extraction on days 7 and 14 of the experiment (Fig. 3). Both high and low deuterium contents inhibited adipogenic differentiation of ADSCs. On day 7, the inhibition of adipogenic differentiation of ADSCs was the same in the deuterium-depleted and deuterated media (Fig. 3A). However, the adipogenic medium based on deuterated water showed stronger inhibition of differentiation on day 14 (Fig. 3B). Diagrams represent the results of Oil Red O extraction after 7 (A) and 14 (B) days of cultivation in adipo-inductive media with various deuterium content. Control – differentiation medium made on the base of milliQ water; ddw – differentiation medium made on the base of deuterium-depleted water; D2O – differentiation medium made on the base of deuterated water. 
(The results are expressed as mean ± SD (n = 6); significant differences between groups: *p < 0.05 compared to the control group, #p < 0.05 compared to the ddw group). ADSCs gene expression profile in the process of adipogenic differentiation in media with various deuterium content To gain further insight into the molecular events in ADSCs during adipogenic differentiation in media with various deuterium contents, we evaluated the mRNA expression levels of key genes linked with adipogenesis. Gene expression was evaluated before the experiment and on days 3, 7 and 14 (Fig. 4). The mRNA expression of all genes was normalized to the TBP expression level. The fold change was calculated relative to the control condition, which corresponds to 1. Diagrams represent the mRNA expression levels of adipogenic markers in ADSCs after adipogenic differentiation in media with various deuterium content after 3 (A), 7 (B) and 14 (C) days. Control – differentiation medium made on the base of milliQ water, marked with a dotted line; ddw – differentiation medium made on the base of deuterium-depleted water; D2O – differentiation medium made on the base of deuterated water. (The results are presented as mean ± SD (n = 6); significant differences between groups: *p < 0.05 compared to the control group, #p < 0.05 compared to the ddw group). The ADSCs did not express the studied key adipogenesis genes in culture in vitro before differentiation (data not shown). On day 3 after the start of differentiation, ADSCs began to express the master gene of adipogenesis, PPARG, and other genes characteristic of adipocytes: FABP4 (which encodes a fatty-acid carrier protein) and LEP and ADIPON (key adipokines that in vivo regulate satiety, hunger and various aspects of energy metabolism in the human body). At this point, expression of the LPL and UCP1 genes was not detected, which coincided with morphological signs of the formation of immature white adipocytes. At the induction/commitment stage, the expression of adipogenesis genes was statistically significantly higher in the control group with natural deuterium content (Fig. 4A). The gene expression data for the groups with low and high deuterium content indicate that induction/commitment of ADSCs in the adipogenic direction was inhibited. Based on the gene expression analysis, on day 7 ADSC adipogenic differentiation in the deuterium-depleted medium was only partially inhibited compared to the control differentiation medium. Namely, the mRNA expression levels of the key master regulator of adipogenic differentiation, the nuclear receptor PPARG, and of the pre-adipocyte and adipocyte marker FABP4 did not change in ADSCs differentiated under the influence of deuterium-depleted water (Fig. 4B). Moreover, a statistically significant up-regulation of UCP1 and ADIPON gene expression occurred in the group with low deuterium content compared with the control group and the group with high deuterium content (Fig. 4B). In the group with high deuterium content, a significantly lower level of expression of all the studied genes was observed, indicating inhibition of adipogenic differentiation. On day 14, the level of PPARG and FABP4 gene expression in the group with high deuterium content reached a level comparable to that of the groups with natural and low deuterium content. However, the gene expression level of the key adipokines LEP and ADIPON was statistically significantly lower than in the groups with natural and low deuterium content. 
In the group with low deuterium content, a statistically significant increase in UCP1, LPL and ADIPON gene expression and a lower level of FABP4 and LEP gene expression compared to the control group were observed on day 14. Taken together, this may reflect a switch of the adipogenesis program in the group with low deuterium content towards the formation of functional thermogenic brown/beige adipocytes. Detection of UCP1 expression on protein level by immunocytochemistry In order to confirm the expression of marker proteins of brown/beige adipocytes at the protein level, we performed an immunocytochemical study (Fig. 5). Immunocytochemical detection of UCP1 in adipocytes differentiated from ADSCs in media with different deuterium content. Control – differentiation medium made on the base of milliQ water; ddw – differentiation medium made on the base of deuterium-depleted water; D2O – differentiation medium made on the base of deuterated water. In each section the bar = 50 µm. Fluorescence microscopy. According to the results of the immunocytochemical analysis on day 14 of ADSC differentiation using our serum-free adipogenic medium, adipocytes formed, as confirmed by detection of the adipocyte-specific protein PERILIPIN (which forms the membrane of lipid vacuoles). Moreover, only in the group with low deuterium content was UCP1 (a marker of thermogenic brown/beige adipocytes) detected at the protein level. Thus, in the group with natural deuterium content, our serum-free differentiation protocol led to the formation of white adipocytes, as confirmed by the morphological features described earlier (the formation of adipocytes with one or two large lipid vacuoles). In the group with high deuterium content white adipocytes also developed; however, significant inhibition and a delay of the adipogenic differentiation process were noted. The adipokine production by adipocytes differentiated from ADSCs in media with different deuterium content Multiplex analysis of 11 key adipokines showed that, despite a similar overall qualitative spectrum of products, there were statistically significant differences in the quantitative production of cytokines (Table 2). It is important to note that in the group with low deuterium content we observed increased production of obesity-protective adipokines (LEPTIN and ADIPONECTIN) and of IL-6 (a cytokine that increases LEPTIN production in adipocytes), and decreased production of the pro-inflammatory TNF-α, IL-8, MCP-1 and IP-10, of ADIPSIN (which inhibits lipolysis) and of IL-10 (which inhibits thermogenesis and browning). Although adipocytes with the morphological and phenotypic signs of white adipocytes developed in the groups with natural and high deuterium content, statistically significant quantitative differences in the production of all adipokines were also observed between them. This can be explained by the significant inhibition and lag of the adipogenesis process in the environment with increased deuterium content. Table 2 The adipokine production by adipocytes differentiated from ADSCs in media with different deuterium content (Control – differentiation medium made on the base of milliQ water; ddw – differentiation medium made on the base of deuterium-depleted water; D2O – differentiation medium made on the base of deuterated water). 
Viability and metabolic activity of ADSCs in the process of adipogenesis in differentiation media with various deuterium content To clarify the question whether deuterium-mediated inhibition of ADSCs adipogenic differentiation was associated with cytotoxic effect of deuterium we analyzed viability and metabolic activity of differentiated ADSCs. Viability of ADSCs in inductive media with various deuterium content The cell viability/cytotoxicity was assessed with FDA and PI staining ADSCs before differentiation and during differentiation in adipogenic media with various deuterium content on 3, 7 and 14 days of cultivation (Table 3). The cell viability in culture of ADSCs in growth medium before differentiation was 97.3 ± 3.4% according to FDA/PI staining. On the 3rd day of differentiation, a statistically significant decrease in viability was observed in all groups, which can be caused by apoptosis and necrosis of part of the cells upon transition to both serum-free culturing conditions and apoptosis under the influence of differentiation factors. It is important to note that in the group with a high deuterium content on the 3rd day of the adipogenesis, there was a sharp decrease in cell viability, which was probably due to the cytotoxic effect of deuterium in the differentiating microenvironment. On the 7th and 14th days in a group with a high deuterium content, gradual increase in cell viability occurred, probably due to the adaptation of cells to a high deuterium content in the microenvironment. Table 3 The viability values of ADSCs cultured in media with various deuterium content after 7 and 14 days of cultivation (Control – differentiation medium was made on the base of milliQ water; ddw – differentiation medium was made on the base of deuterium-depleted water; D2O – differentiation medium was made on the base of deuterated water) (FDA/PI assay). Metabolic activity of ADSCs in inductive media with various deuterium content Alamar Blue assay was performed on 3rd, 7th and 14th day of ADSCs differentiation in adipogenic media with various deuterium content (Fig. 6). The metabolic activity of undifferentiated ADSCs cultured in growth medium was set at 100%, and their changes are given as percentage of controls. Diagrams represent results of the Alamar Blue assay (Metabolic activity) in media with various deuterium content after 0, 3, 7 and 14 days. Control – differentiation medium was made on the base of milliQ water; ddw – differentiation medium was made on the base of deuterium-depleted water; D2O – differentiation medium was made on the base of deuterated water. (The results are expressed as mean ± SD (n = 6), significant differences between groups: *p < 0.05 compared to control group). A decrease in metabolic activity in all groups on the 3rd day of adipogenesis was observed, what may be both due to switching the cells from growth (proliferation) processes to differentiation, and to adaptation to serum-free conditions. Subsequently, in groups with a natural and low deuterium content, an increase in metabolic activity occurred, especially pronounced in the group with low deuterium content. This fact may reflect the differentiation of ADSCs in brown/beige adipocytes when using our cocktail of differentiating factors in a microenvironment with low deuterium content. Importantly, metabolic activity was reduced in the group with a high deuterium content compared to the group with a natural and low deuterium content during the entire adipogenesis. 
This correlates with the morphological characteristics and gene expression data and can probably be explained by the cytotoxic effect of deuterium at the induction/commitment stage and its inhibitory effect on the differentiation of ADSCs. Obesity, metabolic syndrome and T2DM became a global public health problem in the 21st century. One of the key links in the pathogenesis of these diseases is adipose tissue. In the last 10 years, the existence of active thermogenic beige adipose tissue in adult humans was discovered. The possibility of in vivo conversion of subcutaneous white adipocytes into beige adipocytes was subsequently shown. Taken together, these facts, in addition to upcoming pharmacological strategies for inhibiting the development of adipose tissue, can also be considered a promising approach for the prevention and treatment of obesity, metabolic syndrome and T2DM through the de novo formation of beige adipose tissue and/or the conversion of existing subcutaneous white adipocytes into beige adipocytes. In our study, the directed adipogenic differentiation of ADSCs with a two-step (induction + maturation) serum-free protocol based on the use of various hormones and small molecules led to the formation of classical white adipocytes in microenvironments with natural and high deuterium content. Interestingly, under the influence of the same induction and differentiation factors, adipogenesis at low deuterium content was diverted towards the development of brown/beige adipocytes. The differentiation of ADSCs into brown/beige adipocytes in the microenvironment with low deuterium content was confirmed by morphological features, gene expression and UCP1 protein detection. Moreover, although the resulting brown/beige adipocytes produced the same spectrum of adipokines as their white adipocyte counterparts obtained by differentiation of ADSCs in microenvironments with natural and high deuterium content, there were significant quantitative differences. In general, the brown/beige adipocytes showed increased production of anti-obesity adipokines and lower production of pro-inflammatory adipokines. Thus, in addition to the formation of beige adipocytes, which are functionally useful for the prevention and treatment of obesity, metabolic syndrome and T2DM, we also observed the secretion of a protective spectrum of adipokines, which can provide additional benefits at the systemic level of the whole organism. Despite the differences in morphology and in the resulting type of adipocytes differentiated from ADSCs (brown/beige vs white), both the ddw and D2O media inhibited adipogenic differentiation, as confirmed by gene expression and the Oil Red O extraction assay. However, the inhibition of adipogenesis in the D2O group was drastic, which can be partially explained by acute deuterium cytotoxicity. In our previous study on the effect of deuterated and deuterium-depleted water on the proliferation and migration of ADSCs, an acute cytotoxic effect of high deuterium concentrations was also noted41. An interesting question for further research is to establish the mechanism of cell death (necrosis vs apoptosis) at high deuterium content, as well as the role of mitochondria in cell death. A comparative study of the mechanisms of cell death under conditions that promote growth (proliferation) and under conditions that promote differentiation would also be of interest. In the case of the ddw group, cell viability was not affected. 
This is in line with our previous observations as well as with other authors' reports41,52,53,54,55. At the same time, deuterium excess and depletion had opposite effects on cellular metabolism: the metabolic activity of differentiated ADSCs was increased in the ddw group and inhibited in the D2O group (Fig. 6). This also corresponds to our earlier data on the effect of low and high deuterium content on the metabolic activity of ADSCs under growth conditions41. High concentrations of deuterium in the environment can lead to inhibition of many reactions associated with the energy component of biochemical reaction chains20,22,23. This can be explained by the fact that hydrogen is the main participant in electron transfer and acts as a reducing agent in many biochemical cascades30,35,37. Therefore, the stronger bonds formed by deuterium when it replaces protium will lead to inhibition of such reactions. As for deuterium depletion, according to classical ideas about the dilution of substances56, changes in metabolic rate should be negligible at deuterium concentrations below 150 ppm. However, the observed effects are significant57. This suggests that deuterium acts as an element that is necessary as a rate regulator of biochemical reaction cascades. On the other hand, deuterium could be considered an element that affects substance chirality. This would explain the mechanism behind the changes in many system parameters resulting from different D/H ratios22,27,41. In other words, the presence of deuterium or protium in a substance leads to different pathways of subsequent reactions. Accordingly, the entire metabolism can go in different (and as yet unpredictable) directions, at different rates, depending on the presence of deuterium or protium in the initial or intermediate substance. In order to elucidate some of the mechanisms of the action of deuterium on ADSC adipogenic differentiation, we examined the mRNA expression of several adipogenesis markers. Peroxisome proliferator-activated receptor gamma (PPAR-γ or PPARG) is a master gene (master regulator) of adipogenic differentiation and encodes a type II nuclear receptor. PPARG is mainly present in adipose tissue, colon and macrophages and regulates fatty acid storage and glucose metabolism58,59,60. Here we did not observe significant changes in PPARG expression across the comparison groups at the end of differentiation, but we did observe inhibition of PPARG gene expression at the stage of adipogenesis induction in the groups with both high and low deuterium content. Moreover, by the end of the differentiation process the expression level increased, which may be explained by asynchrony of the differentiation process within the ADSC population at the single-cell level. FABP4 (fatty acid binding protein 4), or aP2 (adipocyte protein 2), is a carrier protein for fatty acids that is primarily expressed in adipocytes and macrophages61. Blocking this protein is a promising approach for treating various diseases62,63,64,65,66. According to our data, FABP4 expression was slightly reduced in both experimental groups; together with the level of PPARG gene expression, this reflected the overall delay in the process of ADSC differentiation. Leptin (LEP) is a hormone predominantly produced by adipose cells that helps to regulate energy balance by inhibiting hunger. The leptin expression level in both experimental groups was slightly lower than in the control group. This suggests that cellular metabolism, namely lipid uptake, proceeds at low intensity. 
A high concentration of LEP in the medium together with reduced expression of the leptin gene compared with the control group can be explained by different rates of leptin secretion, which, as is known from work67, is regulated independently of leptin mRNA expression due to the presence of vesicular leptin depots in adipocytes68. Moreover, since adipocytes themselves express leptin receptors69, this difference could hypothetically also be associated with differences in autocrine signaling between different adipocyte subtypes. Adiponectin (ADIPON) is a hormone that regulates glucose metabolism and fatty acid oxidation70. Some studies have shown that ADIPON is inversely correlated with body mass index71. Studies of mice with elevated levels of adiponectin showed a decrease in adipocyte differentiation and an increase in energy expenditure, which was due to mitochondrial uncoupling72. It has also been shown that adiponectin and leptin alter insulin sensitivity in mice73. The high ADIPON expression in the ddw group could be associated with a high metabolic level and high mitochondrial activity in the context of ADSC differentiation into beige adipocytes under this condition35,37,41. Another cause of high adiponectin expression could be a compensatory response to low leptin expression, as these two proteins act synergistically73,74. The low ADIPON expression in the D2O group may be associated both with general inhibition of the adipogenesis process and with deviation of the direction of differentiation towards white adipocytes, with a switch to energy storage by phosphorylation of fatty acids75. Lipoprotein lipase (LPL) is a water-soluble enzyme that hydrolyzes triglycerides in lipoproteins. It is also involved in promoting the cellular uptake of chylomicron remnants, cholesterol-rich lipoproteins and free fatty acids76,77,78. In the deuterated medium, we did not observe LPL expression on day 7 of the experiment, and it was significantly lower on day 14. This may be explained by deuterium-triggered metabolic changes leading to the formation of white adipocytes. In the ddw medium there was no difference from the control group. UCP1 (uncoupling protein 1, also known as thermogenin)79,80,81 is an uncoupling protein found in the mitochondria of brown adipose tissue (BAT) and beige adipocytes. Heat production by UCP1 in brown/beige adipose tissue occurs through the uncoupling of cellular respiration from oxidative phosphorylation, that is, rapid oxidation of nutrients with a low intensity of ATP production82. According to our data, the decreased expression of UCP1 in the deuterated medium confirms that differentiation of ADSCs under this condition proceeds towards white adipocytes. The high expression of UCP1 in the ddw medium appears to result from the differentiation of ADSCs into functional beige adipocytes. Consequently, the main mechanism of the inhibitory/stimulating effect of high/low deuterium concentrations should be sought in the operation of the respiratory chain and the mitochondrial complex itself. Taking into account the fact that ddw directs the differentiation of adipocytes towards another subtype (brown/beige vs white adipocytes), the apparent "inhibition" of adipogenesis in the ddw group compared to the control medium actually reflects differences in the formation of different adipocyte subtypes. In contrast, in the group with high deuterium content, genuine inhibition of the differentiation of white adipocytes from ADSCs occurs compared with the control. 
In our study, we first obtained data on the effect of various deuterium concentrations on the efficiency and direction (brown/beige vs white adipocytes) of adipogenic ADSCs differentiation in an in vitro model system. For the possible practical application of these results, additional studies are needed that would allow a more detailed description of the molecular mechanisms of influence of deuterium various concentrations at the cellular level, as well as studies at the body level. In further experiments on laboratory animals, it is necessary to study the effect of low concentrations of deuterium on the functioning of various fat depots in various physiological and pathological situations (stress, high-fat diet, etc.)83,84. Altogether, our data revealed the importance of D/H ratio in culture medium for directed adipogenic differentiation of human ADSCs. However, the mechanisms that explain involvement different deuterium concentration in regulation ADSCs adipogenic differentiation are not clarified and this opens a field for the future research. We have demonstrated that both deuterium depleted and deuterated inductive media inhibit adipogenic differentiation of human ADSCs compared to medium with normal deuterium content. The inhibitory effect of deuterated medium could be partially explained by its increased cytotoxicity. However, surprisingly, with the same instructional signals (hormones and small molecules), the differentiation of ADSCs in the microenvironment with a low deuterium content deviated in direction of brown/beige adipocytes development. Thus, the separate or combined use of ddw and D2O may be used in the complex treatment of obesity, metabolic syndrome and T2DM. The data used to support the findings of this study is available from the corresponding author upon request. Qatanani, M. & Lazar, M. A. Mechanisms of obesity-associated insulin resistance: many choices on the menu. Genes Dev. 21, 1443–1455 (2007). Golay, A. & Ybarra, J. Link between obesity and type 2 diabetes. Best Pract. Res. Clin. Endocrinol Metab. 19, 649–663 (2005). Novelli, E. L. B. et al. The adverse effects of a high-energy dense diet on cardiac tissue. J. Nutrition Environ. Med. 12, 287–290 (2002). Bianchini, F., Kaaks, R. & Vainiuo, H. Overweight, obesity and cancer risk. Lancet Oncology 3, 565–574 (2002). World Health Organization, WHO, Obesity: Preventing and Managing the Global Epidemic. Report of a WHO Consultation (WHO Technical Report Series 894), http://www.who.int/nutrition/publications/obesity/WHO_TRS_894/en/ (2016). Dinsa, G. D., Goryakin, Y., Fumagalli, E. & Suhrcke, M. Obesity and socioeconomic status in developing countries: a systematic review. Obes Rev. 13(11), 1067–1079, https://doi.org/10.1111/j.1467-789X.2012.01017.x (2012). Peiris, A. N., Struve, M. F., Mueller, R. A., Lee, M. B. & Kissebah, A. H. Glucose metabolism in obesity: influence of body fat distribution. J. Clin. Endocrinol. Metab. 67(4), 760–767 (1988). Halenova, T. et al. P62 plasmid can alleviate diet-induced obesity and metabolic dysfunctions. Oncotarget 8(34), 56030–56040 (2017). Halenova, T. et al. Effect of C60 fullerene nanoparticles on the diet-induced obesity in rats. Int. J. Obesity. 42, 1987–1998 (2018). Green, C. J. & Hodson, L. The influence of dietary fat on liver fat accumulation. Nutrients. 6(11), 5018–5033 (2014). Farrigan, C. & Pang, K. Obesity market overview. Nat. Rev. Drug Discov. 1(4), 257–258, https://doi.org/10.1038/nrd781 (2002). Catrysse, L. & van Loo, G. 
Adipose tissue macrophages and their polarization in health and obesity. Cell Immunol. 330, 114–119, https://doi.org/10.1016/j.cellimm.2018.03.001 (2018). Kleinendorst, L., van Haelst, M. M. & van den Akker, E. L. T. Genetics of Obesity. Exp. Suppl. 111, 419–441, https://doi.org/10.1007/978-3-030-25905-1_19 (2019). Lizcano, F. The beige adipocyte as a therapy for metabolic diseases. Int. J. Mol. Sci. 20(20), pii:E5058; https://doi.org/10.3390/ijms20205058 (2019). Article PubMed Central Google Scholar Brown, A. C. Brown adipocytes from induced pluripotent stem cells-how far have we come? (Ann NY Acad. Sci.) https://doi.org/10.1111/nyas.14257 (2019). Villarroya, J. et al. New insights into the secretory functions of brown adipose tissue. J Endocrinol. pii: JOE-19-0295.R1; https://doi.org/10.1530/JOE-19-0295 (2019). Lee, J. H. et al. The role of adipose tissue mitochondria: regulation of mitochondrial function for the treatment of metabolic diseases. Int. J. Mol. Sci. 20(19), pii:E4924; https://doi.org/10.3390/ijms20194924 (2019). Nissensohn, M., Castro-Quezada, I. & Serra-Majem, L. Beverage and water intake of healthy adults in some European countries. Int. J. Food Sci. Nutr. 64(7), 801–805, https://doi.org/10.3109/09637486.2013.801406 (2013). Bylund, J. et al. Measuring sporadic gastrointestinal illness associated with drinking water - an overview of methodologies. J. Water Health. 15(3), 321–340, https://doi.org/10.2166/wh.2017.261 (2017). Atzrodt, J., Derdau, V., William, J. & Reid, M. Deuterium- and Tritium-Labelled Compounds: Applications in the Life Sciences. Angew. Chem. Int. Ed. 57, 1758–1784 (2018). Somlyai, G. Defeating Cancer! The Biological Effects of Deuterium Depletion. (Bloomington: Author House, 2002). Syroeshkin, A. V. et al. D/H control of chemical kinetics in water solutions under low deuterium concentrations. Chem. Eng. J. 377, 119827, https://doi.org/10.1016/j.cej.2018.08.213 (2019). Robins, R. J., Remaud, G. S. & Billault, I. Natural mechanisms by which deuterium depletion occurs in specific positions in metabolites. Eur. Chem. Bull. 1(1), 39–40 (2012). Zhang, K. et al. Lack of deuterium isotope effects in the antidepressant effects of (R)-ketamine in a chronic social defeat stress model. Psychopharmacology. 235, 3177–3185, https://doi.org/10.1007/s00213-018-5017-2 (2018). Charidemou, E., Ashmore, T. & Griffin, J. L. The use of stable isotopes in the study of human pathophysiology. Int. J. Biochem. Cell Biol. 93, 102–109 (2017). Mosin, O. & Ignatov, I. Biological Influence of Deuterium on Prokaryotic and Eukaryotic cells. J. Med. Physiol. Bioph. 1, 52–72 (2014). Syroeshkin, A. V. et al. The effect of the deuterium depleted water on the biological activity of the eukaryotic cells. J. Trace Elem. Med. Biol. 50, 629–633 (2018). Strekalova, T. et al. Deuterium content of water increases depression susceptibility: The potential role of a serotonin-related mechanism. Behav. Brain Res. 277, 237–244 (2015). Dzhimak, S. S., Basov, A. A. & Baryshev, M. G. Content of Deuterium in Biological fluids and organs: influence of deuterium depleted water on D/H gradient and the process of adaptation biochemistry. Bioph. Mol. Biol. 465, 370–373 (2015). Pomytkin, I. A. & Kolesova, O. E. Relationship between Natural concentration of heavy water isotopologes and rate of H2O2 generation by mitochondria. Bul. Exp. Biol. Med. 142(5), 570–572 (2006). Krempels, K., Somlyai, I., Somlyai, G., Somlyai, I. & Somlyai, G. 
A Retrospective evaluation of the effects of deuterium depleted water consumption on 4 patients with brain metastases from lung cancer. Integr. Cancer Ther. 7, 172–181 (2008). Lajos, R. et al. A miRNAs profile evolution of triple negative breast cancer cells in the presence of a possible adjuvant therapy and senescence inducer. Journal of B.U.ON. 23(3), 692–705 (2018). Somlyai, G. et al. Pre-clinical and clinical data confirm the anticancer effect of deuterium depletion. Biomacromol J. 2(1), 1–7 (2016). Yavari, K. & Kooshesh, L. Deuterium depleted water inhibits the proliferation of human MCF7 breast cancer cell lines by inducing cell cycle arrest. Nutr. Cancer. 71, 1019–1029 (2019). Hang, M., Huynh, V. & Meyer, T. J. Colossal kinetic isotope effects in proton-coupled electron transfer. PNAS. 101(36), 13138–13141 (2004). Basov, A. A. et al. Influence of deuterium depleted water on the isotope D/H composition of liver tissue and morphological development of rats at different periods of ontogenesis. Iranian Biomed. J. 23(2), 129–141 (2019). Boros, L. G. et al. Submolecular regulation of cell transformation by deuterium depleting water exchange reactions in the tricarboxylic acid substrate cycle. Med. Hypotheses. 87, 69–74 (2016). Xie, X. & Zubarev, R. A. On the effect of planetary stable isotope compositions on growth and survival of terrestrial organisms. PLoS One. 12(1), e0169296, https://doi.org/10.1371/journal.pone.0169296 (2017). Basov, A., Fedulova, L., Baryshev, M. & Dzhimak, S. Deuterium-depleted water influence on the isotope 2H/1H regulation in body and individual adaptation. Nut. 11, 1903, https://doi.org/10.3390/nu11081903 (2019). Zlatska, A. et al. In vitro study of deuterium effect on biological properties of human cultured adipose-derived stem cells. Sci. World J. 5454367, https://doi.org/10.1155/2018/5454367 (2018). Zlatska, O. V., Zubov, D. O., Vasyliev, R. G., Syroeshkin, A. V. & Zlatskiy, I. A. Deuterium effect on proliferation and clonogenic potential of human dermal fibroblasts in vitro. Probl. Cryobiol Cryomed. 28(1), 049–053 (2018). Kamm, R., Lammerding, J. & Mofrad, M. Cellular nanomechanics (Springer Handbook of Nanotechnology) (Springer Berlin Heidelberg, 2010). Murphy, M. B., Moncivais, K. & Caplan, A. I. Mesenchymal stem cells: environmentally responsive therapeutics for regenerative medicine. Exp. Mol. Med. 45, 54, https://doi.org/10.1038/emm.2013.94 (2013). Doorn, J., Moll, G. & Le Blanc, K. Therapeutic applications of mesenchymal stromal cells: paracrine effects and potential improvements. Tissue Eng. 18(2), 101–115, https://doi.org/10.1089/ten.teb.2011.0488 (2012). Caplan, A. I. Why are MSCs therapeutic? New data: new insight. 217, 318–324, https://doi.org/10.1002/path.2469 (2009). Vasyliev, R. G. et al. Comparative analysis of biological properties of large-scale expanded adult neural crest-derived stem cells isolated from human hair follicle and skin dermis. Stem Cells Int. 9640790, https://doi.org/10.1155/2019/9640790 (2019). Bourin, P. et al. Stromal cells from the adipose tissue-derived stromal vascular fraction and culture expanded adipose tissue-derived stromal/stem cells: a joint statement of the international federation for adipose therapeutics and science (IFATS) and the international society for cellular therapy (ISCT). Cytotherapy. 15(6), 641–648, https://doi.org/10.1016/j.jcyt.2013.02.006 (2013). Prockop, D., Phinney, D. & Blundell, B. Mesenchymal stem cells: methods and protocols. Methods Mol. Biol. 449, 192 (2008). O'Brien, J., Wilson, I. 
& Orton, T. Investigation of the alamar blue (resazurin) fluorescent dye for the assessment of mammalian cell cytotoxicity. Eur. J. Biochem. 267(17), 5421–5426 (2000). Gimble, J. M. & Guilak, F. Adipose-derived adult stem cells: isolation, characterization, and differentiation potential. Cytotherapy. 5(5), 362–369, https://doi.org/10.1080/14653240310003026 (2003). Baer, P. & Geiger Adipose-derived mesenchymal stromal/stem cells: tissue localization, characterization, and heterogeneity. Stem Cells Int. 2012, 1–11, https://doi.org/10.1155/2012/812693 (2012). Lobyshev, V. N. & Kalinichenko, L. P. Isotopic effects in biological systems. (ed. Nauka) (Moscow, 1978). Cleland, W. W. The use of isotope effects to determine enzyme mechanisms. JBC. 278(52), 51975–51984 (2003). Lewis, G. N. Biology of heavy water. Science. 79(2042), 151–153 (1934). Harvey, E. N. Biological effects of heavy water. Biol Bull. 66(2), 91–96 (1934). Goncharuk, V. V., Pleteneva, T. V., Uspenskaya, E. V. & Syroeshkin, A. V. Controlled chaos: Heterogeneous catalysis. J. Water Chem. Technol. 39, 325–330 (2017). McCluney, K. E. & Sabo, J. L. Tracing water sources of terrestrial animal populations with stable isotopes: laboratory tests with crickets and spiders. PLoS One. 5, 1–11 (2010). Greene, M. E. et al. Isolation of the human peroxisome proliferator activated receptor gamma cDNA: expression in hematopoietic cells and chromosomal mapping. Gene Expression. 4(4–5), 281–299 (1995). Elbrecht, A. et al. Molecular cloning, expression and characterization of human peroxisome proliferator activated receptors gamma 1 and gamma 2. Biochem. Biophys. Res. Com. 224(2), 431–437, https://doi.org/10.1006/bbrc.1996.1044 (1996). Michalik, L. et al. International union of pharmacology. LXI. Peroxisome proliferator-activated receptors. Pharmacol. Rev. 58(4), 726–741, https://doi.org/10.1124/pr.58.4.5 (2006). Baxa, C. A. et al. Human adipocyte lipid-binding protein: purification of the protein and cloning of its complementary DNA. Biochem. 28(22), 8683–8690, https://doi.org/10.1021/bi00448a003 (1989). Furuhashi, M. et al. Treatment of diabetes and atherosclerosis by inhibiting fatty-acid-binding protein aP2. Nature. 447(7147), 959–965, https://doi.org/10.1038/nature05844 (2007). Shum, B. O. et al. The adipocyte fatty acid-binding protein aP2 is required in allergic airway inflammation. J. Clin. Invest. 116(8), 2183–2192, https://doi.org/10.1172/JCI24767 (2006). Maeda, K. et al. Adipocyte/macrophage fatty acid binding proteins control integrated metabolic responses in obesity and diabetes. Cell Metab. 1(2), 107–119, https://doi.org/10.1016/j.cmet.2004.12.008 (2005). Hao, J. et al. Circulating adipose fatty acid binding protein is a new link underlying obesity-associated breast/mammary tumor development. Cell Metab. 28(5), 689–705, https://doi.org/10.1016/j.cmet.2018.07.006 (2018). Hao, J. et al. Expression of adipocyte/macrophage fatty acid-binding protein in tumor-associated macrophages promotes breast cancer progression. Cancer Res. 78(9), 2343–2355, https://doi.org/10.1016/j.cmet.2018.07.006 (2018). Barr, V. A., Malide, D., Zarnowski, M. J., Taylor, S. I. & Cushman, S. W. Insulin stimulates both leptin secretion and production by rat white adipose tissue. Endocrinol. 138(10), 4463–4472 (1997). Ye, F., Than, A., Zhao, Y., Goh, K. H. & Chen, P. Vesicular storage, vesicle trafficking, and secretion of leptin and resistin: the similarities, differences, and interplays. J. Endocrinol. 206(1), 27–36 (2010). Bornstein, S. R. et al. 
Immunohistochemical and ultrastructural localization of leptin and leptin receptor in human white adipose tissue and differentiating human adipose cells in primary culture. Diabetes. 49(4), 532–538 (2000). Díez, J. J. & Iglesias, P. The role of the novel adipocyte-derived hormone adiponectin in human disease. Europ. J. Endocrinol. 148(3), 293–300, https://doi.org/10.1530/eje.0.1480293 (2003). Ukkola, O. & Santaniemi, M. Adiponectin: a link between excess adiposity and associated comorbidities? J. Mol. Med. 80(11), 696–702, https://doi.org/10.1007/s00109-002-0378-7 (2002). Bauche, I. B. et al. Overexpression of adiponectin targeted to adipose tissue in transgenic mice: impaired adipocyte differentiation. Endocrinol. 148(4), 1539–1549, https://doi.org/10.1210/en.2006-0838 (2007). Yamauchi, T. et al. The fat-derived hormone adiponectin reverses insulin resistance associated with both lipoatrophy and obesity. Nature Medicine. 7(8), 941–946, https://doi.org/10.1038/90984 (2001). Nedvídková, J., Smitka, K., Kopský, V. & Hainer, V. Adiponectin, an adipocyte-derived protein. Physiological Res. 54(2), 133–140 (2005). Liu, M. & Liu, F. Up- and down-regulation of adiponectin expression and multimerization: mechanisms and therapeutic implication. Biochimie. 94(10), 2126–2130, https://doi.org/10.1016/j.biochi.2012.01.008 (2012). Mead, J. R., Irvine, S. A. & Ramji, D. P. Lipoprotein lipase: structure, function, regulation, and role in disease. J. Mol. Med. 80(12), 753–769, https://doi.org/10.1007/s00109-002-0384-9 (2002). Rinninger, F. et al. Lipoprotein lipase mediates an increase in the selective uptake of high density lipoprotein-associated cholesteryl esters by hepatic cells in culture. J. Lipid Res. 39(7), 1335–1348 (1998). Ma, Y. et al. Mutagenesis in four candidate heparin binding regions (residues 279-282, 291-304, 390-393, and 439-448) and identification of residues affecting heparin binding of human lipoprotein lipase. J. Lipid Res. 35(11), 2049–2059 (1994). Crichton, P. G., Lee, Y. & Kunji, E. R. The molecular features of uncoupling protein 1 support a conventional mitochondrial carrier-like mechanism. Biochimie. 134, 35–50, https://doi.org/10.1016/j.biochi.2016.12.016 (2017). Zhao, L. et al. Specific interaction of the human mitochondrial uncoupling protein 1 with free long-chain fatty acid. Structure. 25(9), 1371–1379.e3, https://doi.org/10.1016/j.str.2017.07.005 (2017). Chathoth, S. et al. Association of uncoupling protein 1 (UCP1) gene polymorphism with obesity: a case-control study. BMC Med Genet. 19(1), 203, https://doi.org/10.1186/s12881-018-0715-5 (2018). Kozak, L. P. & Anunciado-Koza, R. UCP1: its involvement and utility in obesity. Int. J. Obesity. 32(Suppl 7), S32–8, https://doi.org/10.1038/ijo.2008.236 (2008). Basov, A., Fedulova, L., Vasilevskaya, E. & Dzhimak, S. Possible mechanisms of biological effects observed in living systems during 2H/1H isotope fractionation and deuterium interactions with other biogenic isotopes. Molecules. 24, 4101, https://doi.org/10.3390/molecules24224101 (2019). Article CAS PubMed Central Google Scholar Halenova, T., Zlatskiy, I., Syroeshkin, A., Maximova, T. & Pleteneva, T. Deuterium-depleted water as adjuvant therapeutic agent for treatment of diet-induced obesity in rats. Molecules. 25, 23, https://doi.org/10.3390/molecules25010023 (2020). The publication has been prepared with the support of the "RUDN University Program 5–100". 
\begin{document} \title{Cross-encoded quantum key distribution exploiting time-bin and polarization states with qubit-based synchronization} \author{Davide Scalcon} \thanks{These authors contributed equally to this work.} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \author{Costantino Agnesi} \thanks{These authors contributed equally to this work.} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \author{Marco Avesani} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \author{Luca Calderaro} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \affiliation{ ThinkQuantum S.r.l., Via della Tecnica, 85, IT-36030 Sarcedo (VI), Italy\\ } \author{Giulio Foletto} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \author{Andrea Stanco} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \author{Giuseppe Vallone} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \affiliation{ Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, via Marzolo 8, 35131 Padova, Italy\\ } \affiliation{ Padua Quantum Technologies Research Center, Università degli Studi di Padova\\ } \author{Paolo Villoresi} \email{[email protected]} \affiliation{ Dipartimento di Ingegneria dell'Informazione, Universit\`a degli Studi di Padova, via Gradenigo 6B, IT-35131 Padova, Italy\\ } \affiliation{ Padua Quantum Technologies Research Center, Università degli Studi di Padova\\ } \date{\today} \begin{abstract} Robust implementation of quantum key distribution requires precise state generation and measurement, as well as transmission that is resistant to channel disturbances. However, the choice of the optimal encoding scheme is not trivial and depends on external factors such as the quantum channel. In fact, stable and low-error encoders are available for polarization encoding, suitable for free-space channels, whereas time-bin encoding represents a good candidate for fiber-optic channels, as birefringence does not perturb this kind of state. Here we present a cross-encoded scheme where high-accuracy quantum states are prepared through a self-compensating, calibration-free polarization modulator and transmitted using a polarization-to-time-bin converter. A hybrid receiver performs both time-of-arrival and polarization measurements to decode the quantum states and successfully enabled transmission over a 50 km fiber spool without disturbances. Temporal synchronization between the two parties is performed with a qubit-based method that does not require additional hardware to share a clock reference. The system was tested in a 12-hour run and demonstrated good and stable performance in terms of key and quantum bit error rates. The flexibility of our approach represents an important step towards the development of hybrid networks with both fiber-optic and free-space links. 
\end{abstract} \maketitle \section{\label{Section:Introduction}Introduction} Advancements in our ability to detect and manipulate single quantum objects have led to the development of quantum technologies with disruptive potential in many different areas, including computing, sensors, simulations, cryptography, and telecommunications. One of the most mature among quantum technologies is quantum key distribution (QKD), which allows distant users to generate a shared secret key with unconditional security. QKD is characterized by a consolidated composable security framework~\cite{Renner2005,Scarani2008} and by rapid and continuous technical advancements~\cite{Pirandola2019rev}. In fact, several QKD field trials are being performed to demonstrate the real-world applicability of this technology~\cite{Dynes2019, Bacco2019, Avesani:21,Chen2021} and several start-ups and university spin-offs are being created to meet the growing market demand. The most commonly used QKD protocol is the first one ever introduced, \textit{i.e.}, the BB84 protocol~\cite{Bennett2014_BB84}. It requires a transmitter, Alice, to send qubits encoded in two mutually unbiased bases. Then, a receiver, Bob, chooses a measurement basis for each received qubit and performs projective measurements. After correlating their results and performing classical post-processing, Alice and Bob end up with identical keys that can be securely used in cryptographic schemes such as the one-time pad. The effectiveness of BB84 implementations depends on the choice of the photonic degree of freedom that encodes the qubits. Common choices are the polarization and time-bin degrees of freedom. Polarization is usually preferred for free-space QKD implementations~\cite{Gong2018,Ko2018,QCosone}, and has even been exploited for satellite-based QKD links~\cite{Liao2018}. There are three main factors that encourage the use of polarization encoding for free-space links. The first factor is that atmospheric transmission does not change the polarization state of the transmitted qubits~\cite{Bonato:06}. This allows Alice and Bob to share a polarization reference frame that remains stable and eliminates the need for active components to compensate for the unitary transformation introduced by the quantum channel. The second factor is that polarization encoders with long-term temporal stability and low intrinsic quantum bit error rate (QBER) can be designed and developed. In fact, the POGNAC polarization encoder, with an average QBER of 0.05\%, has the lowest intrinsic QBER reported in the scientific literature~\cite{Agnesi2020}, while the iPOGNAC~\footnote{The iPOGNAC is the object of the Italian Patent No. 102019000019373 filed on 21.10.2019 as well as of the \href{https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2021078723}{International Patent Application no. PCT/EP2020/079471} filed on 20.10.2020.} has demonstrated a stable polarization output for over 24 hours~\cite{Avesani2020}. The third factor is that polarization receivers can be easily constructed with inexpensive optical components such as polarization beam splitters (PBS), half-wave plates (HWP) and quarter-wave plates (QWP) that guarantee high extinction ratios and stable performance over time. Unfortunately, polarization encoding has some drawbacks when propagating through a fiber channel. This is mainly due to the random changes of the fiber birefringence introduced by ambient conditions and mechanical stress. This causes a random rotation of the polarization and, as a consequence, increases the QBER. 
In turn, it lowers the secret key rate (SKR) to the point where no secure quantum key can be established~\cite{Ding2017}. To prevent this, a polarization compensation system becomes essential. To make QKD performance independent of the polarization fluctuations of the optical fiber, time-bin encoding was introduced, as it exploits the time of arrival of photons and the relative phase between time bins~\cite{Bennett1992}. This encoding has been employed in many QKD field trials in deployed fibers~\cite{Dynes2019, Bacco2019}, as well as in the record-setting 421 km fiber QKD link demonstration of the BB84 protocol~\cite{Boaron2018}. However, time-bin encoding has the disadvantage of requiring phase stabilization of the interferometers which encode and decode the superposition of time bins~\cite{Makarov:04}. In this work, we present a cross-encoded implementation of the BB84 QKD protocol where polarization is used for state encoding while time-bin is used to propagate the qubits along a quantum channel composed of a 50 km-long fiber spool. The iPOGNAC polarization encoder is used to generate the states required to perform QKD, which guarantees long-term temporal stability and low intrinsic QBER. The polarization encoding is then transformed to time-bin encoding to guarantee that the birefringence of the fiber-optic channel does not modify the quantum information. Quantum state decoding is achieved with a hybrid QKD receiver that performs both time-of-arrival and polarization measurements. In addition, temporal synchronization between the transmitter and the receiver is established using the qubit-based Qubit4Sync method~\cite{Calderaro2020}, without requiring supplementary hardware with respect to what is already needed for the quantum communication. Our work enables the implementation of flexible QKD systems that can convert the qubit encoding to best fit the characteristics of the quantum channel and represents a step towards the development of hybrid QKD networks where both fiber and free-space links are employed. \begin{figure*} \caption{Experimental setup. BS: beam splitter, FAB-BS: fast-axis-blocking BS, PBS: polarization beam splitter, $\phi$-mod: phase modulator, H/QWP: half/quarter-wave plate, VOA: variable optical attenuator, PC: polarization controller, TDC: Time-to-Digital Converter, SPD: single photon detector. Single mode fibers are in yellow, polarization maintaining fibers are in blue. } \label{fig:setup} \label{fig:Setup} \end{figure*} \section{\label{Section:Setup}Experimental setup} Our cross-encoded polarization and time-bin implementation of the three-state and one-decoy efficient BB84 protocol~\cite{Grunenfelder2018} is sketched in Fig.~\ref{fig:Setup} with the transmitter, Alice, on the left and the receiver, Bob, on the right. \subsection{\label{Subsection:transmitter}Transmitter} The laser source used at the transmitter is a gain-switched distributed feedback 1550 nm laser (Eblana EP1550-0-DM-H16-FM), emitting $\sim$100 ps FWHM pulses at $R = 50 \ \rm{MHz}$ repetition rate. The state of these light pulses is then modulated by an encoder composed of three sections: an intensity modulator, a polarization encoder and a polarization-to-time-bin conversion stage. The intensity modulator is based on a fiber-optic Sagnac loop and includes a 70:30 beamsplitter (BS), a lithium-niobate phase modulator (iXBlue MPZ-LN-10), and a $1$ m-long delay line~\cite{Roberts2018}. 
This scheme implements the decoy state method with one decoy by setting two possible mean photon numbers (signal $\mu = 0.60$ and decoy $\nu = 0.18$) for the transmitted pulses. These parameters are chosen in such a way that their ratio is $\mu/\nu \approx 3.33$ and the decoy intensity is sent with $P_\nu = 30\%$ probability ($P_\mu = 70\%$). The second section, the iPOGNAC~\cite{Avesani2020}, is used to modulate the polarization state of the light. The iPOGNAC offers fast polarization modulation with long-term stability and a low intrinsic error rate and, contrary to previous solutions, generates predetermined polarization states with a fixed reference frame in free space. Moreover, it has also been tested in a field trial in an urban environment~\cite{Avesani:21}. This polarization encoder relies on an unbalanced Sagnac interferometer containing a lithium-niobate phase modulator, with the BS replaced by a fiber-based PBS with polarization-maintaining (PM) optical fiber input and outputs. A free-space segment (Thorlabs FiberBench), composed of a BS and a HWP, ensures that the light entering the loop has the diagonal state of polarization (SOP) $\ket{D} = \left( \ket{H} + \ket{V} \right ) / \sqrt{2}$. Hence, the light is equally split into the clockwise (CW) and counterclockwise (CCW) modes of the loop. Thanks to the asymmetry of the interferometer, by properly setting the voltage and the timing of the pulses driving the phase modulator, one can control the SOP exiting the device as follows: \begin{equation} \ket{\Phi_{\mathrm{out}}^{\phi_{\mathrm{CW}} , \phi_{\mathrm{CCW}}}} = \frac{1}{\sqrt{2}} \left ( \ket{H} + e^{i (\phi_{\mathrm{CW}} - \phi_{\mathrm{CCW}})} \ket{V} \right ) \end{equation} where $\phi_{\mathrm{CW}}$ and $\phi_{\mathrm{CCW}}$ are the phases applied by the phase modulator to the CW and CCW propagating light pulses. In this experiment, the driving electric pulse amplitude was set to induce a phase shift of $\pi/2$ radians, allowing the iPOGNAC to generate circular left $\ket{L} = \left( \ket{H} + i \ket{V} \right ) / \sqrt{2} $, circular right $\ket{R} = \left( \ket{H} - i \ket{V} \right ) / \sqrt{2} $ or diagonal $\ket{D}$ polarized light. Before the light is coupled again into a PM optical fiber, a QWP and a HWP transform the circular left and right SOPs into horizontal $\ket{H}$ and vertical $\ket{V}$ SOPs. Such a transformation is possible thanks to the iPOGNAC's long-term stability and its ability to generate polarization states with a fixed reference frame. Finally, the transformation of polarization encoding to time-bin is performed. This is done by a PM fiber-based unbalanced Mach-Zehnder interferometer (UMZI) where the input element is a PBS, which maps horizontal and vertical components of the light into the early and late time slots of the two-dimensional time-bin encoding \begin{equation} \alpha\ket{H} + \beta\ket{V} \longrightarrow \alpha\ket{E} + e^{i\phi_\mathrm{A}}\beta\ket{L} \end{equation} where $\phi_\mathrm{A}$ is the intrinsic phase of Alice's UMZI. The imbalance of the UMZI is approximately 2.5 ns, obtained with a 0.5 m-long PM fiber delay. The scheme is thus able to generate the early $\ket{E}$, late $\ket{L}$ time-bin states and their superposition $\ket{+} = \left (\ket{E} + e^{i\phi_\mathrm{A}} \ket{L} \right ) / \sqrt{2}$. 
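As an illustration of the encoding chain described above, the following Python sketch (not the control code of the experiment) traces the Jones-vector algebra: the iPOGNAC output state for the three phase settings, the waveplate mapping of the circular states onto $\ket{H}$ and $\ket{V}$, and the PBS-based conversion of the polarization amplitudes onto the early and late time bins. A single QWP at $45^\circ$ is assumed here for the circular-to-linear conversion; the actual setup uses a QWP and a HWP whose orientations are not specified in the text.
\begin{verbatim}
import numpy as np

# Jones vectors in the {H, V} basis
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

def ipognac_output(phi_cw, phi_ccw):
    """Output SOP of the iPOGNAC: (|H> + exp(i(phi_cw-phi_ccw)) |V>)/sqrt(2)."""
    return (H + np.exp(1j * (phi_cw - phi_ccw)) * V) / np.sqrt(2)

def qwp(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1, 1j]) @ R.T

def same_state(a, b):
    """True if the two pure states coincide up to a global phase."""
    return np.isclose(abs(np.vdot(a, b)), 1.0)

# Phase settings used by the transmitter and the states they should produce
settings = {"D": 0.0, "L": np.pi / 2, "R": -np.pi / 2}
targets = {"D": (H + V) / np.sqrt(2),
           "L": (H + 1j * V) / np.sqrt(2),
           "R": (H - 1j * V) / np.sqrt(2)}

U = qwp(np.pi / 4)   # assumed circular-to-linear conversion (QWP at 45 deg)
after_plate = {}
for name, dphi in settings.items():
    sop = ipognac_output(dphi, 0.0)
    assert same_state(sop, targets[name])
    after_plate[name] = U @ sop

assert same_state(after_plate["L"], H)                     # |L> -> |H>
assert same_state(after_plate["R"], V)                     # |R> -> |V>
assert same_state(after_plate["D"], (H + V) / np.sqrt(2))  # |D> unchanged

def to_time_bin(sop, phi_A=0.0):
    """Polarization-to-time-bin conversion: the PBS routes the H (V)
    amplitude to the short (long) arm, i.e. to the early (late) bin."""
    alpha, beta = sop
    return np.array([alpha, np.exp(1j * phi_A) * beta])    # (early, late)

print(to_time_bin(after_plate["L"]))   # proportional to (1, 0): the |E> state
print(to_time_bin(after_plate["D"]))   # proportional to (1, 1)/sqrt(2): |+>
\end{verbatim}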
These states are sufficient to implement the 3-state efficient BB84 protocol~\cite{Fung2006} where the key generating basis $\mathcal{Z} = \{\ket{E} , \ket{L}\}$ is sent with 90\% probability and the control state $\ket{+}$ is sent with 10\% probability. The time-bin encoded signals are then attenuated down to the single-photon regime by a variable optical attenuator and then sent through the quantum channel. It is important to note that after the conversion to time-bin, the polarization degree of freedom contains no information as all the light exiting the UMZI shares the same SOP. This is guaranteed by two factors. First, by design, the fiber-based PBS couples the orthogonal polarization modes into the slow axis of the PM fiber outputs. Second, the BS used to recombine the two arms of the UMZI is a fast-axis blocking (FAB) device. FAB devices have the characteristic of discarding polarization states of the light that are aligned to the fast axis of the PM fiber, as if embedded with polarizers at both ends. The whole system is managed by a computer, which performs resource-intensive tasks related to the protocol and handles classical communication. The electronic signals driving the laser and the modulators are controlled by a system-on-a-chip (SoC) which includes both a field programmable gate array (FPGA) and a CPU~\cite{Stanco2021} and is integrated on a dedicated board (Zedboard by Avnet). \subsection{Receiver} At the receiver side, the measurement basis is randomly selected by a 50:50 BS. One of the output ports is sent directly to a superconducting nanowire single photon detector (SNSPD) with approximately $80 \%$ quantum efficiency (ID281 by ID Quantique). The overall time jitter of about $30$~ps, considering both the detector and the time-to-digital converter (quTAG by Qutools), allows the discrimination between the 2.5-ns-distant time-bins, effectively performing a measurement on the key generation basis as depicted in the upper half of Fig.~\ref{fig:interference}. This time-of-arrival measurement has the advantage of being independent of the polarization fluctuations introduced by the fiber-optic channel, and does not require active compensation. \begin{figure} \caption{Input and output state from the receiver's unbalanced Mach-Zehnder interferometer. Blue and red curves represent the two possible times of emission at the transmitter. The two lateral peaks correspond to a measurement in the key generating basis while the central peak is used to extract information on the control basis via a polarization measurement.} \label{fig:interference} \end{figure} The other output port of the basis-selection BS is sent to a UMZI that is identical to the one used at the transmitter. However, in this case the light is split equally between the two arms by the BS before being recombined by the PBS. Used in this way, the UMZI outputs horizontal or vertical SOPs depending on which arm the light has traveled through. Furthermore, as depicted in the lower half of Fig.~\ref{fig:interference}, the imbalance of the UMZI temporally distributes the light in the three-peak configuration often observed in time-bin experiments. 
Correspondingly, the output state from Bob's UMZI is \begin{equation} \ket{\Psi_E} = \frac{1}{\sqrt{2}} \left ( \ket{EE}\otimes\ket{V} + e^{i\phi_\mathrm{B}} \ket{EL}\otimes\ket{H} \right ) \end{equation} when Alice transmits $\ket{E}$, \begin{equation} \ket{\Psi_L} = \frac{1}{\sqrt{2}} \left ( \ket{LE}\otimes\ket{V} + e^{i\phi_\mathrm{B}} \ket{LL}\otimes\ket{H} \right ) \end{equation} when Alice transmits $\ket{L}$, and \begin{equation} \label{eq1} \begin{split} \ket{\Psi_+} = \frac{1}{2} ( & \ket{EE}\otimes\ket{V} + e^{i\phi_\mathrm{B}} \ket{EL}\otimes\ket{H} \\ & + e^{i\phi_\mathrm{A}} \ket{LE}\otimes\ket{V} + e^{i\left( \phi_\mathrm{A}+\phi_\mathrm{B}\right)} \ket{LL}\otimes\ket{H} ) \end{split} \end{equation} when Alice transmits $\ket{+}$, where $\phi_\mathrm{B}$ is the intrinsic phase of Bob's UMZI. The lateral peaks $\ket{EE}$ and $\ket{LL}$ correspond to light traveling along the short or long arms of both the transmitter's and the receiver's UMZIs, and since those times of arrival constitute a measurement in the $\mathcal{Z}$ basis, they are used to generate the secret key. Since 50\% of the light falls in these lateral peaks, by taking into account both outputs of the FAB-BS, the overall probability of measuring in the key generation basis is 75\%. Only the central peak contains the superposition between the indistinguishable early-late $\ket{EL}$ and late-early $ \ket{LE}$ components, and the relative phase information between them is encoded in the polarization state of the light. In fact, the output SOP of the central peak, when $\ket{+}$ is transmitted by Alice, is \begin{equation} \ket{\psi} = \frac{1}{\sqrt{2}} \left ( \ket{H} + e^{i \theta} \ket{V} \right ) \end{equation} where $\theta = \phi_\mathrm{A} - \phi_\mathrm{B}$ is the phase difference between Alice's and Bob's UMZIs. An all-fiber electronic polarization controller (PC) composed of four piezoelectric actuators (EPC-400 by OZ Optics) is then used to transform the polarization state $\ket{\psi}$ into the $\ket{D}$ state, which is then projected onto the $\{\ket{D}, \ket{A} = \left( \ket{H} - \ket{V} \right ) / \sqrt{2} \}$ basis. This projection is performed using a fiber PBS while the light signals are detected by two SNSPDs. Alternatively, a free-space setup with a liquid crystal, or a phase modulator with its fiber rotated by 45 degrees~\cite{Duplinskiy2017} could be used instead of the PC. These solutions give the advantage of a simpler control scheme, due to the presence of a single degree of freedom, but would increase the losses at the receiver. Contrary to the key generation basis, where no compensation is necessary, to perform the measurement in the control basis we need to actively compensate for drifts of the relative phase shift $\theta$ between the two interferometers. This is done by acting on the PC in front of the measurement PBS. A coordinate descent algorithm~\cite{Wright2015} is used to minimize the measured QBER = $N_{A}/(N_D + N_{A})$ by controlling the state of the PC (labeled as Measure PC in Fig.~\ref{fig:setup}), where $N_D$ ($N_{A}$) is the number of counts in the detector associated with $\ket{D}$ ($\ket{A}$). This algorithm, described in~\cite{Agnesi2020}, was developed for polarization tracking in polarization-encoded fiber links, and was tested in an urban QKD field trial~\cite{Avesani:21}. It starts operating without interrupting the QKD when the QBER exceeds 1\%, and stops when it becomes smaller than 1\%. 
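The following Python sketch illustrates the kind of coordinate-descent loop described above. The function measure\_qber() is a placeholder for the real-time estimate obtained from the counts $N_D$ and $N_A$, the four variables stand for the drive voltages of the piezoelectric actuators on a normalized range, and the interface of the actual controller, which follows Ref.~\cite{Agnesi2020}, is not reproduced here; this is only a schematic sketch of the logic, not the software used in the experiment.
\begin{verbatim}
import numpy as np

def coordinate_descent_sweep(voltages, measure_qber, step=0.05,
                             v_min=0.0, v_max=1.0):
    """One sweep of coordinate descent over the polarization-controller
    channels: each voltage is nudged up/down and the move is kept only if
    the measured QBER improves."""
    best = measure_qber(voltages)
    for i in range(len(voltages)):
        for delta in (+step, -step):
            trial = voltages.copy()
            trial[i] = np.clip(trial[i] + delta, v_min, v_max)
            qber = measure_qber(trial)
            if qber < best:
                voltages, best = trial, qber
                break
    return voltages, best

def track(voltages, measure_qber, threshold=0.01, max_sweeps=50):
    """Run sweeps only while the QBER exceeds the activation threshold,
    mimicking the start/stop behaviour described in the text."""
    qber = measure_qber(voltages)
    sweeps = 0
    while qber > threshold and sweeps < max_sweeps:
        voltages, qber = coordinate_descent_sweep(voltages, measure_qber)
        sweeps += 1
    return voltages, qber

# Toy stand-in for the measured QBER: minimum at some unknown setting.
rng = np.random.default_rng(0)
target = rng.uniform(0.2, 0.8, size=4)
def measure_qber(v):
    return 0.5 * np.sum((np.asarray(v) - target) ** 2) + 0.001

v_opt, qber = track(np.full(4, 0.5), measure_qber)
print(f"final simulated QBER: {qber:.4f}")
\end{verbatim}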
In our implementation the QBER is calculated rapidly by exploiting a public string of states, known to both Bob and Alice, that is interleaved with the exchange of secret qubits. The ratio between public and secret states is 4 to 36. However, it is important to consider that compensation in the control basis can be done without sharing any public string since the standard basis reconciliation procedure would reveal all the necessary information to estimate the QBER. This approach would have the advantage of dedicating 100\% of the time to QKD, but could be prone to some latency due to the classical communication between Alice and Bob. We used this hybrid time-bin-to-polarization scheme in the receiver to decouple the interferometer from the phase compensation scheme. In fact, the phase tracking is often performed by acting on the interferometer itself, using devices like fiber stretchers~\cite{Boaron2018} or phase modulators~\cite{Wang2018} inserted in one of the optical paths. Here, instead, the interferometer is completely passive and enclosed in a box that improves its isolation from the environment. A drawback of this approach is that the polarization state at the entrance of the conversion stage must be fixed and known, so that the light exits through the correct port of the closing PBS. By manipulating the SOP in the channel, Eve could, in principle, prevent Bob from measuring the states she attacked in the control basis, thus gaining information on the key without increasing the QBER. To avoid this, in our implementation, the basis-selection BS is FAB, meaning that only the slow-axis polarized light is measured in either basis. In this way, Eve can no longer control the detection probability in each basis, but only the global one: if she modifies the polarization, the states do not contribute to the key and she gains no information. This closes the security loophole but introduces some losses at the receiver, as polarization fluctuations of the input light cause variations in the detection rate. To mitigate this effect, another PC (labeled as Channel PC in Fig.~\ref{fig:setup}) is placed in front of the receiver. This element maximizes the total detection rate in real time with a coordinate descent algorithm that uses only Bob's local data, without requiring any communication with the transmitter. This PC is not involved in the measurement procedure but is only a countermeasure against possible degradation of the count rate due to polarization fluctuations. The temporal synchronization is achieved using the Qubit4Sync algorithm~\cite{Calderaro2020}. This implies that the two parties do not need a shared clock reference such as a pulsed laser~\cite{Boaron2018,Bacco2019, Dynes2019}. Alice's clock is recovered by Bob using only the times of arrival of the qubits, while the absolute time is recovered by sending an initial public string encoded in the first $10^6$ states of the QKD transmission. The Qubit4Sync algorithm was originally developed to work with polarization-based QKD systems, making this work the first implementation of the technique for time-bin encoded systems. \section{\label{Section:Results}Results} To test the performance of the developed cross-encoded QKD system, we performed a 12-hour-long QKD run exploiting a quantum channel that consisted of a 50 km spool of single-mode optical fiber (SM G.652.D) with 0.2 dB/km attenuation and 10 dB of additional attenuation. A summary of the main results obtained in this experiment can be found in Table~\ref{tab:table1}. 
\begin{table}[h] \caption{\label{tab:table1} Experimental results of the cross-encoded QKD system during the 12-hour run.} \begin{ruledtabular} \begin{tabular}{l|cc} \textrm{Parameter} & \textrm{Mean value} & \textrm{Standard deviation} \\ \colrule QBER $\mathcal{Z}$ [\%] & 0.76 & 0.08 \\ QBER $\ket{+}$ [\%] & 0.79 & 0.65 \\ SKR [kbps] & 16.0 & 1.6 \\ $R_{\mathrm{det}}$ [kHz] & 80.0 & 4.8 \\ \end{tabular} \end{ruledtabular} \end{table} The mean detection rate $R_{\mathrm{det}}$ was approximately $80\cdot10^{3}$ events per second. Considering that on average the source emitted $(\mu P_\mu + \nu P_\nu)\cdot R = 23.7\cdot10^{6}$ photons per second, the measured total losses were approximately 25 dB. The channel contribution to these losses is about 20 dB, while the remaining 5 dB can be attributed to detector efficiencies, insertion losses of optical components, and fiber mating sleeves. \begin{figure} \caption{ Temporal evolution of the quantum bit error rate (QBER) of the key generating basis and of the control state measured every 80 seconds. The averages are 0.765\% and 0.792\% for the key generating basis and control state respectively.} \label{fig:QBER} \end{figure} \begin{figure} \caption{ Histogram of the distribution of the quantum bit error rate (QBER) of the key generating basis and of the control state. } \label{fig:QBER_DISTR} \end{figure} The temporal evolution of the QBER on the key generation basis and on the control state is reported in Fig.~\ref{fig:QBER}, while in Fig.~\ref{fig:QBER_DISTR} their distribution is reported. The $\mathcal{Z}$ basis QBER averages 0.765\% and remains stable throughout the whole experimental run, with a standard deviation of 0.078\%. The control basis QBER takes greater values, with an average of 0.792\%, and is spread over a wider range, with a standard deviation of 0.651\%. Furthermore, it can be observed that the $\mathcal{Z}$ basis QBER is $ \leq 1\%$ for more than 99.8\% of the time without any compensation, while the control state QBER is $ \leq 1\%$ for 81\% of the time, and $ \leq 2.5\%$ for 99.2\% of the time. These results certify the stability of our system and its capacity to correct the phase drifts of the UMZIs. The $\mathcal{Z}$ basis QBER stability is inherited from the characteristics of the iPOGNAC polarization modulator used to encode the qubit states, as well as from the resistance of time-bin encoding to fluctuations. This also demonstrates the robustness of the Qubit4Sync temporal synchronization method, which enabled highly accurate time-of-arrival measurements. On the other hand, fluctuations are observed for the control state QBER, mainly caused by phase drifts of the UMZIs. However, our polarization tracking techniques effectively compensated these drifts, without ever interrupting the QKD. 
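For convenience, the loss budget quoted above can be reproduced with a few lines of Python using only the nominal parameters reported in the text; the receiver contribution is simply obtained as the remainder of the total, so this is a consistency check rather than an independent measurement.
\begin{verbatim}
import math

R = 50e6              # pulse repetition rate [Hz]
mu, nu = 0.60, 0.18   # signal / decoy mean photon numbers
P_mu, P_nu = 0.70, 0.30

emitted = (mu * P_mu + nu * P_nu) * R   # photons per second at Alice (~2.37e7)
detected = 80e3                         # mean detection rate at Bob [Hz]

total_loss_dB = 10 * math.log10(emitted / detected)   # ~24.7 dB (~25 dB)
channel_loss_dB = 50 * 0.2 + 10                       # 50 km at 0.2 dB/km + 10 dB
receiver_loss_dB = total_loss_dB - channel_loss_dB    # ~4.7 dB (~5 dB)

print(f"emitted photon rate : {emitted:.3g} /s")
print(f"total loss          : {total_loss_dB:.1f} dB")
print(f"channel loss        : {channel_loss_dB:.1f} dB")
print(f"receiver + detectors: {receiver_loss_dB:.1f} dB")
\end{verbatim}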
The post-processing uses a modified version of the AIT QKD R10 software suite~\cite{AIT}, following the finite-size analysis of Ref.~\cite{Rusca2018_APL}: \begin{equation} \mathrm{SKR} = \frac{1}{t}\left[s_{0} + s_{1}(1 - h(\phi_\mathcal{Z})) - \lambda_{\rm EC} -\lambda_{\rm c} - \lambda_{\rm sec}\right] \label{eq:skr} \end{equation} where the terms $s_{0}$ and $s_{1}$ are the lower bounds on the number of vacuum and single-photon detection events in the key generating $\mathcal{Z}$ basis, $\phi_\mathcal{Z}$ is the upper bound on the phase error rate in the $\mathcal{Z}$ basis corresponding to single-photon pulses, $h(\cdot)$ is the binary entropy, $\lambda_{\rm EC}$ and $\lambda_{\rm c}$ are the number of bits published during the error correction and confirmation of correctness steps, $\lambda_{\rm sec} = 6 \log_2(\frac{19}{\epsilon_{\rm sec}})$ with $\epsilon_{\rm sec}= 10^{-10}$ is the security parameter associated with the secrecy analysis, and finally $t$ is the duration of the quantum transmission phase. Equation \eqref{eq:skr} is applied to $4\cdot 10^6$-bit-long key blocks, a value that was chosen to produce new secret keys at a rapid pace, approximately every 80 seconds. Increasing this value by a factor of 10 would have improved the SKR by about 20\%, at the cost of a much higher delay between the beginning of the experiment and the production of the first key. The SKR obtained during the experiment is shown in Fig.~\ref{fig:SKR}. It can be observed that our cross-encoded QKD system successfully generated secure keys without interruptions throughout the 12 hours of the experimental run and achieved an average SKR of around 16 kbps. This result is consistent with our simulation of the performance of the system, which also predicts its behavior for different values of the channel losses, shown in Fig. \ref{fig:Sim}. The simulation makes the strong assumption that the compensation mechanisms maintain their good performance even in conditions of strong losses, but this is in agreement with previous experiments in which the same algorithms were used for polarization correction and synchronization \cite{Agnesi2020,Calderaro2020}. \begin{figure} \caption{ Secret key rate (SKR) measured on sifted key blocks of $4\cdot 10^6$ bits (corresponding to approximately 80 seconds of acquisition). An average rate of around 16 kbps was observed. } \label{fig:SKR} \end{figure} \begin{figure} \caption{ Simulation of the SKR as a function of the channel losses. All other physical parameters are fixed and depend on the features of the experimental setup. The error bar associated with the experimental data point represents three times the standard deviation.} \label{fig:Sim} \end{figure} \section{\label{Section:Conclusions}Conclusions} In this work we described a novel cross-encoded QKD scheme, based on the conversion between time-bin and polarization degrees of freedom, that implements the one-decoy, three-state BB84 protocol~\cite{Grunenfelder2018}. By exploiting the temporally stable iPOGNAC polarization encoder we obtained polarization qubits with low error~\cite{Avesani2020}, which were converted to time-bin to allow transmission that is immune to the birefringence of the fiber-optic channel. We implemented a hybrid receiver that performed time-of-arrival measurements for key generation as well as polarization measurements for the control states. 
Temporal synchronization was successfully achieved with the Qubit4Sync method~\cite{Calderaro2020}, making our work the first implementation of time-bin encoded QKD that does not require dedicated hardware to share a temporal reference between the transmitter and the receiver. The developed system was tested in a 12-hour run using a 50 km fiber spool, showing a stable QBER of 0.765\% in the key basis and 0.792\% in the control state, and achieving an average SKR of approximately 16 kbps without interruptions. This scheme can represent an important enabling technology for the envisioned continental-scale hybrid quantum networks that employ both fiber-optic and free-space links~\cite{Wehnereaam9288}. In fact, since the qubit modulation of our transmitter is based on the iPOGNAC, it can be promptly reconfigured to transmit polarization-encoded qubits for free-space scenarios or, as demonstrated in this work, to convert them to time-bin for efficient propagation in an optical fiber. In this way our transmitter is compatible with any quantum channel and the best possible encoding scheme can be chosen according to the characteristics of the link. \begin{acknowledgments} {\noindent Author Contributions: C.A., M.A., G.V., P.V. designed the transmitter. C.A., D.S., M.A. designed the receiver. A.S., M.A., D.S. developed the transmitter electronics and the FPGA-based control system. L.C., D.S., C.A. developed the transmitter and receiver control software. G.F. developed the post-processing and simulation software. D.S. performed the experiment. All authors discussed the results. C.A., D.S. wrote the manuscript with inputs from all the authors. \noindent Part of this work was supported by: MIUR (Italian Minister for Education) under the initiative ``Departments of Excellence'' (Law 232/2016); Agenzia Spaziale Italiana (2018-14-HH.0, CUP: E16J16001490001, {\it Q-SecGroundSpace}; 2020-19-HH.0, CUP: F92F20000000005, {\it Italian Quantum CyberSecurity I-QKD}). The AIT Austrian Institute of Technology is thanked for providing the initial elements of the post-processing software used here.} \end{acknowledgments} \end{document}
Scalable Time-Stepping for Stiff Nonlinear PDEs PI: James V. Lambers, The University of Southern Mississippi Methods for numerical simulation of time-dependent phenomena have advanced considerably over the last several decades, as has the computing technology with which they are applied to increasingly high-resolution models. However, this very increase in available resolution reduces the effectiveness of well-established time-stepping methods, and by extension the overall feasibility of numerical simulation of such phenomena. As future advances in computing power will only exacerbate this problem, the proposed project is an effort to proactively address it by pursuing a new direction in the design of time-stepping methods and openly disseminating the resulting software to the community of researchers in relevant application areas from neuroscience to nanomanufacturing to wireless communication. The proposed project will develop and disseminate software that employs a componentwise approach to time-stepping to circumvent the difficulties caused by the phenomenon of stiffness. The new methods will provide a more practical alternative to researchers in many branches of science and engineering who rely on the numerical solution of time-dependent partial differential equations (PDEs). The rapid advancement of computing power in recent years has allowed higher resolution models, which has introduced greater stiffness into systems of ordinary differential equations (ODEs) that arise from discretization of time-dependent PDEs. This presents difficulties for explicit and implicit time-stepping methods, due to the coupling of components of the solution corresponding to low and high frequencies, which change at widely varying speeds. This goal is to be accomplished by continuing the evolution of Krylov subspace spectral (KSS) methods, which were developed by the PI [54] and have been advanced by the PI over the last several years [9,49,50,51,56,59]. These methods feature explicit time-stepping with high-order accuracy, and stability that is characteristic of implicit methods. This ``best-of-both-worlds'' combination is achieved through a componentwise approach, in which each Fourier coefficient of the solution is computed using an approximation of the solution operator that is, in some sense, optimal for that coefficient. Initial work on KSS methods [9,37,49,50,51,52,53,54,55,56,57,58,59,60,69], which includes comparisons to many other time-stepping methods, has yielded promising results in terms of accuracy, stability, and scalability. These results encourage their continued development, including enhancement of their accuracy and efficiency, as well as expansion of their applicability. To date, KSS methods have been applied almost exclusively to linear variable-coefficient PDEs on $n$-dimensional boxes, for $n=1,2,3$, with either periodic or homogeneous boundary conditions. The proposed project will feature generalization of KSS methods to nonlinear problems, and to problems on non-rectangular domains. KSS methods will be extended to nonlinear PDEs by combination with exponential propagation iterative (EPI) methods [79,80], which have shown much promise as effective stiff solvers compared to existing explicit and implicit time-stepping methods [61]. However, EPI methods currently employ Krylov projection methods [39,40,81] to approximate products of matrix functions and vectors, which can require high-dimensional Krylov subspaces, especially at higher spatial resolution. 
KSS methods, on the other hand, work with Krylov subspaces whose dimension is independent of the spatial resolution. The proposed project will include optimization of KSS methods so that EPI methods, when using KSS methods to approximate these products, will be much more efficient and scalable than in their present form. KSS methods will also be extended to PDEs on more general domains by combination with Fourier continuation (FC) methods [1,7,63,70], which are applicable to such domains. Specifically, the goal is to combine the spatial discretization of these methods with the approach to high-order time-stepping of KSS methods. KSS methods are particularly effective for evaluating matrix function-vector products for matrices representing differential operators under periodic boundary conditions at high resolution; FC methods use periodic extensions to achieve high accuracy on general domains; and EPI methods use matrix function-vector products to achieve high-order accuracy for nonlinear PDEs. The combination of the three methods is therefore a logical foundation for a more effective approach to overcoming stiffness. KSS methods represent the first application to time-dependent PDEs of techniques from the emerging research area of ``matrices, moments and quadrature'': the approximation of bilinear forms involving matrix functions by treating them as Riemann-Stieltjes integrals, and then applying Gaussian quadrature rules that are generated by the Lanczos algorithm. However, as beneficial as KSS methods can be in their own right, the broader impacts of the proposed project on the scientific computing community as a whole will stem from an understanding of the body of work from which they arose. Because of the widespread importance of simulation techniques as evidenced by the variety of applications, the proposed research effort will be accompanied by educational activities designed to (i) acquaint the computational mathematics community with new avenues for advances, (ii) provide students at the PI's institution, many of whom belong to underrepresented groups in the mathematical sciences, with a foundation in computational mathematics that will enable them to join the front line of future advances, and (iii) introduce high school students to computational mathematics, a field that, while not among the Common Core standards, nonetheless plays a key role throughout the applied STEM fields, including science, medicine and engineering, and therefore serves as an ideal outlet for mathematically inclined students as well as a stepping stone to a career that suits their skills and interests. In addition to these goals, these educational activities, both within the PI's institution and the broader research community, will raise awareness and appreciation of the techniques from ``matrices, moments and quadrature'', which have many applications besides their role in KSS methods. Applications within numerical linear algebra include error estimation in iterative methods for solving linear systems [11,29,64], updating or downdating in least squares problems [20], selection of regularization parameters for ill-conditioned least squares problems [77], and estimation of determinants and traces of matrix functions [3,4]. There are also several applications of these techniques outside of numerical linear algebra. 
Approximation of bilinear forms is useful for approximation of the scattering amplitude [30], and estimating the trace of the inverse has applications in the study of fractals [74,82], lattice quantum chromodynamics [13,75], and crystals [67,68]. This outreach will encourage researchers to extend the application of these techniques to a wider variety of problems, in which computation of bilinear forms arises naturally. These activities will be complemented by outreach to high school students and teachers in Mississippi. The PI will travel to area mathematics classrooms and work through exploratory projects with students. They will have hands-on experiences with a variety of methods that produce approximate solutions to problems with which they are familiar, such as nonlinear equations and integrals, and be introduced to application areas. These activities and classroom time with a university researcher will give them an idea of how the mathematics that they are learning is applied in a variety of STEM careers. The development of componentwise time-stepping into a more effective approach for solving systems of ODEs than established time-stepping techniques, and the integration of this approach with EPI and FC methods, has the potential to impact several applications in which nonlinear, stiff time-dependent PDEs, of either parabolic or hyperbolic type, naturally arise. Just a few examples of such application areas include plasma physics, molecular dynamics, electrodynamics, optics, and porous media transport. The motivating force behind the research directions described in this proposal is the need to bridge a fundamental, and worsening, disconnect between the time-stepping methods that are most often used in applications, and the rapidly evolving computing environments in which they are used. Many of these methods, some of which have been in existence for over one hundred years, are not designed with stiff systems in mind. They fall into two broad categories: explicit methods, which require very small time steps in order to maintain stability, and implicit methods, which can use larger step sizes than explicit methods, but usually require the solution of a large and ill-conditioned system of equations during each time step. There are also implicit time-stepping methods, called stiff solvers, that are designed with stiff systems in mind (see, for example, [76]), but many methods of this type require the computation and inversion of the Jacobian, which can be quite expensive, especially for large-scale systems. Newton-Krylov methods [46] avoid this expense but require effective preconditioners, which, depending on the particular problem, may not be readily available [61]. Furthermore, for both explicit and implicit methods, these issues are exacerbated as the spatial resolution increases. For explicit methods, the time step must decrease linearly (for hyperbolic problems) or quadratically (for parabolic problems) as the mesh spacing decreases, due to the CFL limit. For implicit methods, increased spatial resolution results in an increase in the condition number of the system that must be solved during each time step. Going forward, spatial resolution of models will only continue to increase with improvements in the efficiency of memory and processors, thus creating a situation that is worsening over time. 
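To make the parabolic case of this scaling concrete, the following short Python computation (a standard textbook exercise, not part of the proposed software) evaluates the forward Euler stability limit for the 1-D heat equation discretized with second-order centered differences: halving the mesh spacing reduces the admissible time step by roughly a factor of four.
\begin{verbatim}
import numpy as np

def heat_matrix(N):
    """Second-order centered-difference discretization of -u_xx on (0,1)
    with homogeneous Dirichlet boundary conditions and N interior points."""
    h = 1.0 / (N + 1)
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2
    return A, h

for N in (50, 100, 200, 400):
    A, h = heat_matrix(N)
    lam_max = np.linalg.eigvalsh(A).max()   # approximately 4/h^2
    dt_max = 2.0 / lam_max                  # forward Euler: dt <= 2/lambda_max
    print(f"N={N:4d}  h={h:.4e}  lambda_max={lam_max:.3e}  "
          f"dt_max={dt_max:.3e}  dt_max/h^2={dt_max / h**2:.3f}")
\end{verbatim}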
These advances in hardware will foster a demand for, and expectation of, greater use of simulation in real-time (e.g., medicine) and/or high-data (e.g., reservoir simulation) applications that existing time-stepping methods will have difficulty meeting. Therefore, a fresh perspective is needed for the development of time-stepping methods that can more efficiently deal with stiffness. In many cases, the solution of a stiff system of ODEs can be expressed in terms of the approximate evaluation of a matrix function such as the exponential. Techniques for computing $f(A){\bf v}$, such as those described in [39,40], have been used for several years to solve systems of ODEs, but can encounter the same difficulties with stiffness. Techniques for computing the bilinear form ${\bf u}^T f(A) {\bf v}$ open the door to rethinking how time-stepping is carried out. Because they allow individual attention to be paid to each component of $f(A){\bf v}$ in some basis, even if those components are coupled, they can be used to construct new time-stepping methods, based on KSS methods, that are more effective for large-scale problems. A survey of the literature on time-stepping methods shows that recent advances mainly come from two strategies for mitigating the difficulties caused by stiffness: adaptive time-stepping (see, for example, [26,33,42,43,44,45,48,71,84]), in which the global time step used to compute the entire solution is varied according to an error estimate and/or a measure of the smoothness of the solution, and local time-stepping (see, for example, [6,10,16,17,19,22,23,34,35,47,62,78]), in which the time step is varied over the spatial domain so as to satisfy local stability criteria. The guiding principle behind both strategies is to limit the degradation of performance due to stiffness by reducing the time step either locally in time (for adaptive time-stepping) or in space (for local time-stepping) where higher spatial resolution is needed. Also worth noting are multiple time-stepping (MTS) methods [15,21,83], which mitigate the difficulties caused by stiffness by using appropriate time steps for stiff and non-stiff parts of the equation. By contrast, the approach in the proposed project is to confront stiffness directly by modifying the approximate solution operator in a componentwise manner, so that an increase in spatial resolution does not cause undue additional computational expense, even when using a fixed time step. However, this approach should not be seen as an alternative to the abovementioned strategies. KSS methods can benefit from adaptive time-stepping just like any other time-stepping method through the addition of error estimation; this enhancement will be part of the proposed project. Also, the approach taken by KSS methods of using different approximations for each Fourier component is amenable to local time-stepping, except that the locality is in Fourier space rather than physical space. Finally, MTS methods can benefit from applying KSS methods to the stiff parts of the problem. 2 A Componentwise Approach to Time-Stepping Consider a system of ODEs of the form \begin{equation} \label{utAu} {\bf u}'(t) + A{\bf u} = {\bf 0}, \quad {\bf u}(t_0) = {\bf u}_0, \end{equation} where $A$ is an $N\times N$ matrix. 
One-step methods yield an approximate solution of the form ${\bf u}(t_{n+1}) = f(A;\Delta t){\bf u}(t_n),$ where $\Delta t$ is the time step and $f(A;\Delta t)$ is a polynomial in $A$ that approximates $e^{-A\Delta t}$ if the method is explicit, or a rational function of $A$ if the method is implicit. For example, consider the problem of computing ${\bf w} = e^{-At}{\bf v}$ for a given symmetric matrix $A$ and vector ${\bf v}$. An approach described in [40] is to apply the Lanczos algorithm to $A$ with initial vector ${\bf v}$ to obtain, at the end of the $j$th iteration, an orthogonal matrix $X_j$ and a tridiagonal matrix $T_j$ such that $X_j^T AX_j = T_j$. Then, we can compute the approximation \begin{equation} {\bf w}_j = \|{\bf v}\|_2 X_j e^{-T_j t} {\bf e}_1, \quad {\bf e}_1 = \left[ \begin{array}{cccc} 1 & 0 & \cdots & 0 \end{array} \right]^T. \label{eqlancapprox} \end{equation} As each column ${\bf x}_k$, $k=1,\ldots,j$, of $X_j$ is of the form ${\bf x}_k = p_{k-1}(A){\bf v}$, where $p_n(A)$ is a polynomial of degree $n$ in $A$, it follows that ${\bf w}_j$ is the product of a polynomial in $A$ of degree $j-1$ and ${\bf v}$. If the eigenvalues of $A$ are not clustered, which is the case if $A$ arises from a stiff PDE, a good approximation cannot be obtained using a few Lanczos iterations, just as a low-degree polynomial cannot accurately approximate an exponential function on a large interval. This can be alleviated using an outer iteration \begin{equation} {\bf w}_j^{m+1} \approx e^{-A\Delta t}{\bf w}_j^m, \quad m = 0, 1, \ldots, \quad {\bf w}_j^0 = {\bf v}, \label{outerit} \end{equation} for some $\Delta t \ll t$. However, this approach is not practical if $\Delta t$ must be chosen very small. Modifications described in [18,65,81] produce rational approximations that reduce the number of Lanczos iterations, but these methods require solving systems of linear equations in which the matrix is of the form $I-hA$, where $h$ is a parameter. Unless $h$ is chosen very small, these systems may be ill-conditioned. In summary, time-stepping methods that approximate the exponential using a polynomial or rational function tend to have difficulty with stiff systems, such as those that arise from PDEs. The solution of a stiff system includes coupled components that evolve at widely varying rates. As a result, explicit methods for such systems require small time steps, while implicit methods typically require the solution of ill-conditioned systems of linear equations that pose difficulties for direct and iterative solvers alike. The difficulties that many time-stepping methods have with stiffness are associated with the interaction between components of the solution of varying frequency, which vary at different speeds. However, it is not this interaction alone that necessarily precipitates the difficulties typically observed in time-stepping methods applied to stiff problems. A contributing factor is the attempt to treat such disparate components of the solution with the same function, whether polynomial or rational, that approximates $e^{-\lambda\Delta t}$. Unfortunately, such an approximation is impractical when $\lambda$ varies within a large interval, which is precisely how the eigenvalues of $A$ behave when the problem (\ref{utAu}) is stiff. Componentwise time-stepping, on the other hand, can avoid these drawbacks, because each Fourier coefficient of the solution is computed using a different approximation of the solution operator. 
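The polynomial character of the approximation (\ref{eqlancapprox}) can be illustrated with a short computation. The Python sketch below (a generic illustration with arbitrarily chosen test data, not PI software) applies a few Lanczos iterations to a discretized 1-D Laplacian and compares $\|{\bf v}\|_2 X_j e^{-T_j t}{\bf e}_1$ with the exact ${\bf w}=e^{-At}{\bf v}$; the accuracy improves with $j$, but the number of iterations required grows with the spread of the eigenvalues of $At$, which is exactly the difficulty described above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def lanczos(A, v, j):
    """j steps of the symmetric Lanczos algorithm with full
    reorthogonalization; returns X (n x j) and tridiagonal T (j x j)."""
    n = len(v)
    X = np.zeros((n, j))
    alpha = np.zeros(j)
    beta = np.zeros(j - 1)
    X[:, 0] = v / np.linalg.norm(v)
    for k in range(j):
        w = A @ X[:, k]
        alpha[k] = X[:, k] @ w
        w -= X[:, :k + 1] @ (X[:, :k + 1].T @ w)   # reorthogonalize
        if k < j - 1:
            beta[k] = np.linalg.norm(w)
            X[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return X, T

# Stiff test problem: discrete 1-D Laplacian, eigenvalues spread over ~[10, 1.6e5]
n = 200
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
v = np.random.default_rng(1).standard_normal(n)

t = 1e-4
w_exact = expm(-A * t) @ v
for j in (2, 5, 10, 20):
    X, T = lanczos(A, v, j)
    w_j = np.linalg.norm(v) * (X @ expm(-T * t)[:, 0])   # Eq. (eqlancapprox)
    err = np.linalg.norm(w_j - w_exact) / np.linalg.norm(w_exact)
    print(f"j={j:2d}  relative error = {err:.2e}")
\end{verbatim}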
Rather than matching terms of the Taylor expansion of the exponential, as in methods of Runge-Kutta type, the exponential function is approximated using an interpolating polynomial with frequency-dependent interpolation points. This increased flexibility can be exploited to obtain the advantage of implicit methods, their stability, without having to solve a large system of equations during each time step. However, considering such an approach raises important questions: If frequency-dependent polynomial approximations are to be used, how should the interpolation points be chosen, especially to achieve higher-order accuracy in time? How can this approach be extended to nonlinear PDEs, or PDEs defined on general domains? The next two sections will provide answers to both of these questions. 3 Matrices, Moments, Quadrature and PDEs This section describes how componentwise approximations can be obtained. We consider the linear parabolic PDE \begin{equation} \label{eqthepde} u_t + Lu = 0, \quad t > 0, \end{equation} where $L$ is a Sturm-Liouville operator, with appropriate initial conditions and periodic boundary conditions. Let $\langle \cdot, \cdot \rangle$ denote the standard inner product on $[0,2\pi]$. KSS methods [49,50,54,56] compute the solution $\tilde{u}(x,t_{n+1})$ from $\tilde{u}(x,t_n)$ at time $t_n$ by approximating the Fourier coefficients that would be obtained by applying the exact solution operator to $\tilde{u}(x,t_n)$, $$\hat{u}(\omega,t_{n+1}) = \left\langle \frac{1}{\sqrt{2\pi}} e^{i\omega x}, S(\Delta t)\tilde{u}(x,t_n) \right\rangle,$$ where $S(\Delta t) = e^{-L\Delta t}$ is the solution operator of the PDE. Clearly, such an approach requires an effective method for computing bilinear forms. Elements of Functions of Matrices In [29], Golub and Meurant described a method for computing quantities of the form \begin{equation} {\bf u}^T f(A){\bf v}, \label{basicquadform} \end{equation} where ${\bf u}$ and ${\bf v}$ are $N$-vectors, $A$ is an $N\times N$ symmetric positive definite matrix, and $f$ is a smooth function. Our goal is to apply this method with $A = L_N$, where $L_N$ is a spectral discretization of $L$, $f(\lambda) = e^{-\lambda t}$ for some $t$, and the vectors ${\bf u}$ and ${\bf v}$ are derived from $\hat{\bf e}_\omega$ and ${\bf u}^n$, where $\hat{\bf e}_\omega$ is a discretization of $\frac{1}{\sqrt{2\pi}}e^{i\omega x}$, and ${\bf u}^n$ is a discretization of the solution at time $t_n$ on an $N$-point uniform grid. Since the matrix $A$ is symmetric positive definite, it has real eigenvalues $b = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N = a > 0$ and corresponding orthonormal eigenvectors ${\bf q}_j$, $j=1, \ldots, N$. Therefore, the quantity (\ref{basicquadform}) can be viewed as a Riemann-Stieltjes integral \begin{equation} {\bf u}^T f(A){\bf v} = \sum_{j=1}^N f(\lambda_{j}) {\bf u}^T {\bf q}_j {\bf q}_j^T{\bf v} \label{stieltjes1} = \int_a^b f(\lambda)\,d\alpha(\lambda),\end{equation} where the measure $d\alpha(\lambda)$ is derived from the coefficients of ${\bf u}$ and ${\bf v}$ in the basis of eigenvectors. As discussed in [11,27,28,29], this integral can be approximated using Gaussian, Gauss-Radau or Gauss-Lobatto quadrature rules, whose nodes and weights can be obtained using the Lanczos algorithm [12,24,25,32]. 
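For the case ${\bf u}={\bf v}$, the quadrature point of view in (\ref{stieltjes1}) can be demonstrated directly. The sketch below (again a generic illustration of the Golub-Meurant approach, with an arbitrarily chosen symmetric positive definite test matrix) runs $K$ Lanczos iterations started from ${\bf u}$, extracts the Gaussian nodes and weights from the tridiagonal matrix $T_K$, and checks that the resulting $K$-node rule, which is identical to $\|{\bf u}\|_2^2\,[f(T_K)]_{11}$, converges to the exact value of ${\bf u}^T f(A){\bf u}$ as $K$ increases.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def lanczos(A, v, K):
    """K steps of symmetric Lanczos with full reorthogonalization;
    returns the K x K tridiagonal matrix T_K."""
    n = len(v)
    X = np.zeros((n, K)); alpha = np.zeros(K); beta = np.zeros(K - 1)
    X[:, 0] = v / np.linalg.norm(v)
    for k in range(K):
        w = A @ X[:, k]
        alpha[k] = X[:, k] @ w
        w -= X[:, :k + 1] @ (X[:, :k + 1].T @ w)
        if k < K - 1:
            beta[k] = np.linalg.norm(w); X[:, k + 1] = w / beta[k]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(2)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.1, 500.0, n)            # spectrum of a made-up SPD matrix
A = Q @ np.diag(lam) @ Q.T
u = rng.standard_normal(n)

t = 0.01
f = lambda M: expm(-M * t)
exact = u @ f(A) @ u                        # exact bilinear form u^T f(A) u

for K in (2, 4, 6, 8):
    T = lanczos(A, u, K)
    matrix_form = (u @ u) * f(T)[0, 0]      # ||u||^2 * e1^T f(T_K) e1
    # Equivalent Gauss rule: nodes are eigenvalues of T_K, weights come from
    # the first components of its normalized eigenvectors.
    theta, Y = np.linalg.eigh(T)
    gauss = (u @ u) * np.sum(Y[0, :] ** 2 * np.exp(-theta * t))
    print(f"K={K}:  Gauss rule = {gauss:.6e}   "
          f"matrix form = {matrix_form:.6e}   exact = {exact:.6e}")
\end{verbatim}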
Figure 1 demonstrates why integrals of the form (\ref{stieltjes1}) can be approximated accurately with a small number of nodes in the case where $A$ is a discretization of a differential operator and the vector ${\bf u}$ is used to extract a particular Fourier coefficient of $f(A){\bf v}$. We examine the distribution $d\alpha(\lambda)$ in the case where ${\bf u} = {\bf v} = e^{i\omega x}$ for small and large values of $\omega$, and for $A$ discretizing a differential operator of the form $-\partial_x a(x) \partial_x$, with $a(x) > 0$ being a smooth function or a piecewise constant function. In either case, $d\alpha(\lambda)$ is mostly concentrated within a portion of the interval of integration $[a,b]$. Gaussian quadrature rules for such integrals naturally target these relevant portions [37,50,51]. Figure 1: The distribution $d\alpha(\lambda)$ from (\ref{stieltjes1}) where the matrix $A$ represents a spectral discretization of a 1-D, second-order differential operator with smooth leading coefficient (top plot) and discontinuous leading coefficient (bottom plot), where ${\bf u} = {\bf v}$ is a discretization of $e^{2ix}$ (solid curve) or $e^{64ix}$ (dashed curve). Block Gaussian Quadrature In the case ${\bf u} \neq {\bf v}$, there is the possibility that the weights may not be positive, which destabilizes the quadrature rule [2]. Instead, one can consider the approximation of the $2\times 2$ matrix integral \begin{equation} \label{blkform} \left[ \begin{array}{cc} {\bf u} & {\bf v} \end{array}\right]^T f(A) \left[ \begin{array}{cc} {\bf u} & {\bf v} \end{array}\right]. \end{equation} As discussed in [29], the most general $K$-node quadrature formula is of the form \begin{equation} \int_a^b f(\lambda )\,d\mu (\lambda ) = \sum_{j=1}^{2K} f(\lambda_j){\bf v}_j {\bf v}_j^T + error, \label{eq312} \end{equation} where, for each $j$, $\lambda_j$ is a scalar and ${\bf v}_j$ is a 2-vector. Each node $\lambda_j$ is an eigenvalue of the matrix \begin{equation} \label{blocklanc} {\cal T}_K = \left[ \begin{array}{cccc} M_1 & B_1^T & & \\ B_1 & M_2 & B_2^T & \\ & \ddots & \ddots & \ddots \\ & & B_{K-1} & M_K \end{array} \right], \end{equation} which is a block-tridiagonal matrix of order $2K$. The vector ${\bf v}_j$ consists of the first two elements of the corresponding normalized eigenvector. The matrices $M_j$ and $B_j$ are computed using the block Lanczos algorithm, which was proposed by Golub and Underwood in [31]. KSS Methods KSS methods [50,52] for (\ref{eqthepde}) begin by defining \begin{equation} R_0 = \left[ \begin{array}{cc} \hat{\bf e}_\omega & {\bf u}^n \end{array} \right], \label{KSSR0} \end{equation} where $\hat{\bf e}_\omega$ is a discretization of $\frac{1}{\sqrt{2\pi}}e^{i\omega x}$ and ${\bf u}^n$ is a discretization of the approximate solution $u(x,t)$ at time $t_n=n\Delta t$. Next we compute the $QR$ factorization of $R_0$, $$ R_0 = X_1 B_0 $$ which outputs \begin{equation} X_1 = \left[ \begin{array}{cc} \hat{\bf e}_\omega & \frac{{\bf u}_\omega^n}{\| {\bf u}_\omega^n \|_2} \end{array} \right] \label{x1} \end{equation} and $$ B_0 = \left[ \begin{array}{cc} 1 & \hat{\bf e}_\omega^H {\bf u}^n \\ 0 & \| {\bf u}_\omega^n \|_2 \end{array} \right], $$ where \begin{equation} {\bf u}_\omega^n = {\bf u}^n - \hat{\bf e}_\omega \hat{\bf e}_\omega^H{\bf u}^n = {\bf u}^n - \hat{\bf e}_\omega \hat{u}(\omega,t_n). 
\label{equw} \end{equation} Then, block Lanczos iteration is applied to the discretized operator $L_N$ with initial block $X_1(\omega)$, producing a block tridiagonal matrix ${\cal T}_K(\omega)$ of the form (\ref{blocklanc}), where each entry is a function of $\omega$. Then, each Fourier coefficient of the solution at $t_{n+1}$ can be expressed as \begin{equation} \label{blkcomp} [\hat{\bf u}^{n+1}]_\omega = \left[ B_0^H E_{12}^H e^{-{\cal T}_K(\omega)\Delta t} E_{12} B_0 \right]_{12}, \quad E_{12} = \left[ \begin{array}{cc} {\bf e}_1 & {\bf e}_2 \end{array} \right]. \end{equation} This algorithm has temporal accuracy $O(\Delta t^{2K-1})$ for parabolic problems [50]. Even higher-order accuracy, $O(\Delta t^{4K-2})$, is obtained for the second-order wave equation [52]. Furthermore, in [50,52], it is shown that under appropriate assumptions on the coefficients of the PDE, the 1-node KSS method is unconditionally stable. Generalization of this algorithm to higher dimensions and other PDEs is straightforward [38]. For the second-order wave equation, two matrices of the form ${\cal T}_K(\omega)$ are computed for each $\vec{\omega}$, corresponding to the solution and its time derivative. KSS methods have also been generalized to systems of coupled equations; the details are presented in [53]. In particular, it is shown in [60] that KSS methods are effective for systems of three equations in three spatial variables, through application to Maxwell's equations. KSS methods have been compared to several time-stepping methods, including finite-difference methods, Runge-Kutta methods and backward differentiation formulae [52,53,56,58]. Comparisons with a number of Krylov subspace methods based on exponential integrators [39,40,41], including preconditioned Lanczos iteration [65,81], are made in [59]. As discussed in the next section, KSS methods have also been compared to a variety of methods for computing matrix-function vector products for the purpose of solving nonlinear PDEs [9]. 4 Proposed Research KSS methods, as described in the previous section, have three significant drawbacks: (1) the computational effort per time step, which can be made $O(N\log N)$ where $N$ is the number of grid points, is still substantial compared to other time-stepping methods, due to the complexity of the recursion coefficients and the need to compute functions of $N$ small matrices; (2) to date, they have only been applied to linear PDEs, except for 1-node, first-order accurate KSS methods that have been applied to nonlinear diffusion equations for image processing [36,38]; and (3) they have only been applied to PDEs defined on $n$-dimensional boxes, as opposed to domains with irregular boundaries. The goal of the proposed project is to take steps to overcome all three of these limitations, thus upgrading KSS methods into a broadly applicable approach to time-stepping that combines the best attributes of implicit and explicit methods. Acceleration Through Asymptotic Node Selection The central idea behind KSS methods is to compute each component of the solution, in some orthonormal basis, using an approximation that is, in some sense, optimal for that component. That is, each component uses its own polynomial approximation of $S(L_N;\Delta t)$, where the function $S$ is based on the solution operator of the PDE (e.g. $S(L_N;\Delta t) = e^{-L_N\Delta t}$ in the case of (\ref{eqthepde})), and $L_N$ is the discretization of the spatial differential operator.
These polynomial approximations are obtained by interpolation of the function $S(\lambda;\Delta t)$ at selected nodes for each component. Then, the computed solution has the form [51] \begin{equation} \label{diagop} \mathbf{u}^{n+1} = S(L_N;\Delta t) \mathbf{u}^n = \sum_{j=0}^{2K} D_j(\Delta t) A^j \mathbf{u}^n, \end{equation} where $D_j(\Delta t)$ is a matrix that is diagonal in the chosen basis. The diagonal entries are the coefficients of these interpolating polynomials in the monomial basis, with each row corresponding to a particular Fourier component. In the original block KSS method [50,52], the interpolation points are obtained by performing block Lanczos iteration and then diagonalizing a $2K\times 2K$ matrix--for each component. We now describe a much faster way of obtaining interpolation points, introduced in [69], by studying the behavior of block Lanczos in the limit as $|\omega|\rightarrow\infty$, where $\omega$ is the wave number. To illustrate the approach, we use an example. As in the previous section, let $\mathbf{u}^n$ be a discretization of the approximate solution $u(x,t)$ at time $t_n=n\Delta t$ on a uniform $N$-point grid. Then, KSS methods use the initial block $R_0=\left[ \begin{array}{cc} \hat{\bf e}_\omega & \mathbf{u}^n \end{array} \right]$, for each $\omega = -N/2+1,\ldots,N/2$. As before, we start the block Lanczos algorithm by finding the $QR$-factorization of $R_0$: $$R_0 = X_1 B_0,$$ where \begin{equation} \label{eq:x1} X_1=\left[ \begin{array}{cc} \displaystyle{\hat{\bf e}_\omega} & \displaystyle{\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}} \end{array} \right] \quad \mbox{ and } \quad B_0= \left[ \begin{array}{cc} 1 & \hat{u}(\omega,t_n) \\ 0 & \|\mathbf{u}^n_\omega\|_2 \end{array} \right], \end{equation} with $\mathbf{u}^n_\omega$ defined as in (\ref{equw}). We note that if the solution $u$ is continuous, then as $|\omega|\rightarrow\infty$, $|\hat{u}(\omega,t_n)|\rightarrow 0$, so that in the limit $B_0$ is diagonal. The next step is to compute \begin{equation} \label{eq:m1} M_1=X_1^HL_N X_1, \end{equation} where the matrix $L_N$ is a spectral discretization of the operator $L$ defined by $Lu=-pu_{xx}+q(x)u$, with $p$ being a constant. Substituting the value of $X_1$ from (\ref{eq:x1}) into (\ref{eq:m1}) yields $$ M_1=\left[ \begin{array}{cc} \omega^2p+\bar{q} & \displaystyle{\frac{\widehat{L_N \mathbf{u}^n_\omega} (\omega)}{\|\mathbf{u}^n_\omega\|_2}} \\ \displaystyle{\frac{\overline{\widehat{L_N\mathbf{u}^n_\omega}(\omega)}}{\|\mathbf{u}^n_\omega\|_2}} & R(L_N,\mathbf{u}^n_\omega) \end{array} \right], $$ where $\bar{q}$ is the mean of $q(x)$ on $(0,2\pi)$, $\widehat{L_N \mathbf{u}^n_\omega }(\omega) = \hat{\bf e}_\omega^H L_N\mathbf{u}^n_\omega$ is the Fourier coefficient of the grid function $L_N \mathbf{u}^n_\omega$ corresponding to the wave number $\omega$, and $R(L_N,\mathbf{u}^n_\omega)=\displaystyle{\frac{\langle \mathbf{u}^n_\omega,L_N \mathbf{u}^n_\omega\rangle}{\langle \mathbf{u}^n_\omega,\mathbf{u}^n_\omega\rangle}}$ is the Rayleigh quotient of $L_N$ and $\mathbf{u}^n_\omega$. As $|\omega |$ increases, assuming the solution is sufficiently regular, the non-diagonal entries of $M_1$ become negligible; that is, $$ M_1 \approx \left[ \begin{array}{cc} \omega^2p+\bar{q} & 0 \\ 0 & R(L_N,\mathbf{u}^n) \end{array} \right].
$$ Proceeding with the iteration, and neglecting any terms that are Fourier coefficients or are of lower order in $\omega$, we obtain $$ R_1=L_NX_1-X_1M_1\approx\left[ \begin{array}{cc} \displaystyle{\tilde{\bf q}}\hat{\bf e}_\omega & \displaystyle{\frac{L_N\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}-R(L_N,\mathbf{u}^n_\omega)\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}} \end{array} \right], $$ where $\mathbf{q}$ is a vector consisting of the value of $q(x)$ at the grid points, $\tilde{\bf q} = \mathbf{q} - \bar{q}$, and multiplication of vectors is component-wise. To obtain $X_2$, we perform the $QR$-factorization $R_1 = X_2 B_1$. We note that the $(1,2)$ entry of $B_1$, modulo lower-order terms, is the Fourier coefficient $\hat{v}_1(\omega)$, where $$\mathbf{v}_1 = \tilde{\bf q} \left( \displaystyle{\frac{L_N\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}-R(L_N,\mathbf{u}^n_\omega)\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2} } \right).$$ It follows that given sufficient regularity of the solution $u$, in the limit as $|\omega|\rightarrow\infty$, $B_1$, like $B_0$, approaches a diagonal matrix. Continuing this process, it can be seen that every (nonzero) off-diagonal entry of $M_j$ or $B_j$, for $j=1,2,\ldots,$ is a Fourier coefficient of some function that is a differential operator applied to $u$. Therefore, as long as the Fourier coefficients of $u$ decay to zero at a sufficiently high rate as $|\omega|\rightarrow\infty$, these off-diagonal entries will also decay to zero. It follows that in this high-frequency limit, the block tridiagonal matrix $\mathcal{T}_K$ produced by block Lanczos applied to $R_0$ as defined above converges to the matrix that would be obtained by applying ``non-block'' Lanczos iteration to the two columns of $R_0$ separately, and then alternating rows and columns of the tridiagonal matrices produced by these iterations. Therefore, by reordering the rows and columns of $\mathcal{T}_K$ in such a way that odd-numbered and even-numbered rows and columns are grouped together, as shown here in the $6\times 6$ case, $$ \left[ \begin{array}{cccccc} \times & & \times & & & \\ & \times & & \times & & \\ \times & & \times & & \times & \\ & \times & & \times & & \times \\ & & \times & & \times & \\ & & & \times & & \times \end{array} \right] \quad \longrightarrow \quad \left[ \begin{array}{cccccc} \times & \times & & & & \\ \times & \times & \times & & & \\ & \times & \times & & & \\ & & & \times & \times & \\ & & & \times & \times & \times \\ & & & & \times & \times \end{array} \right] $$ we find that the eigenvalue problem for this matrix decouples, and the block Gaussian quadrature nodes can be obtained by computing the eigenvalues of these smaller, tridiagonal matrices. For finite $\omega$, we can then use ``non-block'' Lanczos to at least estimate the true block Gaussian quadrature nodes. As a consequence of this decoupling, we can obtain approximations of half of the block Gaussian quadrature nodes for all Fourier components by applying ``non-block'' Lanczos iteration to the matrix $L_N$ with initial vector $\mathbf{u}^n$, the computed solution, as is done in Krylov subspace methods such as those described in [39,40,41]. These approximate nodes need only be computed once per time step, as they are independent of the frequency. 
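The following MATLAB fragment is a simplified illustration of this frequency-independent computation; the use of a second-order periodic finite-difference matrix as a stand-in for a spectral discretization, and the choice $K=3$, are assumptions made only for the sketch.

\begin{verbatim}
% Sketch: estimate the frequency-independent half of the nodes by applying
% K steps of non-block Lanczos to L_N with initial vector u^n.
N = 256;  K = 3;  h = 2*pi/N;
x = 2*pi*(0:N-1)'/N;
LN = gallery('tridiag', N, -1, 2, -1) / h^2;        % stand-in for -d^2/dx^2
LN(1, N) = -1/h^2;   LN(N, 1) = -1/h^2;             % periodic boundary conditions
LN = LN + spdiags(4/3 + 0.25*cos(x), 0, N, N);      % add a smooth coefficient q(x)
un = 1 + 0.3*cos(x) - 0.05*sin(2*x);                % current solution
v = un / norm(un);  v_old = zeros(N, 1);  beta = 0;  T = zeros(K, K);
for j = 1:K
    w = LN*v - beta*v_old;   alpha = v'*w;   w = w - alpha*v;
    T(j, j) = alpha;
    if j < K
        beta = norm(w);   T(j, j+1) = beta;   T(j+1, j) = beta;
        v_old = v;   v = w / beta;
    end
end
shared_nodes = eig(T);   % computed once per time step, reused for every wave number
\end{verbatim}

The $K$ eigenvalues of $T$ serve as node estimates shared by every Fourier component; the remaining, frequency-dependent nodes are obtained from the asymptotic analysis described next.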
To estimate the other half of the nodes, we can perform an asymptotic analysis of Lanczos iteration applied to $L_N$ with initial vector $\hat{\bf e}_\omega$, and express the approximate nodes in terms of the coefficients of the differential operator $L$. To illustrate this process, we continue with our example of solving $u_t + Lu = 0$ with $Lu = -pu_{xx} + q(x)u$, where $p$ is constant. Carrying out three iterations, which corresponds to a fifth-order accurate KSS method for a parabolic PDE, we obtain the following recursion coefficients, after neglecting lower-order terms [69]: $$ \left[ \begin{array}{ccc} \alpha_1 & \overline{\beta_1} & 0 \\ \beta_1 & \alpha_2 & \overline{\beta_2} \\ 0 & \beta_2 & \alpha_3 \end{array} \right] \approx \left[ \begin{array}{ccc} p\omega^2 & \|\tilde{\bf q}\|_2 & 0 \\ \|\tilde{\bf q}\|_2 & p\omega^2 & 2p|\omega|\|\mathbf{q}_x\|_2/\|\tilde{\bf q}\|_2 \\ 0 & 2p|\omega|\|\mathbf{q}_x\|_2/\|\tilde{\bf q}\|_2 & p\omega^2 \end{array} \right].$$ It follows that the nodes can easily be estimated as \begin{equation} \lambda_{1,\omega} = p\omega^2, \quad \lambda_{2,\omega}, \lambda_{3,\omega} = p\omega^2 \pm \sqrt{\beta_1^2 + \beta_2^2}. \label{eigest} \end{equation} In [69] it is shown that this approach to estimating quadrature nodes can also be used if the leading coefficient $p$ is not constant. In [9] the approach is generalized further to problems with higher space dimension, other boundary conditions, non-self-adjoint operators, and systems of coupled PDEs. To demonstrate the effectiveness of this approach, we solve the 1-D parabolic problem \begin{equation} u_t - (p(x)u_{x})_x + q(x)u = 0, \quad 0 < x < 2\pi, \quad t > 0, \label{eqparabolic_1_pde} \end{equation} where the coefficients $p(x)$ and $q(x)$, given by \begin{equation} p(x) = 1, \quad q(x) = \frac{4}{3} + \frac{1}{4} \cos x, \label{eqsmooth_coefs_1d} \end{equation} are chosen to be smooth functions. The initial condition is \begin{equation} u(x,0) = 1 + \frac{3}{10}\cos x - \frac{1}{20} \sin 2x, \quad 0 < x < 2\pi, \label{eqsmooth_parabolic_1_ic} \end{equation} and periodic boundary conditions are imposed. We use $N$-point uniform grids, with $N=128,256,512$. The results are shown in Figure 2. It can be seen that, as $N$ increases, the amount of time needed to achieve a given level of accuracy by (\ref{eqlancapprox}), allowed to run until convergence is achieved to a relative error tolerance of $10^{-7}$, becomes far greater than that required by a 4-node block KSS method with rapid node estimation. For the KSS method, the Krylov subspace dimension is always 4, whereas for (\ref{eqlancapprox}), the maximum number of iterations needed in a time step for $N=128,256,512$ was 21,35 and 60, respectively. On the other hand, the time required by the KSS method scales approximately linearly with $N$. Figure 2: Estimates of relative error at $t=1$ in the solution of (\ref{eqparabolic_1_pde}), (\ref{eqsmooth_coefs_1d}), (\ref{eqsmooth_parabolic_1_ic}), with periodic boundary conditions, computed by a 4-node block KSS method with rapid node estimation (solid blue curves), and Lanczos iteration as described in (\ref{eqlancapprox}) (dashed red curves) with various time steps. For both methods, the curves, as displayed from left to right, correspond to solutions computed on $N$-point grids for $N=128,256,512$. To date, this approach to rapidly estimating quadrature nodes has only been applied to specific classes of differential operators with certain boundary conditions [9,69].
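The following MATLAB fragment illustrates the node estimates (\ref{eigest}) for the smooth coefficients (\ref{eqsmooth_coefs_1d}); treating $\|\tilde{\bf q}\|_2$ and $\|\mathbf{q}_x\|_2$ as plain 2-norms of grid values, without any grid-spacing scaling, is an assumption made here for illustration only.

\begin{verbatim}
% Sketch of the asymptotic node estimates for L u = -p u_xx + q(x) u.
N = 128;  p = 1;
x = 2*pi*(0:N-1)'/N;
q      = 4/3 + 0.25*cos(x);
qtilde = q - mean(q);
k      = [0:N/2-1, 0, -N/2+1:-1]';                    % wave numbers (Nyquist zeroed)
qx     = real(ifft(1i*k .* fft(q)));                  % spectral derivative of q
beta1  = norm(qtilde);
omega  = (-N/2+1:N/2)';
beta2  = 2*p*abs(omega)*norm(qx)/beta1;
lambda1 = p*omega.^2;                                 % first estimated node, per wave number
lambda2 = p*omega.^2 + sqrt(beta1^2 + beta2.^2);
lambda3 = p*omega.^2 - sqrt(beta1^2 + beta2.^2);
\end{verbatim}

These estimates, together with the frequency-independent nodes obtained from $\mathbf{u}^n$, determine the interpolation points used for each Fourier component, at negligible cost compared to running block Lanczos iteration for every wave number.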
In the proposed project, the goal is to develop an algorithm that can automatically carry out analyses of general linear differential operators, specified in terms of their coefficients and differentiation operators. The algorithm will apply to 1-D, 2-D or 3-D differential operators, with periodic boundary conditions as well as homogeneous Dirichlet, Neumann or Robin boundary conditions. Systems of coupled PDEs can also be accommodated, using an approach described in [53,60]. All of these avenues of generalization have been explored to an extent in [9]; the task for this project is to extrapolate to other differential operators based on patterns observed in previous analyses, at least through three iterations of Lanczos or Arnoldi iteration to produce quadrature nodes for time-stepping methods of up to 5th order. Generalization to Nonlinear PDEs KSS methods with one block quadrature node have been successfully applied to nonlinear diffusion equations from image processing [36,38] with first-order accuracy in time. In order to achieve higher-order accuracy, though, in addition to using more nodes, it is also necessary to account for the nonlinearity more carefully than with a simple linearization at each time step. To that end, we turn to exponential propagation iterative (EPI) methods, introduced by Tokman [79]. The idea behind these methods, applied to a nonlinear autonomous system of ODEs of the form \begin{equation} \label{gensys} \frac{d{\bf y}}{dt} = F({\bf y}(t)), \quad {\bf y}(t_0) = {\bf y}_0, \end{equation} is to use a Taylor expansion around ${\bf y}(t_n)$ to obtain an integral form \begin{equation} \label{intform} {\bf y}(t_{n+1}) = {\bf y}(t_n) + [e^{J\Delta t} - I] J^{-1} F({\bf y}(t_n)) + \int_{t_n}^{t_{n+1}} e^{J(t_{n+1}-\tau)} R({\bf y}(\tau))\,d\tau, \end{equation} where $J$ is the Jacobian of $F$ evaluated at ${\bf y}(t_n)$ and $R({\bf y}(t))$ is the nonlinear remainder of the linearization of $F$ around ${\bf y}(t_n)$. Then, the integral term is approximated numerically, which calls for the approximation of products of matrix functions $\varphi_k(J\Delta t)$ and vectors. Exponential integrator methods approximate these products by generating Krylov subspaces from the vectors, using Arnoldi or Lanczos iteration, and then evaluating the functions on the Hessenberg (or tridiagonal) matrices produced by the iteration. However, when the underlying system of ODEs is stiff, the effectiveness of this approach is reduced, because of the inability of any low-degree polynomial to approximate the exponential accurately on a large interval. It is worthwhile to explore whether methods of this type can be made more effective by replacing their Arnoldi- or Lanczos-based approaches to approximation of matrix functions with a componentwise approach such as that used by KSS methods, particularly with its enhancement through appropriate prescription of quadrature nodes. To illustrate the scalability of KSS methods, we compare several versions of EPI methods, as applied to a test problem [9]. The versions differ in the way in which they compute matrix function-vector products of the form $\varphi(A\tau)\mathbf{b}$: (1) standard Krylov projection, as in (\ref{eqlancapprox}), hereafter referred to as ``Krylov-EPI'' (a minimal sketch of this type of evaluation is given below); (2) using a KSS method on the high-frequency portion ${\bf b}_H$ of ${\bf b}$ and Krylov projection on the low-frequency part ${\bf b}_L$, hereafter referred to as ``KSS-EPI''; (3) Newton interpolation using Leja points [5], hereafter referred to as ``LEJA''; and (4) adaptive Krylov projection [72], hereafter referred to as ``AKP''.
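As a point of reference for the first of these options, the following MATLAB fragment is a minimal sketch of a Krylov-projection evaluation of $\varphi_1(hA)\mathbf{v}$; the function name, the fixed subspace dimension $m$, and the absence of a convergence test are simplifications, and $\varphi_1$ is evaluated on the small Hessenberg matrix via the standard identity that the top-right block of the exponential of the matrix obtained by augmenting $hH_m$ with the column ${\bf e}_1$ equals $\varphi_1(hH_m){\bf e}_1$.

\begin{verbatim}
% Illustrative sketch of Krylov projection for phi_1(h*A)*v.
function w = phi1_krylov(A, v, h, m)
    n = length(v);  V = zeros(n, m);  H = zeros(m, m);
    beta = norm(v);  V(:, 1) = v / beta;
    for j = 1:m                                % Arnoldi with modified Gram-Schmidt
        z = A * V(:, j);
        for i = 1:j
            H(i, j) = V(:, i)' * z;   z = z - H(i, j) * V(:, i);
        end
        if j < m
            H(j+1, j) = norm(z);   V(:, j+1) = z / H(j+1, j);
        end
    end
    X = [h*H, eye(m, 1); zeros(1, m+1)];       % augmented matrix
    E = expm(X);                               % E(1:m, m+1) = phi_1(h*H)*e_1
    w = beta * V * E(1:m, m+1);                % approximation to phi_1(h*A)*v
end
\end{verbatim}

An exponential Euler step, for instance, would then be ${\bf y}_{n+1} = {\bf y}_n + h\varphi_1(hJ)F({\bf y}_n)$, with the product $\varphi_1(hJ)F({\bf y}_n)$ computed by the routine above; the subspace dimension $m$ needed for stiff systems is the cost compared across methods in Figure 3.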
All of these approaches are used in the context of a 3rd-order, 2-stage EPI method [79] \begin{eqnarray} Y_1 & = & \mathbf{y}_n + \frac{1}{3}ha_{11} \varphi_1\left(\frac{1}{3}h A\right)F(\mathbf{y}_n), \label{EPI3} \\ \mathbf{y}_{n+1} & = & \mathbf{y}_n + h \varphi_1(hA)F(\mathbf{y}_n) + 3h b_1 \varphi_2(hA)[F(Y_1) - F(\mathbf{y}_n) - A(Y_1 - \mathbf{y}_n)], \label{epi3s2} \nonumber \end{eqnarray} where $a_{11} = 9/4$ and $b_1 = 32/81$, and $$R(Y_1) = F(Y_1) - F(\mathbf{y}_n) - A(Y_1 - \mathbf{y}_n).$$ For this method, $$\varphi_1(\lambda) = \frac{e^\lambda - 1}{\lambda}, \quad \varphi_2(\lambda) = \frac{e^\lambda- \lambda - 1}{\lambda^2}, \quad \varphi_3(\lambda) = \frac{e^\lambda(6-\lambda)-(6+5\lambda+2\lambda^2)}{\lambda^3}.$$ The test problem is the two-dimensional Allen-Cahn equation given by \begin{equation} \label{AllenCahn} u_t=\alpha\nabla^2u+u-u^3, \quad x,y\in [0,1], \quad t\in [0,0.2] \end{equation} with $\alpha=0.1$, using homogeneous Neumann boundary conditions and initial conditions given by $$u_0(x,y)=0.4+0.1\cos(2\pi x)\cos(2\pi y).$$ The $\nabla^2$ term is discretized using a centered finite difference. For KSS-EPI, the low-frequency portion $\mathbf{b}_L$ consists of all components with wave numbers $\omega_i \leq 7$, $i=1,2$. From Figure 3, it can be seen that for KSS-EPI, the number of overall iterations (matrix-vector multiplications $+$ FFTs) shows almost no sensitivity to the grid size, compared to Krylov-EPI, AKP and Leja interpolation, all of which exhibit substantial growth as the number of grid points increases. Figure 3: Average number of matrix-vector products, shown on a logarithmic scale, per matrix function-vector product evaluation for each method when solving the Allen-Cahn equation (\ref{AllenCahn}) using the 3rd-order EPI method (\ref{EPI3}). For KSS-EPI, FFTs are also included. For each method, bars correspond to grid sizes of $N=25,50,150,300$ points per dimension, from left to right. Left plot: $\Delta t = 0.2$. Right plot: $\Delta t = 0.0125$. The success of this approach relies on a determination of what is the low-frequency part of the solution, since different approaches are used to compute low- and high-frequency components [9]. In the proposed project, an algorithm for adaptively determining an appropriate cut-off, based on smoothness of the solution, will be developed. In addition, low-frequency analysis of block Lanczos iteration will be performed, similar to the high-frequency analysis described earlier in this section, so that low-frequency components can be computed using KSS methods rather than Krylov projection, thus reducing Krylov subspace dimension even further. Solving PDEs on General Domains KSS methods are most effective when applied to problems for which Fourier transforms (including Fourier sine and cosine transforms) can be used to efficiently achieve an approximate separation of high- and low-frequency components of the solution, so that appropriate approximate solution operators can be applied to each, even if these components are still coupled. For problems on non-rectangular domains, the lack of periodicity makes the use of the Fourier transform problematic. Fourier continuation (FC) methods [1,7,63,70] enable Fourier series approximation of non-periodic functions without the highly detrimental slow convergence and Gibbs phenomenon exhibited by Fourier series expansions of non-periodic functions. 
In [1] the FC approach has been combined with explicit time-stepping methods (specifically, fifth-order Runge-Kutta or Adams-Bashforth methods) to solve the compressible Navier-Stokes equations. The resulting methods achieve spectral accuracy in the interior of the domain, and fifth-order accuracy at domain boundaries. The approach was generalized to variable-coefficient PDEs in [70]. As part of the proposed project, FC methods will be combined with KSS methods (as well as EPI methods, for nonlinear problems). The key ingredient in the effectiveness of FC methods is their 1-D differentiation operator, which applies Fourier continuation along the appropriate dimension, employs the FFT to perform the differentiation with spectral accuracy, and then restricts the derivative to the domain. KSS methods function by transforming a Krylov subspace generated by the solution at time $t_n$ into Fourier space, applying frequency-dependent approximations of the solution operator to the Fourier coefficients, and then transforming back to physical space to obtain the solution at time $t_{n+1}$. In other words, evolution in time is accomplished through the application of pseudodifferential operators ($\psi$DOs) with constant coefficients (namely, the operators represented by the matrices $D_j(\Delta t)$ in (\ref{diagop})), of which differentiation operators are a special case. In the proposed project, the FC methodology can be applied to these $\psi$DOs from (\ref{diagop}) as well. Techniques for rapid application of Fourier integral operators [8] and manipulation of symbols of $\psi$DOs [14] will be employed for the purpose of efficiently expressing these operators as combinations of 1-D operators, to which FC can readily be applied in the same way as it is for 1-D differentiation. Public Domain Software KSS methods are difficult to implement efficiently due to their componentwise nature. Therefore, to encourage adoption by researchers throughout science and engineering who need to solve PDEs, and further development by researchers in computational mathematics, the PI will disseminate public domain software that efficiently implements KSS methods for a variety of PDEs. This software will be designed to take advantage of parallel and vector architectures, especially for the computation of frequency-dependent quadrature nodes. It will also incorporate features commonly included in PDE solvers such as error estimation and adaptive time step selection. For error estimation, higher-order quadrature rules can easily be used. The PI's substantial background in computer science, which includes teaching graduate-level courses in programming and large-scale computing, and years of experience as a software engineer in industry, will guide the design and implementation of the software to ensure that an efficient and robust software package is produced. 5 Educational Activities The goals of the educational activities are threefold, addressing three different audiences: (1) academic and industrial mathematicians and computer scientists, (2) undergraduate and graduate students at the University of Southern Mississippi, including students from under-represented groups, and (3) high school students, teachers, and counselors. Academic and Industrial Mathematicians and Computer Scientists The PI will broadly disseminate the results of research described in Section 4 to the scientific computing community, in order to raise awareness of the potential of techniques from ``matrices, moments and quadrature''.
This dissemination will take place through the following mechanisms: The PI will continue to participate in interdisciplinary conferences and workshops, and departmental seminars, both domestically and internationally. Such participation will not only serve the purpose of disseminating the results of the proposed research, but also of highlighting the broader body of work on ``matrices, moments and quadrature'' and its usefulness in many applications. The PI will distribute software that implements the methods described in this proposal for the purpose of solving parabolic and hyperbolic PDEs. The MATLAB implementation of the software will be made available at the web site MATLABCentral, an open-source repository for the MATLAB user community. The MATLAB implementation will be posted on the PI's web site at USM. The software will be advertised through the online newsletter NA Digest, which has approximately 10,000 subscribers worldwide from the scientific computing community, and SIAM News. Undergraduate and Graduate Students at USM A major component of the project is the inclusion of USM students as active participants in the proposed research. The PI requests funding for two graduate students per year, as part of an ongoing effort to establish a comprehensive research program built around KSS methods that can make progress along paths pertaining to either specific applications or general analysis. The Department of Mathematics at USM is devoted to enhancing undergraduate research. As such, the active involvement of undergraduate students will also be sought, particularly for working with new code, as the topic is naturally appealing and can be used to stimulate students' interest in a variety of applied problems and mathematical techniques. Many USM students come from groups that are underrepresented in the mathematical sciences. Table 1, using data obtained from [66], shows that not only are women and African-Americans poorly represented in the mathematical sciences, but their representation is, for the most part, declining. \begin{tabular}{lcccc} & \multicolumn{2}{c}{Women} & \multicolumn{2}{c}{African-Americans} \\ Category / Year & 2000 & 2012 & 2000 & 2012 \\ \hline Mathematics and Statistics & 47.8 & 43.1 & 7.7 & 5.4 \\ Computer Science & 28.0 & 18.2 & 9.5 & 10.6 \\ All fields & 57.3 & 57.4 & 8.3 & 9.9 \end{tabular} Table 1: Percentages of all bachelor's degrees awarded to women and African-Americans. On the other hand, at USM, approximately 61% of the students are women, while 27% are African-American. As a significant percentage of mathematics majors come from these groups (for example, 80% of seniors graduating in 2009 with bachelor's degrees in mathematics were women), the PI will strongly encourage such students to participate in the project. With the support of requested travel funds, graduate and undergraduate students will be encouraged to participate in conferences such as the AMS Annual Meeting and Southeastern Sectional Meetings, the SIAM Annual Meeting, the International Conference on Spectral and High-Order Methods (ICOSAHOM), and SIAM section meetings. The PI will design special topics courses based on the proposed research, in order to acquaint students with the fundamentals of spectral methods, orthogonal polynomials, and Krylov subspace methods. New courses and modifications of existing courses, as well as their broader impacts on the curriculum, will be assessed through course evaluations completed by students, and exit and alumni surveys of mathematics majors that are currently administered by the department.
A learning seminar series based on the project will also be organized, with participating students, the PI, and guest lecturers among the speakers, with travel for visitors supported by a combination of project and department funds. Mississippi High School Students, Teachers, and Counselors It is important that students about to select their majors be aware of career opportunities that are available through study of computational mathematics. As many students choose their majors either shortly before or during their freshman year, outreach should be performed for high school students as well. The PI will provide six lessons for the high school calculus and AP calculus classes at three regional high schools: Presbyterian Christian High School in Hattiesburg, Gulfport High School, and Jim Hill High School in Jackson. A sample of proposed topics and classroom explorations is: Nonlinear Equations: As students are learning various techniques for solving equations, it is important to impress upon them that these techniques are limited in their applicability, while simple approximation can produce a useful result in many more cases. To illustrate this, the students will be presented with an equation that cannot be solved using techniques available to them, and then, using graphs, learn how an approximate solution can be found using a method like bisection or Newton's method. Nonlinear Equations, Floating-point Arithmetic: The computation of a square root using a calculator can seem like a magical black box to a student. It can also seem inconceivable that the reciprocal of a number can be computed without any divisions. Applications of Newton's method such as these are easy for a teacher to present, easy for a student to understand, and can be quite effective at arousing their interest in computational methods in general, and approximation by iteration in particular. Numerical Integration: Calculus students can explore the techniques they have learned for integrating polynomials and use these to approximate integrals of more general functions. One example is the evaluation of a definite integral of $e^{-x^2}$, for which their analytical techniques are useless. The fact that there is a huge difference in computational effort between the composite trapezoidal rule and Gaussian quadrature for this example can illustrate the sophistication of approximation techniques, even without explaining how Gaussian quadrature works, as it is too advanced for them. The following topics do not lend themselves as readily to classroom exercises for high school students, but are feasible for in-class demonstrations by the PI, aided by MATLAB: Image processing: Examples of image processing tasks, such as denoising or deblurring, will be accompanied by explanations of various ingredients of image processing techniques that calculus students can digest, such as the role of derivatives in detecting edges, and the role of the exponential function in diffusion that is used to accomplish denoising. Geometric methods of solving equations: Methods for solving equations that can readily be visualized, in order to demonstrate the benefits of using algebraic and geometric perspectives together. Such methods include Newton's method for nonlinear equations, and orthogonal transformations such as Householder reflections and Givens rotations, applied to small systems of linear equations.
Wave propagation: Solutions to the wave equation in various media and with various boundary conditions, such as periodic, reflecting, or absorbing boundary conditions, will be illustrated through MATLAB simulations and demonstrations such as ripples in water. Given the large proportion of community and junior college transfer students who attend USM, the PI will also work with faculty at area institutions in southern Mississippi and provide similar lessons at a higher level, but addressing the same topics and applications. Beyond southern Mississippi, an ideal venue for such outreach is the Mississippi School for Mathematics and Science (MSMS), which offers advanced courses in mathematics to academically gifted juniors and seniors. The PI will coordinate with the MSMS administration to communicate with the students in a background-appropriate manner about the importance of computational mathematics in science and engineering, including information about possible career paths. This outreach is not intended solely for the students at these institutions; it is also essential that teachers are informed of the significance of computational mathematics, including its central role within science and engineering and the career opportunities that it can provide students, so that they can more effectively train and guide their students who express interest in mathematics. In addition to exposing students to interesting content, it is also critical that students learn about potential careers in mathematics and computational science. In addition to the classroom activities using MATLAB, the PI will provide examples of real-world and industrial applications. These will include a brief overview that will be accessible to high school students, along with materials for the students to review at home that provide information about fields of study and associated career paths. Workforce development and recruitment of U.S. students into these fields have proven challenging over the past decades. In addition to working with students and their teachers, the PI will also meet with guidance counselors at each school, provide a brief presentation about career paths, and distribute a flyer that can be provided to their college-bound students. The PI will conduct pre- and post-activity surveys of participating students to evaluate the effectiveness of their participation and of the classroom explorations, as well as gains in awareness of career paths and in general level of interest. 6 Project Timeline The first phase of the project will be devoted to the following activities: Development of methods for componentwise selection of quadrature nodes for linear PDEs, using the discussion in Section 4 as a starting point. This approach has proven effective for certain specific classes of differential operators, but needs to be generalized and automated. Application of KSS methods to nonlinear PDEs through combination with EPI methods of Runge-Kutta type (EPIRK) [80]. The combined method will be applied to a variety of PDEs ranging from primarily diffusive to primarily advective, and will adaptively determine a cut-off between low- and high-frequency portions of the solution. The combination of KSS methods with Fourier continuation methods in MATLAB applied to linear parabolic PDEs. The smoothness of the solutions to these problems, as well as their linearity, provides a starting point for combining these methods.
The second phase of the project will be devoted to convergence analysis of KSS-EPI methods, as well as combinations of KSS methods with Fourier continuation methods in MATLAB for hyperbolic PDEs and nonlinear PDEs. The combined method already developed for linear parabolic PDEs will be adapted to handle the markedly different behavior of these problems. The final phase of the project will begin with a thorough convergence analysis, both spatial and temporal, of the newly developed and implemented KSS-FC methods. This phase will also focus on the creation of an implementation that will be conducive to vectorization and parallelization. Code from each of the two major phases of the project will be posted to MATLABCentral. The research will be primarily carried out by a graduate student under the supervision of the PI, while an undergraduate student will assist with programming tasks and numerical experimentation. As for educational activities, visits to each high school will take place annually. A learning seminar on ``matrices, moments and quadrature'' will be held in the department each fall semester. Research will be presented at the SIAM Annual Meeting and SIAM section meetings each year, as well as the bi-annual ICOSAHOM Meeting. The proposed project was inspired by the PI's career-defining ambition to develop methods for variable-coefficient PDEs that possess, as much as possible, desirable properties of spectral methods for constant-coefficient PDEs on rectangular domains with periodic boundary conditions: unconditional stability and high accuracy due to the ability to compute components of the solution independently through diagonalization of the spatial differential operator. While these properties are out of reach for problems that are not so unrealistically ``nice'', nonetheless, the evolution of KSS methods has been fruitful due to the incorporation of ideas beyond numerical methods for PDEs. It is the PI's intent that the proposed project, either directly through its research results or indirectly through its educational activities, inspires present and future researchers in computational mathematics to adopt a similarly broad perspective in their own quests to advance the discipline. Albin, N., Bruno, O. P.: A spectral FC solver for the compressible Navier-Stokes equations in general domains I: Explicit time-stepping. Journal of Computational Physics 230 (2011) 6248-6270. Atkinson, K.: An Introduction to Numerical Analysis, 2nd Ed. Wiley (1989) Bai, Z., Golub, G. H.: Bounds for the trace of the inverse and the determinant of symmetric positive definite matrices, Annals Numer. Math. 4 (1997) 29-38. Bai, Z., Golub, G. H.: Some unusual matrix eigenvalue problems. Proceedings of Vecpar '98--Third International Conference for Vector and Parallel Processing J. Palma, J. Dongarra and V. Hernandez Eds., Springer (1999) 4-19. Bergamaschi, L., Caliari, M., Martìnez, A., Vianello, M.: Comparing {L}eja and {K}rylov approximations of large scale matrix exponentials. Lecture Notes in Comput. Sci., 6th International Conference, Reading, UK, May 28--31, 2006, Proceedings, Part IV, Alexandrov, V. N., van Albada, G. D., Sloot, P. M. A., Dongarra, J. (eds.), Springer (2006) 685-692. Boscheri, W., Dumbser, M., Zanotti, O.: High order cell-centered Lagrangian-type finite volume schemes with time-accurate local time stepping on unstructured triangular meshes. J. Comput. Phys. 291 (2015) 120-150. Bruno, O. P., Lyon, M.: High-order unconditionally stable FC-AD solvers for general smooth domains I. 
Basic elements. Journal of Computational Physics 229(6) (2010) 2009-2033. Candes, E., Demanet, L., Ying, L.: Fast Computation of Fourier Integral Operators. SIAM J. Sci. Comput. 29(6) (2007) 2464-2493. Cibotarica, A., Lambers, J. V., Palchak, E. M.: Solution of Nonlinear Time-Dependent PDEs Through Componentwise Approximation of Matrix Functions. Submitted. Coquel, F., Nguyen, Q. L., Postel, M., Tran, Q. H.: Local time stepping applied to implicit-explicit methods for hyperbolic systems. Multiscale Model. Simul. 8(2) (2009/10) 540-570. Dahlquist, G., Eisenstat, S. C., Golub, G. H.: Bounds for the error of linear systems of equations using the theory of moments. J. Math. Anal. Appl. 37 (1972) 151-166. Davis, P., Rabinowitz, P.: Methods of Numerical Integration, 2nd Ed. Academic Press (1984) Dong, S., Lui, K.: Stochastic estimation with $z_2$ noise. Phys. Lett. B 328 (1994) 130-136/ Demanet, L., Ying, L.: Discrete Symbol Calculus. SIAM Review 53 (2011) 71-104. Demirel, A., Niegemann, J., Busch, K., Hochbruck, M.: Efficient multiple time-stepping algorithms of higher order. J. Comput. Phys. 285 (2015) 133-148. Diaz, J., Grote, M. J.: Energy conserving explicit local time stepping for second-order wave equations. SIAM J. Sci. Comput. 31(3) (2009) 1985-2014. Domingues, M. O., Gomes, S. M., Roussel, O., Schneider, K.: An adaptive multiresolution scheme with local time stepping for evolutionary PDEs. J. Comput. Phys. 227(8) (2008) 3758-3780. Druskin, V., Knizhnerman, L., Zaslavsky, M.: Solution of large scale evolutionary problems using rational Krylov subspaces with optimized shifts. SIAM J. Sci. Comput. 31 (2009), 3760-3780. El Soueidy, Ch. P., Younes, A., Ackerer, P.: Solving the advection-diffusion equation on unstructured meshes with discontinuous/mixed finite elements and a local time stepping procedure. Internat. J. Numer. Methods Engrg. 79(9) (2009) 1068-1093. Elhay, S., Golub, G. H., Kautsky, J.: Updating and downdating of orthogonal polynomials with data fitting applications. SIAM J. Matrix. Anal. 12(2) (1991) 327-353. Escribano, B., Akhmatskaya, E., Reich, S., Azpiroz, J. M.: Multiple-time-stepping generalized hybrid Monte Carlo methods. J. Comput. Phys. 280 (2015), 1-20. Ezziani, A., Joly, P.: Local time stepping and discontinuous Galerkin methods for symmetric first order hyperbolic systems. J. Comput. Appl. Math. 234(6) (2010) 1886-1895. Gassner, G. J., Hindenlang, F., Munz, C.-D. A Runge-Kutta based discontinuous Galerkin method with time accurate local time stepping. Adaptive high-order methods in computational fluid dynamics, Adv. Comput. Fluid Dyn. 2, World Sci. Publ., Hackensack, NJ (2011) 95-118. Gautschi, W.: Construction of Gauss-Christoffel quadrature formulas. Math. Comp. 22 (1986) 251-270. Gautschi, W.: Orthogonal polynomials--constructive theory and applications. J. of Comp. and Appl. Math. 12/13 (1985) 61-76. Georgiev, K., Kosturski, N., Margenov, S., Starý, J.: On adaptive time stepping for large-scale parabolic problems: computer simulation of heat and mass transfer in vacuum freeze-drying. J. Comput. Appl. Math. 226(2) (2009) 268-274. Golub, G. H.: Some modified matrix eigenvalue problems. SIAM Review 15 (1973) 318-334. Golub, G. H.: Bounds for matrix moments. Rocky Mnt. J. of Math. 4 (1974) 207-211. Golub, G. H., Meurant, G.: Matrices, Moments and Quadrature. Proceedings of the 15th Dundee Conference, June-July 1993, Griffiths, D. F., Watson, G. A. (eds.), Longman Scientific & Technical (1994) Golub, G. 
H., Stoll, M., Wathen, A.: Approximation of the scattering amplitude. Electron. T. Numer. Ana. 31 (2008) 178-203. Golub, G. H., Underwood, R.: The block Lanczos method for computing eigenvalues. Mathematical Software III, J. Rice Ed., (1977) 361-377. Golub, G. H., Welsch, J.: Calculation of Gauss Quadrature Rules. Math. Comp. 23 (1969) 221-230. Gresho, P. M., Griffiths, D. F., Silvester, D. J.: Adaptive time-stepping for incompressible flow. I. Scalar advection-diffusion. SIAM J. Sci. Comput. 30(4) (2008) 2018-2054. Grote, M. J., Mehlin, M., Mitkova, T.: Runge-Kutta-based explicit local time-stepping methods for wave propagation. SIAM J. Sci. Comput. 37(2) (2015) A747-A775. Grote, M. J., Mitkova, T.: Explicit local time-stepping methods for Maxwell's equations. J. Comput. Appl. Math. 234(12) (2010) 3283-3302. Guidotti, P., Kim, Y, Lambers, J. V.: Image Restoration with a New Class of Forward-Backward Diffusion Equations of Perona-Malik Type with Applications to Satellite Image Enhancement. SIAM J. Imag. Sci. 6(3) (2013) 1416-1444. Guidotti, P., Lambers, J. V., Sølna, K.: Analysis of 1-D Wave Propagation in Inhomogeneous Media. Numer. Funct. Anal. Opt. 27 (2006) 25-55. Guidotti, P., Longo, K.: Two Enhanced Fourth Order Diffusion Models for Image Denoising. J. Math. Imag. Vis. 40 (2011) 188-198. Hochbruck, M., Lubich, C.: A Gautschi-type method for oscillatory second-order differential equations, Numer. Math. 83 (1999) 403-426. Hochbruck, M., Lubich, C.: On Krylov Subspace Approximations to the Matrix Exponential Operator. SIAM J. Numer. Anal. 34 (1996) 1911-1925. Hochbruck, M., Lubich, C., Selhofer, H.: Exponential Integrators for Large Systems of Differential Equations. SIAM J. Sci. Comput. 19 (1998) 1552-1574. Ilie, S., Jackson, K. R., Enright, W. H.: Adaptive time-stepping for the strong numerical solution of stochastic differential equations. Numer. Algorithms 68(4) (2015) 791-812. Jannoun, G., Hachem, E., Veysset, J., Coupez, T.: Anisotropic meshing with time-stepping control for unsteady convection-dominated problems. Appl. Math. Model. 39(7) (2015) 1899-1916. Jansson, J., Logg, A.: Algorithms and data structures for multi-adaptive time-stepping. ACM Trans. Math. Software 35(3) (2008) Art. 17, 24 pp. Kay, D. A., Gresho, P. M., Griffiths, D. F., Silvester, D. J.: Adaptive time-stepping for incompressible flow. II. Navier-Stokes equations. SIAM J. Sci. Comput. 32(1) (2010) 111-128. Knoll, D. A., Keyes, D. E.: Jacobian-free Newton-Krylov methods: a survey of approaches and applications. J. Comput. Phys. 193 (2004) 357-397. Krivodonova, L.: An efficient local time-stepping scheme for solution of nonlinear conservation laws. J. Comput. Phys. 229(22) (2010) 8537-8551. Lai, J., Huang, J.: An adaptive linear time stepping algorithm for second-order linear evolution problems. Int. J. Numer. Anal. Model. 12(2) (2015) 230-253. Lambers, J. V.: Derivation of High-Order Spectral Methods for Time-dependent PDE using Modified Moments. Electron. T. Numer. Ana. 28 (2008) 114-135. Lambers, J. V.: Enhancement of Krylov Subspace Spectral Methods by Block Lanczos Iteration. Electron. T. Numer. Ana. 31 (2008) 86-109. Lambers, J. V.: Explicit High-Order Time-Stepping Based on Componentwise Application of Asymptotic Block Lanczos Iteration. Num. Lin. Alg. Appl. 19 (2012) 970-991. Lambers, J. V.: An Explicit, Stable, High-Order Spectral Method for the Wave Equation Based on Block Gaussian Quadrature. IAENG Journal of Applied Mathematics 38 (2008) 333-348. Lambers, J. 
V.: Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments. International Journal of Computational Science 2 (2008) 376-401. Lambers, J. V.: Krylov Subspace Methods for Variable-Coefficient Initial-Boundary Value Problems. Ph.D. Thesis, Stanford University (2003). Lambers, J. V.: Krylov Subspace Spectral Methods for the Time-Dependent Schrödinger Equation with Non-Smooth Potentials. Numer. Algorithms 51 (2009) 239-280. Lambers, J. V.: Krylov Subspace Spectral Methods for Variable-Coefficient Initial-Boundary Value Problems. Electron. T. Numer. Ana. 20 (2005) 212-234. Lambers, J. V.: A Multigrid Block Krylov Subspace Spectral Method for Variable-Coefficient Elliptic PDE. IAENG Journal of Applied Mathematics 39 (2009) 236-246. Lambers, J. V.: Practical Implementation of Krylov Subspace Spectral Methods. J. Sci. Comput. 32 (2007) 449-476. Lambers, J. V.: Spectral Methods for Time-dependent Variable-coefficient PDE Based on Block Gaussian Quadrature. Proceedings of the 2009 International Conference on Spectral and High-Order Methods (2010) in press. Lambers, J. V.: A Spectral Time-Domain Method for Computational Electrodynamics. Adv. Appl. Math. Mech. 1(6) (2009) 781-798. Loffeld, J., Tokman, M.: Comparative Performance of Exponential, Implicit and Explicit Integrators for Stiff Systems of ODEs. J. Comp. Appl. Math. 241 (2013) 45-67. Lörcher, F., Gassner, G., Munz, C.-D.: An explicit discontinuous Galerkin scheme with local time-stepping for general unsteady diffusion equations. J. Comput. Phys. 227(11) (2008) 5649-5670. Lyon, M., Bruno, O. P.: High-order unconditionally stable FC-AD solvers for general smooth domains II. Elliptic, parabolic and hyperbolic PDEs; theoretical considerations. Journal of Computational Physics 229 (2010) 3358-3381. Meurant, G.: The computation of bounds for the norm of the error in the conjugate gradient algorithm. Numer. Algorithms 16 (1997) 77-87. Moret, I., Novati, P.: RD-rational approximation of the matrix exponential operator. BIT 44 (2004) 595-615. National Science Foundation, Division of Science Resources Staistics: Women, Minorities, and Persons with Disabilities in Science and Engineering: 2009. NSF 09-305, Arlington, VA (2009). Ortner, B.: On the selection of measurement directions in second-rank tensor (e.g. elastic strain) determination of single crystals. J. Appl. Cryst. 22 (1989) 216-221. Ortner, B., Kräuter, A. R.: Lower bounds for the determinant and the trace of a class of Hermitian matrix. Linear Alg. Appl. 326 (1996) 147-180. Palchak, E. M., Cibotarica, A., Lambers, J. V.: Solution of Time-Dependent PDE Through Rapid Estimation of Block Gaussian Quadrature Nodes. Linear Alg. Appl. 468 (2015) 233-259. Bruno, O. P., Prieto, A.: Spatially dispersionless, unconditionally stable FC-AD solvers for variable-coefficient PDEs. J. Sci. Comput. 58(2) (2014) 331-366. Qiao, Z., Zhang, Z., Tang, T.: An adaptive time-stepping strategy for the molecular beam epitaxy models. SIAM J. Sci. Comput. 33(3) (2011) 1395-1414. Rainwater, G., Tokman, M.: A new class of split exponential propagation iterative methods of Runge-Kutta type (sEPIRK) for semilinear systems of ODEs. J. Comp. Phys. 269 (2014) 40-60. Sack, R. A., Donovan, A.: An algorithm for Gaussian quadrature given modified moments. Numer. Math. 18(5) (1972) 465-478. Sapoval, B., Gobron, T., Margolina, A.: Vibrations of fractal drums. Phys. Rev. Lett. 67 (1991) 2974-2977. Sexton, J. C., Weingarten, D. 
H.: Systematic expansion for full QCD based on the valence approximation. Report IBM T. J. Watson Research Center (1994). Shampine, L. F., Reichelt, M. W.: The MATLAB ODE suite. SIAM J. Sci. Comput. 18 (1997) 1-22. Su, Z.: Computational Methods for least squares problems and clinical trials. Ph.D. Thesis, Stanford University (2005). Tirupathi, S., Hesthaven, J. S., Liang, Y., Parmentier, M.: Multilevel and local time-stepping discontinuous Galerkin methods for magma dynamics. Comput. Geosci. 19(4) (2015) 965-978. Tokman, M.: Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods. J. Comp. Phys. 213 (2006) 748-776. Tokman, M.: A new class of exponential propagation iterative methods of Runge-Kutta type (EPIRK). J. Comp. Phys. 230 (2011) 8762-8778. van den Eshof, J., Hochbruck, M.: Preconditioning Lanczos approximations to the matrix exponential. SIAM J. Sci. Comput. 27 (2006) 1438-1457. Wu, S. Y., Cocks, J. A., Jayanthi, C. S.: An accelerated inversion algorithm using the resolvent matrix method. Comput. Phys. Commun. 71 (1992) 15-133. Zhang, P., Zhang, N., Deng, Y., Bluestein, D.: A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma. J. Comput. Phys. 284 (2015) 668-686. Zhang, Z., Qiao, Z.: An adaptive time-stepping strategy for the Cahn-Hilliard equation. Commun. Comput. Phys. 11(4) (2012) 1261-1278.
\begin{document} \title{Faster Sparse Minimum Cost Flow by Electrical Flow Localization} \begin{abstract} We give an $\widetilde{O}(m^{3/2 - 1/762} \log (U+W))$ time algorithm for minimum cost flow with capacities bounded by $U$ and costs bounded by $W$. For sparse graphs with general capacities, this is the first algorithm to improve over the $\widetilde{O}(m^{3/2} \log^{O(1)} (U+W))$ running time obtained by an appropriate instantiation of an interior point method [Daitch-Spielman, 2008]. Our approach extends the framework put forth in [Gao-Liu-Peng, 2021] for computing the maximum flow in graphs with large capacities and, in particular, demonstrates how to reduce the problem of computing an electrical flow with general demands to the same problem on a sublinear-sized set of vertices---even if the demand is supported on the entire graph. Along the way, we develop new machinery to assess the importance of the graph's edges at each phase of the interior point method optimization process. This capability relies on establishing new connections between the electrical flows arising inside that optimization process and vertex distances in the corresponding effective resistance metric. \end{abstract} \tableofcontents \section{Introduction} In the last decade, continuous optimization has proved to be an invaluable tool for designing graph algorithms, often leading to significant improvements over the best known combinatorial algorithms. This has been particularly true in the context of flow problems---arguably, some of the most prominent graph problems~\cite{daitch2008faster, christiano2011electrical,lee2013new, madry2013navigating, sherman2013nearly, kelner2014almost, lee2014path,peng2016approximate, madry2016computing, cmsv17,sherman2017generalized, sherman2017area, sidford2018coordinate, liu2019faster, liu2020faster, axiotis2020circulation, van2020bipartite, van2021minimum}. Indeed, these developments have brought a host of remarkable improvements in a variety of regimes, such as when seeking only approximate solutions, or when the underlying graph is dense. However, most of these improvements did not fully address the challenge of seeking \emph{exact} solutions in \emph{sparse} graphs. Fortunately, the improvements for that regime eventually emerged~\cite{madry2013navigating,madry2016computing,cmsv17,liu2019faster,liu2020faster,axiotis2020circulation}. They still suffered though from an important shortcoming: they all had a polynomial running time dependency on the graph's capacities, and hence---in contrast to the classical combinatorial algorithms---they did not yield efficient algorithms in the presence of arbitrary capacities.
This algorithm runs in time $\tO{m^{3/2 - 1/762}\log (U+W)}$. \subsection{Previous work} In 2013, M\k{a}dry~\cite{madry2013navigating} presented the first running time improvement to the maximum flow problem since the $\tO{m\sqrt{n} \log U}$ algorithm of~\cite{goldberg1998beyond} in the regime of sparse graphs with small capacities. To this end, he presented an algorithm that runs in time $\tO{m^{10/7}\mathrm{poly}(U)}$, where $U$ is a bound on edge capacities, breaking past the $\tO{m^{3/2}}$ running time barrier that has for decades resisted improvement attempts. The main idea in that work was to use an interior point method with an improved number of iterations guarantee that was delivered via use of an adaptive re-weighting of the central path and careful perturbations of the problem instance. Building on this framework, a series of subsequent works~\cite{madry2016computing,liu2019faster,liu2020faster} has brought the runtime of sparse max flow down to $\tO{m^{4/3}\mathrm{poly}(U)}$. (With the most recent of these works crucially relying on nearly-linear time $\ell_p$ flows~\cite{kyng2019flows}.) In parallel~\cite{cmsv17,axiotis2020circulation}, the running time of the more general minimum cost flow problem was reduced to $\tO{m^{4/3}\mathrm{poly}(U)\log W}$, where $W$ is a bound on edge costs. However, even though these algorithms offer a significant improvement when $U$ is relatively small, the question of whether there exists an algorithm faster than $\tO{m^{3/2} \log^{O(1)} U}$ for sparse graphs with general capacities remained open. In fact, a polynomial dependence on capacities or costs seems inherent in the central path re-weighting technique used in all the aforementioned works. Recently, \cite{gao2021fully} finally made progress on this question by developing an algorithm for the maximum flow problem that runs in time $\tO{m^{3/2 - 1/328}\log U}$. The source of improvement here was different from previous works, in the sense that it was not based on decreasing the number of iterations of the interior point method. Instead, it was based on devising a data structure to solve the dynamically changing Laplacian system required by the interior point method in sublinear time per iteration. The new approach put forth by \cite{gao2021fully}, despite being quite different to the prior ones, still leaned on the preconditioning approach of~\cite{madry2016computing}, as well as on other properties that are specific to the maximum flow problem. For this reason, this improvement did not extend to the minimum cost flow problem with general capacities, for which the fastest known runtime was still $\tO{m \log(U+W)+ n^{1.5} \log^2 (U+ W)}$~\cite{van2021minimum} and $\widetilde{O}({m^{3/2} \log^{O(1)} (U+W)})$~\cite{daitch2008faster} in the sparse regime. \subsection{Our result} In this work, we give an algorithm for the minimum cost flow problem with a running time of $\tO{m^{3/2 - 1/762}\log (U+W)}$. This is the first improvement for sparse graphs with general capacities over~\cite{daitch2008faster}, which runs in time $\tO{m^{3/2}\log^{O(1)} (U+W)}$. 
Specifically, we prove that: \begin{theorem} \sloppy Given a graph $G(V,E)$ with edge costs $\boldsymbol{\mathit{c}}\in\mathbb{Z}_{[-W,W]}^m$, a demand $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$, and capacities $\boldsymbol{\mathit{u}}\in\mathbb{Z}_{(0,U]}^m$, there exists an algorithm that with high probability runs in time $\widetilde{O}\left(m^{3/2-1/762}\log (U+W)\right)$ and returns a flow $\boldsymbol{\mathit{f}}\in[\mathbf{0},\boldsymbol{\mathit{u}}]$ in $G$ such that $\boldsymbol{\mathit{f}}$ routes the demand $\boldsymbol{\mathit{d}}$ and the cost $\langle \boldsymbol{\mathit{c}}, \boldsymbol{\mathit{f}}\rangle$ is minimized. \label{thm:main} \end{theorem} \subsection{High level overview of our approach} As we build on the approach presented in~\cite{gao2021fully}, we first briefly overview some of the key ideas introduced there that will also be relevant for our discussion. The maximum flow interior point method by \cite{madry2016computing} works by, repeatedly over $\tO{\sqrt{m}}$ steps, taking an electrical flow step that is a multiple of \[ \boldsymbol{\mathit{\widetilde{f}}} = \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \mathbf{1}_{st}\,, \] where $\boldsymbol{\mathit{L}} = \boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}$ is a Laplacian matrix and $\boldsymbol{\mathit{r}}$ are resistances that change per step. However, $\boldsymbol{\mathit{\widetilde{f}}}$ has $m$ entries and takes $\tO{m}$ to compute, which gives the standard $\tO{m^{3/2}}$ bound. To go beyond this, \cite{gao2021fully} show that it suffices to compute $\boldsymbol{\mathit{\widetilde{f}}}$ for only a \emph{sublinear} number of high-congestion entries of $\boldsymbol{\mathit{\widetilde{f}}}$, where congestion is defined as $\boldsymbol{\mathit{\rho}}=\sqrt{\boldsymbol{\mathit{r}}}\boldsymbol{\mathit{\widetilde{f}}}$. By known linear sketching results, these edges can be detected by computing the inner product $\langle\boldsymbol{\mathit{q}},\boldsymbol{\mathit{\rho}}\rangle$ for a small number of randomly chosen vectors $\boldsymbol{\mathit{q}}\in\mathbb{R}^m$. Crucially, given a vertex subset $C\subseteq V$ of sublinear size that contains $s$ and $t$, this inner product can be equivalently written as the following sublinear-sized inner product \begin{align} \left\langle \boldsymbol{\mathit{q}},\boldsymbol{\mathit{\rho}}\right\rangle = \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right), SC^+ \boldsymbol{\mathit{d}}\right\rangle \,, \label{eq:identity} \end{align} where $SC := SC(G,C)$ is the \emph{Schur complement} of $G$ onto $C$, $\boldsymbol{\mathit{d}}$ is equal to $\boldsymbol{\mathit{B}}^\top \mathbf{1}_{st}$, and $\boldsymbol{\mathit{\pi}}^C\left(\cdot\right)$ is a \emph{demand projection} onto $C$. Therefore, the problem is reduced to maintaining two quantities: $\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ and $(SC(G,C))^+ \boldsymbol{\mathit{d}}$ in sublinear time per operation. The latter is computed by using the dynamic Schur complement data structure of~\cite{durfee2019fully}, and the former can be maintained by a careful use of random walks. We now describe our approach. 
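Before doing so, the following MATLAB fragment spells out the quantities discussed above: the electrical flow routing a demand, the congestion vector $\boldsymbol{\mathit{\rho}} = \sqrt{\boldsymbol{\mathit{r}}}\boldsymbol{\mathit{\widetilde{f}}}$, and a sketched inner product $\langle\boldsymbol{\mathit{q}},\boldsymbol{\mathit{\rho}}\rangle$ with a random sign vector. It is purely illustrative: it uses a small random graph, a generic zero-sum demand in place of $\boldsymbol{\mathit{B}}^\top \mathbf{1}_{st}$, and a dense pseudoinverse in place of a fast Laplacian solver.

\begin{verbatim}
% Illustrative only: electrical flow, congestion, and one congestion sketch.
n = 200;  extra = 800;
tails = [(1:n)'; randi(n, extra, 1)];            % spanning cycle plus random edges
heads = [[2:n, 1]'; randi(n, extra, 1)];
keep  = tails ~= heads;  tails = tails(keep);  heads = heads(keep);
m = numel(tails);
B = sparse([(1:m)'; (1:m)'], [tails; heads], [ones(m,1); -ones(m,1)], m, n);
r = 0.5 + rand(m, 1);                            % resistances
L = B' * spdiags(1 ./ r, 0, m, m) * B;           % weighted Laplacian B^T R^{-1} B
d = zeros(n, 1);  d(1) = 1;  d(n) = -1;          % a zero-sum demand
phi = pinv(full(L)) * d;                         % potentials L^+ d
f   = (B * phi) ./ r;                            % electrical flow R^{-1} B L^+ d
rho = sqrt(r) .* f;                              % congestion vector
q   = sign(randn(m, 1));                         % one random sketching vector
s   = q' * rho;                                  % sketched congestion <q, rho>
\end{verbatim}

In the approach outlined above, these vectors are of course never formed in full; only sketches of $\boldsymbol{\mathit{\rho}}$ and the sublinear-sized quantities on the right-hand side of \eqref{eq:identity} are maintained.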
Instead of using the interior point method formulation of~\cite{madry2016computing} which only applies to the maximum flow problem, we use the one by~\cite{axiotis2020circulation} for the, more general, minimum cost flow problem. There are now several obstacles to making this approach work by maintaining the quantity $\langle \boldsymbol{\mathit{q}},\boldsymbol{\mathit{\rho}}\rangle$: \paragraph{Preconditioning} A significant difference between~\cite{madry2016computing} and~\cite{axiotis2020circulation} is that while the former is able to guarantee that the magnitude of the electrical potentials computed in each step is inversely proportional to the duality gap, meaning that a large duality gap implies potential embeddings of low stretch, no such preconditioning method is known for minimum cost flow. In fact, \cite{axiotis2020circulation} used demand perturbations to show that a \emph{weaker} bound on the potentials can be achieved, which was still sufficient for their purposes. Unfortunately, this bound is not strong enough to be used in the analysis of~\cite{gao2021fully}. In order to alleviate this issue, we completely remove preconditioning from the picture by only requiring a bound on the \emph{energy} of the electrical potentials (instead of their magnitude). In particular, given an approximate demand projection ${\widetilde{\boldsymbol{\mathit{\pi}}}}^{C}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, identity \eqref{eq:identity} is used to detect congested edges. In~\cite{gao2021fully}, there is a uniform upper bound on the entries of the potential embedding $\boldsymbol{\mathit{\phi}} = SC^+ \boldsymbol{\mathit{d}}$ because of preconditioning, thus the error in $(\ref{eq:identity})$ can be bounded by \begin{align*} \left\|\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right\|_1 \left\|\boldsymbol{\mathit{\phi}}\right\|_\infty\,. \end{align*} As we do not have a good bound on $\left\|\boldsymbol{\mathit{\phi}}\right\|_\infty$, we instead use an alternative upper bound on the error: \begin{align*} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)} \sqrt{E_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\phi}}\right)}\,, \end{align*} where $\mathcal{E}_{\boldsymbol{\mathit{r}}}(\cdot)$ gives the energy to route a demand with resistances $\boldsymbol{\mathit{r}}$, and $E_{\boldsymbol{\mathit{r}}}(\cdot)$ gives the energy of a potential embedding with resistances $\boldsymbol{\mathit{r}}$. 
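At an intuitive level, the reason such an energy-based bound suffices is a single application of Cauchy--Schwarz in the inner product induced by the corresponding Laplacian $\boldsymbol{\mathit{L}}$ (we gloss over the requirement that the approximate and the exact projection have the same total mass, so that their difference is a valid demand, as well as the distinction between potentials on $C$ and their harmonic extension): writing $\boldsymbol{\mathit{\psi}}$ for the projection error appearing above,
\begin{align*}
\left|\left\langle \boldsymbol{\mathit{\psi}}, \boldsymbol{\mathit{\phi}}\right\rangle\right|
= \left|\left\langle \boldsymbol{\mathit{L}}^{+/2}\boldsymbol{\mathit{\psi}}, \boldsymbol{\mathit{L}}^{1/2}\boldsymbol{\mathit{\phi}}\right\rangle\right|
\leq \left\|\boldsymbol{\mathit{\psi}}\right\|_{\boldsymbol{\mathit{L}}^+} \left\|\boldsymbol{\mathit{\phi}}\right\|_{\boldsymbol{\mathit{L}}}
= \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\psi}}\right)}\sqrt{E_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\phi}}\right)}\,,
\end{align*}
so controlling the energy of the projection error and the energy of the potentials is enough, even without any entrywise bound on $\boldsymbol{\mathit{\phi}}$.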
As the standard interior point method step satisfies $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, all our efforts focus on ensuring that
\begin{align}
\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)} \leq \varepsilon
\label{eq:energy_error}
\end{align}
for some error parameter $\varepsilon$. One issue is the fact that the energy depends on the current resistances; therefore, even if at some point the error of the demand projection is low, after a few iterations it might increase because of resistance changes. We deal with this issue by taking the stability of resistances along the central path into account. This allows us to upper bound how much this error increases after a number of iterations. The resistance stability lemma is a generalization of the one used in~\cite{gao2021fully}.
Unfortunately, even though (\ref{eq:energy_error}) seems like the right type of guarantee, it is unclear how to ensure that it is always true. Specifically, it involves efficiently computing the hitting probabilities from some vertex $v$ to $C$ in an appropriate norm, which ends up being non-trivial. Instead, we show that the following \emph{weaker} error bound can be ensured with high probability:
\begin{align}
\left|\left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \boldsymbol{\mathit{\phi}}\right\rangle\right| \leq \varepsilon\,,
\label{eq:potential_error}
\end{align}
where $\boldsymbol{\mathit{\phi}}$ is a \emph{fixed} potential vector with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$. Interestingly, this guarantee is still sufficient for our purposes.
\paragraph{Costs and general demand}
There is a fundamental obstacle to using the approach of~\cite{gao2021fully} once edge costs are introduced. In particular, for the maximum flow problem, the demand pushed by the electrical flow in each iteration is an $s$-$t$ demand, so, up to scaling, it is always constant. In minimum cost flow on the other hand, the augmenting flow is a multiple of $\frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}} - \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}}$. Here it is not possible to locate a sublinear number of congested edges just by looking at the electrical flow term $\boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}}$, as there might be significant cancellations with $\frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}}$. We instead use the following equivalent form: $\frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}} - \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$, which allows us to ignore the first term because it is small and concentrate on the electrical flow term.
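To see why these two expressions for the step coincide (a short calculation; the centrality condition it relies on is stated formally in Section~\ref{sec:ipm}): on the central path with parameter $\mu$, the vector $\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}$ is orthogonal to all circulations, and hence equals $\boldsymbol{\mathit{B}}\boldsymbol{\mathit{\phi}}_0$ for some potentials $\boldsymbol{\mathit{\phi}}_0$. Since the map $\boldsymbol{\mathit{z}} \mapsto \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{z}} - \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+\boldsymbol{\mathit{B}}^\top\boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{z}}$ vanishes on vectors of this form (as $\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+\boldsymbol{\mathit{L}}\boldsymbol{\mathit{\phi}}_0 = \boldsymbol{\mathit{B}}\boldsymbol{\mathit{\phi}}_0$), applying it to $\frac{\boldsymbol{\mathit{c}}}{\mu}$ and to $-\left(\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}\right)$ gives the same vector, i.e.
\begin{align*}
-\frac{1}{\mu}\frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}} + \frac{1}{\mu}\boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}}
= \frac{\frac{1}{\boldsymbol{\mathit{s}}^+}-\frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}} - \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+\boldsymbol{\mathit{B}}^\top\frac{\frac{1}{\boldsymbol{\mathit{s}}^+}-\frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}\,,
\end{align*}
which is the equivalent form used above, up to the step size.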
One issue that arises is the fact that the demand vector $\boldsymbol{\mathit{B}}^\top \frac{\frac{1}{\boldsymbol{\mathit{s}}^+}-\frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$ now depends on slacks, and as a result changes throughout the interior point method. This issue can be handled relatively easily. A more significant issue concerns the vertex sparsifier. In fact, the vertex sparsifier framework around which~\cite{gao2021fully} is based only accepts demands that are supported on the vertex set $C$ of the sparsifier. As $|C|$ is sublinear in $n$, this only captures demands with sublinear support, one such example being max flow with support $2$. However, our demand vector $\boldsymbol{\mathit{B}}^\top \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$ in general will be supported on $n$ vertices. Even though it might seem impossible to get around this issue, we show that the special structure of $C$ allows us to push the demand to a small number of vertices. More specifically, we show that if one projects all of the demand onto $C$, the flow induced by this new demand will not differ much from the one with the original demand. Concretely, given a Laplacian system $\boldsymbol{\mathit{L}} \boldsymbol{\mathit{\phi}} = \boldsymbol{\mathit{d}}$, we decompose it into two systems $\boldsymbol{\mathit{L}} \boldsymbol{\mathit{\phi}}^{(1)} = \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ and $\boldsymbol{\mathit{L}} \boldsymbol{\mathit{\phi}}^{(2)} = \boldsymbol{\mathit{d}} - \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$, where $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ is the projection of $\boldsymbol{\mathit{d}}$ onto $C$. Intuitively, the latter system computes the electrical flow to push all demands to $C$, and the former to serve this $C$-supported demand. We show that, as long as $C$ is a \emph{congestion reduction subset} (as it is also the case in~\cite{gao2021fully}), $\boldsymbol{\mathit{\phi}}^{(2)}$ has negligible contribution in the electrical flow, thus it can be ignored. More specifically, in Section~\ref{sec:Fsystem} we prove the following lemma: \begin{replemma}{lem:non-projected-demand-contrib} Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and Laplacian $\boldsymbol{\mathit{L}}$, a $\beta$-congestion reduction subset $C$, and a demand $\boldsymbol{\mathit{d}} = \delta \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}$ for some $\delta > 0$ and $\boldsymbol{\mathit{q}} \in[-1,1]^m$. Then, the potential embedding defined as \begin{align*} \boldsymbol{\mathit{\phi}} = \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{d}} - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{d}}\right)\right) \end{align*} has congestion $\delta \cdot \tO{1/\beta^2}$, i.e. $\left\|\frac{\boldsymbol{\mathit{B}} \boldsymbol{\mathit{\phi}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right\|_\infty \leq \delta \cdot \tO{1/\beta^2}$. \end{replemma} Now, for computing $\boldsymbol{\mathit{\phi}}^{(1)}$, we need to get an approximate estimate of $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$. Even though the most natural approach would be to try to maintain $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ under vertex insertions to $C$, this approach has issues related to the fact that our error guarantee is based on a \emph{fixed} potential vector. 
In particular, if we used an estimate of $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$, then the potential vector in (\ref{eq:potential_error}) would depend on the randomness of this estimate, and as a result the high probability guarantee would not work. Instead, we show that it is not even necessary to maintain $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ very accurately. In fact, it suffices to \emph{exactly} compute it only every few iterations of the algorithm, and use this estimate for the calculation. What allows us to do this is the following lemma, which bounds the change of $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ measured in energy, after a sequence of vertex insertions and resistance changes.
\begin{replemma}{lem:old_projection_approximate}
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}^0$, $\boldsymbol{\mathit{q}}^0\in[-1,1]^m$, a $\beta$-congestion reduction subset $C^0$, and a fixed sequence of updates, where the $i$-th update, for $i\in\{0,\dots,T-1\}$, is of the following form:
\begin{itemize}
\item {\textsc{AddTerminal}($v^i$): Set $C^{i+1} = C^{i} \cup \{v^i\}$ for some $v^i\in V\backslash C^i$, $q_e^{i+1} = q_e^{i}, r_e^{i+1} = r_e^{i}$}
\item {\textsc{Update}($e^i,\boldsymbol{\mathit{q}},\boldsymbol{\mathit{r}}$): Set $C^{i+1} = C^{i}$, $q_e^{i+1} = q_e$, $r_e^{i+1} = r_e$, where $e^i\in E(C^{i})$}
\end{itemize}
Then, with high probability,
\[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T} \left(\boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right) - \boldsymbol{\mathit{\pi}}^{C^T,\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^T}{\sqrt{\boldsymbol{\mathit{r}}^T}}\right)\right)} \leq \tO{\max_{i\in\{0,\dots,T-1\}} \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_\infty^{1/2} \beta^{-2}} \cdot T\,. \]
\end{replemma}
If we call this demand projection estimate $\boldsymbol{\mathit{\pi}}_{old}$, the quantity from (\ref{eq:identity}) that we would like to maintain now becomes
\begin{align*}
\left\langle\boldsymbol{\mathit{q}},\boldsymbol{\mathit{\rho}}\right\rangle \approx \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right), SC^+ \boldsymbol{\mathit{\pi}}_{old}\right\rangle \,.
\end{align*}
Therefore, all that is left is to efficiently maintain approximations to demand projections of the form $\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$.
\paragraph{Bounding demand projections.}
An important component for showing that demand projections can be updated efficiently is bounding the magnitude of an entry $\pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_{e}}{\sqrt{r_e}})$ of the projection, for some fixed edge $e=(u,w)$. This is apparent in the following identity which shows how a demand projection changes after inserting a vertex:
\begin{align}
\boldsymbol{\mathit{\pi}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{d}}\right) = \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{d}}\right) + \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{d}}\right)\cdot \left(\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)\right)\,.
\label{eq:addonevertex} \end{align} In~\cite{gao2021fully} this projection entry is upper bounded by $(p_v^{C\cup\{v\}}(u)+p_v^{C\cup\{v\}}(w)) \cdot \frac{1}{\sqrt{r_e}}$, where $p_v^{C\cup\{v\}}(u)$ is the probability that a random walk starting at $u$ hits $v$ before $C$. This bound can be very bad as $r_e$ can be arbitrarily small, although in the particular case of max flow it is possible to show that such low-resistance edges cannot get congested and thus are not of interest. In order to overcome this issue, we provide a different bound, which in contrast works best when $r_e$ is small. \begin{replemma}{st_projection1} Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and a subset of vertices $C\subseteq V$. For any vertex $v\in V\backslash C$ we have that \begin{align*} \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \frac{\sqrt{r_e}}{R_{eff}(v,e)} \,. \end{align*} \end{replemma} Here $R_{eff}(v,e)$ is the effective resistance between $v$ and $e$. In fact, together with the other upper bound mentioned above, this implies that \[ \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \frac{1}{\sqrt{R_{eff}(v,e)}} \,, \] which no longer depends on the value of the resistance $r_e$. As we will see, it suffices to approximate $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ up to additive accuracy roughly $\widehat{\eps} \cdot (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) / \sqrt{R_{eff}(C,v)}$ for some error parameter $\widehat{\eps} > 0$. Thus, Lemma~\ref{st_projection1} immediately implies that for any edge $e$ such that $R_{eff}(v,e) \gg R_{eff}(C,v)$, this term is small enough to begin with, and thus can be ignored. \paragraph{Important edges.} In order to ensure that the demand projection can be updated efficiently, we focus only on the demand coming from a special set of edges, which we call \emph{important}. These are the edges that are close (in effective resistance metric) to $C$ relative to their own resistance $r_e$. In fact, the farther an edge is from $C$ in this sense, the smaller its worst-case congestion, and so non-important edges do not influence the set of congested edges that we are looking for. At a high level, this is because parts of the graph that are very far in the potential embedding have minimal interactions with each other. \begin{definition}[Important edges] An edge $e\in E$ is called $\varepsilon$-\emph{important} (or just \emph{important}) if $R_{eff}(C,e) \leq r_e / \varepsilon^2$. \end{definition} Based on the above discussion, we seek to find congested edges \emph{only} among important edges. \begin{replemma}{lem:important_edges}[Localization lemma] Let $\boldsymbol{\mathit{\phi}}^*$ be any solution of \begin{align*} \boldsymbol{\mathit{L}} \boldsymbol{\mathit{\phi}}^* = \delta \cdot \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{p}}}{\sqrt{\boldsymbol{\mathit{r}}}} \right)\,, \end{align*} where $\boldsymbol{\mathit{r}}$ are any resistances, $\boldsymbol{\mathit{p}}\in[-1,1]^m$, and $C\subseteq V$. 
Then, for any $e\in E$ that is not $\varepsilon$-important, we have $\left|\frac{\boldsymbol{\mathit{B}}\boldsymbol{\mathit{\phi}}^*}{\sqrt{\boldsymbol{\mathit{r}}}}\right|_e \leq 12\varepsilon$.
\end{replemma}
One issue is that the set of important edges changes whenever $C$ changes. However, we show that, because of the stability of resistances along the central path, the set of important edges only needs to be updated once every few iterations.
\section{Preliminaries}
\subsection{General}
For any $k\in \mathbb{Z}_{\geq 0}$, we denote $[k] = \{1,2,\dots,k\}$. For any $x\in\mathbb{R}^n$ and $C\subseteq [n]$, we denote by $x_C\in\mathbb{R}^{|C|}$ the restriction of $x$ to the entries in $C$. Similarly for a matrix $A$, subset of rows $C$, and subset of columns $F$, we denote by $A_{CF}$ the submatrix that results from keeping the rows in $C$ and the columns in $F$. When not ambiguous, for a vector denoted by a lowercase symbol we use the corresponding uppercase symbol to denote the diagonal matrix of that vector. In other words, $\boldsymbol{\mathit{R}} = \mathrm{diag}(\boldsymbol{\mathit{r}})$.
Given $x,y\in\mathbb{R}$ and $\alpha\in\mathbb{R}_{\geq 1}$, we say that $x$ and $y$ $\alpha$-approximate each other and write $x \approx_{\alpha} y$ if $\alpha^{-1} \leq x/y \leq \alpha$.
When a graph $G(V,E)$ is clear from context, we will use $n = |V|$ and $m=|E|$. We use $\boldsymbol{\mathit{B}}\in\mathbb{R}^{m\times n}$ to denote the edge-vertex incidence matrix of $G$ and, given some resistances $\boldsymbol{\mathit{r}}$, we use $\boldsymbol{\mathit{L}}\in\mathbb{R}^{n\times n}$ to denote the Laplacian $\boldsymbol{\mathit{L}} = \boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}$.
\subsection{Minimum cost flow}
Given a directed graph $G(V,E)$ with costs $\boldsymbol{\mathit{c}}\in\mathbb{R}^m$, demands $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$, and capacities $\boldsymbol{\mathit{u}}\in\mathbb{R}_{> 0}^m$, the \emph{minimum cost flow problem} asks to compute a flow $\boldsymbol{\mathit{f}}$ that
\begin{itemize}
\item{routes the demand: $\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{f}} = \boldsymbol{\mathit{d}}$}
\item{respects the capacities: $\mathbf{0} \leq \boldsymbol{\mathit{f}} \leq \boldsymbol{\mathit{u}}$, and}
\item{minimizes the cost: $\langle \boldsymbol{\mathit{c}}, \boldsymbol{\mathit{f}}\rangle$.}
\end{itemize}
We will denote such an instance of the minimum cost flow by the tuple $(G(V,E),\boldsymbol{\mathit{c}},\boldsymbol{\mathit{d}},\boldsymbol{\mathit{u}})$.
\subsection{Electrical flows}
\begin{definition}[Energy of a potential embedding]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and a potential embedding $\boldsymbol{\mathit{\phi}}$. We denote by
\[ E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) = \sum\limits_{e\in E} \frac{(\boldsymbol{\mathit{B}}\boldsymbol{\mathit{\phi}})_e^2}{r_e} \]
the total energy of the electrical flow induced by $\boldsymbol{\mathit{\phi}}$.
\end{definition}
\begin{definition}[Energy to route a demand]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, and a vector $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$.
If $\boldsymbol{\mathit{d}}$ is a demand ($\langle \mathbf{1},\boldsymbol{\mathit{d}}\rangle = 0$), we denote by
\[ \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}}) = \underset{\boldsymbol{\mathit{\phi}}:\, \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{B}}\boldsymbol{\mathit{\phi}}}{\boldsymbol{\mathit{r}}} = \boldsymbol{\mathit{d}}}{\min}\, E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \]
the total energy that is required to route the demand $\boldsymbol{\mathit{d}}$ with resistances $\boldsymbol{\mathit{r}}$. We extend this definition for a $\boldsymbol{\mathit{d}}$ that is not a demand vector ($\langle\mathbf{1}, \boldsymbol{\mathit{d}}\rangle \neq 0$), as $\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}}) = \mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{d}} - \frac{\langle \mathbf{1}, \boldsymbol{\mathit{d}}\rangle}{n} \cdot \mathbf{1}\right)$.
\end{definition}
\begin{fact}[Energy statements]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$.
\begin{itemize}
\item {For any $x,y\in \mathbb{R}^n$, we have $\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(x+y)} \leq \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(x)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(y)} $.}
\item{For any vector $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$, we have
\[ \underset{\boldsymbol{\mathit{\phi}}:\, E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1}{\max}\, \langle \boldsymbol{\mathit{d}}, \boldsymbol{\mathit{\phi}}\rangle = \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}})}\,. \]}
\item{ For any resistances $\boldsymbol{\mathit{r}}' \leq \alpha \boldsymbol{\mathit{r}}$ for some $\alpha \geq 1$ and any $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$, we have $\mathcal{E}_{\boldsymbol{\mathit{r}}'}(\boldsymbol{\mathit{d}}) \leq \alpha \cdot \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}})$.}
\end{itemize}
\end{fact}
\begin{proof}
We let $\boldsymbol{\mathit{L}} = \boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}$ be the Laplacian of $G$ with resistances $\boldsymbol{\mathit{r}}$.
For the first one, we have $\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(x+y)} = \left\|x+y\right\|_{\boldsymbol{\mathit{L}}^+} \leq \left\|x\right\|_{\boldsymbol{\mathit{L}}^+}+\left\|y\right\|_{\boldsymbol{\mathit{L}}^+} = \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(x)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(y)} $, where we used the triangle inequality.
The second one follows since $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) = \left\|\boldsymbol{\mathit{\phi}}\right\|_{\boldsymbol{\mathit{L}}}^2$ and $\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}}) = \left\|\boldsymbol{\mathit{d}}\right\|_{\boldsymbol{\mathit{L}}^+}^2$ and the norms $\left\|\cdot\right\|_{\boldsymbol{\mathit{L}}}$ and $\left\|\cdot\right\|_{\boldsymbol{\mathit{L}}^+}$ are dual.
For the third one, we note that $\left(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}'^{-1} \boldsymbol{\mathit{B}}\right)^+ \preceq \alpha \cdot \left(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}\right)^+$, and so $\mathcal{E}_{\boldsymbol{\mathit{r}}'}(\boldsymbol{\mathit{d}}) = \left\|\boldsymbol{\mathit{d}}\right\|_{(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}'^{-1} \boldsymbol{\mathit{B}})^+}^2 \leq \alpha \left\|\boldsymbol{\mathit{d}}\right\|_{(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}})^+}^2 = \alpha \cdot \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}})$.
\end{proof}
\begin{definition}[Effective resistances]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and any pair of vertices $u,v\in V$. We denote by $R_{eff}(u,v)$ the energy required to route $1$ unit of flow from $u$ to $v$, i.e. $R_{eff}(u,v) = \mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\mathbf{1}_u - \mathbf{1}_v\right)$. This is called the \emph{effective resistance between $u$ and $v$}. We extend this definition to work with vertex subsets $X,Y\subseteq V$, such that $R_{eff}(X,Y)$ is the effective resistance between the vertices $x,y$ that result from contracting $X$ and $Y$. When used as an argument of $R_{eff}$, an edge $e=(u,v)\in E$ is treated as the vertex subset $\{u,v\}$.
\end{definition}
\begin{definition}[Schur complement]
Given a graph $G(V,E)$ with Laplacian $\boldsymbol{\mathit{L}}\in\mathbb{R}^{n\times n}$ and a vertex subset $C\subseteq V$ as well as $F = V\backslash C$, $SC(G,C) := \boldsymbol{\mathit{L}}_{CC} - \boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1}\boldsymbol{\mathit{L}}_{FC}$ (or just $SC$) is called the \emph{Schur complement of $G$ onto $C$}.
\end{definition}
\begin{fact}[Cholesky factorization]
Given a Laplacian matrix $\boldsymbol{\mathit{L}}\in\mathbb{R}^{n\times n}$, a subset $C\subseteq[n]$, and $F = [n] \backslash C$, we have
\begin{align*}
\boldsymbol{\mathit{L}}^+ = \begin{pmatrix} \boldsymbol{\mathit{I}} & -\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC}\\ \mathbf{0} & \boldsymbol{\mathit{I}} \end{pmatrix} \begin{pmatrix} \boldsymbol{\mathit{L}}_{FF}^{-1} & \mathbf{0}\\ \mathbf{0} & SC(\boldsymbol{\mathit{L}},C)^+ \end{pmatrix} \begin{pmatrix} \boldsymbol{\mathit{I}} & \mathbf{0}\\ -\boldsymbol{\mathit{L}}_{CF} \boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}} \end{pmatrix}\,.
\end{align*}
\end{fact}
\subsection{Random walks}
\begin{definition}[Hitting probabilities]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$. For any $u,v\in V$, $C\subseteq V$, we denote by $p_v^{C,\boldsymbol{\mathit{r}}}(u)$ the probability that, for a random walk that starts from $u$ and traverses each edge with probability proportional to $\frac{1}{\boldsymbol{\mathit{r}}}$, the first vertex of $C$ to be visited is $v$. When not ambiguous, we will use the notation $p_v^C(u)$.
\label{def:hitting_probabilities}
\end{definition}
\begin{definition}[Demand projection]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and a demand vector $\boldsymbol{\mathit{d}}$. For any $v\in V$, $C\subseteq V$, we define $\pi_v^{C,\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}}) = \sum\limits_{u\in V} d_u p_v^{C,\boldsymbol{\mathit{r}}}(u)$ and call the resulting vector $\boldsymbol{\mathit{\pi}}^{C,\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}})\in\mathbb{R}^{n}$ the \emph{demand projection of $\boldsymbol{\mathit{d}}$ onto $C$}. When not ambiguous, we will use the notation $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$.
\label{def:demand_projection}
\end{definition}
For convenience, when we write $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ we might also refer to the restriction of this vector to $C$. This will be clear from the context, and, as $\pi_v^C(\boldsymbol{\mathit{d}}) = 0$ for any $v\notin C$, no ambiguity is introduced.
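As a sanity check of these definitions, the following toy Python snippet (the graph, the terminal set $C$, and all names in it are ours and purely illustrative) computes the hitting probabilities of Definition~\ref{def:hitting_probabilities} exactly, by solving the associated harmonic system rather than by sampling walks, forms the demand projection of Definition~\ref{def:demand_projection}, and verifies that it coincides with the algebraic expression $\boldsymbol{\mathit{d}}_{C} - \boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1}\boldsymbol{\mathit{d}}_{F}$ of Fact~\ref{fact:demand_proj} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Small random connected graph with resistances.
n = 12
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.4]
edges += [(i, i + 1) for i in range(n - 1)]          # guarantee connectivity
edges = sorted(set(edges))
m = len(edges)
r = rng.uniform(0.1, 10.0, size=m)

B = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = 1.0, -1.0
L = B.T @ np.diag(1.0 / r) @ B

C = [0, 1, 2]                                        # terminal set
F = [v for v in range(n) if v not in C]

# Hitting probabilities p_v^C(u): harmonic on F, boundary values 1_{u=v} on C.
P = np.diag(1.0 / L.diagonal()) @ (np.diag(L.diagonal()) - L)   # walk transition matrix
p = np.zeros((n, len(C)))             # p[u, i] = Pr[walk from u first hits C at C[i]]
p[C, :] = np.eye(len(C))
p[F, :] = np.linalg.solve(np.eye(len(F)) - P[np.ix_(F, F)], P[np.ix_(F, C)])

d = rng.normal(size=n)
d -= d.mean()                                        # make d a demand vector
pi_walk = p.T @ d                                    # definition via hitting probabilities
pi_alg = d[C] - L[np.ix_(C, F)] @ np.linalg.solve(L[np.ix_(F, F)], d[F])
print(np.allclose(pi_walk, pi_alg))                  # True
\end{verbatim}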
\begin{fact}[\cite{gao2021fully}]
Given a graph $G(V,E)$ with Laplacian $\boldsymbol{\mathit{L}}$, a vertex subset $C\subseteq V$, and $\boldsymbol{\mathit{d}}\in\mathbb{R}^n$, we have
\[ \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) = \boldsymbol{\mathit{d}}_{C} - \boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{d}}_{F}\,. \]
Additionally,
\[ \left[\boldsymbol{\mathit{L}}^+\boldsymbol{\mathit{d}}\right]_C = SC^+\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \,,\]
where $SC$ is the Schur complement of $G$ onto $C$.
\label{fact:demand_proj}
\end{fact}
An important property of the demand projection is that the energy required to route it is upper bounded by the energy required to route the original demand. The proof can be found in Section~\ref{sec:aux}.
\begin{lemma}\label{lem:sc-energy-bd}
Let $\boldsymbol{\mathit{d}}$ be a demand vector, let $\boldsymbol{\mathit{r}}$ be resistances, and let $C \subseteq V$ be a subset of vertices. Then
\[ \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \right) \leq \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{d}} \right) \,. \]
\end{lemma}
The following lemma relates the effective resistance between a vertex and a vertex set, to the energy to route a particular demand, based on a demand projection.
\begin{lemma}[Effective resistance and hitting probabilities]
Given a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, any vertex set $A\subseteq V$ and vertex $u\in V\backslash A$, we have $R_{eff}(u,A) = \mathcal{E}_{\boldsymbol{\mathit{r}}}(\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u))$.
\label{lem:effective_hitting}
\end{lemma}
\begin{proof}
Let $\boldsymbol{\mathit{L}}$ be the Laplacian of $G$ with resistances $\boldsymbol{\mathit{r}}$ and $F = V\backslash A$.
We first prove that \[ \mathcal{E}_{\boldsymbol{\mathit{r}}}(\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u)) = \mathbf{1}_u^\top \boldsymbol{\mathit{L}}_{FF}^{-1}\mathbf{1}_u \,.\] This is because \begin{align*} & \mathcal{E}_{\boldsymbol{\mathit{r}}}(\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u)) \\ & = \langle \mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u), \boldsymbol{\mathit{L}}^+ (\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u)) \rangle\\ & = \bigg\langle \mathbf{1}_u - \begin{pmatrix}\mathbf{0} & \mathbf{0}\\-\boldsymbol{\mathit{L}}_{AF} \boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}}\end{pmatrix} \mathbf{1}_u, \boldsymbol{\mathit{L}}^+ \left(\mathbf{1}_u - \begin{pmatrix}\mathbf{0} & \mathbf{0} \\ -\boldsymbol{\mathit{L}}_{AF}\boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}} \end{pmatrix} \mathbf{1}_u\right) \bigg\rangle\\ & = \bigg\langle\begin{pmatrix}\boldsymbol{\mathit{I}} & \mathbf{0}\\-\boldsymbol{\mathit{L}}_{AF}\boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}}\end{pmatrix} \left(\mathbf{1}_u + \begin{pmatrix}\mathbf{0}\\\boldsymbol{\mathit{L}}_{AF} \boldsymbol{\mathit{L}}_{FF}^{-1} \mathbf{1}_u\end{pmatrix}\right),\\ &\quad\quad\,\begin{pmatrix}\boldsymbol{\mathit{L}}_{FF}^{-1} & \mathbf{0} \\ \mathbf{0} & SC(\boldsymbol{\mathit{L}},A)^+\end{pmatrix} \begin{pmatrix}\boldsymbol{\mathit{I}} & \mathbf{0}\\-\boldsymbol{\mathit{L}}_{AF}\boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}}\end{pmatrix} \left(\mathbf{1}_u + \begin{pmatrix}\mathbf{0}\\\boldsymbol{\mathit{L}}_{AF}\boldsymbol{\mathit{L}}_{FF}^{-1} \mathbf{1}_u \end{pmatrix}\right) \bigg\rangle\\ & = \left\langle\mathbf{1}_u, \begin{pmatrix}\boldsymbol{\mathit{L}}_{FF}^{-1} & \mathbf{0} \\ \mathbf{0} & SC(\boldsymbol{\mathit{L}},A)^+\end{pmatrix} \mathbf{1}_u\right\rangle\\ &=\langle \mathbf{1}_u, \boldsymbol{\mathit{L}}_{FF}^{-1} \mathbf{1}_u\rangle \,. \end{align*} On the other hand, note that $R_{eff}(u,A) = {\widehat{R}}_{eff}(u,{\mathit{\widehat{a}}})$, where ${\widehat{R}}$ are the effective resistances in a graph $\widehat{G}$ that results after contracting $A$ to a new vertex ${\mathit{\widehat{a}}}$. It is easy to see that the Laplacian of this new graph is \begin{align*} \widehat{\boldsymbol{\mathit{L}}} = \begin{pmatrix} \boldsymbol{\mathit{L}}_{FF} & \boldsymbol{\mathit{L}}_{FA} \mathbf{1} \\ \mathbf{1}^\top \boldsymbol{\mathit{L}}_{AF} & \mathbf{1}^\top \boldsymbol{\mathit{L}}_{FA} \mathbf{1} \end{pmatrix}\,. \end{align*} We look at the system $\widehat{\boldsymbol{\mathit{L}}} \begin{pmatrix}x\\a\end{pmatrix} = \mathbf{1}_u - \mathbf{1}_{{\mathit{\widehat{a}}}}$, where $a$ is a scalar. The solution is given by \begin{align*} & \boldsymbol{\mathit{x}} = \boldsymbol{\mathit{L}}_{FF}^{-1} \left(\mathbf{1}_u - a\cdot \boldsymbol{\mathit{L}}_{FA} \mathbf{1}\right)\,. \end{align*} However, as $\mathbf{1}\in \ker(\widehat{\boldsymbol{\mathit{L}}})$ by the fact that it is a Laplacian, we can assume that $a=0$ by shifting. Therefore $\boldsymbol{\mathit{x}} = \boldsymbol{\mathit{L}}_{FF}^{-1} \mathbf{1}_u$, and so we can conclude that \begin{align*} R_{eff}(u,A) & = \langle\mathbf{1}_u - \mathbf{1}_{{\mathit{\widehat{a}}}}, \widehat{\boldsymbol{\mathit{L}}}^+ (\mathbf{1}_u - \mathbf{1}_{{\mathit{\widehat{a}}}})\rangle\\ & = \langle\mathbf{1}_u, \boldsymbol{\mathit{L}}_{FF}^{-1} \mathbf{1}_u\rangle \,. 
\end{align*}
So we have proved that $R_{eff}(u,A) = \mathcal{E}_{\boldsymbol{\mathit{r}}}(\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u))$ and we are done.
\end{proof}
Finally, the following lemma relates the effective resistance between a vertex and a vertex set to the effective resistance between vertices.
\begin{lemma}
Given a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, any vertex set $A\subseteq V$ and vertex $u\in V\backslash A$, we have
\[ \frac{1}{|A|} \cdot \underset{v\in A}{\min}\, R_{eff}(u,v) \leq R_{eff}(u,A) \leq \underset{v\in A}{\min}\, R_{eff}(u,v)\,.\]
\label{lem:multi_effective_resistance}
\end{lemma}
\begin{proof}
Let $\boldsymbol{\mathit{L}}$ be the Laplacian of $G$ with resistances $\boldsymbol{\mathit{r}}$, and note that $R_{eff}(u,v) = \left\|\boldsymbol{\mathit{L}}^{+/2} (\mathbf{1}_u - \mathbf{1}_v)\right\|_2^2$ and, by Lemma~\ref{lem:effective_hitting}, $R_{eff}(u,A) = \left\|\boldsymbol{\mathit{L}}^{+/2} (\mathbf{1}_u - \boldsymbol{\mathit{\pi}}^A(\mathbf{1}_u))\right\|_2^2$. Expanding the latter, we have
\begin{align*}
R_{eff}(u,A) & = \left\|\sum\limits_{v\in A} \pi_v^A(\mathbf{1}_u)\cdot\boldsymbol{\mathit{L}}^{+/2} (\mathbf{1}_u - \mathbf{1}_v)\right\|_2^2\\
& = \sum\limits_{v\in A}\left(\pi_v^A(\mathbf{1}_u)\right)^2 \left\| \boldsymbol{\mathit{L}}^{+/2} (\mathbf{1}_u - \mathbf{1}_v)\right\|_2^2 + \sum\limits_{v\in A} \sum\limits_{\substack{v'\in A\\v'\neq v}} \pi_v^A(\mathbf{1}_u) \pi_{v'}^A(\mathbf{1}_u) \langle \mathbf{1}_u - \mathbf{1}_{v'}, \boldsymbol{\mathit{L}}^{+} (\mathbf{1}_u - \mathbf{1}_v)\rangle\,.
\end{align*}
Now, note that $\pi_v^A(\mathbf{1}_u),\pi_{v'}^A(\mathbf{1}_u) \geq 0$. Additionally, let $\boldsymbol{\mathit{\phi}} = \boldsymbol{\mathit{L}}^{+} (\mathbf{1}_u - \mathbf{1}_v)$ be the potential embedding that induces a $1$-unit electrical flow from $v$ to $u$. As all potentials of this embedding lie between $\phi_v$ and $\phi_u$, we have that $\phi_{v'} \leq \phi_u$, so $\langle \mathbf{1}_u - \mathbf{1}_{v'}, \boldsymbol{\mathit{L}}^+ (\mathbf{1}_u - \mathbf{1}_v)\rangle = \phi_u- \phi_{v'} \geq 0$. Therefore,
\begin{align*}
R_{eff}(u,A) & \geq \sum\limits_{v\in A}\left(\pi_v^A(\mathbf{1}_u)\right)^2 \left\| \boldsymbol{\mathit{L}}^{+/2} (\mathbf{1}_u - \mathbf{1}_v)\right\|_2^2\\
& \geq \underset{v\in A}{\min}\, R_{eff}(u,v) \cdot \sum\limits_{v\in A}\left(\pi_v^A(\mathbf{1}_u)\right)^2\\
& \geq \frac{1}{|A|}\underset{v\in A}{\min}\, R_{eff}(u,v)\,,
\end{align*}
where the last inequality uses the Cauchy--Schwarz inequality $\sum\limits_{v\in A}\left(\pi_v^A(\mathbf{1}_u)\right)^2 \geq \frac{1}{|A|}\left(\sum\limits_{v\in A}\pi_v^A(\mathbf{1}_u)\right)^2$ together with the fact that $\sum\limits_{v\in A} \pi_v^A(\mathbf{1}_u) = 1$. The upper bound of the lemma follows from Rayleigh's monotonicity principle: contracting $A$ into a single vertex can only decrease effective resistances, so $R_{eff}(u,A) \leq R_{eff}(u,v)$ for every $v\in A$.
\end{proof}
\section{Interior Point Method with Dynamic Data Structures}
\label{sec:ipm}
The goal of this section is to show that, given a data structure for approximating electrical flows in sublinear time, we can execute the min cost flow interior point method with total runtime faster than $\tO{m^{3/2}}$.
\subsection{LP formulation and background}
We present the interior point method setup that we will use, which is from~\cite{axiotis2020circulation}.
Our goal is to solve the following minimum cost flow linear program: \begin{align*} & \min\, \left\langle \boldsymbol{\mathit{c}}, \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}}\right\rangle\\ & \mathbf{0} \leq \boldsymbol{\mathit{f}}^0 + \boldsymbol{\mathit{C}} \boldsymbol{\mathit{x}} \leq \boldsymbol{\mathit{u}}\,, \end{align*} where $\boldsymbol{\mathit{f}}^0$ is a flow with $\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{f}}^0= \boldsymbol{\mathit{d}}$ and $\boldsymbol{\mathit{C}}$ is an $m\times (m-n+1)$ matrix whose image is the set of circulations in $G$. In order to use an interior point method, the following log barrier objective is defined: \begin{align} \underset{\boldsymbol{\mathit{x}}}{\min}\, F_{\mu}(\boldsymbol{\mathit{x}}) = \left\langle \frac{\boldsymbol{\mathit{c}}}{\mu}, \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}} \right\rangle -\sum\limits_{e\in E}\left(\log\left(\boldsymbol{\mathit{f}}^0 + \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}}\right)_e + \log\left(\boldsymbol{\mathit{u}} - (\boldsymbol{\mathit{f}}^0 + \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}})\right)_e \right)\,. \label{eq:logbarrier} \end{align} For any parameter $\mu > 0$, the optimality condition of (\ref{eq:logbarrier}) is called the \emph{centrality condition} and is given by \begin{align} \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}\right) = \mathbf{0}\,, \label{eq:centrality} \end{align} where $\boldsymbol{\mathit{f}} = \boldsymbol{\mathit{f}}^0 + \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}}$, and $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}} - \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^- = \boldsymbol{\mathit{f}}$ are called the \emph{positive} and \emph{negative} slacks of $\boldsymbol{\mathit{f}}$ respectively. This leads us to the following definitions. \begin{definition}[$\mu$-central flow] Given a minimum cost flow instance with costs $\boldsymbol{\mathit{c}}$, demands $\boldsymbol{\mathit{d}}$ and capacities $\boldsymbol{\mathit{u}}$, as well as a parameter $\mu > 0$, we will say that a flow $\boldsymbol{\mathit{f}}$ (and its corresponding slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$) is $\mu$-central if $\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{f}} = \boldsymbol{\mathit{d}}$, $\boldsymbol{\mathit{s}} > \mathbf{0}$, and it satisfies the centrality condition (\ref{eq:centrality}), i.e. \begin{align*} \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}\right) = \mathbf{0}\,. \end{align*} Additionally, we will denote such flow by $\boldsymbol{\mathit{f}}(\mu)$ (and its corresponding slacks and resistances by $\boldsymbol{\mathit{s}}(\mu)$ and $\boldsymbol{\mathit{r}}(\mu)$, respectively). \end{definition} \begin{definition}[$(\mu,\alpha)$-central flow] Given parameters $\mu > 0$ and $\alpha\geq 1$, we will say that a flow $\boldsymbol{\mathit{f}}$ with resistances $\boldsymbol{\mathit{r}} > \mathbf{0}$ is $(\mu,\alpha)$-central if $\boldsymbol{\mathit{r}} \approx_{\alpha} \boldsymbol{\mathit{r}}(\mu)$. We will also call its corresponding slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$ $(\mu,\alpha)$-central. 
\end{definition}
Given a $\mu$-central flow $\boldsymbol{\mathit{f}}$ and some step size $\delta > 0$, the standard (Newton) step to obtain an approximately $\mu/(1+\delta)$-central flow $\boldsymbol{\mathit{f}}' = \boldsymbol{\mathit{f}} + \boldsymbol{\mathit{\widetilde{f}}}$ is given by
\begin{align*}
\boldsymbol{\mathit{\widetilde{f}}} & = -\frac{\delta}{\mu} \frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}} + \frac{\delta}{\mu} \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{c}}}{\boldsymbol{\mathit{r}}}\\
& = \delta \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}} - \delta \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}\\
& = \delta \cdot g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})
\end{align*}
where $\boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$ and we have denoted $g(\boldsymbol{\mathit{s}}) = \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$.
\begin{fact}
Using known scaling arguments, we can assume that costs and capacities are bounded by $\mathrm{poly}(m)$, while only incurring an extra logarithmic dependence on the largest network parameter~\cite{gabow1983scaling}.
\end{fact}
We also use the fact that the resistances in the interior point method are never too large, which is proved in Appendix~\ref{sec:aux}.
\begin{fact}
For any $\mu\in(1/\mathrm{poly}(m),\mathrm{poly}(m))$, we have $\left\|\boldsymbol{\mathit{r}}(\mu)\right\|_\infty \leq m^{\tO{\log m}}$.
\end{fact}
\subsection{Making progress with approximate electrical flows}
The following lemma shows that we can make $k$ steps of the interior point method by computing $O(k^4)$ $(1+O(k^{-6}))$-\emph{approximate} electrical flows. The proof is essentially the same as in~\cite{gao2021fully}, but we provide it for completeness in Appendix~\ref{sec:proof_lem_approx_central}.
\begin{lemma} Let $\boldsymbol{\mathit{f}}^1,\dots,\boldsymbol{\mathit{f}}^{T+1}$ be flows with slacks $\boldsymbol{\mathit{s}}^t$ and resistances $\boldsymbol{\mathit{r}}^t$ for $t\in[T+1]$, where $T = \frac{k}{\varepsilon_{\mathrm{step}}}$ for some $k\leq \sqrt{m}/10$ and $\varepsilon_{\mathrm{step}} = 10^{-5}k^{-3}$, such that \begin{itemize} \item $\boldsymbol{\mathit{f}}^1$ is $(\mu,1+\varepsilon_{\mathrm{solve}}/8)$-central for $\varepsilon_{\mathrm{solve}} = 10^{-5} k^{-3}$ \item {For all $t\in[T]$ and $e\in E$, $f_e^{t+1} = \begin{cases} f_e(\mu) + \varepsilon_{\mathrm{step}} \sum\limits_{i=1}^{t} \widetilde{f}_e^i & \text{if $\exists i\in[t]:\widetilde{f}_e^{i} \neq 0$}\\ f_e^1 & \text{otherwise} \end{cases}$, where \[ \boldsymbol{\mathit{\widetilde{f}}}^{*t} = \delta g(\boldsymbol{\mathit{s}}^t) - \delta (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\left(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\right)^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^t) \] for $\delta = \frac{1}{\sqrt{m}}$ and \[ \left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^{*t} - \boldsymbol{\mathit{\widetilde{f}}}^{t}\right) \right\|_\infty \leq \varepsilon \] for $\varepsilon = 10^{-6} k^{-6}$. } \end{itemize} Then, setting $\varepsilon_{\mathrm{step}} = \varepsilon_{\mathrm{solve}} = 10^{-5} k^{-3}$ and $\varepsilon = 10^{-6} k^{-6}$ we get that $\boldsymbol{\mathit{s}}^{T+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}}\right)$. \label{lem:approx_central} \end{lemma} From now and for the rest of Section~\ref{sec:ipm} we fix the values of $\varepsilon_{\mathrm{step}}, \varepsilon_{\mathrm{solve}}, \varepsilon$ based on this lemma. Using this lemma together with the following recentering procedure also used in~\cite{gao2021fully}, we can exactly compute a $\left({\mu}/{(1+\varepsilon_{\mathrm{step}}/\sqrt{m})^{k\varepsilon_{\mathrm{step}}^{-1}}}\right)$-central flow. \begin{lemma} Given a flow $\boldsymbol{\mathit{f}}$ with slacks $\boldsymbol{\mathit{s}}$ such that $\boldsymbol{\mathit{s}}\approx_{1.1} \boldsymbol{\mathit{s}}(\mu)$ for some $\mu > 0$, we can compute $\boldsymbol{\mathit{f}}(\mu)$ in $\tO{m}$. \label{lem:recenter} \end{lemma} \subsection{The \textsc{Locator} data structure} \label{sec:locator_def} From the previous lemma it becomes obvious that the only thing left is to maintain in sublinear time an approximation to \[ \delta g(\boldsymbol{\mathit{s}}^t) - \delta (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^t)\,. \] for $\delta = 1 / \sqrt{m}$. This is the job of the $(\alpha,\beta,\varepsilon)$-\textsc{Locator} data structure, which computes all the entries of this vector that have magnitude $\geq \varepsilon$. We note that the guarantees of this data structure are similar to the ones in~\cite{gao2021fully}, but our locator requires an extra parameter $\alpha$ which is a measure of how much resistances can deviate before a full recomputation has to be made. 
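Before giving the formal definition, the following Python sketch fixes the input/output behavior that we have in mind. The class and method names are ours, and the implementation is a deliberately naive reference that recomputes the full electrical step with a dense pseudo-inverse; the whole point of Section~\ref{sec:locator} is to emulate this behavior in time sublinear in $m$, returning a slightly larger candidate set and succeeding only with high probability.
\begin{verbatim}
import numpy as np

class ReferenceLocator:
    """Exact (non-sublinear) reference for the Solve() semantics defined below:
    it recomputes the full electrical step and returns the eps-congested edges."""

    def __init__(self, B, u, f, eps):
        self.B, self.u, self.eps = B, u, eps
        self.m = B.shape[0]
        self.f = f.astype(float)

    def update(self, e, f_e):
        # Update the flow (and hence slacks and resistance) of a single edge.
        self.f[e] = f_e

    def _slacks_resistances(self):
        s_plus, s_minus = self.u - self.f, self.f
        r = 1.0 / s_plus**2 + 1.0 / s_minus**2
        return s_plus, s_minus, r

    def solve(self):
        s_plus, s_minus, r = self._slacks_resistances()
        g = (1.0 / s_plus - 1.0 / s_minus) / r
        delta = 1.0 / np.sqrt(self.m)
        L = self.B.T @ np.diag(1.0 / r) @ self.B
        # f_tilde = delta * g(s) - delta * R^{-1} B L^+ B^T g(s)
        f_tilde = delta * g - delta * (self.B @ (np.linalg.pinv(L) @ (self.B.T @ g))) / r
        return np.flatnonzero(np.sqrt(r) * np.abs(f_tilde) >= self.eps)
\end{verbatim}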
\begin{definition}[$(\alpha,\beta,\varepsilon)$-\textsc{Locator}] An $(\alpha,\beta,\varepsilon)$-\textsc{Locator} is a data structure that maintains valid slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$, and can support the following operations against oblivious adversaries with high probability: \begin{itemize} \item{$\textsc{Initialize}(\boldsymbol{\mathit{f}})$: Set $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}}-\boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^-=\boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$. } \item{$\textsc{Update}(e,\boldsymbol{\mathit{f}})$: Set $s_e^+ = u_e-f_e$, $s_e^-=f_e$, $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$. Works under the condition that \[ r_e^{\max} / \alpha \leq r_e \leq \alpha \cdot r_e^{\min}\,, \] where $r_e^{\max}$ and $r_e^{\min}$ are the maximum and minimum resistance values that edge $e$ has had since the last call to $\textsc{BatchUpdate}$.} \item{$\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$: Set $s_e^+ = u_e - f_e, s_e^- = f_e, r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$ for all $e \in Z$. } \item{$\textsc{Solve}()$: Let \begin{align} \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,, \label{def:ele} \end{align} where $\delta = \frac{1}{\sqrt{m}}$. Returns an edge set $Z$ of size $\tO{\varepsilon^{-2}}$ that with high probability contains all $e$ such that $\sqrt{r_e} \left|\widetilde{f}_e^*\right| \geq \varepsilon$. } \end{itemize} The data structure works as long as the total number of calls to $\textsc{Update}$, plus the sum of $|Z|$ for all calls to $\textsc{BatchUpdate}$ is $O(\beta m)$. \label{def:locator} \end{definition} In Section~\ref{sec:locator} we will prove the following lemma, which constructs an $(\alpha,\beta,\varepsilon)$-\textsc{Locator} and outlines its runtime guarantees: \begin{lemma}[Efficient $(\alpha,\beta,\varepsilon)$-\textsc{Locator}] For any graph $G(V,E)$ and parameters $\alpha \geq 1$, $\beta \in(0,1)$, $\varepsilon \geq \tOm{\beta^{-2} m^{-1/2}}$, and $\widehat{\eps} \in \left(\tOm{\beta^{-2} m^{-1/2}},\varepsilon\right)$, there exists an $(\alpha,\beta,\varepsilon)$-\textsc{Locator} for $G$ with the following runtimes per operation: \begin{itemize} \item {$\textsc{Initialize}(\boldsymbol{\mathit{f}})$: $\tO{m \cdot \left(\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-2} \alpha^2 \beta^{-4}\right)}$.} \item{$\textsc{Update}(e,\boldsymbol{\mathit{f}})$: $\tO{m \cdot \frac{\widehat{\eps} \alpha^{1/2}}{\varepsilon^{3}} + \widehat{\eps}^{-4} \varepsilon^{-2} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-4} \alpha^2 \beta^{-6}}$ amortized. } \item{$\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$: $\tO{m \cdot \frac{1}{\varepsilon^2} + |Z|\cdot \frac{1}{\varepsilon^2 \beta^2} }$. } \item{$\textsc{Solve}()$: $\tO{\beta m \cdot \frac{1}{\varepsilon^2}}$. } \end{itemize} \label{lem:locator} \end{lemma} Note that even though a $\textsc{Locator}$ computes a set that contains all $\varepsilon$-congested edges, it does not return the actual flow values. The reason for that is that it only works against oblivious adversaries, and allowing (randomized) flow values to affect future updates constitutes an adaptive adversary. 
As in~\cite{gao2021fully}, we resolve this by sanitizing the outputs through a different data structure called $\textsc{Checker}$, which computes the flow values and works against semi-adaptive adversaries. As the definition and implementation of $\textsc{Checker}$ is orthogonal to our contribution and also does not affect the final runtime, we defer the discussion to Appendix~\ref{sec:checker}. To simplify the presentation in this section, we instead define the following idealized version of it, called $\textsc{PerfectChecker}$. \begin{definition}[$\varepsilon$-\textsc{PerfectChecker}] For any error $\varepsilon > 0$, an $\varepsilon$-$\textsc{PerfectChecker}$ is an oracle that given a graph $G(V,E)$, slacks $\boldsymbol{\mathit{s}}$, resistances $\boldsymbol{\mathit{r}}$, supports the following operations: \begin{itemize} \item{\textsc{Update}$(e,\boldsymbol{\mathit{f}})$: Set $s_e^+ = u_e - f_e$, $s_e^- = f_e$, $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$. } \item{\textsc{Check}$(e)$: Compute a flow value $\widetilde{f}_e$ such that $\sqrt{r_e} \left|\widetilde{f}_e - \widetilde{f}_e^*\right| \leq \varepsilon$, where \[ \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}) \,, \] with $\delta = 1/\sqrt{m}$. If $\sqrt{r_e} \left|\widetilde{f}_e\right| < \varepsilon / 2$ return $0$, otherwise return $\widetilde{f}_e$. } \end{itemize} \label{def:perfect_checker} \end{definition} \subsection{The minimum cost flow algorithm} Now, we will show how the data structure defined in Section~\ref{sec:locator_def} can be used to make progress along the central path. The main lemma that analyzes the performance of the minimum cost flow algorithm given access to an $(\alpha,\beta,\varepsilon)$-\textsc{Locator} is Lemma~\ref{lem:mincostflow}. Also, the skeleton of the algorithm is described in Algorithm~\ref{alg:main}. \begin{algorithm} \begin{algorithmic}[1] \caption{Minimum Cost Flow} \Procedure{\textsc{MinCostFlow}}{$G,\boldsymbol{\mathit{c}},\boldsymbol{\mathit{d}},\boldsymbol{\mathit{u}}$} \State $\boldsymbol{\mathit{\bar{f}}}, \mu = \textsc{Initialize}(G,\boldsymbol{\mathit{c}}, \boldsymbol{\mathit{d}},\boldsymbol{\mathit{u}})$ \Comment{Lemma~\ref{lem:init}. 
$\boldsymbol{\mathit{\bar{f}}}$ is $\mu$-central at all times.} \State $i = 0$ \While {$\mu \geq m^{-10}$} \If {$i$ is a multiple of $\lfloor\varepsilon_{\mathrm{solve}} \sqrt{\beta m} / k\rfloor$} \Comment{Re-initialize when $|C|$ exceeds $O(\beta m)$.} \State $\mathcal{L} = \textsc{Locator}.\textsc{Initialize}(\boldsymbol{\mathit{\bar{f}}})$ with error $\varepsilon/2$ \EndIf \If {$i$ is a multiple of $\lfloor\varepsilon_{\mathrm{solve}} \sqrt{\beta_{\textsc{Checker}} m} / k\rfloor$} \State $\mathcal{C}^i = \textsc{Checker}.\textsc{Initialize}(\boldsymbol{\mathit{\bar{f}}}, \varepsilon, \beta_{\textsc{Checker}})$ for $i\in[k\varepsilon_{\mathrm{step}}^{-1}]$ \EndIf \If {$i$ is a multiple of $\lfloor 0.5\alpha^{1/4} / k - 1\rfloor$} \Comment{Update important edges when $\mathcal{L}.\boldsymbol{\mathit{r}}^0$ expires} \State $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$ \label{line:batchupdate1} \EndIf \State $\boldsymbol{\mathit{\bar{f}}}, \mu = \textsc{MultiStep}(\boldsymbol{\mathit{\bar{f}}}, \mu)$ \If {$i$ is a multiple of $\widehat{T}$} \State $Z = \emptyset$ \For {$e \in E$} \State $\bar{s}_e^+ = u_e - \bar{f}_e$, $\bar{s}_e^- = \bar{f}_e$ \If {$\bar{s}_e^+ \not\approx_{\varepsilon_{\mathrm{solve}} / 16} \mathcal{L}.s_e^+$ or $\bar{s}_e^- \not\approx_{\varepsilon_{\mathrm{solve}} / 16} \mathcal{L}.s_e^-$} \State $\mathcal{C}^i.\textsc{Update}(e, \boldsymbol{\mathit{\bar{f}}})$ for $i\in[k\varepsilon_{\mathrm{step}}^{-1}]$ \State $Z = Z\cup\{e\}$ \EndIf \EndFor \State $\mathcal{L}.\textsc{BatchUpdate}(Z, \boldsymbol{\mathit{\bar{f}}})$ \label{line:batchupdate2} \Else \For {$e \in E$} \State $\bar{s}_e^+ = u_e - \bar{f}_e$, $\bar{s}_e^- = \bar{f}_e$ \If {$\bar{s}_e^+ \not\approx_{\varepsilon_{\mathrm{solve}} / 8} \mathcal{L}.s_e^+$ or $\bar{s}_e^- \not\approx_{\varepsilon_{\mathrm{solve}} / 8} \mathcal{L}.s_e^-$} \State $\mathcal{C}^i.\textsc{Update}(e, \boldsymbol{\mathit{\bar{f}}})$ for $i\in[k\varepsilon_{\mathrm{step}}^{-1}]$ \State $\mathcal{L}.\textsc{Update}(e, \boldsymbol{\mathit{\bar{f}}})$ \EndIf \EndFor \EndIf \State $i = i + 1$ \EndWhile \State \Return $\textsc{Round}(G,\boldsymbol{\mathit{c}},\boldsymbol{\mathit{d}},\boldsymbol{\mathit{u}},\boldsymbol{\mathit{\bar{f}}})$ \Comment{Lemma~\ref{lem:rounding}} \EndProcedure \label{alg:main} \end{algorithmic} \end{algorithm} \begin{lemma}[\textsc{MinCostFlow}] Let $\mathcal{L}$ be an $(\alpha,\beta,\varepsilon)-\textsc{Locator}$, $\boldsymbol{\mathit{f}}$ be a $\mu$-central flow where $\mu = \mathrm{poly}(m)$, and $k\in\left[m^{1/316}\right]$, $\beta \geq \tOm{k^3 / m^{1/4}}$, $\widehat{T} \in\left[\tO{m^{1/2} / k}\right]$ be some parameters. There is an algorithm that with high probability computes a $\mu'$-central flow $\boldsymbol{\mathit{f}}'$, where $\mu' \leq m^{-10}$. Additionally, the algorithm runs in time $\tO{m^{3/2} / k}$, plus \begin{itemize} \item $\tO{k^3 \beta^{-1/2}}$ calls to $\mathcal{L}.\textsc{Initialize}$, \item $\tO{m^{1/2} k^3}$ calls to $\mathcal{L}.\textsc{Solve}$, \item $\tO{m^{1/2} \left(k^6\widehat{T} + k^{15}\right)}$ calls to $\mathcal{L}.\textsc{Update}$, \item $\tO{m^{1/2} \alpha^{-1/4}}$ calls to $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$, and \item { $\tO{m^{1/2} k^{-1} \widehat{T}^{-1}}$ calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{\bar{f}}})$ for some $Z\neq \emptyset,\boldsymbol{\mathit{\bar{f}}}$. 
Additionally, the sum of $|Z|$ over all such calls is $\tO{mk^3\beta^{1/2}}$.} \end{itemize} \label{lem:mincostflow} \end{lemma} The proof appears in Appendix~\ref{proof_lem_mincostflow}. Its main ingredient is the following lemma, which easily follows from Lemma~\ref{lem:approx_central} and essentially shows how $k$ steps of the interior point method can be performed in $\tO{m}$ instead of $\tO{m k}$. Its proof appears in Appendix~\ref{proof_lem_multistep}. \begin{lemma}[\textsc{MultiStep}] Let $k\in\{1,\dots,\sqrt{m}/10\}$. We are given $\boldsymbol{\mathit{f}}(\mu)$, an $(\alpha,\beta,\varepsilon/2)$-\textsc{Locator} $\mathcal{L}$, and an $\varepsilon$-\textsc{PerfectChecker} $\mathcal{C}$, such that \begin{itemize} \item{ $\mathcal{L}.\boldsymbol{\mathit{r}} = \mathcal{C}.\boldsymbol{\mathit{r}}$ are $(\mu,1+\varepsilon_{\mathrm{solve}}/8)$-central resistances, and } \item{ $\mathcal{L}.\boldsymbol{\mathit{r}}^0$ are $(\mu^0,1+\varepsilon_{\mathrm{solve}}/8)$-central resistances, where $\mu^0 \leq \mu \cdot (1 + \varepsilon_{\mathrm{step}}/\sqrt{m})^{\widehat{T}}$ and $\widehat{T} = (0.5\alpha^{1/4} - k)\varepsilon_{\mathrm{step}}^{-1}$. Additionally, for any resistances $\boldsymbol{\mathit{\widehat{r}}}$ that $\mathcal{L}$ had at any point since the last call to $\mathcal{L}.\textsc{BatchUpdate}$, $\boldsymbol{\mathit{\widehat{r}}}$ are $(\hat{\mu}, 1.1)$-central for some $\hat{\mu}\in[\mu,\mu^0]$. } \end{itemize} Then, there is an algorithm that with high probability computes $\boldsymbol{\mathit{f}}(\mu')$, where $\mu' = \mu / (1+\varepsilon_{\mathrm{step}}/\sqrt{m})^{k\varepsilon_{\mathrm{step}}^{-1}}$. The algorithm runs in time $\tO{m}$, plus $O(k^{16})$ calls to $\mathcal{L}.\textsc{Update}$, $O(k^4)$ calls to $\mathcal{L}.\textsc{Solve}$, and $O(k^{16})$ calls to $\mathcal{C}.\textsc{Update}$ and $\mathcal{C}.\textsc{Check}$. Additionally, $\mathcal{L}.\boldsymbol{\mathit{r}}$ and $\mathcal{C}$ are unmodified. 
\label{lem:multistep} \end{lemma} \begin{algorithm} \begin{algorithmic}[1] \caption{MultiStep} \Procedure{\textsc{MultiStep}}{$\boldsymbol{\mathit{f}}, \mu$} \Comment{Makes equivalent progress to $k$ interior point method steps} \State $\boldsymbol{\mathit{\widehat{r}}} = \mathcal{L}.\boldsymbol{\mathit{r}}$ \Comment{Save resistances to restore later} \For{$i=1,\dots, k\varepsilon_{\mathrm{step}}^{-1}$} \State $Z = \mathcal{L}.\textsc{Solve}()$ \For {$e\in Z$} \Comment{$Z$: Set of edges with sufficiently changed flow} \State $\widetilde{f}_e = \mathcal{C}^i.\textsc{Check}(e)$ \If {$\widetilde{f}_e \neq 0$} \State $f_e = f_e + \varepsilon_{\mathrm{step}} \widetilde{f}_e$ \State $\mathcal{L}.\textsc{Update}(e, \boldsymbol{\mathit{f}})$ \State $\mathcal{C}^j.\textsc{TemporaryUpdate}(e, \boldsymbol{\mathit{f}})$ for $j \in \left[i+1,k\varepsilon_{\mathrm{step}}^{-1}\right]$ \EndIf \EndFor \EndFor \State $\mu = \mu / (1 + \varepsilon_{\mathrm{step}}/\sqrt{m})^{k\varepsilon_{\mathrm{step}}^{-1}}$ \State $\boldsymbol{\mathit{f}} = \textsc{Recenter}(\boldsymbol{\mathit{f}}, \mu)$ \Comment{Lemma~\ref{lem:recenter}} \For {$e\in E$} \If {$\mathcal{L}.r_e \neq \widehat{r}_e$} \State $\mathcal{L}.\textsc{Update}(e,\boldsymbol{\mathit{\widehat{r}}})$ \Comment{Return $\textsc{Locator}$ resistances to their original state} \EndIf \EndFor \State Call $\mathcal{C}^i.\textsc{Rollback}()$ to undo all $\textsc{TemporaryUpdate}$s for all $\mathcal{C}^i$ \State \Return $\boldsymbol{\mathit{f}}, \mu$ \EndProcedure \label{alg:multistep} \end{algorithmic} \end{algorithm} \subsection{Proof of Theorem~\ref{thm:main}} \paragraph{Correctness.} First of all, we apply capacity and cost scaling~\cite{gabow1983scaling} to make sure that $\left\|\boldsymbol{\mathit{c}}\right\|_\infty, \left\|\boldsymbol{\mathit{u}}\right\|_\infty = \mathrm{poly}(m)$. These incur an extra factor of $\log(U+W)$ in the runtime. We first get an initial solution to the interior point method by using the following lemma: \begin{lemma}[Interior point method initialization, Appendix A in~\cite{axiotis2020circulation}] Given a min cost flow instance $\mathcal{I} = \left(G(V,E),\boldsymbol{\mathit{c}},\boldsymbol{\mathit{d}},\boldsymbol{\mathit{u}}\right)$, there exists an algorithm that runs in time $O(m)$ and produces a new min cost flow instance $\mathcal{I}'=\left(G'(V',E'),\boldsymbol{\mathit{c}}',\boldsymbol{\mathit{d}}',\boldsymbol{\mathit{u}}'\right)$, where $|V'| = O(|V|)$ and $|E'|=O(|E|)$, as well as a flow $\boldsymbol{\mathit{f}}$ such that \begin{itemize} \item{$\boldsymbol{\mathit{f}}$ is $\mu$-central for $\mathcal{I}'$ for some $\mu = \Theta\left(\left\|\boldsymbol{\mathit{c}}\right\|_2\right)$} \item{Given an optimal solution for $\mathcal{I}'$, an optimal minimum cost flow solution for $\mathcal{I}$ can be computed in $O(m)$} \end{itemize} \label{lem:init} \end{lemma} Therefore we now have a $\mathrm{poly}(m)$-central solution for an instance $\mathcal{I}$. We can now apply Lemma~\ref{lem:mincostflow} to get a $\mu'$-central solution with $\mu'\leq m^{-10}$. Then we can apply the following lemma to round the solution, which follows from Lemma 5.4 in~\cite{axiotis2020circulation}. \begin{lemma}[Interior point method rounding] Given a min cost flow instance $\mathcal{I}$ and a $\mu$-central flow $\boldsymbol{\mathit{f}}$ for $\mu \leq m^{-10}$, there is an algorithm that runs in time $\widetilde{O}(m)$ and returns an optimal integral flow. 
\label{lem:rounding} \end{lemma}
By Lemma~\ref{lem:init}, this solution can be turned into an exact solution for the original instance. As Lemma~\ref{lem:mincostflow} succeeds with high probability, the whole algorithm does too.
\paragraph{Runtime.}
To determine the final runtime, we analyze each operation in Algorithm~\ref{alg:main} separately. The \textsc{Initialize} (Lemma~\ref{lem:init}) and \textsc{Round} (Lemma~\ref{lem:rounding}) operations take time $\tO{m}$.
Now, the runtime of Lemma~\ref{lem:mincostflow} is $\tO{m^{3/2} / k}$ plus the runtime incurred by calls to the locator $\mathcal{L}$. We will use the runtimes per operation from Lemma~\ref{lem:locator}.
{\bf $\mathcal{L}$.\textsc{Solve}}: This operation is run $\tO{m^{1/2} k^3}$ times, and each of these costs $\tO{\frac{\beta m}{\varepsilon^2}} = \tO{m k^{12} \beta}$. Therefore in total $\tO{m^{3/2} k^{15} \beta}$. We pick $\beta = k^{-16}$ so that $m^{3/2} k^{15} \beta \leq m^{3/2} /k$, and hence the runtime is $\tO{m^{3/2} / k}$. Note that this satisfies the constraint $\beta \geq \tOm{k^3 / m^{1/4}}$ as long as $k\leq \tO{m^{1/76}}$.
{\bf $\mathcal{L}$.\textsc{BatchUpdate}}: This is run $\tO{m^{1/2} / \alpha^{1/4}}$ times with empty arguments, each of which takes time $\tO{m /\varepsilon^2} = \tO{m k^{12}}$. The total runtime because of these is $\tO{m^{3/2} k^{12} \alpha^{-1/4}}$. As we need this to be below $\tO{m^{3/2} / k}$, we set $\alpha = k^{52}$.
This operation is also run $\tO{m^{1/2} k^{-1} \widehat{T}^{-1}} $ times with some non-empty argument $Z$, each of which takes time $\tO{m/\varepsilon^2 + |Z|/(\varepsilon^2\beta^2)} = \tO{m k^{12} + k^{44}|Z|}$. As by Lemma~\ref{lem:mincostflow} the total sum of $|Z|$ over all calls is $\tO{mk^3 \beta^{1/2}} = \tO{mk^{-5}}$, we get a runtime of
\[ \tO{m^{1/2} k^{-1} \widehat{T}^{-1} \cdot mk^{12} + k^{44} \cdot mk^{-5}} = \tO{m^{3/2} k^{11} \widehat{T}^{-1} + mk^{39}}\,.\]
In order for the first term to be at most $\tO{m^{3/2} / k}$, we set $\widehat{T} = k^{12}$. Therefore the total runtime of this operation is $\tO{m^{3/2} / k + m k^{39}}$.
{\bf $\mathcal{L}$.\textsc{Update}}: This is run $\tO{m^{1/2} \left(k^6 \widehat{T} + k^{15}\right)} =\tO{m^{1/2} k^{18}}$ times and the amortized cost per operation is
\begin{align*}
& \tO{m \cdot \frac{\widehat{\eps} \alpha^{1/2}}{\varepsilon^{3}} + \widehat{\eps}^{-4} \varepsilon^{-2} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-4} \alpha^2 \beta^{-6}}\\
& =\tO{m \cdot k^{44} \widehat{\eps} + k^{140} \widehat{\eps}^{-4} + k^{224} \widehat{\eps}^{-2}}\,,
\end{align*}
so in total
\[ m^{3/2} k^{62} \widehat{\eps} + m^{1/2} k^{158} \widehat{\eps}^{-4} + m^{1/2} k^{242} \widehat{\eps}^{-2}\,. \]
As we need the first term to be $\tO{m^{3/2} / k}$, we set $\widehat{\eps} = k^{-63}$. Therefore the total runtime is
\[ \tO{m^{3/2} / k + m^{1/2} k^{410} + m^{1/2} k^{368}} = \tO{m^{3/2} / k + m^{1/2} k^{410}}\,. \]
{\bf $\mathcal{L}$.\textsc{Initialize}}: This is run $\tO{k^{3} \beta^{-1/2}} = \tO{k^{11}}$ times in total, and the runtime for each run is
\[ \tO{m \cdot \left(\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-2} \alpha^2 \beta^{-4}\right)} = \tO{m \cdot \left(k^{380} + k^{306}\right)} = \tO{m\cdot k^{380}} \,,\]
so in total $\tO{m k^{380}}$.
Therefore, for the whole algorithm, we get $\tO{m^{3/2} / k + m^{1/2} k^{410} + mk^{380}}$, which after balancing gives $k=m^{1/762}$.
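For reference, the parameter choices made in the above analysis are, up to polylogarithmic factors,
\[ \varepsilon = k^{-6}\,,\qquad \beta = k^{-16}\,,\qquad \alpha = k^{52}\,,\qquad \widehat{T} = k^{12}\,,\qquad \widehat{\eps} = k^{-63}\,,\qquad k = m^{1/762}\,, \]
where the value of $\varepsilon$ is not fixed explicitly above but is the one implicit in the substitution $\tO{\beta m / \varepsilon^2} = \tO{m k^{12} \beta}$. With these choices the two dominant terms $m^{3/2}/k$ and $m k^{380}$ coincide, so the total of the above bounds is $\tO{m^{3/2 - 1/762}}$, times the $\log(U+W)$ factor incurred by the initial capacity and cost scaling.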
\section{An Efficient $(\alpha,\beta,\varepsilon)$-\textsc{Locator}}
\label{sec:locator}
In this section we will show how to implement an $(\alpha,\beta,\varepsilon)$-\textsc{Locator}, as defined in Definition~\ref{def:locator}. In order to maintain the approximate electrical flow $\boldsymbol{\mathit{\widetilde{f}}}$ required by Lemma~\ref{lem:locator} we will keep a vertex sparsifier in the form of a sparsified Schur complement onto some vertex set $C$. As in~\cite{gao2021fully}, we choose $C$ to be a \emph{congestion reduction subset}.
\begin{definition}[Congestion reduction subset~\cite{gao2021fully}]\label{def:cong_red}
Given a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and any parameter $\beta\in(0,1)$, a vertex subset $C\subseteq V$ is called a \emph{$\beta$-congestion reduction subset} (or just \emph{congestion reduction subset}) if:
\begin{itemize}
\item{$|C| \leq O(\beta m)$}
\item{For any $u\in V$, a random walk starting from $u$ that visits $\tOm{\beta^{-1} \log n}$ distinct vertices hits $C$ with high probability}
\item{ If we generate $\mathrm{deg}(u)$ random walks from each $u\in V\backslash C$, the expected number of these that hit some fixed $v\in V\backslash C$ before $C$ is $\tO{1/\beta^2}$. Concretely:
\begin{align}
\sum\limits_{u\in V} \mathrm{deg}(u) \cdot p_v^{C\cup\{v\}}(u) \leq \tO{1/\beta^2}\,.
\label{eq:cong_red}
\end{align}
}
\end{itemize}
\end{definition}
The following lemma shows that such a vertex subset can be constructed efficiently:
\begin{lemma}[Construction of congestion reduction subset~\cite{gao2021fully}]
Given a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and a parameter $\beta\in(0,1)$, there is an algorithm that generates a $\beta$-congestion reduction subset in time $\tO{m/\beta^2}$.
\label{lem:cong_red}
\end{lemma}
Intuitively, (\ref{eq:cong_red}) says that ``not too many'' random walks go through a given vertex before reaching $C$. This property is crucial for ensuring that when inserting a new vertex into $C$, the data structure will not have to change too much. As we will see in Section~\ref{sec:Fsystem}, this property plays an even more central role when general demands are introduced, as it allows us to show that the demands outside $C$ can be pushed to $C$. Additionally, in Section~\ref{sec:important} we will use it to show that edges that are too far from $C$ in the effective resistance metric are not \emph{important}, in the sense that neither can they get congested, nor can their demand congest anything else.
\input{Fsystem}
\input{important_edges}
\subsection{Proving Lemma~\ref{lem:locator}}
\label{sec:proof_lem_locator}
Before moving to the description of how $\textsc{Locator}$ works and its proof, we will provide a lemma which bounds how fast a demand projection changes. We will use the following observation, which states that if our congestion reduction subset $C$ contains a $\beta m$-sized uniformly random edge subset, then with high probability, effective resistance neighborhoods that are disjoint from $C$ only have $\tO{\beta^{-1}}$ edges. Note that this will remain true throughout the algorithm as long as the resistances do not depend on the randomness of $C$. This is the case, as resistance updates are only ever given as inputs to $\textsc{Locator}$.
\begin{lemma}[Few edges in a small neighborhood]
\label{lem:small-neighborhood2}
Let $\beta\in(0,1)$ be a parameter and $C$ be a vertex set which contains a subset of $\beta m$ edges sampled uniformly at random.
Then with high probability, for any $v \in V\backslash C$ we have that $\vert N_E(v, R_{eff}(C,v) / 2) \vert \leq 10\beta^{-1} \ln m$, where
\[ N_E(v,R) := \{e\in E\ |\ R_{eff}(e,v) \leq R\} \,. \]
\end{lemma}
\begin{proof}
Suppose for contradiction that for some vertex $v\in V\backslash C$, $\vert N_E(v,R_{eff}(C,v) / 2) \vert > 10 \beta^{-1} \ln m$. Since by construction $C$ contains a random edge subset of size $\beta m$, with high probability $N_E(v,R_{eff}(C,v)/2)$ contains one of the sampled edges, so there exists $u\in C$ such that $R_{eff}(u,v) \leq R_{eff}(C,v) / 2$. This is a contradiction since $u\in C$ implies $R_{eff}(C,v) \leq R_{eff}(u,v)$. Union bounding over all $v$ yields the claim.
\end{proof}
Using this fact, we can now show that the change of the demand projection (measured in energy) is quite mild. The proof of the following lemma can be found in Appendix~\ref{proof_lem_projection_change_energy}.
\begin{lemma}[Projection change]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, $\boldsymbol{\mathit{q}}\in[-1,1]^m$, a $\beta$-congestion reduction subset $C$, and $v\in V\backslash C$. Then, with high probability,
\[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^{C}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)} \leq \tO{\beta^{-2}} \,. \]
\label{lem:projection_change_energy}
\end{lemma}
The above lemma can be applied over multiple vertex insertions and resistance changes, to bound the overall energy change. This is shown in the following lemma, which is proved in Appendix~\ref{sec:proof_old_projection_approximate}:
\begin{lemma}
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}^0$, $\boldsymbol{\mathit{q}}^0\in[-1,1]^m$, a $\beta$-congestion reduction subset $C^0$, and a fixed sequence of updates, where the $i$-th update $i\in\{0,\dots,T-1\}$ is of the following form:
\begin{itemize}
\item {\textsc{AddTerminal}($v^i$): Set $C^{i+1} = C^{i} \cup \{v^i\}$ for some $v^i\in V\backslash C^i$, $q_e^{i+1} = q_e^{i}, r_e^{i+1} = r_e^{i}$}
\item {\textsc{Update}($e^i,\boldsymbol{\mathit{q}},\boldsymbol{\mathit{r}}$): Set $C^{i+1} = C^{i}$, $q_e^{i+1} = q_e$, $r_e^{i+1} = r_e$, where $e^i\in E(C^{i})$}
\end{itemize}
Then, with high probability,
\[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T} \left(\boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right) - \boldsymbol{\mathit{\pi}}^{C^T,\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^T}{\sqrt{\boldsymbol{\mathit{r}}^T}}\right)\right)} \leq \tO{\max_{i\in\{0,\dots,T-1\}} \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_\infty^{1/2} \beta^{-2}} \cdot T\,. \]
\label{lem:old_projection_approximate}
\end{lemma}
We are now ready to describe the $\textsc{Locator}$ data structure. We will give an outline here, and defer the full proof to Appendix~\ref{sec:full_proof_lem_locator}.
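All quantities manipulated in this outline are demand projections $\boldsymbol{\mathit{\pi}}^C(\cdot)$, which on a static graph can be computed exactly as $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) = \boldsymbol{\mathit{d}}_C - \boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1}\boldsymbol{\mathit{d}}_F$ with $F = V\backslash C$ (this is also how $\textsc{DemandProjector}.\textsc{Initialize}$ computes it in Section~\ref{sec:demand_projection}). As a concrete reference point, the following minimal Python sketch (dense linear algebra on a toy graph; the variable names are illustrative and not part of our data structures) computes this projection and numerically checks the energy bound of Lemma~\ref{lem:sc-energy-bd}.
\begin{verbatim}
import numpy as np

# Toy instance: a 6-cycle with unit resistances; C = {0, 1}, F = V \ C.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
r = np.ones(len(edges))

B = np.zeros((len(edges), n))           # edge-vertex incidence matrix
for idx, (u, w) in enumerate(edges):
    B[idx, u], B[idx, w] = 1.0, -1.0
L = B.T @ np.diag(1.0 / r) @ B          # Laplacian L = B^T R^{-1} B

C = [0, 1]
F = [v for v in range(n) if v not in C]

d = np.zeros(n)                         # a demand vector summing to zero
d[2], d[5] = 1.0, -1.0

# Demand projection pi^C(d) = d_C - L_CF L_FF^{-1} d_F, viewed as a
# demand supported on C (zero on F).
pi = np.zeros(n)
pi[C] = d[C] - L[np.ix_(C, F)] @ np.linalg.solve(L[np.ix_(F, F)], d[F])

def energy(dem):                        # E_r(dem) = dem^T L^+ dem
    return dem @ np.linalg.pinv(L) @ dem

# The projected demand never needs more energy to route than d itself.
assert energy(pi) <= energy(d) + 1e-9
print(energy(pi), energy(d))
\end{verbatim}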
The goal of an $(\alpha,\beta,\varepsilon)$-\textsc{Locator} is, given some flow $\boldsymbol{\mathit{f}}$ with slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$, to compute all $e\in E$ such that $\sqrt{r_e} \left|\widetilde{f}_e^*\right| \geq \varepsilon$, where
\[ \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,,\]
with $\boldsymbol{\mathit{L}} = \boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}}$ and $\delta = 1/\sqrt{m}$.
If we set $\rho_e^* = \sqrt{r_e} \widetilde{f}_e^*$, we can equivalently write
\[ \boldsymbol{\mathit{\rho}}^* = \delta \sqrt{\boldsymbol{\mathit{r}}} g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,,\]
and we need to find all the entries of $\boldsymbol{\mathit{\rho}}^*$ with magnitude at least $\varepsilon$.
As $\delta\left\|\sqrt{\boldsymbol{\mathit{r}}} g(\boldsymbol{\mathit{s}})\right\|_\infty \leq \delta \leq \varepsilon / 100$, we can concentrate on the second term, and denote
\[ \boldsymbol{\mathit{\rho}}'^* = \delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\]
for convenience.
First, we use Lemma~\ref{lem:non-projected-demand-contrib} to show that
\[ \delta \left\|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}) - \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}))\right)\right\|_\infty \leq \delta \cdot \tO{\beta^{-2}} \leq \varepsilon / 100 \,. \]
Now, let us set $\boldsymbol{\mathit{\pi}}_{old} = \boldsymbol{\mathit{\pi}}^{C^0}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^0)\right)$, where $C^0$ was the vertex set of the sparsifier and $\boldsymbol{\mathit{s}}^0$ the slacks after the last call to $\textsc{BatchUpdate}$. As we will be calling $\textsc{BatchUpdate}$ at least once every $T$ calls to $\textsc{Update}$ for some $T\geq 1$, Lemma~\ref{lem:old_projection_approximate} implies that
\[ \delta \left\|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})) - \boldsymbol{\mathit{\pi}}_{old}\right)\right\|_\infty \leq \delta \cdot \tO{\alpha \beta^{-2}} T \leq \varepsilon / 100 \,, \]
as long as $T \leq \varepsilon \sqrt{m} / \tO{\alpha \beta^{-2}}$.
Importantly, we will never be \emph{removing} vertices from $C$, so $C^0\subseteq C$.
This implies that it suffices to find the large entries of
\[ \delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}_{old}\,.
\]
Now, note that for any edge $e$ that was \emph{not} $\varepsilon / (100\alpha)$-important for $C^0$ and corresponding resistances $\boldsymbol{\mathit{r}}^0$, we have
\begin{align*}
& \delta \left|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}_{old}\right|_e\\
& \leq \delta \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\pi}}^{C^0}(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}))}\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\pi}}_{old})}\\
& \leq \delta \cdot \sqrt{\alpha} \frac{\varepsilon}{100\alpha} \cdot \sqrt{2\alpha m}\\
& \leq \varepsilon / 50\,,
\end{align*}
where we used Lemma~\ref{st_projection1_energy} and the fact that $\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\pi}}^{C^0}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^0))) \leq 2 \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^0)) \leq 2 \alpha m$.
Therefore it suffices to approximate
\[ \delta \boldsymbol{\mathit{I}}_S \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}_{old} \,,\]
where $S$ is the set of $\frac{\varepsilon}{100\alpha}$-important edges computed during the last call to $\textsc{BatchUpdate}$.
Now, we will use the sketching lemma from (Lemma 5.1, \cite{gao2021fully} v2), which shows that in order to find all $\Omega(\varepsilon)$-large entries of this vector, it suffices to compute the inner products
\begin{align*}
& \delta \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right), SC^+ \boldsymbol{\mathit{\pi}}_{old} \right\rangle
\end{align*}
for $i\in[\tO{\varepsilon^{-2}}]$ up to $O(\varepsilon)$ accuracy. Here $SC$ is the Schur complement onto $C$.
Based on this, there are two types of quantities that we will maintain:
\begin{itemize}
\item $\tO{1/\varepsilon^2}$ approximate demand projections $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, and
\item an approximate Schur complement $\widetilde{SC}$ of $G$ onto $C$.
\end{itemize}
For the latter, we will directly use the dynamic Schur complement data structure $\textsc{DynamicSC}$ that was also used by~\cite{gao2021fully} and is based on~\cite{durfee2019fully}. For completeness, we present this data structure in Appendix~\ref{sec:maintain_schur}.
For the former, we will need $\tO{1/\varepsilon^2}$ data structures for maintaining demand projections onto $C$, under vertex insertions into $C$. The guarantees of each such data structure, which we call an $(\alpha,\beta,\varepsilon)$-\textsc{DemandProjector}, are as follows.
\begin{definition}[$(\alpha,\beta,\varepsilon)$-\textsc{DemandProjector}]
Let $\widehat{\eps}\in(0,\varepsilon)$ be a tradeoff parameter. Given a graph $G(V,E)$, resistances $\boldsymbol{\mathit{r}}$, and a vector $\boldsymbol{\mathit{q}}\in[-1,1]^m$, an $(\alpha,\beta,\varepsilon)$-\textsc{DemandProjector} is a data structure that maintains a vertex subset $C\subseteq V$ and an approximation to the demand projection $\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, with high probability under oblivious adversaries.
The following operations are supported:
\begin{itemize}
\item{$\textsc{Initialize}(C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P})$: Initialize the data structure in order to maintain an approximation of $\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, where $C\subseteq V$ is a $\beta$-congestion reduction subset, $\boldsymbol{\mathit{r}}$ are resistances, $\boldsymbol{\mathit{q}}\in[-1,1]^m$, and $S\subseteq E$ is a subset of $\gamma$-important edges. Here $\mathcal{P} = \{\mathcal{P}^{u,e,i} \ |\ u\in V, e\in E, u\in e, i\in[h]\}$, for some $h\in\mathbb{Z}_{\geq 1}$, is a collection of independent random walks from $u$ to $C$, $h$ of them for each pair $(u,e)$ with $u\in e$.
}
\item{$\textsc{AddTerminal}(v,{\widetilde{R}}_{eff}(C,v))$: Insert $v$ into $C$. Here ${\widetilde{R}}_{eff}(C,v)$ is an estimate of $R_{eff}(C,v)$ such that ${\widetilde{R}}_{eff}(C,v)\approx_2 R_{eff}(C,v)$.
Returns an estimate
\[ \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \]
for the demand projection of $\boldsymbol{\mathit{q}}$ onto $C\cup \{v\}$ at coordinate $v$ such that
\[ \left| \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \leq \frac{\widehat{\eps}}{\sqrt{R_{eff}(C,v)}}\,. \]
}
\item{$\textsc{Update}(e,\boldsymbol{\mathit{r}}',\boldsymbol{\mathit{q}}')$: Set $r_e = r_e'$ and $q_e = q_e'$, where $e\in E(C)$, and $q_e'\in[-1,1]$. Furthermore, $r_e'$ satisfies the inequality $r_e^{\max} / \alpha \leq r_e' \leq \alpha \cdot r_e^{\min}$, where $r_e^{\min}$ and $r_e^{\max}$ denote the minimum and maximum values, respectively, that the resistance of $e$ has taken since the last call to $\textsc{Initialize}$.
}
\item{$\textsc{Output}()$: Output $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ such that after $T \leq n^{O(1)}$ calls to $\textsc{AddTerminal}$, for any fixed vector $\boldsymbol{\mathit{\phi}}$ with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, with high probability
\[ \left| \left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \boldsymbol{\mathit{\phi}}\right \rangle \right| \leq \widehat{\eps} \cdot \sqrt{\alpha} \cdot T \,. \]
}
\end{itemize}
\label{def:demand_projector}
\end{definition}
We will implement such a data structure in Section~\ref{sec:demand_projection}, where we will prove the following lemma:
\begin{lemma}[Demand projection data structure]
For any graph $G(V,E)$ and parameters $\widehat{\eps}\in(0,\varepsilon)$, $\beta\in(0,1)$, there exists an $(\alpha,\beta,\varepsilon)$-\textsc{DemandProjector} for $G$ which, given access to $h=\widetilde{\Theta}(\widehat{\eps}^{-4}\beta^{-6}+\widehat{\eps}^{-2}\beta^{-2}\gamma^{-2})$ precomputed independent random walks from $u$ to $C$ for each $e \in E$, $u \in e$, has the following runtimes per operation:
\begin{itemize}
\item{\textsc{Initialize}: $\tO{m}$.
}
\item{\textsc{AddTerminal}: $\tO{\widehat{\eps}^{-4}\beta^{-8}+\widehat{\eps}^{-2} \beta^{-6} \gamma^{-2}}$.
}
\item{\textsc{Update}: $O(1)$.}
\item{\textsc{Output}: $O(\beta m + T)$, where $T$ is the number of calls made to $\textsc{AddTerminal}$ after the last call to $\textsc{Initialize}$.}
\end{itemize}
\label{lem:ds}
\end{lemma}
Now we describe the way we will use the $\textsc{DemandProjector}$s and $\textsc{DynamicSC}$ to get an $(\alpha,\beta,\varepsilon)$-\textsc{Locator} $\mathcal{L}$.
\begin{algorithm}
\begin{algorithmic}[1]
\caption{\textsc{Locator} $\mathcal{L}$.\textsc{Initialize}}
\Procedure{$\mathcal{L}$.\textsc{Initialize}}{$\boldsymbol{\mathit{f}}$}
\State $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}} - \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^- = \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$
\State $\boldsymbol{\mathit{Q}} =$ Sketching matrix produced by (Lemma 5.1, \cite{gao2021fully} v2)
\State $\textsc{DynamicSC} = \textsc{DynamicSC}.\textsc{Initialize}(\boldsymbol{\mathit{G}},\emptyset,\boldsymbol{\mathit{r}},\varepsilon,\beta)$
\State $C = \textsc{DynamicSC}.C$ \Comment{$\beta$-congestion reduction subset}
\State Estimate ${\widetilde{R}}_{eff}(C,e) \approx_{4} R_{eff}(C,e)$ using Lemma~\ref{lem:approx_effective_res}
\State $S = \left\{e\in E\ |\ {\widetilde{R}}_{eff}(C,e) \leq r_e \cdot \left(\frac{100\alpha}{\varepsilon}\right)^2\right\}$
\State $h = \widetilde{\Theta}\left( \widehat{\eps}^{-4} \beta^{-6} + \widehat{\eps}^{-2} \varepsilon^{-2}\alpha^2 \beta^{-2} \right)$
\State Sample walks $\mathcal{P}^{u,e,i}$ from $u$ to $C$ for $e \in E\setminus E(C)$, $u \in e$, $i \in [h]$ (Lemma 5.15, \cite{gao2021fully} v2)
\State $\textsc{DP}^i = \textsc{DemandProjector}.\textsc{Initialize}(C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}}^i,S,\mathcal{P})$ for all rows $\boldsymbol{\mathit{q}}^i$ of $\boldsymbol{\mathit{Q}}$
\State $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
{\bf $\mathcal{L}$.\textsc{Initialize}}: Every time $\mathcal{L}.\textsc{Initialize}$ is called, we first generate a $\beta$-congestion reduction subset $C$ based on Lemma~\ref{lem:cong_red} (takes time $\tO{m/\beta^2}$), then a sketching matrix $\boldsymbol{\mathit{Q}}$ and its rows $\boldsymbol{\mathit{q}}^i$ for $i\in\left[\tO{1/\varepsilon^2}\right]$ as in~(Lemma 5.1, \cite{gao2021fully} v2) (takes time $\tO{m/\varepsilon^2}$), and finally random walks $\mathcal{P}^{u,e,i}$ from $u$ to $C$ for each $u\in V$, $e\in E\backslash E(C)$ with $u\in e$, and $i\in [h]$, where $h = \tO{ \widehat{\eps}^{-4} \beta^{-6} + \widehat{\eps}^{-2} \varepsilon^{-2}\alpha^2 \beta^{-2} }$ as in (Lemma 5.15, \cite{gao2021fully} v2) (takes time $\tO{h/\beta^2}$ for each $(u,e)$).
We also compute ${\widetilde{R}}_{eff}(C,u) \approx_{2} R_{eff}(C,u)$ for all $u\in V$ as described in Lemma~\ref{lem:approx_effective_res} so that we can let $S$ be a subset of $\varepsilon/(200\alpha)$-important edges that contains all $\varepsilon/(100\alpha)$-important edges. This takes time $\tO{m}$.
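Concretely, the threshold test that forms $S$ in $\mathcal{L}.\textsc{Initialize}$ above can be written as follows. This is a minimal sketch with hypothetical dictionary-based inputs; in the actual data structure the estimates come from Lemma~\ref{lem:approx_effective_res} and, being only constant-factor accurate, yield a set $S$ sandwiched between the two importance levels described above.
\begin{verbatim}
def important_edges(r, R_est, alpha, eps):
    """Return S = {e : R_est[e] <= r[e] * (100*alpha/eps)^2}.

    r[e] is the resistance of edge e and R_est[e] a constant-factor
    estimate of R_eff(C, e); both are plain dicts here."""
    thresh = (100.0 * alpha / eps) ** 2
    return {e for e in r if R_est[e] <= r[e] * thresh}

# Toy usage: an edge close to C survives, an edge far from C is dropped.
r = {"e1": 1.0, "e2": 1.0}
R_est = {"e1": 10.0, "e2": 1e12}
print(important_edges(r, R_est, alpha=2.0, eps=0.5))   # {'e1'}
\end{verbatim}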
Then, we call \textsc{DynamicSC}.\textsc{Initialize}$(G,C,\boldsymbol{\mathit{r}}, O(\varepsilon),\beta)$ (from Appendix~\ref{sec:maintain_schur}) to initialize the dynamic Schur complement onto $C$, with error tolerance $O(\varepsilon)$, which takes time $\tO{m \cdot \frac{1}{\varepsilon^4\beta^4}}$, as well as $\textsc{DemandProjector}.\textsc{Initialize}(C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P})$ for the $\tO{1/\varepsilon^2}$ $\textsc{DemandProjector}$s, i.e. one for each $\boldsymbol{\mathit{q}} \in \{\boldsymbol{\mathit{q}}^i\ |\ i\in[\tO{1/\varepsilon^2}]\}$. Also, we compute \[ \boldsymbol{\mathit{\pi}}^{old} = \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\,, \] which takes $\tO{m}$ as in $\textsc{DemandProjector}.\textsc{Initialize}$. All of this takes $\tO{m\cdot \left(\frac{1}{\widehat{\eps}^4\beta^8} + \frac{\alpha^2}{\widehat{\eps}^2 \varepsilon^2 \beta^4}\right)}$. \begin{algorithm} \begin{algorithmic}[1] \caption{\textsc{Locator} $\mathcal{L}$.\textsc{Update} and $\mathcal{L}$.\textsc{BatchUpdate}} \Procedure{\textsc{Update}}{$e = (u,w),\boldsymbol{\mathit{f}}$} \State $s_e^+ = u_e - f_e$, $s_e^- = f_e$, $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$ \State ${\widetilde{R}}_{eff}(C,u) = \textsc{DynamicSC}.\textsc{AddTerminal}(u)$ \State ${\widetilde{R}}_{eff}(C\cup\{u\},w) = \textsc{DynamicSC}.\textsc{AddTerminal}(w)$ \State $C = C\cup\{u,w\}$ \For {$i=1,\dots,\tO{1/\varepsilon^2}$} \State $\textsc{DP}^i.\textsc{AddTerminal}(u, {\widetilde{R}}_{eff}(C,u))$ \State $\textsc{DP}^i.\textsc{AddTerminal}(w, {\widetilde{R}}_{eff}(C\cup\{u\},w))$ \EndFor \State $\textsc{DynamicSC}.\textsc{Update}(e,r_e)$ \For {$i=1\dots \tO{1/\varepsilon^2}$} \State $\textsc{DP}^i.\textsc{Update}(e,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}}^i)$ \EndFor \EndProcedure \Procedure{\textsc{BatchUpdate}}{$Z,\boldsymbol{\mathit{f}}$} \State $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}} - \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^- = \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$ \State Estimate ${\widetilde{R}}_{eff}(C,e) \approx_{4} R_{eff}(C,e)$ using Lemma~\ref{lem:approx_effective_res} \State $S = \left\{e\in E\ |\ {\widetilde{R}}_{eff}(C,e) \leq r_e \cdot \left(\frac{100\alpha}{\varepsilon}\right)^2\right\}$ \Comment{$\frac{\varepsilon}{100\alpha}$-important edges} \For {$e=(u,w)\in Z$} \State $\textsc{DynamicSC}.\textsc{AddTerminal}(u)$ \State $\textsc{DynamicSC}.\textsc{AddTerminal}(w)$ \State $C = C\cup\{u,w\}$ \State $\textsc{DynamicSC}.\textsc{Update}(e,r_e)$ \EndFor \For {$i=\left[\tO{1/\varepsilon^2}\right]$} \State $\textsc{DP}^i.\textsc{Initialize}(C,\boldsymbol{\mathit{r}}, \boldsymbol{\mathit{q}}^i, S, \mathcal{P})$ \EndFor \State $\boldsymbol{\mathit{\pi}}_{old} = \frac{1}{\sqrt{m}} \cdot \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}\right)$ \Comment{Compute exactly using Laplacian solve} \EndProcedure \end{algorithmic} \end{algorithm} {\bf $\mathcal{L}$.\textsc{Update}}: Now, whenever $\mathcal{L}.\textsc{Update}$ is called on an edge $e$, either $e\in E(C)$ or $e\notin E(C)$. In the first case we simply call $\textsc{Update}$ on $\textsc{DynamicSC}$ and all \textsc{DemandProjector}s. In the second case, we first call $\textsc{DynamicSC}.\textsc{AddTerminal}$ on one endpoint $v$ of $e$. 
After doing this we can also get an estimate ${\widetilde{R}}_{eff}(C,v)\approx_{2} R_{eff}(C,v)$ by looking at the edges between $C$ and $v$ in the sparsified Schur complement. By the guarantees of the expander decomposition used inside $\textsc{DynamicSC}$~\cite{gao2021fully}, the number of expanders containing $v$, amortized over all calls to $\textsc{DynamicSC}.\textsc{AddTerminal}$, is $O(\mathrm{poly}\log(n))$. As the sparsified Schur complement contains $\tO{1/\varepsilon^2}$ neighbors of $v$ from each expander, the amortized number of neighbors of $v$ in the sparsified Schur complement is $\tO{1/\varepsilon^2}$, and the amortized runtime to generate them (by random sampling) is $\tO{1/\varepsilon^2}$. Given the resistances $r_1,\dots,r_l$ of these edges, setting ${\widetilde{R}}_{eff}(C,v) = \left(\sum\limits_{i=1}^l r_i^{-1}\right)^{-1}$ we guarantee that ${\widetilde{R}}_{eff}(C,v) \approx_{1+O(\varepsilon)} R_{eff}(C,v)$, by the fact that $\textsc{DynamicSC}$ maintains a $(1+O(\varepsilon))$-approximate sparsifier of the Schur complement (we give a small numerical sanity check of this harmonic-sum identity after the description of $\mathcal{L}.\textsc{BatchUpdate}$ below).
Then, we call $\textsc{AddTerminal}(v,{\widetilde{R}}_{eff}(C,v))$ on all \textsc{DemandProjector}s. After repeating the same process for the other endpoint of $e$, we finally call $\textsc{Update}$ on $\textsc{DynamicSC}$ and all \textsc{DemandProjector}s.
This takes time $\tO{\frac{1}{\varepsilon^2 \beta^2}}$ because of the Schur complement and amortized $\tO{m\cdot \frac{\widehat{\eps} \alpha^{1/2}}{\varepsilon} + \frac{1}{\widehat{\eps}^4\beta^{8}} + \frac{\alpha^2}{\widehat{\eps}^2\varepsilon^2\beta^{6}}}$ for each of the demand projectors, so the total amortized runtime is $\tO{m\cdot \frac{\widehat{\eps} \alpha^{1/2}}{\varepsilon^3} + \frac{1}{\widehat{\eps}^4\varepsilon^2\beta^{8}} + \frac{\alpha^2}{\widehat{\eps}^2\varepsilon^4\beta^6}}$.
{\bf $\mathcal{L}$.\textsc{BatchUpdate}}: When $\mathcal{L}.\textsc{BatchUpdate}$ is called on a set of edges $Z$, we add them one by one into the $\textsc{DynamicSC}$ data structure following the same process as in $\mathcal{L}.\textsc{Update}$.
For the demand projectors, we first manually insert the endpoints of these edges into $C$ and then re-initialize all \textsc{DemandProjector}s, by calling $\textsc{Initialize}$ with a new subset $S$ of $\frac{\varepsilon}{200\alpha}$-important edges that contains all $\frac{\varepsilon}{100\alpha}$-important edges. Such a set can be computed by estimating $R_{eff}(C,u)$ for all $u\in V\backslash C$ up to a constant factor, which, by Lemma~\ref{lem:approx_effective_res}, takes time $\tO{m}$.
Also, we compute
\[ \boldsymbol{\mathit{\pi}}^{old} = \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\,, \]
which takes $\tO{m}$ as in $\textsc{DemandProjector}.\textsc{Initialize}$.
The total runtime of this is $\tO{ m / \varepsilon^2 + |Z| / (\beta^2\varepsilon^2) }$.
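The harmonic-sum estimate used in $\mathcal{L}.\textsc{Update}$ above can be sanity-checked numerically. The following minimal Python sketch (dense linear algebra on a toy graph, not the dynamic data structure; all variable names are illustrative) verifies that on the exact Schur complement onto $C\cup\{v\}$ the quantity $\left(\sum_i r_i^{-1}\right)^{-1}$ equals $R_{eff}(C,v)$ exactly; the $(1+O(\varepsilon))$-approximate sparsifier maintained by $\textsc{DynamicSC}$ then perturbs it by at most a $(1+O(\varepsilon))$ factor.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# Random connected graph on 8 vertices with random resistances.
n = 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < 0.4]
edges += [(i, i + 1) for i in range(n - 1)]      # guarantee connectivity
r = rng.uniform(0.5, 2.0, size=len(edges))

L = np.zeros((n, n))                             # weighted Laplacian
for (u, w), res in zip(edges, r):
    L[u, u] += 1 / res; L[w, w] += 1 / res
    L[u, w] -= 1 / res; L[w, u] -= 1 / res

C, v = [0, 1, 2], 3
T = C + [v]                                      # terminal set C + {v}
F = [x for x in range(n) if x not in T]

# Exact Schur complement of L onto the terminal set T.
SC = (L[np.ix_(T, T)]
      - L[np.ix_(T, F)] @ np.linalg.solve(L[np.ix_(F, F)], L[np.ix_(F, T)]))

# Harmonic sum of v's incident resistances in SC, i.e. 1 / SC[v, v].
cond_to_C = -SC[T.index(v), [T.index(c) for c in C]]
R_harmonic = 1.0 / cond_to_C.sum()

# Ground truth R_eff(C, v): ground C and read the (v, v) entry of L_DD^{-1}.
D = [x for x in range(n) if x not in C]
R_true = np.linalg.inv(L[np.ix_(D, D)])[D.index(v), D.index(v)]

print(R_harmonic, R_true)                        # equal up to round-off
\end{verbatim}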
\begin{algorithm}
\begin{algorithmic}[1]
\caption{\textsc{Locator} $\mathcal{L}$.\textsc{Solve}}
\Procedure{\textsc{Solve}}{$ $}
\State $\widetilde{SC} = \textsc{DynamicSC}.\widetilde{SC}()$
\State $\boldsymbol{\mathit{\phi}}_{old} = \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}_{old}$
\State $\boldsymbol{\mathit{v}} = \mathbf{0}$
\For {$i=1,\dots,\tO{1/\varepsilon^2}$}
\State $\widetilde{\boldsymbol{\mathit{\pi}}}^i = \textsc{DP}^i.\textsc{Output}()$
\State $v_i = \langle \widetilde{\boldsymbol{\mathit{\pi}}}^i , \boldsymbol{\mathit{\phi}}_{old} \rangle$
\EndFor
\State $Z = \textsc{Recover}(\boldsymbol{\mathit{v}}, \varepsilon/100)$ \Comment{Recovers all $\varepsilon/2$-congested edges (Lemma 5.1, \cite{gao2021fully} v2)}
\State \Return $Z$
\EndProcedure
\end{algorithmic}
\end{algorithm}
{\bf $\mathcal{L}$.\textsc{Solve}}: When $\mathcal{L}.\textsc{Solve}$ is called, we set $\widetilde{SC} = \textsc{DynamicSC}.\widetilde{SC}()$, call $\textsc{Output}$ on all \textsc{DemandProjector}s to obtain vectors $\widetilde{\boldsymbol{\mathit{\pi}}}^i$ which are estimators for $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i_S}{\sqrt{\boldsymbol{\mathit{r}}}})$ in the sense of Definition~\ref{def:demand_projector}. Then we compute $v_i = \langle \widetilde{\boldsymbol{\mathit{\pi}}}^i , \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}_{old}\rangle$, where $\boldsymbol{\mathit{\pi}}_{old}$ is the demand projection that was computed exactly the last time $\textsc{BatchUpdate}$ was called. These values approximate the change in $(\boldsymbol{\mathit{Q}}\boldsymbol{\mathit{\rho}})_i$ between two consecutive calls to $\mathcal{L}.\textsc{Solve}$.
As we will show in the appendix, $\langle \widetilde{\boldsymbol{\mathit{\pi}}}^i , \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}_{old}\rangle$ is an $\varepsilon$-additive approximation of $\langle \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i}{\sqrt{\boldsymbol{\mathit{r}}}}), \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})) \rangle$ for all $i\in\left[\tO{1/\varepsilon^2}\right]$.
The key fact that makes this approximation feasible is that although updates to the demand projection are hard to approximate with few samples, when tested against the fixed vector $\boldsymbol{\mathit{\pi}}_{old}$, the resulting inner products concentrate strongly.
Computing these inner products takes time $\tO{\beta m / \varepsilon^2}$. Using the computed values with the $\ell_2$ heavy hitter data structure (Lemma 5.1, \cite{gao2021fully} v2), we recover all edges with congestion more than $\varepsilon$, so the total runtime is $\tO{\beta m / \varepsilon^2}$.
\section{The Demand Projection Data Structure}
\label{sec:demand_projection}
The main goal of this section is to construct an $(\alpha,\beta,\varepsilon)$-\textsc{DemandProjector}, as defined in Definition~\ref{def:demand_projector}, and thus prove Lemma~\ref{lem:ds}. The key operation that needs to be implemented is maintaining the demand projection after inserting a vertex $v\in V\backslash C$ into $C$.
In order to do this, we use the following identity from~\cite{gao2021fully}:
\begin{align}
\boldsymbol{\mathit{\pi}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{d}}\right) = \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{d}}\right) + \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{d}}\right) \cdot \left(\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)\right)\,,
\label{eq:projection_update}
\end{align}
where $\boldsymbol{\mathit{d}}$ is any demand (in our case, we have $\boldsymbol{\mathit{d}} = \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}$ for some $S\subseteq E$). For this, we need to compute approximations to $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ and $\boldsymbol{\mathit{\pi}}^C\left(\mathbf{1}_v\right)$.
In Section~\ref{sec:approx1}, we will show that if $S$ is a subset of $\gamma$-important edges, we can efficiently estimate $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ up to additive accuracy $\frac{\widehat{\eps}}{\sqrt{R_{eff}(C,v)}}$ by sampling random walks to $C$ starting only from edges with relatively high resistance. For the remaining edges, the $\gamma$-importance property will imply that we are not losing much by ignoring them.
Then, in Section~\ref{sec:approx2} we will show how to approximate $\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)$. This is equivalent to estimating the hitting probabilities from $v$ to $C$. Ideally, we would like a guarantee on the energy needed to route the estimation error:
\begin{align}
\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v) - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)\right) \leq \widehat{\eps}^2 R_{eff}(C,v)\,.
\label{eq:ideal_energy_bound}
\end{align}
Note that this is not possible to do efficiently for general $C$. For example, suppose that the hitting distribution is uniform. In this case, $\Omega(|C|)$ random walks are required to get a bound similar to (\ref{eq:ideal_energy_bound}). However, it might still be possible to guarantee it by using the structure of $C$, and this would simplify some parts of our analysis.
Instead, we are going to work with the following weaker approximation bound: For any fixed potential vector $\boldsymbol{\mathit{\phi}}\in\mathbb{R}^n$ with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, we have w.h.p.
\begin{align}
\left|\left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v) - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v), \boldsymbol{\mathit{\phi}}\right\rangle\right| \leq \widehat{\eps} \sqrt{R_{eff}(C,v)}\,.
\label{eq:less_than_ideal_energy_bound}
\end{align}
Now, using these estimation lemmas, we will bound how our demand projection degrades when inserting a new vertex into $C$. This is stated in the following lemma and proved in Appendix~\ref{proof_lem_insert1}.
\begin{lemma}[Inserting a new vertex into $C$]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, $\boldsymbol{\mathit{q}}\in[-1,1]^m$, a $\beta$-congestion reduction subset $C$, and $v\in V\backslash C$. We also suppose that we have an estimate of the $C-v$ effective resistance such that ${\widetilde{R}}_{eff}(C,v) \approx_{2} R_{eff}(C,v)$, as well as access to independent random walks $\mathcal{P}^{u,e,i}$ for each $u\in V\backslash C$, $e\in E\backslash E(C)$ with $u\in e$, $i\in[h]$, where each random walk starts from $u$ and ends at $C$.
If we let $S$ be a subset of $\gamma$-important edges for $\gamma > 0$, then for any error parameter $\widehat{\eps} > 0$ we can compute $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\in\mathbb{R}$ and $\widetilde{\boldsymbol{\mathit{\pi}}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\in\mathbb{R}^n$ such that with high probability
\begin{align*}
\left|\widetilde{\mathit{\pi}}_{v}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_{v}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq \frac{\widehat{\eps}}{ \sqrt{R_{eff}(C,v)}}\,,
\end{align*}
as long as $h=\tOm{\widehat{\eps}^{-4}\beta^{-6}+\widehat{\eps}^{-2}\beta^{-2}\gamma^{-2}}$.
Furthermore, for any fixed $\boldsymbol{\mathit{\phi}}$ with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, after $T$ insertions since the last call to $\textsc{Initialize}$, with high probability
\begin{align*}\label{eq:insert_error}
\left|\left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) , \boldsymbol{\mathit{\phi}}\right\rangle\right| \leq \widehat{\eps} T\,,
\end{align*}
as long as $h = \tOm{\widehat{\eps}^{-2} \beta^{-4} \gamma^{-2}}$.
\label{lem:insert1}
\end{lemma}
\begin{algorithm}
\begin{algorithmic}[1]
\caption{\textsc{DemandProjector} \textsc{DP}.\textsc{AddTerminal}}
\Procedure{\textsc{DP}.\textsc{AddTerminal}}{$v,{\widetilde{R}}_{eff}(C,v)$}
\If {$v\in C$}
\State \Return
\EndIf
\State $t = t + 1$
\State $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = 0$
\For {$u,e\in S,i$ such that $\mathcal{P}^{u,e,i} \ni v$ and ${\widetilde{R}}_{eff}(C,v) \leq \frac{1}{\left( \min\{\widehat{\eps} / \tO{\beta^{-2}}, \gamma / 4\} \right)^2} r_e$}
\If {$e = (u,*)$}
\State $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) + \frac{1}{h} \frac{q_e}{\sqrt{r_e}}$
\Else
\State $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \frac{1}{h} \frac{q_e}{\sqrt{r_e}}$
\EndIf
\State Shortcut $\mathcal{P}^{u,e,i}$ at $v$
\EndFor
\State $h' = \tO{\widehat{\eps}^{-2}\beta^{-4}\gamma^{-2}} $
\State $\widetilde{\boldsymbol{\mathit{\pi}}}^{C}\left(\mathbf{1}_v\right) = \mathbf{0}$
\For {$i=1,\dots, h'$}
\State Run random walk from $v$ to $C$ with probabilities proportional
to $\boldsymbol{\mathit{r}}^{-1}$, let $u$ be the last vertex
\State $\widetilde{\mathit{\pi}}_u^{C}\left(\mathbf{1}_v\right) = \widetilde{\mathit{\pi}}_u^{C}\left(\mathbf{1}_v\right) + \frac{1}{h'}$
\EndFor
\State $\widetilde{\boldsymbol{\mathit{\pi}}}^{C \cup \{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}) =\widetilde{\boldsymbol{\mathit{\pi}}}^{C}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}) + \widetilde{\mathit{\pi}}_v^{C \cup \{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}) \cdot (\mathbf{1}_v - \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v))$
\State $C = C\cup\{v\}$, $F = F\backslash \{v\}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Estimating $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$}
\label{sec:approx1}
There is a straightforward algorithm to estimate $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$. For each edge $e=(u,w)\in E\backslash E(C)$, sample a number of random walks from $u$ and $w$ until they hit $C\cup\{v\}$. Then, add to the estimate $\frac{q_e}{\sqrt{r_e}}$ times the fraction of the random walks starting from $u$ that contain $v$, minus $\frac{q_e}{\sqrt{r_e}}$ times the fraction of the random walks starting from $w$ that contain $v$.
\cite{gao2021fully} uses this sampling method together with the following concentration bound to get a good estimate when the resistances of all congested edges are sufficiently large.
\begin{lemma}[Concentration inequality 1~\cite{gao2021fully}]
Let $S = X_1 + \dots + X_n$ be the sum of $n$ independent random variables. The range of $X_i$ is $\{0,a_i\}$ for $a_i\in[-M,M]$. Let $t,E$ be positive numbers such that $t \leq E$ and $\sum\limits_{i=1}^n \left|\mathbb{E}[X_i]\right| \leq E$. Then
\[ \Pr\left[|S - \mathbb{E}[S]| > t\right] \leq 2 \exp\left(-\frac{t^2}{6EM}\right) \,.\]
\label{conc1}
\end{lemma}
Unfortunately, in our setting there is no reason to expect these resistances to be large, so the variance of this estimate might be too high. We have already introduced the concept of important edges in order to alleviate this problem, and proved that we only need to look at important edges.
Even if all edges whose demand projection is being estimated are important (i.e. close to $C$), however, $v$ can still be far from $C$. This is an issue, since we don't directly estimate projections onto $C$, but instead estimate the projection onto $C\cup\{v\}$ and then from $v$ onto $C$. Intuitively, however, if $v$ is far from $C$, it should also be far from the set of important edges, so the insertion of $v$ should not affect their demand projection too much. As the distance upper bound between an important edge and $C$ is relative to the scale of the resistance of that edge, this statement needs to be more fine-grained in order to take the resistances of important edges into account.
More concretely, in the following lemma, which is proved in Appendix~\ref{proof_estimate1}, we show that if we only compute demand projection estimates for edges $e$ such that $r_e \geq c^2 R_{eff}(C,v)$ for some appropriately chosen $c > 0$, then we can guarantee a good bound on the number of random walks we need to sample.
For the remaining edges, we will show that the energy of their contributions to the projection is negligible, so that we can reach our desired statement, Lemma~\ref{estimate1_final}.
\begin{lemma}
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, $\boldsymbol{\mathit{q}}\in[-1,1]^m$, a $\beta$-congestion reduction subset $C$, as well as $v\in V\backslash C$. If for some $c>0$ we are given a set of edges
\[ S' \subseteq \left\{e\in E\backslash E(C)\ |\ R_{eff}(C,v) \leq \frac{1}{c^2} r_e\right\} \,, \]
then for any $\delta_1' > 0$ we can compute $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\in\mathbb{R}$ such that with high probability
\[ \left|\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_{v}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq \frac{\delta_1'}{\beta c\sqrt{R_{eff}(C,v)}} \,.\]
The algorithm requires access to $\tO{\delta_1'^{-2} \log n \log \frac{1}{\beta}}$ independent random walks from $u$ to $C$ for each $u\in V\backslash C$ and $e\in E\backslash E(C)$ with $u\in e$.
\label{estimate1}
\end{lemma}
This leads us to the desired statement for this section, whose proof appears in Appendix~\ref{proof_estimate1_final}.
\begin{lemma}[Estimating $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$]
Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, $\boldsymbol{\mathit{q}}\in[-1,1]^m$, a $\beta$-congestion reduction subset $C$, as well as $v\in V\backslash C$.
If we are given a set $S$ of $\gamma$-important edges for some $\gamma \in (0,1)$ and an estimate ${\widetilde{R}}_{eff}(C,v)\approx_{2} R_{eff}(C,v)$, then for any $\delta_1 \in(0,1)$ we can compute $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\in\mathbb{R}$ such that with high probability
\begin{align}
\left|\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_{v}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq \frac{\delta_1}{\sqrt{R_{eff}(C,v)}} \,.
\end{align}
The algorithm requires $\tO{\delta_1^{-4} \beta^{-6} + \delta_1^{-2} \beta^{-2} \gamma^{-2}}$ independent random walks from $u$ to $C$ for each $u\in V\backslash C$ and $e\in E\backslash E(C)$ with $u\in e$.
Additionally, we have \[ \left|\pi_{v}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq \frac{1}{\gamma \sqrt{R_{eff}(C,v)}} \cdot \tO{\frac{1}{\beta^2}} \,.\] \label{estimate1_final} \end{lemma} \subsection{Estimating $\boldsymbol{\mathit{\pi}}^{C}(\mathbf{1}_v)$} \label{sec:approx2} In contrast to the quantity $\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, where there are cancellations between its two components $\pi_v^{C\cup\{v\}}\left(\sum\limits_{e=(u,w)\in E}\frac{q_e}{\sqrt{r_e}} \mathbf{1}_u\right)$ and $\pi_v^{C\cup\{v\}}\left(\sum\limits_{e=(u,w)\in E}-\frac{q_e}{\sqrt{r_e}} \mathbf{1}_w\right)$ (as $\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}$ sums up to $\mathbf{0}$), in $\boldsymbol{\mathit{\pi}}^C\left(\mathbf{1}_v\right)$ there are no cancellations. The goal is to simply estimate the hitting probabilities from $v$ to the vertices of $C$, which can be done by sampling a number of random walks from $v$ to $C$. As discussed before, even though ideally we would like to have an error bound of the form $\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v) - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v))} \leq \delta_2 \sqrt{R_{eff}(C,v)}$, our analysis is only able to guarantee that for any fixed potential vector $\boldsymbol{\mathit{\phi}}$ with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, with high probability $\left|\langle \boldsymbol{\mathit{\phi}}, \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v) - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)\rangle\right| \leq \delta_2 \sqrt{R_{eff}(C,v)}$. However, this is still sufficient for our purposes. In Appendix~\ref{proof_conc2} we prove the following general concentration inequality, which basically states that we can estimate the desired hitting probabilities as long as we have a bound on the $\ell_2$ norm of the potentials $\boldsymbol{\mathit{\phi}}$ weighted by the hitting probabilities. \begin{lemma}[Concentration inequality 2] Let $\boldsymbol{\mathit{\pi}}$ be a probability distribution over $[n]$ and $\widetilde{\boldsymbol{\mathit{\pi}}}$ an empirical distribution of $Z$ samples from $\boldsymbol{\mathit{\pi}}$. For any $\boldsymbol{\mathit{\bar{\phi}}}\in\mathbb{R}^n$ with $\left\|\boldsymbol{\mathit{\bar{\phi}}}\right\|_{\boldsymbol{\mathit{\pi}},2}^2 \leq \mathcal{V}$, we have \[\Pr\left[\left|\langle \widetilde{\boldsymbol{\mathit{\pi}}} - \boldsymbol{\mathit{\pi}}, \boldsymbol{\mathit{\bar{\phi}}}\rangle\right| > t\right] \leq \frac{1}{n^{100}} + \tO{\log \left(n \cdot \mathcal{V} / t\right)}\exp\left(- \frac{Z t^2}{\tO{\mathcal{V} \log^2 n}} \right) \,.\] \label{conc2} \end{lemma} We will apply it for $\boldsymbol{\mathit{\bar{\phi}}} = \boldsymbol{\mathit{\phi}} - \phi_v\cdot \mathbf{1}$, and it is important to note that $\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\bar{\phi}}}) = \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}})$. In order to get a bound on $\left\|\boldsymbol{\mathit{\bar{\phi}}}\right\|_{\boldsymbol{\mathit{\pi}}^C\left(\mathbf{1}_v\right),2}^2$, we use the following lemma, which is proved in Appendix~\ref{proof_lem_variance}. 
\begin{lemma}[Bounding the second moment of potentials] For any graph $G$, resistances $\boldsymbol{\mathit{r}}$, potentials $\boldsymbol{\mathit{\phi}}$ with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, $C \subseteq V$ and $v\in V\backslash C$ we have $ \left\|\boldsymbol{\mathit{\phi}} - \phi_v \mathbf{1}\right\|_{\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v),2}^2 \leq 8 \cdot R_{eff}(C,v)$. \label{lem:variance} \end{lemma} To give some intuition on this, consider the case when $V = C\cup\{v\} = \{1,\dots,k\}\cup\{v\}$, and there are edges $e_1,\dots,e_k$ between $C$ and $v$, one for each vertex of $C$. Then, we have $\pi_i^C(\mathbf{1}_v) = (r_{e_i})^{-1} / \sum\limits_{i=1}^k (r_{e_i})^{-1}$, and so \[ \left\|\boldsymbol{\mathit{\bar{\phi}}}\right\|_{\boldsymbol{\mathit{\pi}}^C\left(\mathbf{1}_v\right),2}^2 = \sum\limits_{i=1}^k \frac{(\phi_i - \phi_v)^2}{r_{e_i}} \cdot \left(\sum\limits_{i=1}^k (r_{e_i})^{-1}\right)^{-1} \leq \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\bar{\phi}}}) \cdot R_{eff}(C,v) \leq R_{eff}(C,v)\,. \] We finally arrive at the desired statement about estimating $\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)$. \begin{lemma}[Estimating $\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)$] Consider a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$, a $\beta$-congestion reduction subset $C$, as well as $v\in V\backslash C$. Then, for any $\delta_2 > 0$, we can compute $\widetilde{\boldsymbol{\mathit{\pi}}}^{C}\left(\mathbf{1}_v\right)\in\mathbb{R}^n$ such that with high probability \begin{align} \left|\langle \boldsymbol{\mathit{\phi}}, \widetilde{\boldsymbol{\mathit{\pi}}}^{C}\left(\mathbf{1}_{v}\right) - \boldsymbol{\mathit{\pi}}^{C}\left(\mathbf{1}_{v}\right)\rangle\right| \leq \delta_2 \cdot \sqrt{R_{eff}(C,v)}\,, \label{eq:inner_product_bound} \end{align} where $\boldsymbol{\mathit{\phi}}\in\mathbb{R}^n$ is a fixed vector with $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$. The algorithm computes $\tO{\frac{\log n}{\delta_2^2}}$ random walks from $v$ to $C$. \label{estimate2} \end{lemma} \begin{proof} Because both $\widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v)$ and $\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)$ are probability distributions, the quantity (\ref{eq:inner_product_bound}) doesn't change when a multiple of $\mathbf{1}$ is added to $\boldsymbol{\mathit{\phi}}$, and so we can replace it by $\boldsymbol{\mathit{\bar{\phi}}} = \boldsymbol{\mathit{\phi}} - \phi_v \mathbf{1}$. Now, $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\mathbf{1}_v\right)$ will be defined as the empirical hitting distribution that results from sampling $Z$ random walks from $v$ to $C$. Directly applying the concentration bound in Lemma~\ref{conc2} and setting $Z = \tO{\frac{\log n}{\delta_2^2}}$, together with the fact that $\left\|\boldsymbol{\mathit{\bar{\phi}}}\right\|_{\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v),2}^2 \leq 8\cdot R_{eff}(C,v)$ by Lemma~\ref{lem:variance} and $\log \log R_{eff}(C,v) \leq O(\log\log n)$, we get \begin{align*} & \Pr\left[\left|\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v) - \boldsymbol{\mathit{\pi}}^{C}(\mathbf{1}_v), \boldsymbol{\mathit{\bar{\phi}}}\rangle\right| > \delta_2 \cdot \sqrt{R_{eff}(C,v)}\right] < \frac{1}{n^{10}}\,. \end{align*} \end{proof} \subsection{Proof of Lemma~\ref{lem:ds}} We are now ready for the proof of Lemma~\ref{lem:ds}. \begin{proof}[Proof of Lemma~\ref{lem:ds}] Let $\textsc{DP}$ be a demand projection data structure. We analyze its operations one by one. 
\begin{algorithm}
\begin{algorithmic}[1]
\caption{\textsc{DemandProjector} \textsc{DP}.\textsc{Initialize} }
\Procedure{\textsc{DP}.\textsc{Initialize}}{$C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P}$}
\State Initialize $C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P}$
\State $F = V\backslash C$
\State $h = \tO{\widehat{\eps}^{-4}\beta^{-6}+\widehat{\eps}^{-2} \beta^{-4} \gamma^{-2}}$ \Comment{\#random walks for each pair $u\in V$, $e\in E$ with $u\in e$}
\State $t = 0$ \Comment{\#calls to \textsc{AddTerminal} since last call to $\textsc{UpdateFull}$}
\State $\boldsymbol{\mathit{\phi}} = \boldsymbol{\mathit{L}}_{FF}^+ \left[\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right]_F$
\State $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \left[\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right]_C - \boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{\phi}}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\noindent {\bf \textsc{DP}.$\textsc{Initialize}(C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P})$:} We initialize the values of $C,\boldsymbol{\mathit{r}},\boldsymbol{\mathit{q}},S,\mathcal{P}$. Then we exactly compute the demand projection, i.e. $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$, which takes time $\tO{m}$ as shown in~\cite{gao2021fully}. More specifically, we have
$ \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \begin{pmatrix}\boldsymbol{\mathit{I}} & -\boldsymbol{\mathit{L}}_{CF} \boldsymbol{\mathit{L}}_{FF}^{-1}\end{pmatrix} \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}} $,
which only requires applying the operators $\boldsymbol{\mathit{L}}_{FF}^{-1}$ and $\boldsymbol{\mathit{L}}_{CF}$.
\noindent{\bf \textsc{DP}.\textsc{AddTerminal}($v$, ${\widetilde{R}}_{eff}(C,v)$):} We will serve this operation by applying Lemma~\ref{lem:insert1}. It is important to note that the error guarantee for the $\textsc{Output}$ procedure increases with every call to $\textsc{AddTerminal}$, so we have a bounded budget for the number of calls to this procedure before having to call $\textsc{Initialize}$ again.
We apply Lemma~\ref{lem:insert1} to obtain $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}})$, and update the estimate $\widetilde{\boldsymbol{\mathit{\pi}}}^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$.
The former can be achieved with $h = \widetilde{O}\left( \widehat{\eps}^{-4} \beta^{-6} + \widehat{\eps}^{-2} \beta^{-2} \gamma^{-2} \right)$ random walks. Note that these random walks are already stored in $\mathcal{P}$, so accessing each of them takes time $\tO{1}$. Using the congestion reduction property of $C$, we see that the running time of the procedure, which is dominated by shortcutting the random walks, is $\tO{h\beta^{-2}}$, which gives the claimed bound.
The latter can be achieved with $h' = \widetilde{O}\left( \widehat{\eps}^{-2} \beta^{-4} \gamma^{-2} \right)$ fresh random walks. Due to the congestion reduction property, simulating each of these requires $\tO{\beta^{-2}}$ time.
\begin{algorithm}
\begin{algorithmic}[1]
\caption{\textsc{DemandProjector} \textsc{DP}.\textsc{Update} and \textsc{DP}.\textsc{Output}}
\Procedure{\textsc{DP}.\textsc{Update}}{$e,\boldsymbol{\mathit{r}}',\boldsymbol{\mathit{q}}'$}
\If {$e\in S$}
\State $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) + \left(\frac{q_e'}{\sqrt{r_e'}} - \frac{q_e}{\sqrt{r_e}}\right)\cdot\boldsymbol{\mathit{B}}^\top \mathbf{1}_e$
\EndIf
\State $q_e = q_e'$, $r_e = r_e'$
\EndProcedure
\Procedure{\textsc{DP}.\textsc{Output}}{$ $}
\State \Return $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\noindent{\bf \textsc{DP}.\textsc{Update}($e,\boldsymbol{\mathit{r}}',\boldsymbol{\mathit{q}}'$):} We update the values of $r_e,q_e$. We also update the projection, by noting that since $e\in E(C)$,
\begin{align*}
\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}'}{\sqrt{\boldsymbol{\mathit{r}}'}}\right) =\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) + \left(\frac{q_e'}{\sqrt{r_e'}} - \frac{q_e}{\sqrt{r_e}}\right) \boldsymbol{\mathit{B}}^\top \mathbf{1}_e\,,
\end{align*}
so we change $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ by the same amount, which takes time $O(1)$ and does not introduce any additional error in our estimate.
\noindent{\bf \textsc{DP}.\textsc{Output}():} We output our estimate $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$.
Per Lemma~\ref{lem:insert1} we see that each of the previous $T$ calls to $\textsc{AddTerminal}$ adds an error of at most $\widehat{\eps}$ to our estimate, in the sense that if $\boldsymbol{\mathit{\Delta}}^t$ were the true change in the demand projection at the $t^{th}$ insertion, and $\boldsymbol{\mathit{\widetilde{\Delta}}}^t$ were the update made to our estimate, then
\[ \left| \left\langle \boldsymbol{\mathit{\widetilde{\Delta}}}^t - \boldsymbol{\mathit{\Delta}}^t, \boldsymbol{\mathit{\phi}} \right\rangle \right| \leq\widehat{\eps}\,, \]
w.h.p. for any fixed $\boldsymbol{\mathit{\phi}}$ such that $E_{\boldsymbol{\mathit{r}}^t}(\boldsymbol{\mathit{\phi}}) \leq 1$, where $\boldsymbol{\mathit{r}}^t$ represents the resistances when the $t^{th}$ call to $\textsc{AddTerminal}$ is made.
Equivalently, for any nonzero $\boldsymbol{\mathit{\phi}}$, \[ \frac{1}{\sqrt{E_{\boldsymbol{\mathit{r}}^t}(\boldsymbol{\mathit{\phi}})}} \left| \left\langle \boldsymbol{\mathit{\widetilde{\Delta}}}^t - \boldsymbol{\mathit{\Delta}}^t, \boldsymbol{\mathit{\phi}} \right\rangle \right| \leq\widehat{\eps}\,, \] By the invariant satisfied by the resistances passed as parameters to the $\textsc{AddTerminal}$ routine, we have that $\boldsymbol{\mathit{r}}^t \leq \alpha \cdot \boldsymbol{\mathit{r}}^T$ for all $t$. Therefore $1/E_{\boldsymbol{\mathit{r}}^T}(\boldsymbol{\mathit{\phi}}) \leq \alpha/E_{\boldsymbol{\mathit{r}}^t}(\boldsymbol{\mathit{\phi}})$. So we have that \[ \frac{1}{\sqrt{E_{\boldsymbol{\mathit{r}}^T}(\boldsymbol{\mathit{\phi}})}}\left| \left\langle \boldsymbol{\mathit{\widetilde{\Delta}}}^t - \boldsymbol{\mathit{\Delta}}^t, \boldsymbol{\mathit{\phi}} \right\rangle \right| \leq\widehat{\eps} \cdot \sqrt{\alpha} \,. \] Summing up over $T$ insertions, we obtain the desired error bound. Furthermore, note that returning the estimate takes time proportional to $\vert C \vert$, which is $\tO{\beta m + T}$. \end{proof} \appendix \section{Maintaining the Schur Complement} \label{sec:maintain_schur} Following the scheme from~\cite{gao2021fully} we maintain a dynamic Schur complement of the graph onto a subset of terminals $C$. The approach follows rather directly from~\cite{gao2021fully} and leverages the recent work of~\cite{bernstein2020fully} to dynamically maintain an edge sparsifier of the Schur complement of the graph onto $C$. Compared to~\cite{gao2021fully} we do not require a parameter that depends on the adaptivity of the adversary. In addition, when adding a vertex to $C$ we also return a $(1+\varepsilon)$-approximation of the effective resistance $R_{eff}(v,C)$, which gets returned by the function call. \begin{lemma}[\textsc{DynamicSC} (Theorem 4, \cite{gao2021fully})] There is a \textsc{DynamicSC} data structure supporting the following operations with the given runtimes against oblivious adversaries, for constants $0 < \beta,\varepsilon < 1$: \begin{itemize} \item{\textsc{Initialize}$(G, C^{\text{(init)}}, \boldsymbol{\mathit{r}}, \varepsilon, \beta)$: Initializes a graph $G$ with resistances $\boldsymbol{\mathit{r}}$ and a set of safe terminals $C^{\text{(safe)}}$. Sets the terminal set $C = C^{\text{(safe)}} \cup C^{\text{(init)}}$. Runtime: $\tO{m\beta^{-4} \varepsilon^{-4}}$. } \item{\textsc{AddTerminal}$(v\in V(G))$: Returns ${\widetilde{R}}_{eff}(C,v) \approx_{2} R_{eff}(C,v)$ and adds $v$ as a terminal. Runtime: Amortized $\tO{\beta^{-2} \varepsilon^{-2}}$. } \item{\textsc{TemporaryAddTerminals}$(\Delta C\subseteq V(G))$: Adds all vertices in the set $\Delta C$ as (temporary) terminals. Runtime: Worst case $\tO{K^2 \beta^{-4} \varepsilon^{-4}}$, where $K$ is the total number of terminals added by all of the $\textsc{TemporaryAddTerminals}$ operations that have not been rolled back using $\textsc{Rollback}$. All $\textsc{TemporaryAddTerminals}$ operations should be rolled back before the next call to $\textsc{AddTerminals}$. } \item{\textsc{Update}$(e, r)$: Under the guarantee that both endpoints of $e$ are terminals, updates $r_e = r$. Runtime: Worst case $\tO{1}$. } \item{$\widetilde{SC}()$: Returns a spectral sparsifier $\widetilde{SC} \approx_{1+\varepsilon} SC(G,C)$ (with respect to resistances $\boldsymbol{\mathit{r}}$) with $\tO{|C|\varepsilon^{-2}}$ edges. 
Runtime: Worst case $\tO{\left(\beta m + (K\beta^{-2}\varepsilon^{-2})^2\right)\varepsilon^{-2}}$ where $K$ is the total number of terminals added by all of the $\textsc{TemporaryAddTerminals}$ operations that have not been rolled back. } \item{\textsc{Rollback}$()$: Rolls back the last $\textsc{Update}$, $\textsc{AddTerminals}$, or \textsc{TemporaryAddTerminals} if it exists. The runtime is the same as the original operation. } \end{itemize} Finally, all calls return valid outputs with high probability. The size of $C$ should always be $O(\beta m)$. \end{lemma} This data structure is analyzed in detail in~\cite{gao2021fully}. Additionally, let us show that an approximation to $R_{eff}(v,C)$ can be efficiently computed along with the \textsc{AddTerminal} operation. To get an estimate, we inspect the neighbors of $v$ in the sparsified Schur complement of $C \cup \{v\}$ and return the inverse of the sum of the inverse resistances of the corresponding edges; that is, we treat the edges between $v$ and $C$ in the sparsifier as parallel resistors. This is indeed a $1+O(\varepsilon)$-approximation, as effective resistances are preserved within a $1+O(\varepsilon)$ factor in the sparsifier. To show that this operation takes little amortized time, we note that by the proof appearing in~\cite[Lemma 6.2]{gao2021fully}, vertex $v$ appears in an amortized $\tO{1}$ number of the dynamically maintained expanders. As the dynamic sparsifier keeps $\tO{\varepsilon^{-2}}$ neighbors of $v$ from each expander, the number of neighbors to inspect with each call is $\tO{\varepsilon^{-2}}$, which also bounds the time necessary to approximate the resistance.
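As a sanity check, this estimate is just a parallel-resistance computation over the sparsifier edges incident to $v$. The following sketch is purely illustrative; the function name and input format are ours and not part of the data structure, whose edge resistances come from the dynamic sparsifier.
\begin{verbatim}
# Illustrative only: combine the sparsifier edges between v and C as parallel
# resistors to estimate R_eff(C, v).
def estimate_reff(incident_resistances):
    return 1.0 / sum(1.0 / r for r in incident_resistances)

# Two parallel unit resistors between v and C give effective resistance 1/2.
assert abs(estimate_reff([1.0, 1.0]) - 0.5) < 1e-12
\end{verbatim}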
\section{Auxiliary Lemmas} \label{sec:aux} \begin{replemma}{lem:sc-energy-bd} Let $\boldsymbol{\mathit{d}}$ be a demand vector, let $\boldsymbol{\mathit{r}}$ be resistances, and let $C \subseteq V$ be a subset of vertices. Then \[ \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \right) \leq \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{d}} \right) \,. \] \end{replemma} \begin{proof} Letting $F = V\setminus C$, and $\boldsymbol{\mathit{L}}$ be the Laplacian of the underlying graph, we can write \[ \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) = \boldsymbol{\mathit{d}}_C - \boldsymbol{\mathit{L}}_{CF} \boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{d}}_F\,. \] By factoring $\boldsymbol{\mathit{L}}^+$ as \begin{align*} \boldsymbol{\mathit{L}}^+ =\left[\begin{array}{cc} I & 0\\ -\boldsymbol{\mathit{L}}_{FF}^{-1}\boldsymbol{\mathit{L}}_{FC} & I \end{array}\right]\left[\begin{array}{cc} SC(\boldsymbol{\mathit{L}},C)^{+} & 0\\ 0 & \boldsymbol{\mathit{L}}_{FF}^{-1} \end{array}\right]\left[\begin{array}{cc} I & -\boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1}\\ 0 & I \end{array}\right] \end{align*} we can write \begin{align*} \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{d}}) = \boldsymbol{\mathit{d}}^\top \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{d}} = \left[\begin{array}{cc} \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \\ \boldsymbol{\mathit{d}}_F \end{array}\right]^\top \left[\begin{array}{cc} SC(\boldsymbol{\mathit{L}},C)^{+} & 0\\ 0 & \boldsymbol{\mathit{L}}_{FF}^{-1} \end{array}\right]\left[\begin{array}{cc} \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \\ \boldsymbol{\mathit{d}}_F \end{array}\right] = \|\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})\|_{SC(\boldsymbol{\mathit{L}},C)^+}^2 + \|\boldsymbol{\mathit{d}}_F\|_{\boldsymbol{\mathit{L}}_{FF}^{-1}}^2\,. 
\end{align*} Furthermore, we can use the same factorization to write \begin{align*} \mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})) = \left[\begin{array}{cc} \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \\ 0 \end{array}\right]^\top \left[\begin{array}{cc} SC(\boldsymbol{\mathit{L}},C)^{+} & 0\\ 0 & \boldsymbol{\mathit{L}}_{FF}^{-1} \end{array}\right]\left[\begin{array}{cc} \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) \\ 0 \end{array}\right] = \|\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})\|_{SC(\boldsymbol{\mathit{L}},C)^+}^2\,, \end{align*} which proves the claim. \end{proof} \begin{lemma} For any $\mu\in(1/\mathrm{poly}(m),\mathrm{poly}(m))$, we have $\left\|\boldsymbol{\mathit{r}}(\mu)\right\|_\infty \leq m^{\tO{\log m}}$. \end{lemma} \begin{proof} By Appendix A in~\cite{axiotis2020circulation}, for some $\mu_0 = \Theta(\left\|\boldsymbol{\mathit{c}}\right\|_2)$, the solution $\boldsymbol{\mathit{f}} = \boldsymbol{\mathit{u}} / 2$ has \[ \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu_0} + \frac{\mathbf{1}}{\boldsymbol{\mathit{s}}^+} - \frac{\mathbf{1}}{\boldsymbol{\mathit{s}}^-}\right)\right\|_{(\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{R}} \boldsymbol{\mathit{C}})^+} \leq 1/10\,.\] This implies that $\min_e \left\{s_e(\mu_0)^+,s_e(\mu_0)^-\right\} \geq \min_e u_e / 4 \geq 1/4$, and so $\left\|\boldsymbol{\mathit{r}}(\mu_0)\right\|_\infty \leq O(1)$. Additionally, $\left\|\boldsymbol{\mathit{c}}\right\|_\infty \in\left[1,\mathrm{poly}(m)\right]$, so $\mu_0 = \Theta\left(\mathrm{poly}(m)\right)$. Now, for any integer $i \geq 0$ we let $\mu_{i+1} = \mu_{i}\cdot (1 - 1/\sqrt{m})^{\sqrt{m}/10}$. By Lemma~\ref{lem:resistance_stability2} we have that $\boldsymbol{\mathit{r}}\left(\mu_{i+1}\right) \approx_{m^2} \boldsymbol{\mathit{r}}\left(\mu_{i}\right)$, and so \[ \boldsymbol{\mathit{r}}\left(\frac{1}{\mathrm{poly}(m)}\right) = \boldsymbol{\mathit{r}}(\mu_{\tO{\log m}}) \leq \left(\frac{9}{100} m^2\right)^{\tO{\log m}} \boldsymbol{\mathit{r}}(\mu_0) \leq m^{\tO{\log m}} \boldsymbol{\mathit{r}}(\mu_0) \leq m^{\tO{\log m}} \,. \] \end{proof} \begin{lemma} Given a graph $G(V,E)$ with resistances $\boldsymbol{\mathit{r}}$ and any parameter $\varepsilon > 0$, there exists an algorithm that runs in time $\tO{m/\varepsilon^2}$ and produces a matrix $\boldsymbol{\mathit{Q}}\in\mathbb{R}^{\tO{1/\varepsilon^2}\times n}$ such that, with high probability, for any $u,v\in V$, \[ R_{eff}(u,v) \approx_{1+\varepsilon} \left\|\boldsymbol{\mathit{Q}}\mathbf{1}_u - \boldsymbol{\mathit{Q}}\mathbf{1}_v\right\|_2^2\,.\] \label{lem:approx_effective_res} \end{lemma} \section{Deferred Proofs from Section~\ref{sec:ipm}} \subsection{Central path stability bounds} \label{proof_stability_bounds} \begin{lemma}[Central path energy stability] Consider a minimum cost flow instance on a graph $G(V,E)$. 
For any $\mu > 0$ and $\mu' = \mu / (1+1/\sqrt{m})^k$ for some $k \in (0,\sqrt{m}/10)$, we have \begin{align*} \sum\limits_{e\in E} \left(\frac{1}{s_e(\mu)^+\cdot s_e(\mu')^+} + \frac{1}{s_e(\mu)^-\cdot s_e(\mu')^-}\right) \left(f_e(\mu') - f_e(\mu)\right)^2 \leq 2k^2 \end{align*} \label{lem:energy_stability} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:energy_stability}] We let $\delta = 1/\sqrt{m}$, $\boldsymbol{\mathit{f}} = \boldsymbol{\mathit{f}}(\mu)$, $\boldsymbol{\mathit{s}} = \boldsymbol{\mathit{s}}(\mu)$, $\boldsymbol{\mathit{r}} = \boldsymbol{\mathit{r}}(\mu)$, $\boldsymbol{\mathit{f}}' = \boldsymbol{\mathit{f}}(\mu')$, $\boldsymbol{\mathit{s}}'= \boldsymbol{\mathit{s}}(\mu')$, and $\boldsymbol{\mathit{r}}' = \boldsymbol{\mathit{r}}(\mu')$. We also set $\boldsymbol{\mathit{\widetilde{f}}} = \boldsymbol{\mathit{f}}' - \boldsymbol{\mathit{f}}$. By definition of centrality we have \begin{align*} & \boldsymbol{\mathit{C}}^\top \left(\frac{1}{\boldsymbol{\mathit{s}}^-} - \frac{1}{\boldsymbol{\mathit{s}}^+}\right) = \boldsymbol{\mathit{C}}^\top \frac{\boldsymbol{\mathit{c}}}{\mu}\\ & \boldsymbol{\mathit{C}}^\top \left(\frac{1}{\boldsymbol{\mathit{s}}^- + \boldsymbol{\mathit{\widetilde{f}}}} - \frac{1}{\boldsymbol{\mathit{s}}^+ - \boldsymbol{\mathit{\widetilde{f}}}}\right) = \boldsymbol{\mathit{C}}^\top \frac{\boldsymbol{\mathit{c}}}{\mu'}\,, \end{align*} which, after subtracting, give \begin{align*} & \boldsymbol{\mathit{C}}^\top \left(\frac{1}{\boldsymbol{\mathit{s}}^- + \boldsymbol{\mathit{\widetilde{f}}}} - \frac{1}{\boldsymbol{\mathit{s}}^-} - \frac{1}{\boldsymbol{\mathit{s}}^+ - \boldsymbol{\mathit{\widetilde{f}}}} + \frac{1}{\boldsymbol{\mathit{s}}^+}\right) = \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu'} - \frac{\boldsymbol{\mathit{c}}}{\mu}\right)\\ & \Leftrightarrow \boldsymbol{\mathit{C}}^\top \left(\left(\frac{1}{\boldsymbol{\mathit{s}}^-(\boldsymbol{\mathit{s}}^- + \boldsymbol{\mathit{\widetilde{f}}})} +\frac{1}{\boldsymbol{\mathit{s}}^+(\boldsymbol{\mathit{s}}^+ - \boldsymbol{\mathit{\widetilde{f}}})}\right)\boldsymbol{\mathit{\widetilde{f}}}\right) = -\left((1 + \delta)^k - 1\right)\boldsymbol{\mathit{C}}^\top \frac{\boldsymbol{\mathit{c}}}{\mu}\,. \end{align*} As $\boldsymbol{\mathit{\widetilde{f}}} = \boldsymbol{\mathit{C}}\boldsymbol{\mathit{x}}$ for some $\boldsymbol{\mathit{x}}$, after taking the inner product of both sides with $\boldsymbol{\mathit{x}}$ we get \begin{align} & \left\langle \boldsymbol{\mathit{\widetilde{f}}}, \left(\frac{1}{\boldsymbol{\mathit{s}}^-(\boldsymbol{\mathit{s}}^- + \boldsymbol{\mathit{\widetilde{f}}})} +\frac{1}{\boldsymbol{\mathit{s}}^+(\boldsymbol{\mathit{s}}^+ - \boldsymbol{\mathit{\widetilde{f}}})}\right)\boldsymbol{\mathit{\widetilde{f}}}\right\rangle = -\left((1+\delta)^k - 1\right) \left\langle\frac{\boldsymbol{\mathit{c}}}{\mu},\boldsymbol{\mathit{\widetilde{f}}}\right\rangle\,. \label{eq:energy_stable_prelim} \end{align} We will now prove that $-\left\langle\frac{\boldsymbol{\mathit{c}}}{\mu},\boldsymbol{\mathit{\widetilde{f}}}\right\rangle \leq k \sqrt{m}$. 
First of all, by differentiating the centrality condition \[ \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\nu} + \frac{\mathbf{1}}{\boldsymbol{\mathit{s}}(\nu)^+} - \frac{\mathbf{1}}{\boldsymbol{\mathit{s}}(\nu)^-}\right) = \mathbf{0}\] with respect to $\nu$ we get \[ \boldsymbol{\mathit{C}}^\top \left(-\frac{\boldsymbol{\mathit{c}}}{\nu^2} + \left(\frac{\mathbf{1}}{(\boldsymbol{\mathit{s}}(\nu)^+)^2} + \frac{\mathbf{1}}{(\boldsymbol{\mathit{s}}(\nu)^-)^2}\right) \frac{d\boldsymbol{\mathit{f}}(\nu)}{d\nu} \right) = \mathbf{0} \,,\] or equivalently \[ \boldsymbol{\mathit{C}}^\top \left(\boldsymbol{\mathit{r}}(\nu) \frac{d\boldsymbol{\mathit{f}}(\nu)}{d\nu}\right) = -\frac{1}{\nu} \boldsymbol{\mathit{C}}^\top \left(\frac{\mathbf{1}}{\boldsymbol{\mathit{s}}(\nu)^+} - \frac{\mathbf{1}}{\boldsymbol{\mathit{s}}(\nu)^-}\right) \,.\] If we set $g(\boldsymbol{\mathit{s}}) = \frac{\frac{\mathbf{1}}{\boldsymbol{\mathit{s}}^+}-\frac{\mathbf{1}}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$, this can also be equivalently written as \[ \frac{d\boldsymbol{\mathit{f}}(\nu)}{d\nu} = -\frac{1}{\nu} \left(g(\boldsymbol{\mathit{s}}(\nu)) - (\boldsymbol{\mathit{R}}(\nu))^{-1}\boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu)^{-1})\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}(\nu))\right)\,.\] We have \begin{align*} -\left\langle\frac{\boldsymbol{\mathit{c}}}{\mu},\boldsymbol{\mathit{\widetilde{f}}}\right\rangle & = -\int_{\nu=\mu}^{\mu'}\left\langle\frac{\boldsymbol{\mathit{c}}}{\mu}, d\boldsymbol{\mathit{f}}(\nu) \right\rangle\\ & = \frac{1}{\mu}\int_{\nu=\mu}^{\mu'}\left\langle\frac{\nu}{\boldsymbol{\mathit{s}}(\nu)^-} - \frac{\nu}{\boldsymbol{\mathit{s}}(\nu)^+}, \frac{1}{\nu} \left(g(\boldsymbol{\mathit{s}}(\nu)) - (\boldsymbol{\mathit{R}}(\nu))^{-1}\boldsymbol{\mathit{B}}(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}(\nu))\right) \right\rangle d\nu\\ & = -\frac{1}{\mu}\int_{\nu=\mu}^{\mu'}\left\langle \sqrt{\boldsymbol{\mathit{r}}(\nu)} g(\boldsymbol{\mathit{s}}(\nu)), \boldsymbol{\mathit{\Pi}}_{\mathrm{ker}(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1/2})} \sqrt{\boldsymbol{\mathit{r}}(\nu)}g(\boldsymbol{\mathit{s}}(\nu)) \right\rangle d\nu\\ & = \frac{1}{\mu}\int_{\nu=\mu'}^{\mu}\left\|\boldsymbol{\mathit{\Pi}}_{\mathrm{ker}(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1/2})} \sqrt{\boldsymbol{\mathit{r}}(\nu)}g(\boldsymbol{\mathit{s}}(\nu)) \right\|_2^2 d\nu\\ & \leq \frac{1}{\mu}\int_{\nu=\mu'}^{\mu}\left\|\sqrt{\boldsymbol{\mathit{r}}(\nu)}g(\boldsymbol{\mathit{s}}(\nu)) \right\|_2^2 d\nu\\ & \leq \frac{1}{\mu}\int_{\nu=\mu'}^{\mu} m d\nu\\ & = m \frac{\mu - \mu'}{\mu}\\ & = m(1 - (1+\delta)^{-k})\\ & \leq \delta k m\\ & = k \sqrt{m}\,, \end{align*} where $\boldsymbol{\mathit{\Pi}}_{\mathrm{ker}(\boldsymbol{\mathit{B}}^\top(\boldsymbol{\mathit{R}}(\nu))^{-1/2})} =\boldsymbol{\mathit{I}} - (\boldsymbol{\mathit{R}}(\nu))^{-1/2}\boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1}\boldsymbol{\mathit{B}})^+\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1/2}$ is the orthogonal projection onto the kernel of $\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu))^{-1/2}$. 
Plugging this into (\ref{eq:energy_stable_prelim}) and using the fact that $(1+\delta)^k \leq 1+1.1\delta k = 1 + 1.1 k / \sqrt{m}$, we get \begin{align*} \sum\limits_{e\in E} \left(\frac{1}{s_e(\mu)^+\cdot s_e(\mu')^+} + \frac{1}{s_e(\mu)^-\cdot s_e(\mu')^-}\right) \left(f_e(\mu') - f_e(\mu)\right)^2 \leq 2k^2\,. \end{align*} \end{proof} We give an auxiliary lemma which converts between different kinds of slack approximations. \begin{lemma} We consider flows $\boldsymbol{\mathit{f}},\boldsymbol{\mathit{f}}'$ with slacks $\boldsymbol{\mathit{s}},\boldsymbol{\mathit{s}}'$ and resistances $\boldsymbol{\mathit{r}},\boldsymbol{\mathit{r}}'$. Then, \[ \max\left\{\left|\frac{s_e'^+-s_e^+}{s_e^+}\right|,\left|\frac{s_e'^--s_e^-}{s_e^-}\right|\right\}\leq \sqrt{r_e} \left|f_e' - f_e\right| \leq \sqrt{2}\max\left\{\left|\frac{s_e'^+-s_e^+}{s_e^+}\right|,\left|\frac{s_e'^--s_e^-}{s_e^-}\right|\right\}\] and if $r_e\not\approx_{1+\gamma} r_e'$ for some $\gamma\in(0,1)$, then $\sqrt{r_e}\left|f_e' - f_e\right| \geq \gamma/6$. \label{lem:approx_conversion} \end{lemma} \begin{proof} For the first one, note that \begin{align*} r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2} \in\left[\max\left\{\frac{1}{(s_e^+)^2} ,\frac{1}{(s_e^-)^2}\right\}, 2\max\left\{\frac{1}{(s_e^+)^2}, \frac{1}{(s_e^-)^2}\right\}\right]\,. \end{align*} Together with the fact that $\left|f_e'-f_e\right| = \left|s_e'^+-s_e^+\right| = \left|s_e'^--s_e^-\right|$, it implies the first statement. For the second one, without loss of generality let $s_e^+ \leq s_e^-$, so by the previous statement we have $\sqrt{r_e}\left|f_e' - f_e\right| \geq \frac{\left|s_e'^+ - s_e^+\right|}{s_e^+}$. If this is $<\gamma/6$ then $(1-\gamma/6) s_e^+ \leq s_e'^+ \leq (1 + \gamma/6) s_e^+$, so $s_e'^+ \approx_{1+\gamma/3} s_e^+$. However, we also have that $\frac{\left|s_e'^- - s_e^-\right|}{s_e^-} \leq \frac{\left|s_e'^+ - s_e^+\right|}{s_e^+} \leq \gamma /6$, so $s_e'^- \approx_{1+\gamma/3} s_e^-$. Therefore, $r_e' = \frac{1}{(s_e'^+)^2} + \frac{1}{(s_e'^-)^2} \approx_{1+\gamma} \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2} = r_e$, a contradiction. \end{proof} The following lemma is a fine-grained explanation of how resistances can change. \begin{lemma} Consider a minimum cost flow instance on a graph $G(V,E)$ and parameters $\mu > 0$ and $\mu' \geq \mu / (1+1/\sqrt{m})^{k}$, where $k \in(0,\sqrt{m} / 10)$. For any $e\in E$ and $\gamma\in(0,1)$ we let $\mathrm{change}(e,\gamma)$ be the largest integer $t(e)\geq 0$ such that there are real numbers $\mu = \mu_{1}(e) > \mu_{2}(e) > \dots > \mu_{t(e)+1}(e) = \mu'$ with $\sqrt{r_e(\mu_i)} \left|f_e(\mu_{i+1}) - f_e(\mu_i)\right| \geq \gamma$ for all $i\in[t(e)]$. Then, $\sum\limits_{e\in E} \left(\mathrm{change}(e,\gamma)\right)^2 \leq O(k^2 / \gamma^2)$. \label{lem:resistance_stability1} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:resistance_stability1}] First, we assume that without loss of generality, $r_e(\mu_{i+1}(e)) \approx_{(1+6\gamma)^2} r_e(\mu_i(e))$ for all $e\in E$ and $i\in[t(e)]$. If this is not true, then by continuity there exists a $\nu\in(\mu_{i+1},\mu_i)$ such that $r_e(\mu_{i+1}(e)) \not\approx_{1+6\gamma} r_e(\nu)$ and $r_e(\nu) \not\approx_{1+6\gamma} r_e(\mu_i(e))$. By Lemma~\ref{lem:approx_conversion}, this implies that $\sqrt{r_e(\mu_{i+1}(e))} \left|f_e(\nu) - f_e(\mu_{i+1}(e))\right| \geq \gamma$ and $\sqrt{r_e(\nu)} \left|f_e(\mu_{i}(e)) - f_e(\nu)\right| \geq \gamma$. 
Therefore we can break the interval $(\mu_{i+1},\mu_i)$ into $(\mu_{i+1},\nu)$ and $(\nu,\mu_{i})$ and make the statement stronger. Similarly, we also assume that $r_e(\nu) \approx_{(1+6\gamma)^3} r_e(\mu_i(e))$ for all $e\in E$, $i\in[t(e)]$, and $\nu\in(\mu_{i+1},\mu_i)$. If this is not the case, then by using the fact that $r_e(\mu_{i+1}(e)) \approx_{(1+6\gamma)^2} r_e(\mu_i(e))$, we also get that $r_e(\mu_{i+1}) \not \approx_{1+6\gamma} r_e(\nu)$, and so we can again break the interval as before and obtain a stronger statement. Now, we look at the following integral: \[ \mathcal{E} := \int_{\nu=\mu}^{\mu'} \sum\limits_{e\in E} r_e(\nu) \left(\frac{df_e(\nu)}{d\nu}\right)^2 \left|d\nu\right|\,, \] where $df_e(\nu)$ is the differential of the flow $f_e(\nu)$ with respect to the centrality parameter. Similarly to Lemma~\ref{lem:energy_stability}, we use the following equation that describes how the flow changes: \[ \frac{d\boldsymbol{\mathit{f}}(\nu)}{d\nu} = -\frac{1}{\nu} \left(g(\boldsymbol{\mathit{s}}(\nu)) - (\boldsymbol{\mathit{R}}(\nu))^{-1}\boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu)^{-1})\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}(\nu))\right)\,.\] This implies that \begin{align*} \left\|\sqrt{\boldsymbol{\mathit{r}}(\nu)} \frac{d\boldsymbol{\mathit{f}}(\nu)}{d\nu}\right\|_2^2 & = \frac{1}{\nu^2} \left\|\sqrt{\boldsymbol{\mathit{r}}(\nu)}g(\boldsymbol{\mathit{s}}(\nu)) - (\boldsymbol{\mathit{R}}(\nu))^{-1/2}\boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu)^{-1})\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}(\nu))\right\|_2^2\\ & \leq \frac{1}{\nu^2} \left\|\left(I - (\boldsymbol{\mathit{R}}(\nu))^{-1/2}\boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}(\nu)^{-1})\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top(\boldsymbol{\mathit{R}}(\nu))^{-1/2}\right) \sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}}(\nu))\right\|_2^2\\ & \leq \frac{1}{\nu^2} \left\| \sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}}(\nu))\right\|_2^2\\ &\leq \frac{m}{\nu^2}\,, \end{align*} and so \begin{align} \mathcal{E} \leq \int_{\nu=\mu}^{\mu'} \frac{m}{\nu^2} \left|d\nu\right| = m \left(\frac{1}{\mu'} - \frac{1}{\mu}\right) \leq m \frac{1.1\delta k}{\mu} = 1.1 k \sqrt{m} / \mu\,. \label{eq:integral_upper_bound} \end{align} On the other hand, for any $e\in E$ and $i\in[t(e)]$ we have \begin{align*} \int_{\nu=\mu_i(e)}^{\mu_{i+1}(e)} r_e(\nu) \left(\frac{df_e(\nu)}{d\nu}\right)^2 \left|d\nu\right| & \geq \frac{r_e(\mu_i(e))}{(1+6\gamma)^3} \int_{\nu=\mu_i(e)}^{\mu_{i+1}(e)} \left(\frac{df_e(\nu)}{d\nu}\right)^2 \left|d\nu\right| \\ & \geq \frac{r_e(\mu_i(e))}{(1+6\gamma)^3} \frac{\left(\int_{\nu=\mu_i(e)}^{\mu_{i+1}(e)} \left|\frac{df_e(\nu)}{d\nu}\right| \left|d\nu\right|\right)^2} { \int_{\nu=\mu_i(e)}^{\mu_{i+1}(e)} \left|d\nu\right| }\\ & = \frac{r_e(\mu_i(e))}{(1+6\gamma)^3(\mu_i(e) - \mu_{i+1}(e))} \left(f(\mu_i(e)) - f(\mu_{i+1}(e))\right)^2\\ & \geq \frac{\gamma^2}{36(1+6\gamma)^3(\mu_i(e) - \mu_{i+1}(e))}\,, \end{align*} where we used the Cauchy-Schwarz inequality. 
Now, note that \begin{align*} \int_{\nu=\mu_1(e)}^{\mu_{t(e)+1}(e)} r_e(\nu) \left(\frac{df_e(\nu)}{d\nu}\right)^2 \left|d\nu\right| & \geq \sum\limits_{i=1}^{t(e)} \frac{\gamma^2}{(1+6\gamma)^3(\mu_i(e) - \mu_{i+1}(e))}\\ & \geq \frac{\gamma^2 (t(e))^2}{(1+6\gamma)^3(\mu - \mu')}\\ & \geq \frac{\gamma^2 (t(e))^2\sqrt{m}}{(1+6\gamma)^3 k\mu}\,, \end{align*} where remember that $t(e) = \mathrm{change}(e,\gamma)$ and we again used Cauchy-Schwarz. Summing this up for all $e\in E$ and combining with (\ref{eq:integral_upper_bound}), we get that $\sum\limits_{e\in E} (\mathrm{change}(e,\gamma))^2 \leq O(k^2 / \gamma^2)$. \end{proof} \begin{lemma}[Central path $\ell_\infty$ slack stability] Consider a minimum cost flow instance on a graph $G(V,E)$. For any $\mu > 0$ and $\mu' = \mu / (1+1/\sqrt{m})^k$ for some $k \in (0,\sqrt{m}/10)$, we have \begin{align*} \boldsymbol{\mathit{s}}(\mu') \approx_{3k^2} \boldsymbol{\mathit{s}}(\mu)\,. \end{align*} \label{lem:resistance_stability2} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:resistance_stability2}] By Lemma~\ref{lem:energy_stability}, for any $e\in E$ we have that \begin{align} \left(\frac{1}{s_e(\mu)^+\cdot s_e(\mu')^+} + \frac{1}{s_e(\mu)^-\cdot s_e(\mu')^-}\right) \left(f_e(\mu') - f_e(\mu)\right)^2 \leq 2k^2\,. \label{eq:edge_energy_stability} \end{align} If $s_e(\mu')^+ = (1+c)\cdot s_e(\mu)^+$ for some $c\geq 0$, then \[ (f_e(\mu') - f_e(\mu))^2 = c^2 (s_e(\mu)^+)^2\] and \[s_e(\mu)^+\cdot s_e(\mu')^+ = (1+c) (s_e(\mu)^+)^2\,,\] so by (\ref{eq:edge_energy_stability}) we have that $c \leq 3k^2$. Similarly, if $s_e(\mu')^+ = (1+c)^{-1} \cdot s_e(\mu)^+$ for some $c \geq 0$, then \[ (f_e(\mu') - f_e(\mu))^2 = c^2 (s_e(\mu')^+)^2\] and \[ s_e(\mu)^+\cdot s_e(\mu')^+ = (1+c) (s_e(\mu')^+)^2\,,\] so by (\ref{eq:edge_energy_stability}) we have that $c \leq 3k^2$. We have proved that $s_e(\mu')^+\approx_{3k^2} s_e(\mu)^+$ and by symmetry we also have $s_e(\mu')^- \approx_{3k^2} s_e(\mu)^-$. \end{proof} \subsection{Proof of Lemma~\ref{lem:approx_central}} \label{sec:proof_lem_approx_central} Our goal is to keep track of how close $\boldsymbol{\mathit{f}}^*$ remains to centrality (in $\ell_2$ norm) and how close $\boldsymbol{\mathit{f}}$ remains to $\boldsymbol{\mathit{f}}^*$ in $\ell_\infty$ norm. From these two we can conclude that at all times $\boldsymbol{\mathit{f}}$ is close in $\ell_\infty$ to the central flow. We first prove the following lemma, which bounds how the distance of $\boldsymbol{\mathit{f}}^*$ to centrality (measured in energy of the residual) degrades when taking a progress step. \begin{lemma} Let $\boldsymbol{\mathit{f}}^*$ be a flow with slacks $\boldsymbol{\mathit{s}}^*$ and resistances $\boldsymbol{\mathit{r}}^*$, and $\boldsymbol{\mathit{f}}$ be a flow with slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$, where $\boldsymbol{\mathit{s}}\approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^*$ for some $\varepsilon_{\mathrm{solve}} \in(0,0.1)$. 
We define $\boldsymbol{\mathit{f}}'^* = \boldsymbol{\mathit{f}}^* + \varepsilon_{\mathrm{step}}\boldsymbol{\mathit{\widetilde{f}}}^*$ for some $\varepsilon_{\mathrm{step}} \in(0,0.1)$ (and the new slacks $\boldsymbol{\mathit{s}}'^*$), where \begin{align} & \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}}(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}) \,, \label{eq:ipm_step} \end{align} $\delta = \frac{1}{\sqrt{m}}$, and $g(\boldsymbol{\mathit{s}}) := \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$. If we let $\boldsymbol{\mathit{h}} = \frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^{*-}}$ and $\boldsymbol{\mathit{h}}' = \frac{\boldsymbol{\mathit{c}}(1+\varepsilon_{\mathrm{step}}\delta)}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}'^{*-}}$ be the residuals of $\boldsymbol{\mathit{f}}^*$ and $\boldsymbol{\mathit{f}}'^*$ for some $\mu > 0$, then \[ \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}'\right\|_{\boldsymbol{\overline{\mathit{H}}}^+} \leq (1+\varepsilon_{\mathrm{step}}\delta) \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^+} + 5 \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{solve}} \cdot \varepsilon_{\mathrm{step}} + 2\left\|\frac{\boldsymbol{\mathit{r}}'^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2 \,, \] where $\boldsymbol{\mathit{\bar{r}}}$ are some arbitrary resistances and $\boldsymbol{\overline{\mathit{H}}} = \boldsymbol{\mathit{C}}^\top \overline{\boldsymbol{\mathit{R}}} \boldsymbol{\mathit{C}}$. \label{lem:ipm_step} \end{lemma} \begin{proof} Let $\boldsymbol{\mathit{\rho}}^+ = \varepsilon_{\mathrm{step}} \boldsymbol{\mathit{\widetilde{f}}}^* / \boldsymbol{\mathit{s}}^{*+}$ and $\boldsymbol{\mathit{\rho}}^- = -\varepsilon_{\mathrm{step}}\boldsymbol{\mathit{\widetilde{f}}}^* /\boldsymbol{\mathit{s}}^{*-}$. 
First of all, it is easy to see that \begin{align*} \left\|\boldsymbol{\mathit{\rho}}\right\|_2 &\leq \left\|\frac{\boldsymbol{\mathit{s}}}{\boldsymbol{\mathit{s}}^*}\right\|_\infty \left\|\frac{\boldsymbol{\mathit{s}}^*}{\boldsymbol{\mathit{s}}}\boldsymbol{\mathit{\rho}}\right\|_2\\ &\leq \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\left\|\boldsymbol{\mathit{\widetilde{f}}}^*\right\|_{\boldsymbol{\mathit{r}},2}\\ & = \varepsilon_{\mathrm{step}}\delta(1+\varepsilon_{\mathrm{solve}}) \left\|\sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}}) - \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right\|_{2}\\ & = \varepsilon_{\mathrm{step}}\delta (1+\varepsilon_{\mathrm{solve}})\left\|\left(\boldsymbol{\mathit{I}} - \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1}\boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top\boldsymbol{\mathit{R}}^{-1/2}\right) \sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}})\right\|_{2}\\ & \leq \varepsilon_{\mathrm{step}}\delta (1+\varepsilon_{\mathrm{solve}})\left\|\sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}})\right\|_{2}\\ & = \varepsilon_{\mathrm{step}}\delta (1+\varepsilon_{\mathrm{solve}})\left\|\frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\sqrt{\frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}}}\right\|_{2}\\ & \leq \varepsilon_{\mathrm{step}} \delta (1+\varepsilon_{\mathrm{solve}})\sqrt{m}\\ & = \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\,. \end{align*} We bound the energy to route the residual of $\boldsymbol{\mathit{f}}'^*$ as \begin{align*} & \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}'\right\|_{\boldsymbol{\overline{\mathit{H}}}^+} \\ & = \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \boldsymbol{\mathit{C}}^\top \left(\frac{\varepsilon_{\mathrm{step}}\delta \boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}'^{*-}} + \frac{1}{\boldsymbol{\mathit{s}}^{*-}}\right) \right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & = \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \boldsymbol{\mathit{C}}^\top \left(\frac{\varepsilon_{\mathrm{step}}\delta \boldsymbol{\mathit{c}}}{\mu} + \frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}'^{*-}} \right) \right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & = \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \boldsymbol{\mathit{C}}^\top \left(\frac{\varepsilon_{\mathrm{step}}\delta \boldsymbol{\mathit{c}}}{\mu} + \frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}^{*+}} - \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}^{*-}}\right) + \boldsymbol{\mathit{C}}^\top\left( \frac{(\boldsymbol{\mathit{\rho}}^+)^2}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{(\boldsymbol{\mathit{\rho}}^-)^2}{\boldsymbol{\mathit{s}}'^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\,. 
\end{align*} Now, using (\ref{eq:ipm_step}) we get that $\boldsymbol{\mathit{r}} \boldsymbol{\mathit{\widetilde{f}}}^* = \delta \boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{B}} (\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}})^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})$ and so $\boldsymbol{\mathit{C}}^\top \left(\boldsymbol{\mathit{r}} \boldsymbol{\mathit{\widetilde{f}}}^*\right) = \delta \boldsymbol{\mathit{C}}^\top \left(\boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}})\right)$, which follows by the fact that for any $i$, $\mathbf{1}_i^\top \boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{B}} = \left(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{C}} \mathbf{1}_i\right)^\top = \mathbf{0}$, since $\boldsymbol{\mathit{C}} \mathbf{1}_i$ is a circulation by definition of $\boldsymbol{\mathit{C}}$. As $\boldsymbol{\mathit{r}} \boldsymbol{\mathit{\widetilde{f}}}^* = \left(\frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}\right) \boldsymbol{\mathit{\widetilde{f}}}^* = \varepsilon_{\mathrm{step}}^{-1} \frac{\boldsymbol{\mathit{s}}^{*+}}{(\boldsymbol{\mathit{s}}^+)^2}\boldsymbol{\mathit{\rho}}^+ - \varepsilon_{\mathrm{step}}^{-1}\frac{\boldsymbol{\mathit{s}}^{*-}}{(\boldsymbol{\mathit{s}}^-)^2} \boldsymbol{\mathit{\rho}}^- $, we have $\varepsilon_{\mathrm{step}} \delta \boldsymbol{\mathit{C}}^\top \left(\boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}})\right) = \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{s}}^{*+}}{(\boldsymbol{\mathit{s}}^+)^2}\boldsymbol{\mathit{\rho}}^+ - \frac{\boldsymbol{\mathit{s}}^{*-}}{(\boldsymbol{\mathit{s}}^-)^2}\boldsymbol{\mathit{\rho}}^-\right)$ and so \begin{align*} & \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \boldsymbol{\mathit{C}}^\top \left(\frac{\varepsilon_{\mathrm{step}}\delta \boldsymbol{\mathit{c}}}{\mu} + \frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}^{*+}} - \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}^{*-}}\right) + \boldsymbol{\mathit{C}}^\top\left( \frac{(\boldsymbol{\mathit{\rho}}^+)^2}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{(\boldsymbol{\mathit{\rho}}^-)^2}{\boldsymbol{\mathit{s}}'^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & =\left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \boldsymbol{\mathit{C}}^\top \left( \frac{\varepsilon_{\mathrm{step}} \delta \boldsymbol{\mathit{c}}}{\mu} + \varepsilon_{\mathrm{step}}\delta \boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}}) -\frac{\boldsymbol{\mathit{s}}^{*+}}{(\boldsymbol{\mathit{s}}^{+})^2}\boldsymbol{\mathit{\rho}}^+ + \frac{\boldsymbol{\mathit{s}}^{*-}}{(\boldsymbol{\mathit{s}}^{-})^2} \boldsymbol{\mathit{\rho}}^- + \frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}^{*+}} - \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}^{*-}}\right) + \boldsymbol{\mathit{C}}^\top\left( \frac{(\boldsymbol{\mathit{\rho}}^+)^2}{\boldsymbol{\mathit{s}}'^{*+}} -\frac{(\boldsymbol{\mathit{\rho}}^-)^2}{\boldsymbol{\mathit{s}}'^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & = \Big\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}} + \varepsilon_{\mathrm{step}} \delta \boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}})\right) + \boldsymbol{\mathit{C}}^\top \left(\left( \mathbf{1} - \left(\frac{\boldsymbol{\mathit{s}}^{*+}}{\boldsymbol{\mathit{s}}^+}\right)^2\right) 
\frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}^{*+}} - \left(\mathbf{1} - \left(\frac{\boldsymbol{\mathit{s}}^{*-}}{\boldsymbol{\mathit{s}}^-}\right)^2\right) \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}^{*-}}\right)\\ & + \boldsymbol{\mathit{C}}^\top\left( \frac{(\boldsymbol{\mathit{\rho}}^+)^2}{\boldsymbol{\mathit{s}}'^{*+}} - \frac{(\boldsymbol{\mathit{\rho}}^-)^2}{\boldsymbol{\mathit{s}}'^{*-}}\right)\Big\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & \leq (1+\varepsilon_{\mathrm{step}}\delta)\left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + 5 \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{solve}} \cdot \varepsilon_{\mathrm{step}} + 2 \left\|\frac{\boldsymbol{\mathit{r}}'^{*}}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2 \end{align*} where we have used the triangle inequality, the fact that \begin{align*} & \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \boldsymbol{\mathit{r}} g(\boldsymbol{\mathit{s}})\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & = \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & \leq \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} +\varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^-} + \frac{1}{\boldsymbol{\mathit{s}}^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & \leq \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^{*-}}\right)\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} +\varepsilon_{\mathrm{step}}\delta \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \left\|\boldsymbol{\mathit{C}}^\top \left(\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^-} + \frac{1}{\boldsymbol{\mathit{s}}^{*-}}\right)\right\|_{(\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{R}}^* \boldsymbol{\mathit{C}})^{+}}\\ & \leq \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + \varepsilon_{\mathrm{step}}\delta \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \left\| \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^{*+}} - \frac{1}{\boldsymbol{\mathit{s}}^-} + \frac{1}{\boldsymbol{\mathit{s}}^{*-}}}{ \left(\frac{1}{(\boldsymbol{\mathit{s}}^{*+})^2} + \frac{1}{(\boldsymbol{\mathit{s}}^{*-})^2}\right)^{1/2}}\right\|_2\\ & \leq \varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + \varepsilon_{\mathrm{step}}\delta 
\left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2}\left( \left\| \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^{*+}}}{ \left(\frac{1}{(\boldsymbol{\mathit{s}}^{*+})^2}\right)^{1/2}}\right\|_2 + \left\| \frac{\frac{1}{\boldsymbol{\mathit{s}}^-} - \frac{1}{\boldsymbol{\mathit{s}}^{*-}}}{ \left(\frac{1}{(\boldsymbol{\mathit{s}}^{*-})^2}\right)^{1/2}}\right\|_2\right)\\ & =\varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + \varepsilon_{\mathrm{step}}\delta \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \left( \left\| \frac{\boldsymbol{\mathit{s}}^{*+}}{\boldsymbol{\mathit{s}}^{+}} - \mathbf{1}\right\|_2 + \left\| \frac{\boldsymbol{\mathit{s}}^{*-}}{\boldsymbol{\mathit{s}}^{-}} - \mathbf{1} \right\|_2\right)\\ & \leq\varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + \varepsilon_{\mathrm{step}}\delta \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} 2\varepsilon_{\mathrm{solve}}\sqrt{m} \text{ (as $\boldsymbol{\mathit{s}} \approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^*$) } \\ & =\varepsilon_{\mathrm{step}}\delta \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^{+}} + 2 \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}\varepsilon_{\mathrm{solve}} \end{align*} and similarly, using $\left\|1 - \left(\frac{\boldsymbol{\mathit{s}}^{*}}{\boldsymbol{\mathit{s}}}\right)^2\right\|_\infty \leq \varepsilon_{\mathrm{solve}}(2+\varepsilon_{\mathrm{solve}})$, the fact that \begin{align*} & \Big\|\boldsymbol{\mathit{C}}^\top \left(\left( \mathbf{1} - \left(\frac{\boldsymbol{\mathit{s}}^{*+}}{\boldsymbol{\mathit{s}}^+}\right)^2\right) \frac{\boldsymbol{\mathit{\rho}}^+}{\boldsymbol{\mathit{s}}^{*+}} - \left(\mathbf{1} - \left(\frac{\boldsymbol{\mathit{s}}^{*-}}{\boldsymbol{\mathit{s}}^-}\right)^2\right) \frac{\boldsymbol{\mathit{\rho}}^-}{\boldsymbol{\mathit{s}}^{*-}}\right) + \boldsymbol{\mathit{C}}^\top\left( \frac{(\boldsymbol{\mathit{\rho}}^+)^2}{\boldsymbol{\mathit{s}}'^{*+}} +\frac{(\boldsymbol{\mathit{\rho}}^-)^2}{\boldsymbol{\mathit{s}}'^{*-}}\right)\Big\|_{\boldsymbol{\overline{\mathit{H}}}^{+}}\\ & \leq \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{solve}}(2+\varepsilon_{\mathrm{solve}})\left\|\boldsymbol{\mathit{\rho}}\right\|_2 + \left\|\frac{\boldsymbol{\mathit{r}}'^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \left\|\boldsymbol{\mathit{\rho}}\right\|_4^2\\ & \leq \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}\varepsilon_{\mathrm{solve}}(2+\varepsilon_{\mathrm{solve}})(1+\varepsilon_{\mathrm{solve}}) + \left\|\frac{\boldsymbol{\mathit{r}}'^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2 (1+\varepsilon_{\mathrm{solve}})^2\\ & \leq 3 \left\|\frac{\boldsymbol{\mathit{r}}^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}} \varepsilon_{\mathrm{solve}} + 2 \left\|\frac{\boldsymbol{\mathit{r}}'^*}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2\,. 
\end{align*} \end{proof} We will also use the following lemma, which is standard~\cite{axiotis2020circulation}. \begin{lemma}[Small residual implies $\ell_\infty$ closeness] Given a flow $\boldsymbol{\mathit{f}} = \boldsymbol{\mathit{f}}^0 + \boldsymbol{\mathit{C}} \boldsymbol{\mathit{x}}$ with slacks $\boldsymbol{\mathit{s}}$ and resistances $\boldsymbol{\mathit{r}}$, if $\left\|\boldsymbol{\mathit{C}}^\top\left(\frac{\boldsymbol{\mathit{c}}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}\right)\right\|_{(\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{R}} \boldsymbol{\mathit{C}})^+} \leq 1/1000$ then $\boldsymbol{\mathit{f}}$ is $(\mu,1.01)$-central. \label{lem:small_residual_linfty} \end{lemma} Applying Lemma~\ref{lem:ipm_step} for $T=\frac{k}{\varepsilon_{\mathrm{step}}}$ iterations, we get the lemma below, which measures the closeness of $\boldsymbol{\mathit{f}}^*$ to the central path in $\ell_2$ after $T$ iterations. \begin{lemma}[Centrality of $\boldsymbol{\mathit{f}}^*$] Let $\boldsymbol{\mathit{f}}^{*1},\dots,\boldsymbol{\mathit{f}}^{*T+1}$ be flows with slacks $\boldsymbol{\mathit{s}}^{*1},\dots,\boldsymbol{\mathit{s}}^{*T+1}$ and resistances $\boldsymbol{\mathit{r}}^{*1},\dots,\boldsymbol{\mathit{r}}^{*T+1}$, and $\boldsymbol{\mathit{f}}^{1},\dots,\boldsymbol{\mathit{f}}^{T+1}$ be flows with slacks $\boldsymbol{\mathit{s}}^{1},\dots,\boldsymbol{\mathit{s}}^{T+1}$ and resistances $\boldsymbol{\mathit{r}}^{1},\dots,\boldsymbol{\mathit{r}}^{T+1}$, such that $\boldsymbol{\mathit{s}}^t \approx_{1+\varepsilon_{\mathrm{solve}}}\boldsymbol{\mathit{s}}^{*t}$ for all $t\in [T]$, where $T = \frac{k}{\varepsilon_{\mathrm{step}}}$ for some $k\leq \sqrt{m} / 10$, $\varepsilon_{\mathrm{step}}\in(0,0.1)$ and $\varepsilon_{\mathrm{solve}}\in(0,0.1)$. Additionally, we have that \begin{itemize} \item $\boldsymbol{\mathit{f}}^{*1}$ is $\mu$-central \item {For all $t\in[T]$, $\boldsymbol{\mathit{f}}^{*t+1} = \boldsymbol{\mathit{f}}^{*t} + \varepsilon_{\mathrm{step}} \cdot \boldsymbol{\mathit{\widetilde{f}}}^t$, where \[ \left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^{*t} - \boldsymbol{\mathit{\widetilde{f}}}^{t}\right) \right\|_\infty \leq \varepsilon\,, \] \[ \boldsymbol{\mathit{\widetilde{f}}}^{*t} = \delta g(\boldsymbol{\mathit{s}}^t) - \delta (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\left(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\right)^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^t) \] and $\delta = \frac{1}{\sqrt{m}}$. } \end{itemize} Then, $\boldsymbol{\mathit{f}}^{*T+1}$ is $(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{T},1.01)$-central, as long as we set $\varepsilon_{\mathrm{step}} \leq 10^{-5}k^{-3}$ and $\varepsilon_{\mathrm{solve}} \leq 10^{-5} k^{-3}$. \label{lem:small_residual} \end{lemma} \begin{proof} For all $t\in[T+1]$, we denote the residual of $\boldsymbol{\mathit{f}}^{*t}$ as $\boldsymbol{\mathit{h}}^t = \frac{\boldsymbol{\mathit{c}}(1+\varepsilon_{\mathrm{step}}\delta)^{t-1}}{\mu} + \frac{1}{\boldsymbol{\mathit{s}}^{+,*t}} - \frac{1}{\boldsymbol{\mathit{s}}^{-,*t}}$. Note that $\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}^1 = \mathbf{0}$ as $\boldsymbol{\mathit{f}}^{*1}$ is $\mu$-central. We assume that the statement of the lemma is not true, and let $\widehat{T}$ be the smallest $t\in [T+1]$ such that $\boldsymbol{\mathit{f}}^{*t}$ is not $(\mu / (1 + \varepsilon_{\mathrm{step}}\delta)^{t-1},1.01)$-central. 
Obviously $\widehat{T} > 1$. This means that $\boldsymbol{\mathit{f}}^{*t}$ is $(\mu / (1 + \varepsilon_{\mathrm{step}}\delta)^{t-1},1.01)$-central for all $t\in[\widehat{T}-1]$, i.e. $\boldsymbol{\mathit{s}}^{*t}\approx_{1.01} \boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{t-1}\right)$. Also, note that by Lemma~\ref{lem:resistance_stability2} about slack stability, and since $(1+\varepsilon_{\mathrm{step}}\delta)^{\left|\widehat{T}-t\right|} \leq (1+\delta)^{1.1k}$, we have $\boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{t-1}\right) \approx_{3.7k^2} \boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}-1}\right)$ for all $t\in[T+1]$. Additionally, note that, as shown in proof of Lemma~\ref{lem:ipm_step}, we have \[ \left\|\boldsymbol{\mathit{\widetilde{f}}}^{*\widehat{T}-1}\right\|_{\boldsymbol{\mathit{r}}^{\widehat{T}-1},\infty} \leq \left\|\boldsymbol{\mathit{\widetilde{f}}}^{*\widehat{T}-1}\right\|_{\boldsymbol{\mathit{r}}^{\widehat{T}-1},2} \leq 1 \,,\] so \begin{align*} \left\|\frac{\boldsymbol{\mathit{s}}^{*\widehat{T}}}{\boldsymbol{\mathit{s}}^{*\widehat{T}-1}} - \mathbf{1}\right\|_\infty & =\varepsilon_{\mathrm{step}}\left\|\frac{\boldsymbol{\mathit{\widetilde{f}}}^{\widehat{T}-1}}{\boldsymbol{\mathit{s}}^{*\widehat{T}-1}}\right\|_\infty\\ & \leq \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\left\|\sqrt{\boldsymbol{\mathit{r}}^{\widehat{T}-1}} \boldsymbol{\mathit{\widetilde{f}}}^{\widehat{T}-1}\right\|_\infty\\ & \leq \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\left(\left\|\sqrt{\boldsymbol{\mathit{r}}^{\widehat{T}-1}} \boldsymbol{\mathit{\widetilde{f}}}^{*\widehat{T}-1}\right\|_\infty + \varepsilon\right)\\ & \leq \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\left(1 + \varepsilon\right)\\ & \leq 1.3\varepsilon_{\mathrm{step}}\,. \end{align*} From this we conclude that $\boldsymbol{\mathit{s}}^{*\widehat{T}} \approx_{1+2.6\varepsilon_{\mathrm{step}}} \boldsymbol{\mathit{s}}^{*\widehat{T}-1}$, and from the previous discussion we get that \[\boldsymbol{\mathit{s}}^{*\widehat{T}} \approx_{1+2.6\varepsilon_{\mathrm{step}}} \boldsymbol{\mathit{s}}^{*\widehat{T}-1} \approx_{1.01} \boldsymbol{\mathit{s}}(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}-1}) \approx_{3.7k^2} \boldsymbol{\mathit{s}}(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{t-1}) \approx_{1.01} \boldsymbol{\mathit{s}}^{*t}\,,\] so $\boldsymbol{\mathit{s}}^{*\widehat{T}} \approx_{4k^2} \boldsymbol{\mathit{s}}^{*t}$ for all $t\in[\widehat{T}-1]$. 
On the other hand, if we apply Lemma~\ref{lem:ipm_step} $\widehat{T}-1$ times with $\boldsymbol{\mathit{\bar{r}}} = \boldsymbol{\mathit{r}}^{*\widehat{T}}$, we get \begin{align*} & \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}^{\widehat{T}}\right\|_{\left(\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{R}}^{\widehat{T}}\boldsymbol{\mathit{C}}\right)^+} \\ & =\left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}^{\widehat{T}}\right\|_{\boldsymbol{\overline{\mathit{H}}}^+} \\ & \leq (1+\varepsilon_{\mathrm{step}}\delta) \left\|\boldsymbol{\mathit{C}}^\top \boldsymbol{\mathit{h}}^{{\widehat{T}}-1}\right\|_{\boldsymbol{\overline{\mathit{H}}}^+} + 5 \left\|\frac{\boldsymbol{\mathit{r}}^{*{\widehat{T}}-1}}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}} \cdot \varepsilon_{\mathrm{solve}} + 2\left\|\frac{\boldsymbol{\mathit{r}}^{*\widehat{T}}}{\boldsymbol{\mathit{\bar{r}}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2\\ & \dots\\ & \leq 5 \sum\limits_{t=1}^{\widehat{T}-1} (1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}-t-1} \left\|\frac{\boldsymbol{\mathit{r}}^{*t}}{\boldsymbol{\mathit{r}}^{*\widehat{T}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}} \cdot \varepsilon_{\mathrm{solve}} + 2\sum\limits_{t=1}^{\widehat{T}-1} (1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}-t-1} \left\|\frac{\boldsymbol{\mathit{r}}^{*t+1}}{\boldsymbol{\mathit{r}}^{*\widehat{T}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2\\ & \leq 6 T \max_{t\in [\widehat{T}-1]} \left\|\frac{\boldsymbol{\mathit{r}}^{*t}}{\boldsymbol{\mathit{r}}^{*\widehat{T}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}} \cdot \varepsilon_{\mathrm{solve}} + 2.4 T \max_{t\in [\widehat{T}-1]} \left\|\frac{\boldsymbol{\mathit{r}}^{*t+1}}{\boldsymbol{\mathit{r}}^{*\widehat{T}}}\right\|_\infty^{1/2} \varepsilon_{\mathrm{step}}^2\\ & \leq 24 T k^2 \varepsilon_{\mathrm{step}} \cdot \varepsilon_{\mathrm{solve}} + 10 T k^2 \varepsilon_{\mathrm{step}}^2 \\ & = 24 k^3 \varepsilon_{\mathrm{solve}} + 10 k^3 \varepsilon_{\mathrm{step}}\\ & \leq 1/1000\,, \end{align*} where we used the fact that $(1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}} \leq e^{\varepsilon_{\mathrm{step}}\delta T} = e^{\delta k} \leq 1.2$ and our setting of $\varepsilon_{\mathrm{solve}} \leq 10^{-5}k^{-3}$ and $\varepsilon_{\mathrm{step}} \leq 10^{-5}k^{-3}$. By Lemma~\ref{lem:small_residual_linfty} this implies that $\boldsymbol{\mathit{f}}^{*\widehat{T}}$ is $(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{\widehat{T}-1},1.01)$-central, a contradiction. 
\end{proof} We are now ready to prove the following lemma, which is the goal of this section: \begin{replemma}{lem:approx_central} Let $\boldsymbol{\mathit{f}}^1,\dots,\boldsymbol{\mathit{f}}^{T+1}$ be flows with slacks $\boldsymbol{\mathit{s}}^t$ and resistances $\boldsymbol{\mathit{r}}^t$ for $t\in[T+1]$, where $T = \frac{k}{\varepsilon_{\mathrm{step}}}$ for some $k\leq \sqrt{m}/10$ and $\varepsilon_{\mathrm{step}} = 10^{-5}k^{-3}$, such that \begin{itemize} \item $\boldsymbol{\mathit{f}}^1$ is $(\mu,1+\varepsilon_{\mathrm{solve}}/8)$-central for $\varepsilon_{\mathrm{solve}} = 10^{-5} k^{-3}$ \item {For all $t\in[T]$, $\boldsymbol{\mathit{f}}^{t+1} = \begin{cases} \boldsymbol{\mathit{f}}(\mu) + \varepsilon_{\mathrm{step}} \sum\limits_{i=1}^{t} \boldsymbol{\mathit{\widetilde{f}}}^i & \text{if $\exists i\in[t]:\boldsymbol{\mathit{\widetilde{f}}}^{i} \neq \mathbf{0}$}\\ \boldsymbol{\mathit{f}}^1 & \text{otherwise} \end{cases}$, where \[ \boldsymbol{\mathit{\widetilde{f}}}^{*t} = \delta g(\boldsymbol{\mathit{s}}^t) - \delta (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\left(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}^t)^{-1} \boldsymbol{\mathit{B}}\right)^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^t) \] for $\delta = \frac{1}{\sqrt{m}}$ and \[ \left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^{*t} - \boldsymbol{\mathit{\widetilde{f}}}^{t}\right) \right\|_\infty \leq \varepsilon \] for $\varepsilon = 10^{-6} k^{-6}$. } \end{itemize} Then, setting $\varepsilon_{\mathrm{step}} = \varepsilon_{\mathrm{solve}} = 10^{-5} k^{-3}$ and $\varepsilon = 10^{-6} k^{-6}$, we get that $\boldsymbol{\mathit{s}}^{T+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}}\right)$. \end{replemma} \begin{proof} We set $\boldsymbol{\mathit{f}}^{*1} = \boldsymbol{\mathit{f}}(\mu)$ and for each $t\in[T]$, \[ \boldsymbol{\mathit{f}}^{*t+1} = \boldsymbol{\mathit{f}}^{*t} + \varepsilon_{\mathrm{step}} \boldsymbol{\mathit{\widetilde{f}}}^{*t} \,, \] and the corresponding slacks $\boldsymbol{\mathit{s}}^{*t}$ and resistances $\boldsymbol{\mathit{r}}^{*t}$. Let $\widehat{T}$ be the first $t\in[T+1]$ such that $\boldsymbol{\mathit{s}}^{\widehat{T}} \not\approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^{*\widehat{T}}$. Obviously $\widehat{T} > 1$, as $\boldsymbol{\mathit{s}}^1 \approx_{1+\varepsilon_{\mathrm{solve}}/8} \boldsymbol{\mathit{s}}(\mu) = \boldsymbol{\mathit{s}}^{*1}$. Now, for all $t\in[T]$ we have \[ \left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^{*t} - \boldsymbol{\mathit{\widetilde{f}}}^t\right)\right\|_\infty \leq \varepsilon \,. \] Fix some $e\in E$. If $\widetilde{f}_e^t = 0$ for all $t\in [\widehat{T}-1]$, then we have $\sqrt{r_e^{\widehat{T}}}\left|\widetilde{f}_e^{*t}\right| = \sqrt{r_e^t}\left|\widetilde{f}_e^{*t}\right| \leq \varepsilon$ for all such $t$. 
This means that \begin{align*} \sqrt{r_e^{\widehat{T}}} \left|f_e^{*\widehat{T}} - f_e^{\widehat{T}}\right| & \leq \sqrt{r_e^{\widehat{T}}} \left|f_e^{*\widehat{T}} - f_e^{*1}\right| + \sqrt{r_e^{\widehat{T}}} \left|f_e^{*1} - f_e^{\widehat{T}}\right|\\ & = \sqrt{r_e^{\widehat{T}}} \left|f_e^{*\widehat{T}} - f_e^{*1}\right| + \sqrt{r_e^{\widehat{T}}} \left|f_e^{*1} - f_e^{1}\right|\\ & \leq \varepsilon_{\mathrm{step}}\sqrt{r_e^{\widehat{T}}}\sum\limits_{t=1}^{\widehat{T}-1} \left|\widetilde{f}_e^{*t}\right| + \sqrt{r_e^{\widehat{T}}} \left|f_e^{*1} - f_e^{1}\right|\\ & \leq \widehat{T}\varepsilon_{\mathrm{step}}\varepsilon + \sqrt{r_e^{\widehat{T}}} \left|f_e^{*1} - f_e^{1}\right|\\ & \leq k \varepsilon + \sqrt{2} \varepsilon_{\mathrm{solve}}/8\\ & \leq \varepsilon_{\mathrm{solve}} / 2\,, \end{align*} as long as $\varepsilon \leq \varepsilon_{\mathrm{solve}} / (2k) = O(1/k^4)$. In the second to last inequality we used Lemma~\ref{lem:approx_conversion}. Otherwise, there exists $t\in[\widehat{T}-1]$ such that $\widetilde{f}_e^t \neq \mathbf{0}$, and by definition $f_e^{\widehat{T}} = f_e(\mu) + \varepsilon_{\mathrm{step}}\sum\limits_{t=1}^{\widehat{T}-1} \widetilde{f}_e^t$, so \begin{align*} \sqrt{r_e^{*\widehat{T}}}\left|f_e^{*\widehat{T}} - f_e^{\widehat{T}}\right| & \leq \sqrt{r_e^{*\widehat{T}}}\left|f_e^{*1} - f_e(\mu)\right| + \varepsilon_{\mathrm{step}}\sum\limits_{t=1}^{\widehat{T}-1}\sqrt{r_e^{*\widehat{T}}}\left|\widetilde{f}_e^{*t} - \widetilde{f}_e^t\right|\\ & \leq 3k^2 \varepsilon_{\mathrm{step}}\sum\limits_{t=1}^{\widehat{T}-1}\sqrt{r_e^{*t}} \left|\widetilde{f}_e^{*t} - \widetilde{f}_e^t\right|\\ & \leq 3k^2 \varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}})\sum\limits_{t=1}^{\widehat{T}-1}\sqrt{r_e^{t}}\left|\widetilde{f}_e^{*t} - \widetilde{f}_e^t\right|\\ & \leq 3k^2\varepsilon_{\mathrm{step}}(1+\varepsilon_{\mathrm{solve}}) T \varepsilon\\ & \leq4k^3\varepsilon\,, \end{align*} where we have used Lemma~\ref{lem:resistance_stability2} and the fact that $\boldsymbol{\mathit{s}}^t \approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^{*t}$ for all $t\in [\widehat{T}-1]$ which also implies that $\sqrt{\boldsymbol{\mathit{r}}^t}\approx_{1+\varepsilon_{\mathrm{solve}}} \sqrt{\boldsymbol{\mathit{r}}^{*t}}$. Setting $\varepsilon = \frac{\varepsilon_{\mathrm{solve}}}{8k^3} = O\left(\frac{1}{k^6}\right)$, this becomes $\leq \varepsilon_{\mathrm{solve}}/2$. Therefore we have proved that $\left\|\sqrt{\boldsymbol{\mathit{r}}^{*\widehat{T}}}\left(\boldsymbol{\mathit{f}}^{*\widehat{T}} - \boldsymbol{\mathit{f}}^{\widehat{T}}\right)\right\|_\infty \leq \varepsilon_{\mathrm{solve}}/2$, and so $\boldsymbol{\mathit{s}}^{*\widehat{T}} \approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^{\widehat{T}}$, a contradiction. Therefore we conclude that $\boldsymbol{\mathit{s}}^t \approx_{1+\varepsilon_{\mathrm{solve}}} \boldsymbol{\mathit{s}}^{*t}$ for all $t\in [T+1]$. Now, as long as $\varepsilon_{\mathrm{step}}, \varepsilon_{\mathrm{solve}} \leq 10^{-5} k^{-3}$, we can apply Lemma~\ref{lem:small_residual}, which guarantees that $\boldsymbol{\mathit{s}}^{*T+1} \approx_{1.01} \boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{T}\right)$, and so $\boldsymbol{\mathit{s}}^{T+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{T}\right)$. 
Therefore we set $\varepsilon_{\mathrm{step}} = \varepsilon_{\mathrm{solve}} = 10^{-5} k^{-3}$ and $\varepsilon = 10^{-6} k^{-6} \leq \frac{\varepsilon_{\mathrm{solve}}}{8k^3}$. \end{proof} \subsection{Proof of Lemma~\ref{lem:multistep}} \label{proof_lem_multistep} \begin{proof} We will apply Lemma~\ref{lem:approx_central} with $\boldsymbol{\mathit{f}}^1$ being the flow corresponding to the resistances $\mathcal{L}.\boldsymbol{\mathit{r}} = \mathcal{C}.\boldsymbol{\mathit{r}}$, and $T = k\varepsilon_{\mathrm{step}}^{-1}$. Note that it is important to maintain the invariant $\mathcal{L}.\boldsymbol{\mathit{r}} = \mathcal{C}.\boldsymbol{\mathit{r}}$ throughout the algorithm so that both data structures correspond to the same electrical flow problem. For each $t\in [T]$, i.e. for the $t$-th iteration, Lemma~\ref{lem:approx_central} requires an estimate $\boldsymbol{\mathit{\widetilde{f}}}^t$ such that $\left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^{*t} - \boldsymbol{\mathit{\widetilde{f}}}^t\right)\right\|_\infty \leq \varepsilon$, where \[ \boldsymbol{\mathit{\widetilde{f}}}^{*t} = \delta g(\boldsymbol{\mathit{s}}^t) - \delta (\boldsymbol{\mathit{R}}^t)^{-1}\boldsymbol{\mathit{B}} \left(\boldsymbol{\mathit{B}}^\top (\boldsymbol{\mathit{R}}^t)^{-1}\boldsymbol{\mathit{B}}\right)^+\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}^t) \] and $\delta = 1/\sqrt{m}$. We claim that such an estimate can be computed for all $t$ by using $\mathcal{L}$ and $\mathcal{C}$. We apply the following process for each $t\in[T]$: \begin{itemize} \item{Let $Z$ be the edge set returned by $\mathcal{L}.\textsc{Solve}()$.} \item{Call $\mathcal{C}.\textsc{Check}(e)$ for each $e\in Z$ to obtain flow values $\widetilde{f}_e^t$.} \item{Compute $\boldsymbol{\mathit{f}}^{t+1}$ and its slacks $\boldsymbol{\mathit{s}}^{t+1}$ and resistances $\boldsymbol{\mathit{r}}^{t+1}$ as in Lemma~\ref{lem:approx_central}, i.e. \[ f_e^{t+1} = \begin{cases} f_e(\mu) + \varepsilon_{\mathrm{step}} \sum\limits_{i=1}^{t} \widetilde{f}_e^i & \text{if $\exists i\in[t]:\widetilde{f}_e^{i} \neq 0$}\\ f_e^1 & \text{otherwise} \end{cases}\,. \] This can be computed in $O\left(\left|Z\right|\right)$ time by adding either $\varepsilon_{\mathrm{step}}\widetilde{f}_e^t$ or, if $t$ is the first index for which $\widetilde{f}_e^{t} \neq 0$, $f_e(\mu) - f_e^1 + \varepsilon_{\mathrm{step}}\widetilde{f}_e^t$ to $f_e^t$ for each $e\in Z$. } \item{Call $\mathcal{L}.\textsc{Update}(e,\boldsymbol{\mathit{f}}^{t+1})$ and $\mathcal{C}.\textsc{Update}(e,\boldsymbol{\mathit{f}}^{t+1})$ for all $e$ in the support of $\boldsymbol{\mathit{\widetilde{f}}}^t$. Note that $\mathcal{L}.\textsc{Update}$ works as long as \[ r_e^{\max} / \alpha \leq r_e^{t+1} \leq \alpha \cdot r_e^{\min} \,,\] where $r_e^{\max}$, $r_e^{\min}$ are the maximum and minimum values of $\mathcal{L}.r_e$ since the last call to $\mathcal{L}.\textsc{BatchUpdate}$. After this, we have $\mathcal{L}.\boldsymbol{\mathit{r}} = \mathcal{C}.\boldsymbol{\mathit{r}} = \boldsymbol{\mathit{r}}^{t+1}$.} \end{itemize} In the above process, when $\mathcal{L}.\textsc{Solve}()$ is called we have $\mathcal{L}.\boldsymbol{\mathit{r}} = \boldsymbol{\mathit{r}}^t$ (for $t=1$ this is true because $\mathcal{L}.\boldsymbol{\mathit{r}}$ are the resistances corresponding to $\boldsymbol{\mathit{f}}^1$). By the $(\alpha,\beta,\varepsilon/2)$-\textsc{Locator} guarantees in Definition~\ref{def:locator}, with high probability $Z$ contains all the edges $e$ such that $\sqrt{r_e^t} \left|\widetilde{f}_e^{*t} \right| \geq \varepsilon / 2$.
Now, for each $e\in Z$, $\mathcal{C}.\textsc{Check}(e)$ returns a flow value $\widetilde{f}_e^t$ such that: \begin{itemize} \item $\sqrt{r_e^t} \left|\widetilde{f}_e^t - \widetilde{f}_e^{*t}\right| \leq \varepsilon$ \item if $\sqrt{r_e^t} \left|\widetilde{f}_e^{*t}\right| < \varepsilon / 2$, then $\widetilde{f}_e^t = 0$. \end{itemize} For $e\notin Z$ we leave $\widetilde{f}_e^t = 0$, and with high probability such edges satisfy $\sqrt{r_e^t} \left|\widetilde{f}_e^{*t}\right| < \varepsilon / 2$. Therefore, the condition that \[ \left\|\sqrt{\boldsymbol{\mathit{r}}^t}\left(\boldsymbol{\mathit{\widetilde{f}}}^t - \boldsymbol{\mathit{\widetilde{f}}}^{*t}\right)\right\|_\infty \leq \varepsilon \] is satisfied. Additionally $\boldsymbol{\mathit{\widetilde{f}}}^t$ is independent of the randomness of $\mathcal{L}$, because (the distribution of) $\boldsymbol{\mathit{\widetilde{f}}}^t$ would be the same if $\mathcal{C}.\textsc{Check}$ was run for {\bf all} edges $e$. It remains to show that the $\textsc{Locator}$ requirement \[ \boldsymbol{\mathit{r}}^{\max} / \alpha \leq \boldsymbol{\mathit{r}}^{t+1} \leq \alpha \cdot \boldsymbol{\mathit{r}}^{\min} \] is satisfied. Consider the minimum value of $t$ for which this is not satisfied. By Lemma~\ref{lem:approx_central}, we have that \begin{align} \boldsymbol{\mathit{s}}^{\tau+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{\tau}\right) \label{eq:slack_1.1_approx} \end{align} for any $\tau\in [t]$. Now let $\boldsymbol{\mathit{\widehat{r}}}$ be the resistances of $\mathcal{L}$ at any point since the last call to $\mathcal{L}.\textsc{BatchUpdate}$. By the lemma statement and (\ref{eq:slack_1.1_approx}), we know that $\boldsymbol{\mathit{\widehat{r}}} \approx_{1.1^2} \boldsymbol{\mathit{r}}(\hat{\mu})$ for some $\hat{\mu}\in[\mu / (1+\varepsilon_{\mathrm{step}}\delta)^t, \mu^0]$. However, we also know that $\mu^0 \leq \mu \cdot (1+\varepsilon_{\mathrm{step}}\delta)^{(0.5\alpha^{1/4} - k)\varepsilon_{\mathrm{step}}^{-1}}$ and so \[ \frac{\hat{\mu}}{\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{t}} \leq (1 + \varepsilon_{\mathrm{step}}\delta)^{0.5\alpha^{1/4}\varepsilon_{\mathrm{step}}^{-1}} \leq (1 + \delta)^{0.5\alpha^{1/4}} \,, \] so by Lemma~\ref{lem:resistance_stability2} we have \[ \boldsymbol{\mathit{s}}\left(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^t\right) \approx_{0.75\alpha^{1/2}} \boldsymbol{\mathit{s}}\left(\hat{\mu}\right)\,. \] As $\boldsymbol{\mathit{s}}^{t+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu/(1+\varepsilon_{\mathrm{step}}\delta)^{t}\right)$, we have that $\boldsymbol{\mathit{s}}^{t+1} \approx_{0.825\alpha^{1/2}} \boldsymbol{\mathit{s}}(\hat{\mu})$, and so \[ \boldsymbol{\mathit{r}}^{t+1} \approx_{0.825\alpha} \boldsymbol{\mathit{r}}(\hat{\mu}) \approx_{1.1^2} \boldsymbol{\mathit{\widehat{r}}} \,.\] This means that $\boldsymbol{\mathit{r}}^{t+1} \approx_{\alpha} \boldsymbol{\mathit{\widehat{r}}}$, which is a contradiction. We conclude that the requirements of $\mathcal{L}$ are met for all $t$, and as a result Lemma~\ref{lem:approx_central} shows that $\boldsymbol{\mathit{s}}^{T+1} \approx_{1.1} \boldsymbol{\mathit{s}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}}\right)$. By Lemma~\ref{lem:recenter}, we can now obtain $\boldsymbol{\mathit{f}}\left(\mu / (1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}}\right)$. Finally, we return $\mathcal{L}.\boldsymbol{\mathit{r}}$ and $\mathcal{C}$ to their original states. \paragraph{Success probability.} We note that all the outputs of $\mathcal{C}$ are independent of the randomness of $\mathcal{L}$, and $\mathcal{L}$ is only updated based on these outputs.
As each operation of $\mathcal{L}$ succeeds with high probability, the whole process succeeds with high probability as well. \paragraph{Runtime.} The recentering operation in Lemma~\ref{lem:recenter} takes $\tO{m}$ time. Additionally, we call $\mathcal{L}.\textsc{Solve}$ $k\varepsilon_{\mathrm{step}}^{-1} = O(k^4)$ times and, as $|Z| = O(1/\varepsilon^2)$, the total number of times $\mathcal{L}.\textsc{Update}$, $\mathcal{C}.\textsc{Update}$, and $\mathcal{C}.\textsc{Check}$ are called is $O(k\varepsilon_{\mathrm{step}}^{-1} \varepsilon^{-2}) = O(k^{16})$. \end{proof} \subsection{Proof of Lemma~\ref{lem:mincostflow}} \label{proof_lem_mincostflow} \begin{proof} Let $\delta = 1/\sqrt{m}$. Over a number of $T=\tO{m^{1/2}/k}$ iterations, we will repeatedly apply $\textsc{MultiStep}$ (Lemma~\ref{lem:multistep}). We will also replace the oracle from Definition~\ref{def:perfect_checker} by the $\textsc{Checker}$ data structure in Section~\ref{sec:checker}. \paragraph{Initialization.} We first initialize the $\textsc{Locator}$ with error $\varepsilon / 2$, by calling $\mathcal{L}.\textsc{Initialize}(\boldsymbol{\mathit{f}})$. Let $\boldsymbol{\mathit{s}}^t$ be the slacks $\mathcal{L}.\boldsymbol{\mathit{s}}$ before the $t$-th iteration and $\boldsymbol{\mathit{r}}^t$ the corresponding resistances, and $\boldsymbol{\mathit{s}}^{0t}$ be the slacks $\mathcal{L}.\boldsymbol{\mathit{s}}^0$ before the $t$-th iteration and $\boldsymbol{\mathit{r}}^{0t}$ the corresponding resistances, for $t\in[T]$. Also, we let $\mu_t = \mu / \left(1 + \varepsilon_{\mathrm{step}}\delta\right)^{(t-1)k\varepsilon_{\mathrm{step}}^{-1}}$. We will maintain the invariant that $\boldsymbol{\mathit{s}}^t \approx_{1+\varepsilon_{\mathrm{solve}}/8} \boldsymbol{\mathit{s}}\left(\mu_t\right)$, which is a requirement in order to apply Lemma~\ref{lem:multistep}. As in~\cite{gao2021fully}, we will also need to maintain $O(k^4)$ $\textsc{Checker}$s $\mathcal{C}^i$ for $i\in[O(k^4)]$, so we call $\mathcal{C}^i.\textsc{Initialize}(\boldsymbol{\mathit{f}}, \varepsilon, \beta_{\textsc{Checker}})$ for each one of these. Note that in general $\beta_{\textsc{Checker}} \neq \beta$, as the vertex sparsifiers $\mathcal{L}$ and $\mathcal{C}^i$ will not be on the same vertex set. As in Lemma~\ref{lem:multistep}, we will maintain the invariant that $\mathcal{L}.\boldsymbol{\mathit{r}} = \mathcal{C}^i.\boldsymbol{\mathit{r}}$ for all $i$. \paragraph{Resistance updates.} Assuming that all the requirements of Lemma~\ref{lem:multistep} (\textsc{MultiStep}) are satisfied at the $t$-th iteration, that lemma computes a flow $\boldsymbol{\mathit{\bar{f}}} = \boldsymbol{\mathit{f}}\left(\mu_{t+1}\right)$ with slacks $\boldsymbol{\bar{\mathit{s}}}$. In order to guarantee that $\boldsymbol{\mathit{s}}^{t+1} \approx_{1+\varepsilon_{\mathrm{solve}}/8} \boldsymbol{\mathit{s}}\left(\mu_{t+1}\right)$, we let $Z$ be the set of edges such that either \[ s_e^{+,t} \not\approx_{1+\varepsilon_{\mathrm{solve}}/8} \bar{s}_e^+ \text{ or } s_e^{-,t} \not\approx_{1+\varepsilon_{\mathrm{solve}}/8} \bar{s}_e^- \] and then call $\mathcal{L}.\textsc{Update}(e,\boldsymbol{\mathit{\bar{f}}})$ for all $e\in Z$. This guarantees that $s_e^{+,t+1} = \bar{s}_e^+$ and $s_e^{-,t+1} = \bar{s}_e^-$ for all $e\in Z$ and so $\boldsymbol{\mathit{s}}^{t+1}\approx_{1+\varepsilon_{\mathrm{solve}}/8} \boldsymbol{\bar{\mathit{s}}} = \boldsymbol{\mathit{s}}\left(\mu_{t+1}\right)$.
We also apply the same updates to the $\mathcal{C}^i$'s using $\mathcal{C}^i.\textsc{Update}$, in order to ensure that they have the same resistances as $\mathcal{L}$. \paragraph{Batched resistance updates.} The number of times $\mathcal{L}.\textsc{Update}$ is called can be quite large, as error slowly accumulates on many edges. This is because in general $\Omega(m)$ resistances will be updated throughout the algorithm. As \textsc{Locator}.\textsc{Update} is only slightly sublinear, this would lead to an $\Omega(m^{3/2})$-time algorithm. For this reason, as in~\cite{gao2021fully}, we occasionally (every $\widehat{T}$ iterations for some $\widehat{T}\geq 1$ to be defined later) perform batched updates by calling $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{\bar{f}}})$, where $Z$ is the set of edges such that either \[ s_e^{+,t} \not\approx_{1+\varepsilon_{\mathrm{solve}}/16} \bar{s}_e^+ \text{ or } s_e^{-,t} \not\approx_{1+\varepsilon_{\mathrm{solve}}/16} \bar{s}_e^- \,. \] This again guarantees that $s_e^{+,t+1} = \bar{s}_e^+$ and $s_e^{-,t+1} = \bar{s}_e^-$ for all $e\in Z$ and so $\boldsymbol{\mathit{s}}^{t+1}\approx_{1+\varepsilon_{\mathrm{solve}}/16} \boldsymbol{\bar{\mathit{s}}} = \boldsymbol{\mathit{s}}\left(\mu_{t+1}\right)$. Note that after updating $\mathcal{L}.\boldsymbol{\mathit{s}}$ and $\mathcal{L}.\boldsymbol{\mathit{r}}$, this operation also sets $\mathcal{L}.\boldsymbol{\mathit{r}}^0 = \mathcal{L}.\boldsymbol{\mathit{r}}$. We perform the same resistance updates to the $\mathcal{C}^i$'s in the regular (i.e. not batched) way, using $\mathcal{C}^i.\textsc{Update}$. \paragraph{\textsc{Locator} requirements.} What is left is to ensure that the requirements of Lemma~\ref{lem:multistep} are satisfied at the $t$-th iteration, as well as that the requirements of $\mathcal{L}$.\textsc{Update} and $\mathcal{L}$.\textsc{BatchUpdate} from Definition~\ref{def:locator} are satisfied. The requirements are as follows: \begin{enumerate} \item{Lemma~\ref{lem:multistep}: $\boldsymbol{\mathit{s}}^{0t} \approx_{1+\varepsilon_{\mathrm{solve}}/8} \boldsymbol{\mathit{s}}\left(\mu^0\right)$ for some $\mu^0 \leq \mu_t\cdot(1+\varepsilon_{\mathrm{step}}\delta)^{(0.5\alpha^{1/4} - k)\varepsilon_{\mathrm{step}}^{-1}}$. Note that $\mathcal{L}.\boldsymbol{\mathit{s}}^0$ is updated every time $\mathcal{L}.\textsc{BatchUpdate}$ is called, and after the call we have $\mathcal{L}.\boldsymbol{\mathit{s}}^0 = \mathcal{L}.\boldsymbol{\mathit{s}} \approx_{1+\varepsilon_{\mathrm{solve}}/16} \boldsymbol{\mathit{s}}(\mu^0)$ for some $\mu^0 > 0$. To ensure that it is called often enough, we call $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$ every $(0.5\alpha^{1/4} / k - 1)$ iterations. Since each iteration decreases the centrality parameter by a factor of $(1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}}$, this gives $\mu^0 \leq \mu_t \cdot \left(1+\varepsilon_{\mathrm{step}}\delta\right)^{(0.5\alpha^{1/4}/k - 1)\cdot k\varepsilon_{\mathrm{step}}^{-1}} = \mu_t \cdot \left(1+\varepsilon_{\mathrm{step}}\delta\right)^{(0.5\alpha^{1/4} - k)\varepsilon_{\mathrm{step}}^{-1}}$. Additionally, for any resistances $\boldsymbol{\mathit{\widehat{r}}}$ that $\mathcal{L}$ had at any point since the last call to $\mathcal{L}.\textsc{BatchUpdate}$, it is immediate that \[ \boldsymbol{\mathit{\widehat{r}}} \approx_{(1+\varepsilon_{\mathrm{solve}}/8)^2} \boldsymbol{\mathit{r}}(\hat{\mu})\] for some $\hat{\mu}\in[\mu_t, \mu^0]$, as this is exactly the invariant that our calls to $\mathcal{L}.\textsc{Update}$ maintain.
Therefore, $\boldsymbol{\mathit{\widehat{r}}} \approx_{1.1^2} \boldsymbol{\mathit{r}}(\hat{\mu})$. } \item{$\mathcal{L}$.\textsc{Update}: $r_e^{\max} / \alpha \leq r_e^{t+1} \leq \alpha \cdot r_e^{\min}$, where $r_e^{\min}, r_e^{\max}$ are the minimum and maximum values that $\mathcal{L}.r_e$ has had since the last call to $\mathcal{L}.\textsc{BatchUpdate}$. Let $\boldsymbol{\mathit{\widehat{r}}}$ be any value of $\mathcal{L}.\boldsymbol{\mathit{r}}$ since the last call to $\mathcal{L}.\textsc{BatchUpdate}$. Because of the invariant maintained by resistance updates (including inside $\textsc{MultiStep}$), we have that $\boldsymbol{\mathit{\widehat{r}}}$ are $(\hat{\mu},1.1)$-central resistances for some $\hat{\mu}$ such that \[ \mu_{t+1} \leq \hat{\mu} \leq \mu_{t+1} \cdot \left(1+\varepsilon_{\mathrm{step}}\delta\right)^{(0.5\alpha^{1/4} - k)\varepsilon_{\mathrm{step}}^{-1}} \leq \mu_{t+1} \cdot (1+\delta)^{0.5\alpha^{1/4}} \,.\] As in the previous item, we have that the slacks $\boldsymbol{\mathit{s}}^{0t}$ are $(\mu^0, 1+\varepsilon_{\mathrm{solve}}/8)$-central. By Lemma~\ref{lem:resistance_stability2} this implies \[ \boldsymbol{\mathit{s}}(\mu_{t+1}) \approx_{0.75\alpha^{1/2}} \boldsymbol{\mathit{s}}(\hat{\mu})\,,\] and since $\boldsymbol{\mathit{\widehat{r}}} \approx_{1.1^2} \boldsymbol{\mathit{r}}(\hat{\mu})$, we conclude that $\boldsymbol{\mathit{r}}(\mu_{t+1}) \approx_{\alpha} \boldsymbol{\mathit{\widehat{r}}}$. } \item{$\mathcal{L}$.\textsc{BatchUpdate}: Between any two successive calls to $\mathcal{L}.\textsc{Initialize}$, the number of edges updated (number of calls to $\mathcal{L}.\textsc{Update}$ plus the sum of $|Z|$ for all calls to $\mathcal{L}.\textsc{BatchUpdate}$) is $O(\beta m)$. We make sure that this is satisfied by calling $\mathcal{L}.\textsc{Initialize}(\boldsymbol{\mathit{\bar{f}}})$ every $\varepsilon_{\mathrm{solve}} \sqrt{\beta m} / k$ iterations, where $\boldsymbol{\mathit{\bar{f}}} = \boldsymbol{\mathit{f}}(\mu_t)$, at the beginning of the $t$-th iteration. Consider any two successive initializations at iterations $t^{init}$ and $t^{end}$ respectively. Let $\ell$ be the number of edges $e$ that have potentially been updated, i.e. such that either \[ s_e(\mu_{t^{init}})^{+} \not\approx_{1+\varepsilon_{\mathrm{solve}}/16}s_e(\mu_{i})^{+} \text{ or } s_e(\mu_{t^{init}})^- \not\approx_{1+\varepsilon_{\mathrm{solve}}/16}s_e(\mu_{i})^- \] for some $i\in\left[t^{init},t^{end}\right]$. First, note that this implies that \[ \sqrt{r_e(\mu_{t^{init}})}\left|f_e(\mu_{t^{init}}) - f_e(\mu_i)\right| > \frac{\varepsilon_{\mathrm{solve}} / 16}{1+\varepsilon_{\mathrm{solve}}/16} > \varepsilon_{\mathrm{solve}} / 17 \,. \] Now, by the fact that $t^{end} - t^{init} \leq \varepsilon_{\mathrm{solve}} \sqrt{\beta m} / k$, we have that \[ \mu_{t^{init}} \leq \mu_i \cdot (1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}\cdot \varepsilon_{\mathrm{solve}}\sqrt{\beta m} / k} \leq \mu_i \cdot (1+\delta)^{\varepsilon_{\mathrm{solve}}\sqrt{\beta m}} \,.\] By applying Lemma~\ref{lem:resistance_stability1} with $k=\varepsilon_{\mathrm{solve}}\sqrt{\beta m}$ and $\gamma = \varepsilon_{\mathrm{solve}}/17$, we get that $\ell \leq O(\beta m)$. Therefore the statement follows. } \end{enumerate} We will also need to show how to implement the $\textsc{PerfectChecker}$ used in $\textsc{MultiStep}$ using the $\mathcal{C}^i$'s, as well as how to satisfy all $\textsc{Checker}$ requirements.
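Before moving on to the $\textsc{Checker}$ requirements, we make the arithmetic behind the last bound explicit. Assuming Lemma~\ref{lem:resistance_stability1} bounds the number of affected edges by $O(k^2/\gamma^2)$ when $\mu$ changes by a factor of $(1+\delta)^{k}$ and the congestion threshold is $\gamma$ (the form consistent with both applications of that lemma in this proof), the above choice of parameters gives \[ \ell \leq O\left(\frac{\left(\varepsilon_{\mathrm{solve}}\sqrt{\beta m}\right)^2}{\left(\varepsilon_{\mathrm{solve}}/17\right)^2}\right) = O(\beta m)\,. \]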
\paragraph{\textsc{Checker} requirements.} \begin{enumerate} \item{Implementing $\varepsilon$-$\textsc{PerfectChecker}$ inside $\textsc{MultiStep}$. We follow almost the same procedure as in~\cite{gao2021fully}, other than the fact that we also need to provide some additional information to $\mathcal{C}^i.\textsc{Solve}$. Each call to $\textsc{PerfectChecker}.\textsc{Update}$ translates to calls to $\mathcal{C}^i.\textsc{TemporaryUpdate}$ for all $i$. In addition, the $i$-th batch of calls to $\textsc{PerfectChecker}.\textsc{Check}$ inside $\textsc{MultiStep}$ (i.e. that corresponding to a single set of edges returned by $\mathcal{L}.\textsc{Solve}$) is only run on $\mathcal{C}^i$ using $\mathcal{C}^i.\textsc{Check}$. As each call to $\mathcal{C}^i.\textsc{Check}$ is independent of previous calls to it, we can get correct outputs with high probability even when we run it multiple times (one for each edge returned by $\mathcal{L}.\textsc{Solve}$). In order to guarantee that we have a vector $\boldsymbol{\mathit{\pi}}_{old}^i$ as required by $\mathcal{C}^i.\textsc{Check}$, once every $k^4$ calls to $\textsc{MultiStep}$ (i.e. if $t$ is a multiple of $k^4$) we compute \[ \boldsymbol{\mathit{\pi}}_{old}^{i} = \boldsymbol{\mathit{\pi}}^{C^{i,t}}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}(\mu_t))\right) \] for all $i\in[O(k^4)]$, where $C^{i,t}$ is the vertex set of the Schur complement data structure stored internally by the $\mathcal{C}^i$ right before the $t$-th call to $\textsc{MultiStep}$. This can be computed in $\tO{m}$ for each $i$ as in $\textsc{DemandProjector}.\textsc{Initialize}$ in Lemma~\ref{lem:ds}. Now, the total number of $\mathcal{C}^i.\textsc{TemporaryUpdate}$s that have not been rolled back is $O(k^{16})$, and the total number of $\mathcal{C}^i.\textsc{Update}$s over $k^4$ calls to $\textsc{MultiStep}$ by Lemma~\ref{lem:resistance_stability1} is $O(k^{10} / \varepsilon_{\mathrm{solve}}^2) = O(k^{16})$. This means that the total number of terminal insertions to $C^{i,t}$ as well as resistance changes is $O(k^{16})$. By Lemma~\ref{lem:old_projection_approximate}, if $C^i$ is the current state of the vertex set of the Schur complement of $\mathcal{C}^i$ and $\boldsymbol{\mathit{s}}$ are the current slacks, \[ \mathcal{E}_{\boldsymbol{\mathit{r}}} \left(\boldsymbol{\mathit{\pi}}_{old}^i - \boldsymbol{\mathit{\pi}}^{C^i}(g(\boldsymbol{\mathit{s}}))\right) \leq \tO{\alpha' \beta_{\textsc{Checker}}^{-4}} \cdot k^{32}\,, \] where $\alpha'$ is the largest possible multiplicative change of some $\mathcal{C}^i.r_e$ since the computation of $\boldsymbol{\mathit{\pi}}_{old}^i$. Furthermore, note that $\boldsymbol{\mathit{\pi}}_{old}$ is supported on $C^i$. This is because $C^{i,t} \subseteq C^i$ and $C^{i,t}$ does not contain any temporary terminals. Now, as we have already proved in Lemma~\ref{lem:multistep}, at any point inside the $t$-th call to $\textsc{MultiStep}$, $\mathcal{C}^i.\boldsymbol{\mathit{r}}$ are $(\hat{\mu},1.1)$-central resistances for some $\hat{\mu}\in[\mu_{t+1}, \mu_t]$. Fix $\hat{\mu}\in[\mu_{t+1},\mu_t], \hat{\mu}'\in[\mu_{t'+1},\mu_{t'}]$, as well as the corresponding resistances of $\mathcal{C}^i$, $\boldsymbol{\mathit{\widehat{r}}}$, $\boldsymbol{\mathit{\widehat{r}}}'$, where $t' \geq t$. 
Now, note that since we are computing $\boldsymbol{\mathit{\pi}}_{old}^i$ every $k^4$ calls to $\textsc{MultiStep}$, we have that \[ \frac{\hat{\mu}}{\hat{\mu}'} \leq \frac{\mu_t}{\mu_{t'+1}} \leq (1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}\cdot (t'-t + 1)} \leq (1+\delta)^{O(k^5)}\,,\] so Lemma~\ref{lem:resistance_stability2} implies that \[ \boldsymbol{\mathit{s}}(\hat{\mu}) \approx_{O(k^{10})} \boldsymbol{\mathit{s}}(\hat{\mu}')\,. \] As $\boldsymbol{\mathit{\widehat{r}}} \approx_{1.1^2} \boldsymbol{\mathit{r}}(\hat{\mu})$ and $\boldsymbol{\mathit{\widehat{r}}}' \approx_{1.1^2} \boldsymbol{\mathit{r}}(\hat{\mu}')$, we get that $\boldsymbol{\mathit{\widehat{r}}} \approx_{O(k^{20})} \boldsymbol{\mathit{\widehat{r}}}'$. Therefore, $\alpha' \leq O(k^{20})$. Setting $\beta_{\textsc{Checker}} \geq \tOm{\alpha'^{1/4} k^{8} \varepsilon^{-1/2} m^{-1/4}} = \tOm{k^{16} / m^{1/4}}$, we get that \[ \tO{\alpha'\beta_{\textsc{Checker}}^{-4}}\cdot k^{32} \leq \varepsilon^2 m / 4\,, \] as required by $\mathcal{C}^i.\textsc{Check}$. Finally, at the end of $\textsc{MultiStep}$ we bring all $\mathcal{C}^i$ to their original state before calling $\textsc{MultiStep}$, by calling $\mathcal{C}^i.\textsc{Rollback}$. We also update all the resistances of $\mathcal{L}$ to their original state by calling $\mathcal{L}.\textsc{Update}$. } \item{Between any two successive calls to $\mathcal{C}^i$.$\textsc{Initialize}$, the total number of edges updated at any point (via $\mathcal{C}^i.\textsc{Update}$ or $\mathcal{C}^i.\textsc{TemporaryUpdate}$ that have not been rolled back) is $O(\beta_{\textsc{Checker}} m)$. For $\textsc{Update}$, we can apply a similar analysis as in the $\textsc{Locator}$ case to show that if we call $\mathcal{C}^i$.$\textsc{Initialize}$ every $\varepsilon_{\mathrm{solve}}\sqrt{\beta_{\textsc{Checker}} m} /k$ iterations, the total number of updates never exceeds $O(\beta_{\textsc{Checker}} m)$. For $\textsc{TemporaryUpdate}$, note that at any time there are at most $O(k^{16})$ of these that have not been rolled back (this is inside \textsc{MultiStep}). Therefore, as long as $k^{16} \leq O(\beta_{\textsc{Checker}} m) \Leftrightarrow \beta_{\textsc{Checker}} \geq \Omega(k^{16} / m)$, the requirement is met. } \end{enumerate} \paragraph{Output Guarantee.} After the application of Lemma~\ref{lem:multistep} at the last iteration, we will have $\boldsymbol{\mathit{\bar{f}}} = \boldsymbol{\mathit{f}}(\mu_{T+1})$, where $\mu_{T+1} = \mu / (1+\varepsilon_{\mathrm{step}}\delta)^{\tO{m^{1/2} \varepsilon_{\mathrm{step}}^{-1}}} = \mu / \mathrm{poly}(m) \leq m^{-10}$. \paragraph{Success probability.} Note that all operations of $\textsc{Locator}$ and $\textsc{Checker}$ work with high probability. Regarding the interaction of the randomness of these data structures and the fact that they work against oblivious adversaries, we defer to~\cite{gao2021fully}, where there is a detailed discussion of why this works. In short, note that outside of $\textsc{MultiStep}$, all updates are deterministic (as they only depend on the central path), and in $\textsc{MultiStep}$ the updates to $\textsc{Locator}$ and $\textsc{Checker}$ only depend on outputs of a $\textsc{Checker}$. As each time we are getting the output from a different $\textsc{Checker}$, the inputs to $\mathcal{C}^i.\textsc{Check}$ are independent of the randomness of $\mathcal{C}^i$, and thus succeed with high probability. 
Finally, note that the output of $\textsc{Locator}$ is only passed onto $\mathcal{C}^i.\textsc{Check}$, whose output is then independent of the inputs received by $\textsc{Locator}$. Therefore, $\textsc{Locator}$ does not ``leak'' any randomness. Our only deviation from~\cite{gao2021fully} in $\textsc{Checker}$ has to do with the extra input of $\textsc{Checker}.\textsc{Check}$ ($\boldsymbol{\mathit{\pi}}_{old}^i$). However, note that this is computed outside of $\textsc{MultiStep}$, and as such the only randomness it depends on is the $\beta_{\textsc{Checker}}$-congestion reduction subset $C^i$ generated when calling $\mathcal{C}^i.\textsc{Initialize}$. As such, it only depends on the internal randomness of $\mathcal{C}^i$. As we mentioned, the output of $\mathcal{C}^i$.$\textsc{Check}$ is never fed back to $\mathcal{C}^i$, and thus the operation works with high probability. \paragraph{Runtime (except $\textsc{Checker}$).} Each call to $\textsc{MultiStep}$ (Lemma~\ref{lem:multistep}) takes time $\tO{m}$ plus $O(k^{16})$ calls to $\mathcal{L}.\textsc{Update}$ and $O(k^4)$ calls to $\mathcal{L}.\textsc{Solve}$. As the total number of iterations is $\tO{m^{1/2} /k}$, the total time because of calls to $\textsc{MultiStep}$ is $\tO{m^{3/2} / k}$, plus $\tO{m^{1/2} k^{15}}$ calls to $\mathcal{L}.\textsc{Update}$ and $\tO{m^{1/2} k^3}$ calls to $\mathcal{L}.\textsc{Solve}$. Now, the total number of calls to $\mathcal{L}.\textsc{Initialize}$ is $\tO{\frac{m^{1/2} / k}{\varepsilon_{\mathrm{solve}}\sqrt{\beta m}/k}} = \tO{k^3 \beta^{-1/2}}$. The total number of calls to $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$ is $\tO{\frac{m^{1/2} / k}{0.5\alpha^{1/4} / k}} = \tO{m^{1/2} \alpha^{-1/4}}$ and the total number of calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$ is $\tO{\frac{m^{1/2}}{k \widehat{T}}}$. Regarding the size of $Z$, let us focus on the calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$ between two successive calls to $\mathcal{L}.\textsc{Initialize}$. We already showed that the sum of $|Z|$ over all calls during this interval is $O(\beta m)$. Therefore the total sum of $|Z|$ over all iterations of the algorithm is $\tO{m k^3 \beta^{1/2}}$. In order to bound the number of calls to $\mathcal{L}.\textsc{Update}$, we concentrate on those between two successive calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$ in iterations $t^{old}$ and $t^{new} > t^{old}$. After the call to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$ in iteration $t^{old}$ we have $\mathcal{L}.\boldsymbol{\mathit{s}} \approx_{1+\varepsilon_{\mathrm{solve}}/16} \boldsymbol{\mathit{s}}(\mu_{t^{old}})$. Fix $\mu\in\left[\mu(t^{new}),\mu(t^{old})\right]$ and let $\ell$ be the number of $e\in E$ such that $s_e(\mu)\not\approx_{1+\varepsilon_{\mathrm{solve}}/8} s_e^{t^{old}}$. 
As $s_e^{t^{old}} \approx_{1+\varepsilon_{\mathrm{solve}}/16} s_e(\mu_{t^{old}})$ by the guarantees of $\mathcal{L}.\textsc{BatchUpdate}$, this implies that $s_e(\mu)\not\approx_{1+\varepsilon_{\mathrm{solve}}/16} s_e(\mu_{t^{old}})$, and so \[ \sqrt{r_e(\mu_{t^{old}})}\left| f_e(\mu_{t^{old}}) - f_e(\mu) \right| \geq \frac{\varepsilon_{\mathrm{solve}} / 16}{1+\varepsilon_{\mathrm{solve}}/16} > \varepsilon_{\mathrm{solve}}/17 \,.\] As \[ \mu_{t^{old}} \leq \mu\cdot (1+\varepsilon_{\mathrm{step}}\delta)^{k\varepsilon_{\mathrm{step}}^{-1}\cdot \widehat{T}} \leq \mu\cdot (1+\delta)^{k\cdot\widehat{T}} \,,\] by applying Lemma~\ref{lem:resistance_stability1} with its parameter $k$ set to $k\cdot\widehat{T}$ and $\gamma = \varepsilon_{\mathrm{solve}} / 17$, we get that $\ell \leq O(k^2 \widehat{T}^2 \varepsilon_{\mathrm{solve}}^{-2})$. As there are $\tO{\frac{m^{1/2}}{k\widehat{T}}}$ calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$, the total number of calls to $\mathcal{L}.\textsc{Update}$ is \[ \tO{ m^{1/2} \widehat{T} \varepsilon_{\mathrm{solve}}^{-2}} = \tO{ m^{1/2} k^6 \widehat{T}} \,.\] We conclude that we have runtime $\tO{m^{3/2}/k}$, plus \begin{itemize} \item $\tO{k^3 \beta^{-1/2}}$ calls to $\mathcal{L}.\textsc{Initialize}$, \item $\tO{m^{1/2} k^3}$ calls to $\mathcal{L}.\textsc{Solve}$, \item $\tO{m^{1/2} \left(k^6 \widehat{T} + k^{15}\right)}$ calls to $\mathcal{L}.\textsc{Update}$, \item $\tO{m^{1/2} \alpha^{-1/4}}$ calls to $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$, and \item $\tO{m^{1/2} k^{-1} \widehat{T}^{-1}}$ calls to $\mathcal{L}.\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$. \end{itemize} \paragraph{Runtime of $\textsc{Checker}$.} We look at each operation separately. We begin with the runtime of $\textsc{Checker}.\textsc{Check}$. We have \[ \tO{\underbrace{m^{1/2} / k}_{\text{\# calls to \textsc{MultiStep}}} \cdot \underbrace{k^{16}}_{\text{\# calls in each $\textsc{MultiStep}$}} \cdot \underbrace{(\beta_{\textsc{Checker}} m + (k^{16} \beta_{\textsc{Checker}}^{-2} \varepsilon^{-2})^2)\varepsilon^{-2}}_{\text{runtime per call}} } \,.\] To make the first term $\tO{m^{3/2} / k}$, we set $\beta_{\textsc{Checker}} = k^{-28}$. Note that this satisfies our previous requirements that $\beta_{\textsc{Checker}} \geq \tOm{k^{16} / m}$ and $\beta_{\textsc{Checker}} \geq \tOm{k^{16}/m^{1/4}}$ as long as $k \leq m^{1/176}$. Therefore the total runtime because of this operation is $\tO{m^{3/2} / k + m^{1/2} k^{195}}$. For $\textsc{Checker}.\textsc{Initialize}$, we have \[ \tO{\underbrace{k^4}_{\# \textsc{Checker}s} \cdot \underbrace{k^3 \beta_{\textsc{Checker}}^{-1/2}}_{\# \text{times initialized}} \cdot \underbrace{m \beta_{\textsc{Checker}}^{-4} \varepsilon^{-4}}_{\text{runtime per init}}} = \tO{m k^{157}}\,. \] For $\textsc{Checker}.\textsc{Update}$, similarly to the analysis of the $\textsc{Locator}$ but noting that there are no batched updates, we have \[ \tO{ \underbrace{k^4}_{\text{\#\textsc{Checker}s}} \cdot \underbrace{m \varepsilon_{\mathrm{solve}}^{-2}}_{\text{\#calls per \textsc{Checker}}} \cdot \underbrace{\beta_{\textsc{Checker}}^{-2} \varepsilon^{-2}}_{\text{runtime per call}} } = \tO{mk^{78}}\,.
\] For $\textsc{Checker}.\textsc{TemporaryUpdate}$, we have \[ \tO{ \underbrace{k^4}_{\text{\#\textsc{Checker}s}} \cdot \underbrace{m^{1/2} / k}_{\text{\# calls to \textsc{MultiStep}}} \cdot \underbrace{k^{16}}_{\text{\# calls per \textsc{MultiStep}}} \cdot \underbrace{(k^{16} \beta_{\textsc{Checker}}^{-2} \varepsilon^{-2})^2}_{\text{runtime per call}} } = \tO{m^{1/2} k^{187}}\,. \] Finally, note that computing the vectors $\boldsymbol{\mathit{\pi}}_{old}^i$ takes $\tO{m^{3/2}/k}$ in total, as we do it once per $k^{4}$ calls to $\textsc{MultiStep}$ and each such computation takes $\tO{mk^4}$. As long as $k \leq m^{1/316}$, the total runtime because of $\textsc{Checker}$ is $\tO{m^{3/2}/k}$. \end{proof} \section{Deferred Proofs from Section~\ref{sec:locator}} \subsection{Proof of Lemma~\ref{lem:F_system1}} We first provide a helper lemma for upper bounding escape probabilities in terms of the underlying graph's resistances. \begin{lemma}[Bounding escape probabilities] \label{lem:escape_prob} Consider a graph with resistances $\boldsymbol{\mathit{r}}$ and a random walk which at each step moves from the current vertex $u$ to an adjacent vertex $v$ sampled with probability proportional to $1/{r_{uv}}$. Let $p_u^{\{u,t\}}(s)$ represent the probability that a walk starting at $s$ hits $u$ before $t$. Then \[ p_u^{\{u,t\}}(s) = \frac{R_{eff}(s,t)}{R_{eff}(u,t)} \cdot p_s^{\{s,t\}}(u) \leq \frac{R_{eff}(s,t)}{R_{eff}(u,t)} \leq \frac{r_{st}}{R_{eff}(u,t)}\,. \] \end{lemma} \begin{proof} Using standard arguments we can prove that if $\boldsymbol{\mathit{L}}$ is the Laplacian associated with the underlying graph, then \[ p_u^{\{u,t\}}(s) = \frac{(\mathbf{1}_s - \mathbf{1}_t)^\top \boldsymbol{\mathit{L}}^+ (\mathbf{1}_u - \mathbf{1}_t)}{R_{eff}(u,t)}\,. \] This immediately yields the claim as we can further write it as \[ p_u^{\{u,t\}}(s) = \frac{R_{eff}(s,t)}{R_{eff}(u,t)} \cdot \frac{(\mathbf{1}_u - \mathbf{1}_t)^\top \boldsymbol{\mathit{L}}^+ (\mathbf{1}_s - \mathbf{1}_t)}{R_{eff}(s,t)} = \frac{R_{eff}(s,t)}{R_{eff}(u,t)} \cdot p_s^{\{s,t\}}(u) \leq \frac{R_{eff}(s,t)}{R_{eff}(u,t)}\,, \] where we crucially used the symmetry of $\boldsymbol{\mathit{L}}$ and, in the last step, the fact that $p_s^{\{s,t\}}(u) \leq 1$. The final inequality in the lemma statement is due to the fact that $R_{eff}(s,t) \leq r_{st}$. Now let us prove the claimed identity for escape probabilities. Let $\boldsymbol{\mathit{\psi}}$ be the vector defined by $\psi_i = p_u^{\{u,t\}}(i)$ for all $i \in V$, which clearly satisfies $\psi_u = 1$ and $\psi_t = 0$. Furthermore, for all $i \notin\{u,t\}$ we have \[ \psi_i = \sum_{j \sim i} \frac{ r_{ij}^{-1} }{ \sum_{k\sim i}r_{ik}^{-1} } \psi_j\,, \] which can be written in short as \[ (\boldsymbol{\mathit{L}} \boldsymbol{\mathit{\psi}})_i = 0\,\quad \textnormal{for all } i\notin\{u,t\}\,. \] Now we solve the corresponding linear system. We interpret $\boldsymbol{\mathit{\psi}}$ as electrical potentials corresponding to routing $1/R_{eff}(u,t)$ units of electrical flow from $u$ to $t$. Indeed, by Ohm's law, this corresponds to a potential difference $\psi_u - \psi_t = 1$. Furthermore, this shows that \[ \psi_s - \psi_t = (\mathbf{1}_s - \mathbf{1}_t)^\top \boldsymbol{\mathit{L}}^+ (\mathbf{1}_u - \mathbf{1}_t) \cdot \frac{1}{R_{eff}(u,t)}\,, \] which concludes the proof. \end{proof} Now we are ready to prove the main statement.
\begin{proof}[Proof of Lemma~\ref{lem:F_system1}]\label{proof_lem_F_system1} Note that the demand can be decomposed as $\boldsymbol{\mathit{d}} - \boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) = \boldsymbol{\mathit{d}}^1 - \boldsymbol{\mathit{d}}^2$, where $\boldsymbol{\mathit{d}}^1 = \frac{\mathbf{1}_s}{\sqrt{r_{st}}} - \boldsymbol{\mathit{\pi}}^C(\frac{\mathbf{1}_s}{\sqrt{r_{st}}})$ and $\boldsymbol{\mathit{d}}^2 = \frac{\mathbf{1}_t}{\sqrt{r_{st}}} - \boldsymbol{\mathit{\pi}}^C(\frac{\mathbf{1}_t}{\sqrt{r_{st}}})$. Now let $p^1$ be the probability distribution of $s-C$ random walks obtained via a random walk from $s$ with transition probabilities proportional to inverse resistances. Similarly, let $p^2$ be the probability distribution of $t-C$ random walks obtained by running the same process starting from $t$. Now, it is well known that the corresponding electrical flow is the expected net flow of these random walks, i.e. \[ \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{d}}^1 = \frac{1}{\sqrt{r_{st}}} \cdot \mathbb{E}_{P\sim p^1} \left[net(P)\right] \] and similarly for $\boldsymbol{\mathit{d}}^2$ \[ \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{d}}^2 = \frac{1}{\sqrt{r_{st}}} \cdot \mathbb{E}_{P\sim p^2} \left[net(P)\right] \,, \] where $net(P)\in\mathbb{R}^m$ is a flow vector whose $e$-th entry is the net number of times the edge $e=(u,v)$ is used by $P$. Therefore we can write: \begin{align*} \left|\frac{\phi_u - \phi_v}{\sqrt{r_{uv}}}\right| & = \sqrt{r_{uv}} \left|\boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{d}}\right|_{uv}\\ & = \sqrt{\frac{r_{uv}}{r_{st}}} \left| \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\right] - \mathbb{E}_{P^2\sim p^2}\left[net_e(P^2)\right] \right|\,. \end{align*} Let us also subdivide $e$ by inserting an additional vertex $w$ in the middle (i.e. $r_{uw} = r_{wv} = r_{uv} / 2$). This has no effect on the random walks, but will be slightly more convenient in terms of notation. The first expectation term can be expressed as \begin{align*} \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\right] & = \Pr_{P^1\sim p^1}\left[P^1 \text{ visits $t$ before $C\cup\{w\}$}\right] \cdot \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\ |\ P^1 \text{ visits $t$ before $C\cup\{w\}$}\right]\\ & + \Pr_{P^1\sim p^1}\left[P^1 \text{ visits $w$ before $C\cup\{t\}$}\right] \cdot \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\ |\ P^1 \text{ visits $w$ before $C\cup\{t\}$}\right]\,. \end{align*} Now, note that \[ \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\ |\ P^1 \text{ visits $t$ before $C\cup\{w\}$}\right] = \mathbb{E}_{P^2\sim p^2}\left[net_e(P^2)\right] \,.\] Additionally, \begin{align*} & \Pr_{P^1\sim p^1}\left[P^1 \text{ visits $w$ before $C\cup\{t\}$}\right] \\ & = \Pr_{P^1\sim p^1}\left[P^1 \text{ visits $w$ before $C$}\right] \cdot \Pr_{P^1\sim p^1}\left[P^1 \text{ visits $w$ before $t$}\ |\ P^1 \text{ visits $w$ before $C$} \right]\,. \end{align*} The first term of the product is $p_w^{C\cup\{w\}}(s)$. For the second term, we define a new graph $\widehat{G}$ by deleting $C$, and denote the hitting probabilities in $\widehat{G}$ by $\widehat{\mathit{p}}$. Then, the second term is equal to $\widehat{\mathit{p}}_w^{\{t,w\}}(s)$. We have concluded that \begin{align*} \mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\right] & \leq \mathbb{E}_{P^2\sim p^2}\left[net_e(P^2)\right] + p_w^{C\cup\{w\}}(s) \cdot \widehat{\mathit{p}}_w^{\{t,w\}}(s)\,.
\end{align*} Combining this with the symmetric argument for $p^2$ shows that \begin{align*} \left|\mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\right] - \mathbb{E}_{P^2\sim p^2}\left[net_e(P^2)\right]\right| \leq p_w^{C\cup\{w\}}(s) \cdot \widehat{\mathit{p}}_w^{\{t,w\}}(s) + p_w^{C\cup\{w\}}(t) \cdot \widehat{\mathit{p}}_w^{\{s,w\}}(t)\,. \end{align*} Using Lemma~\ref{lem:escape_prob} and the fact that ${\widehat{R}}_{eff}(w, t) \geq r_{uv} / 4$ (${\widehat{R}}_{eff}$ are the effective resistances in $\widehat{G}$), we can bound \[ \widehat{p}_{w}^{\{t,w\}}(s) \leq \min\{1,\frac{r_{st}}{{\widehat{R}}_{eff}(w, t)}\} \leq \min\{1,4 \frac{r_{st}}{r_{uv}}\}\,, \] and the same upper bound holds for $\widehat{\mathit{p}}_w^{\{s,w\}}(t)$. Therefore, \begin{align*} & \left|\mathbb{E}_{P^1\sim p^1}\left[net_e(P^1)\right] - \mathbb{E}_{P^2\sim p^2}\left[net_e(P^2)\right]\right|\\ & \leq \min\{1,4 \frac{r_{st}}{r_{uv}}\} \left(p_w^{C\cup\{w\}}(s) + p_w^{C\cup\{w\}}(t)\right)\\ & \leq 2 \sqrt{\frac{r_{st}}{r_{uv}}} \left(p_w^{C\cup\{w\}}(s) + p_w^{C\cup\{w\}}(t)\right)\\ & \leq 2 \sqrt{\frac{r_{st}}{r_{uv}}} \left( p_u^{C\cup\{u\}}(s) + p_v^{C\cup\{v\}}(s) + p_u^{C\cup\{u\}}(t) + p_v^{C\cup\{v\}}(t) \right)\,. \end{align*} Putting everything together, we have that \begin{align*} & \left|\frac{\phi_u - \phi_v}{\sqrt{r_{uv}}}\right| \leq 2 \left( p_u^{C\cup\{u\}}(s) + p_v^{C\cup\{v\}}(s) + p_u^{C\cup\{u\}}(t) + p_v^{C\cup\{v\}}(t) \right)\,. \end{align*} \end{proof} \subsection{Proof of Lemma~\ref{st_projection1_energy}} \label{proof_st_projection1_energy} \begin{proof} Let $\boldsymbol{\mathit{d}} = \boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}$ and $e = (u,w)$. Note that $\mathcal{E}_r\left(\boldsymbol{\mathit{d}}\right) \leq r_e \cdot (\frac{1}{\sqrt{r_e}})^2 = 1$, therefore the case that remains is \begin{align} R_{eff}(C,e) \geq 36 \cdot r_e\,. \label{eq:st_projection1_condition} \end{align} For each $v\in C$ by Lemma~\ref{st_projection1} we have that \begin{align*} \left|\pi_v^{C}(\boldsymbol{\mathit{d}})\right| \leq (p_v^{C}(u) + p_v^{C}(w)) \cdot \frac{\sqrt{r_e}}{R_{eff}(v,e)} \,. \end{align*} Now, we would like to bound the energy of routing $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}})$ by the energy to route it via $w$. For each $v\in C$ we let $\boldsymbol{\mathit{d}}^v$ be the following demand: \[\boldsymbol{\mathit{d}}^v = \pi_v^C(\boldsymbol{\mathit{d}}) \cdot (\mathbf{1}_v - \mathbf{1}_w )\,. \] Note that $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{d}}) = \sum\limits_{v\in C} \boldsymbol{\mathit{d}}^v$. We have, \begin{align*} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\pi^C(\boldsymbol{\mathit{d}}))} & = \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\pi^C\left(\sum\limits_{v\in C} \boldsymbol{\mathit{d}}^v\right)\right)}\\ & \leq \sum\limits_{v\in C} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\pi^C\left(\boldsymbol{\mathit{d}}^v\right)\right)}\\ & \leq \sum\limits_{v\in C} (R_{eff}(v,w))^{1/2} \left|\pi_v^C(\boldsymbol{\mathit{d}})\right|\\ & \leq \sum\limits_{v\in C} (R_{eff}(v,w))^{1/2} \cdot (p_v^{C}(u) + p_v^{C}(w)) \cdot \frac{\sqrt{r_e}}{R_{eff}(v,e)}\,. 
\end{align*} Now, note that, because $R_{eff}$ is a metric, \begin{align*} R_{eff}(v,u) & \geq R_{eff}(v,w) - |R_{eff}(v,w) - R_{eff}(v,u)|\\ & \geq R_{eff}(v,w) - r_e\\ & \geq R_{eff}(v,w) - \frac{1}{36} R_{eff}(C,e)\\ & \geq \frac{35}{36} R_{eff}(v,w)\,, \end{align*} where we used the triangle inequality twice, (\ref{eq:st_projection1_condition}), and also the fact that $R_{eff}(v,w) \geq R_{eff}(C,e)$ because $v\in C$ and $w\in e$. Now, note also that \[ R_{eff}(v,e) \geq \frac{1}{2} \min\{R_{eff}(v,w),R_{eff}(v,u)\} \geq \frac{1}{2}\cdot \frac{35}{36} R_{eff}(v,w)\,,\] so \begin{align*} & 2 \sum\limits_v (R_{eff}(v,w))^{1/2} \cdot (p_v^{C}(u) + p_v^{C}(w)) \cdot \frac{\sqrt{r_e}}{R_{eff}(v,e)}\\ & \leq 2\sqrt{2\cdot\frac{36}{35}} \sum\limits_v (p_v^{C}(u) + p_v^{C}(w)) \cdot \sqrt{\frac{r_e}{R_{eff}(v,e)}}\\ & \leq 6\cdot \sqrt{\frac{r_e}{R_{eff}(C,e)}}\,, \end{align*} where we used the fact that $\sum\limits_{v\in C} (p_v^{C}(u) + p_v^{C}(w)) = 2$ and $R_{eff}(v,e) \geq R_{eff}(C,e)$ because $v\in C$. \end{proof} \subsection{Proof of Lemma~\ref{lem:important_edges}} \label{proof_lem_important_edges} \begin{proof} Since $e$ is not $\varepsilon$-important, by definition we have \[ R_{eff}(C,e) > r_e / \varepsilon^2 \,.\] Using the fact that the demand $\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{p}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ is supported on $C$ and Lemma~\ref{st_projection1_energy}, we get \begin{align*} \left|\left\langle \mathbf{1}_e, \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{\phi}}^*\right\rangle\right| &= \left|\left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \boldsymbol{\mathit{\phi}}_C^*\right\rangle\right| \\ & \leq \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)} \cdot \sqrt{E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}^*)}\\ & = \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)} \cdot \delta \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{p}}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)}\\ & \leq 6 \sqrt{\frac{r_e}{R_{eff}(C,e)}} \cdot \delta \sqrt{m}\\ & \leq 6\varepsilon\,.
\end{align*} \end{proof} \subsection{Proof of Lemma~\ref{lem:projection_change_energy}} \label{proof_lem_projection_change_energy} \begin{proof} We note that \begin{align*} & \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}) - \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}})\right)}\\ & = \left|\pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}})\right| \sqrt{R_{eff}(C,v)} \\ & \leq \sum\limits_{e=(u,w)\in E} \left|\pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{q_e}{\sqrt{r_e}} \mathbf{1}_e)\right| \sqrt{R_{eff}(C,v)} \\ & \leq \sum\limits_{e=(u,w)\in E} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \min\left\{\sqrt{\frac{R_{eff}(C,v)}{r_e}}, \frac{\sqrt{r_e R_{eff}(C,v)}}{R_{eff}(e,v)}\right\} \,, \end{align*} where we used Lemma~\ref{st_projection1}. For some sufficiently large $c$ to be defined later, we partition $E$ into $X$ and $Y$, where $X = \{e\in E\ |\ R_{eff}(C,v) \leq c^2 \cdot r_e \text{ or } r_e R_{eff}(C,v) \leq c^2 \cdot (R_{eff}(e,v))^2 \}$ and $Y = E\backslash X$. We first note that \begin{align*} & \sum\limits_{e=(u,w)\in X} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \min\left\{\sqrt{\frac{R_{eff}(C,v)}{r_e}}, \frac{\sqrt{r_e R_{eff}(C,v)}}{R_{eff}(e,v)}\right\}\\ & \leq c \cdot \sum\limits_{e=(u,w)\in X} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w))\\ & \leq c\cdot \tO{\beta^{-2}}\,, \end{align*} where the last inequality follows by the congestion reduction property. Now, let $e=(u,w)\in Y$. We will prove that both $u$ and $w$ are much closer to $v$ than $C$. This, in turn, will imply that their hitting probabilities on $v$ are roughly the same, and so they mostly cancel out in the projection. First of all, we let $R_{eff}(C,v) = c_1^2 \cdot r_e$ and $r_e R_{eff}(C,v) = c_2^2 \cdot (R_{eff}(e,v))^2$, for some $c_1,c_2 > 0$, where by definition $c_1,c_2\geq c$. Now, we assume without loss of generality that $R_{eff}(u,v) \leq R_{eff}(w,v)$, and so \begin{align*} R_{eff}(u,v) \leq 2 R_{eff}(e,v) = \frac{2}{c_2} \sqrt{r_e R_{eff}(C,v)} = \frac{2}{c_1c_2} R_{eff}(C,v) \leq \frac{2}{c_1c_2} \left(R_{eff}(C,u) + R_{eff}(u,v)\right) \,, \end{align*} so \begin{equation} \begin{aligned} R_{eff}(u,v) \leq \frac{\frac{2}{c_1c_2}}{1 - \frac{2}{c_1c_2}} R_{eff}(C,u) = \frac{2}{c_1c_2 - 2} R_{eff}(C,u) \leq \frac{3}{c_1c_2} R_{eff}(C,u) \,. \end{aligned} \label{eq:energy_Reff_1} \end{equation} Futhermore, note that \[ R_{eff}(w,v) \leq R_{eff}(u,v) + r_e \leq \left(\frac{2}{c_1c_2} + \frac{1}{c_1^2}\right) R_{eff}(C,v) \leq \left(\frac{2}{c_1c_2} + \frac{1}{c_1^2}\right) (R_{eff}(C,w) + R_{eff}(w,v)) \,, \] and so we have \begin{equation} \begin{aligned} R_{eff}(w,v) \leq \frac{\frac{2}{c_1c_2} + \frac{1}{c_1^2}}{ 1 - \frac{2}{c_1c_2} - \frac{1}{c_1^2} } R_{eff}(C,w) \leq 3\left(\frac{1}{c_1c_2} + \frac{1}{c_1^2}\right) R_{eff}(C,w) \,. \end{aligned} \label{eq:energy_Reff_2} \end{equation} Now, by Lemma~\ref{lem:escape_prob} together with (\ref{eq:energy_Reff_1}) we have \[ p_v^{C\cup\{v\}}(u) \geq 1 - \frac{3}{c_1c_2} \] and with (\ref{eq:energy_Reff_2}) we have \[ p_v^{C\cup\{v\}}(w) \geq 1 - 3 \left(\frac{1}{c_1c_2} + \frac{1}{c_1^2}\right)\,, \] therefore \[ \left|p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w)\right| \leq 6\left(\frac{1}{c_1c_2} + \frac{1}{c_1^2}\right)\,. 
\] So, \begin{align*} \left|\pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{q_e}{\sqrt{r_e}} \mathbf{1}_e)\right| \sqrt{R_{eff}(C,v)} & = \left| p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w) \right| \sqrt{\frac{R_{eff}(C,v)}{r_e}}\\ & \leq 6 \left(\frac{1}{c_1c_2} + \frac{1}{c_1^2}\right) \cdot c_1\\ & = 6 \left(\frac{1}{c_2} + \frac{1}{c_1}\right)\\ & = O\left(\frac{1}{c}\right)\,. \end{align*} Now, we will apply Lemma~\ref{lem:small-neighborhood2} to prove that with high probability $|Y| \leq \tO{\beta^{-1}}$. The reason we can apply the lemma is that for any $e=(u,w)\in Y$, we have \[ R_{eff}(e,v) = \frac{1}{c_1c_2}R_{eff}(C,v) \leq \frac{1}{2} R_{eff}(C,v)\,,\] and so $Y\subseteq N_E(v,R_{eff}(C,v) / 2)$. Therefore, we get that \begin{align*} \sum\limits_{e=(u,w)\in Y} \left|\pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{q_e}{\sqrt{r_e}} \mathbf{1}_e)\right| \sqrt{R_{eff}(C,v)} \leq c^{-1} \cdot \tO{\beta^{-1}}\,. \end{align*} Overall, we conclude that \[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}}) - \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}}{\sqrt{\boldsymbol{\mathit{r}}}})\right)} \leq (c + c^{-1}) \tO{\beta^{-2}} = \tO{\beta^{-2}}\,, \] by setting $c$ to be a large enough constant. \end{proof} \subsection{Proof of Lemma~\ref{lem:old_projection_approximate}} \label{sec:proof_old_projection_approximate} \begin{proof} We write \begin{align*} & \boldsymbol{\mathit{\pi}}^{C^T,\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{q}}^T}{\sqrt{\boldsymbol{\mathit{r}}^T}}\right) - \boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{q}}^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right) \\ & = \underbrace{\sum\limits_{\text{$i$ is an \textsc{AddTerminal}}} \pi_{v^i}^{C^{i+1},\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}}\right) \cdot \left(\mathbf{1}_{v^i} - \boldsymbol{\mathit{\pi}}^{C^i,\boldsymbol{\mathit{r}}^i}\left(\mathbf{1}_{v^i}\right)\right)}_{\boldsymbol{\mathit{d}}_{Add}}\\ & + \underbrace{\sum\limits_{i \text{ is an \textsc{Update}}} \boldsymbol{\mathit{\pi}}^{C^i,\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \left(\frac{\boldsymbol{\mathit{q}}^{i+1}}{\sqrt{\boldsymbol{\mathit{r}}^{i+1}}} - \frac{\boldsymbol{\mathit{q}}^i}{\sqrt{\boldsymbol{\mathit{r}}^i}} \right) \mathbf{1}_{e^i}\right)}_{\boldsymbol{\mathit{d}}_{Upd}}\,, \end{align*} which implies that \begin{align*} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T} \left( \boldsymbol{\mathit{\pi}}^{C^T,\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{q}}^T}{\sqrt{\boldsymbol{\mathit{r}}^T}}\right) - \boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{q}}^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right) \right)} \leq \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}(\boldsymbol{\mathit{d}}_{Add})} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}(\boldsymbol{\mathit{d}}_{Upd})}\,. \end{align*} We bound each of these terms separately. 
For the second one, we have that \begin{align*} & \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{d}}_{Upd}\right)} \\ & \leq \sum\limits_{i \text{ is an \textsc{Update}}} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{\pi}}^{C^i,\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \left(\frac{\boldsymbol{\mathit{q}}^{i+1}}{\sqrt{\boldsymbol{\mathit{r}}^{i+1}}} - \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}}\right) \mathbf{1}_{e^i}\right)\right)}\\ & = \sum\limits_{i \text{ is an \textsc{Update}}} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top \left(\frac{\boldsymbol{\mathit{q}}^{i+1}}{\sqrt{\boldsymbol{\mathit{r}}^{i+1}}} - \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}}\right) \mathbf{1}_{e^i}\right)}\\ & \leq \sum\limits_{i \text{ is an \textsc{Update}}} \left(\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i+1}}{\sqrt{\boldsymbol{\mathit{r}}^{i+1}}} \mathbf{1}_{e^i}\right)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}} \mathbf{1}_{e^i}\right)}\right)\\ & \leq \max_i \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_\infty^{1/2} \sum\limits_{i \text{ is an \textsc{Update}}} \left(\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^{i+1}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i+1}}{\sqrt{\boldsymbol{\mathit{r}}^{i+1}}} \mathbf{1}_{e^i}\right)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}} \mathbf{1}_{e^i}\right)}\right)\\ & \leq 2 \max_i \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_\infty^{1/2} T\,. \end{align*} For the first one, we have \begin{align*} & \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{d}}_{Add}\right)} \\ & \leq \sum\limits_{\text{$i$ is an \textsc{AddTerminal}}} \left|\pi_{v^i}^{C^{i+1},\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}}\right)\right| \cdot \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\mathbf{1}_{v^i} - \boldsymbol{\mathit{\pi}}^{C^i,\boldsymbol{\mathit{r}}^i}\left(\mathbf{1}_{v^i}\right)\right)}\\ & \leq \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_{\infty}^{1/2} \sum\limits_{\text{$i$ is an \textsc{AddTerminal}}} \left|\pi_{v^i}^{C^{i+1},\boldsymbol{\mathit{r}}^i}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^{i}}{\sqrt{\boldsymbol{\mathit{r}}^{i}}}\right)\right| \cdot \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^i}\left(\mathbf{1}_{v^i} - \boldsymbol{\mathit{\pi}}^{C^i,\boldsymbol{\mathit{r}}^i}\left(\mathbf{1}_{v^i}\right)\right)}\\ & \leq \tO{\max_i \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_{\infty}^{1/2}\beta^{-2}} \cdot T\,, \end{align*} where in the last inequality we used Lemma~\ref{lem:projection_change_energy}. The desired statement now follows immediately. 
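Explicitly, summing the two bounds and absorbing the constant into the $\tO{\cdot}$ notation (using that $\beta \leq 1$), we obtain \[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{d}}_{Add}\right)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^T}\left(\boldsymbol{\mathit{d}}_{Upd}\right)} \leq \tO{\max_i \left\|\frac{\boldsymbol{\mathit{r}}^T}{\boldsymbol{\mathit{r}}^i}\right\|_{\infty}^{1/2} \beta^{-2}} \cdot T\,. \]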
\end{proof} \subsection{Proof of Lemma~\ref{lem:locator}} \label{sec:full_proof_lem_locator} \begin{proof} \noindent {\bf $\textsc{Initialize}(\boldsymbol{\mathit{f}})$}: We set $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}} - \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^- = \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{r}}^0 = \boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$. We first initialize a $\beta$-congestion reduction subset $C$ based on Lemma~\ref{lem:cong_red}, which takes time $\tO{m \beta^{-2}}$, and a data structure $\textsc{DynamicSC}$ for maintaining the sparsified Schur Complement onto $C$, as described in Appendix~\ref{sec:maintain_schur}, which takes time $\tO{m \beta^{-4} \varepsilon^{-4}}$. We also set $C^0 = C$. Then, we generate an $\tO{\varepsilon^{-2}}\times m$ sketching matrix $\boldsymbol{\mathit{Q}}$ as in~(Lemma 5.1, \cite{gao2021fully} v2), which takes time $\tO{m \varepsilon^{-2}}$, and let its rows be $\boldsymbol{\mathit{q}}^i$ for $i\in[\tO{\varepsilon^{-2}}]$. In order to compute the set of important edges, we use Lemma~\ref{lem:approx_effective_res} after contracting $C$, which shows that we can compute all resistances of the form $R_{eff}(C,u)$ for $u\in V\backslash C$ up to a factor of $2$ in $\tO{m}$. From these, we can get $4$-approximate estimates of $R_{eff}(C,e)$ for $e\in E\backslash E(C)$, using the fact that \[ \min\{R_{eff}(C,u),R_{eff}(C,w)\} \approx_{2} R_{eff}(C,e) \,.\] Then, in $O(m)$ time, we can easily compute a set of edges $S$ such that \begin{align*} \{e\ |\ e \text{ is $\frac{\varepsilon\beta}{\alpha}$-important } \} \subseteq S\subseteq \{e\ |\ e \text{ is $\frac{\varepsilon \beta}{4\alpha}$-important}\} \end{align*} We also need to sample the random walks that will be used inside the demand projection data structures. We use (Lemma 5.15,~\cite{gao2021fully} v2) to sample $h = \tO{\widehat{\eps}^{-4} \beta^{-6} + \widehat{\eps}^{-2} \beta^{-2} \gamma^{-2}}$ random walks for each $u\in V\backslash C$ and $e\in E\backslash E(C)$ with $u\in e$, where we set $\gamma = \frac{\varepsilon}{4\alpha}$ so that $S$ is a subset of $\gamma$-important edges. Note that, by Definition~\ref{def:locator}, a $\gamma$-important edge will always remain $\gamma$-important until the \textsc{Locator} is re-initialized, as any edge's resistive distance to $C$ can only decrease, and its own resistance is constant. Therefore $S$ can be assumed to always be a subset of $\gamma$-important edges. The runtime to sample the set $\mathcal{P}$ of these random walks is \[ \tO{m h \beta^{-2}} = \tO{m (\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \beta^{-4} \gamma^{-2})} = \tO{m (\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-2}\alpha^2 \beta^{-4} )}\,. \] In order to be able to detect congested edges, we will initialize $\tO{\varepsilon^{-2}}$ demand projection data structures, with the guarantees from Lemma~\ref{lem:ds}. 
We will maintain an approximation to $\boldsymbol{\mathit{\pi}}^C(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}})$ for all $i\in\tO{\varepsilon^{-2}}$, where $\boldsymbol{\mathit{q}}^i$ are the rows of the sketching matrix that we have generated, as well as $\boldsymbol{\mathit{\pi}}^{old} := \boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{p}}^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right)$, where $\boldsymbol{\mathit{p}}^0 = \sqrt{\boldsymbol{\mathit{r}}^0} g(\boldsymbol{\mathit{s}}^0) = \frac{\frac{1}{\boldsymbol{\mathit{s}}^{+,0}} - \frac{1}{\boldsymbol{\mathit{s}}^{-,0}}}{\sqrt{\boldsymbol{\mathit{r}}^0}}$. Specifically, we call \[ \textsc{DP}^i.\textsc{Initialize}(C, \boldsymbol{\mathit{r}}, \boldsymbol{\mathit{q}}^i, S, \mathcal{P}) \] for all $i\in[\tO{\varepsilon^{-2}}]$, and also exactly compute $\boldsymbol{\mathit{\pi}}^{old}$, which can be done by calling \[ \textsc{DemandProjector}.\textsc{Initialize}(C, \boldsymbol{\mathit{r}}, \boldsymbol{\mathit{p}}, [m], \mathcal{P})\,. \] The total runtime for this operation is dominated by the random walk generation, and is \[\tO{m (\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-2}\alpha^2 \beta^{-4} )}\,.\] \noindent {\bf $\textsc{Update}(e,\boldsymbol{\mathit{f}})$}: We set $s_e^+ = u_e - f_e$, $s_e^- = f_e$, and $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$. Then, we also set $p_e = \frac{\frac{1}{s_e^+} - \frac{1}{s_e^-}}{\sqrt{r_e}}$. We distinguish two cases: \begin{itemize} \item{$e\in E(C)$: In this case, we can simply call \[ \textsc{DynamicSC}.\textsc{Update}(e, r_e) \] and \[ \textsc{DP}^i.\textsc{Update}(e, \boldsymbol{\mathit{r}}, \boldsymbol{\mathit{q}}) \] for all $i\in[\tO{\varepsilon^{-2}}]$. Note that we can do this as $\textsc{DP}^i$ was initialized with resistances $\boldsymbol{\mathit{r}}^0$ and $\boldsymbol{\mathit{r}}^0 \approx_{\alpha} \boldsymbol{\mathit{r}}$. } \item{$e\in E\backslash E(C)$: We let $e = (u,w)$. We want to insert $u$ and $w$ into $C$, but for doing that $\textsc{DP}^i$'s require constant factor estimates of the resistances $R_{eff}(C,u)$ and $R_{eff}(C\cup\{u\},w)$. In order to get these estimates, we will use $\textsc{DynamicSC}$. We first call \[ \textsc{DynamicSC}.\textsc{AddTerminal}(u)\,, \] which takes time $\tO{\beta^{-2} \varepsilon^{-2}}$ and returns ${\widetilde{R}}_{eff}(C,u) \approx_2 R_{eff}(C,u)$. Given this estimate, we can call \[ \textsc{DP}^i.\textsc{AddTerminal}(u, {\widetilde{R}}_{eff}(C,u))\,, \] for all $i\in[\tO{\varepsilon^{-2}}]$, each of which takes time \[ \tO{\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \beta^{-6} \gamma^{-2}} = \tO{\widehat{\eps}^{-4} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-2} \alpha^2 \beta^{-6}} \,.\] Now, we can set $C = C\cup\{u\}$ and repeat the same process for $w$. Finally, to update the resistance, note that we now have $e\in E(C)$, so we apply the procedure from the first case. } \end{itemize} Finally, if the total number of calls to $\textsc{DP}^i.\textsc{AddTerminal}$ for some fixed $i$ since the last call to $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$ exceeds $\frac{\varepsilon}{\widehat{\eps} \alpha^{1/2}}$ (note that the number of calls is actually the same for all $i$), we call $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$ in order to re-initialize the demand projections. 
We conclude that the total runtime is \[ \tO{m \frac{\widehat{\eps} \alpha^{1/2}}{\varepsilon^{3}} + \widehat{\eps}^{-4} \varepsilon^{-2} \beta^{-8} + \widehat{\eps}^{-2} \varepsilon^{-4} \alpha^2 \beta^{-6}} \,,\] where the first term comes from amortizing the calls to $\mathcal{L}.\textsc{BatchUpdate}(\emptyset)$, each of which, as we will see, takes $\tO{m \varepsilon^{-2}}$. \noindent {\bf $\textsc{BatchUpdate}(Z,\boldsymbol{\mathit{f}})$}: First, for each $e\in Z$, we set $s_e^+ = u_e - f_e$, $s_e^- = f_e$, $r_e^0 = r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$, and $p_e^0 = p_e = \frac{\frac{1}{s_e^+} - \frac{1}{s_e^-}}{\sqrt{r_e}}$. For each $e=(u,w)\in Z$, we call \[ \textsc{DynamicSC}.\textsc{AddTerminal}(u)\] and \[ \textsc{DynamicSC}.\textsc{AddTerminal}(w)\] (if $u$ and $w$ are not already in $C$), and then we call \[ \textsc{DynamicSC}.\textsc{Update}(e, r_e) \,. \] Then, we set $C^0 = C = C\cup (\cup_{(u,w)\in Z}\{u,w\})$. Additionally, we re-compute $\boldsymbol{\mathit{\pi}}^{old}$ based on the new values of $C^0, \boldsymbol{\mathit{r}}^0, \boldsymbol{\mathit{p}}^0$. All of this takes time $\tO{m + |Z| \beta^{-2} \varepsilon^{-2}}$. Now, to pass these updates to the $\textsc{DemandProjector}$s, we first have to re-compute the set of important edges $S$ (with the newly updated resistances) as any set such that \begin{align*} \{e\ |\ e \text{ is $\frac{\varepsilon}{\alpha}$-important } \} \subseteq S\subseteq \{e\ |\ e \text{ is $\frac{\varepsilon}{4\alpha}$-important}\}\,. \end{align*} As we have already argued, this takes $\tO{m}$. Now, finally, we re-initialize all the $\textsc{DemandProjector}$s by calling \[ \textsc{DP}^i.\textsc{Initialize}(C, \boldsymbol{\mathit{r}}, \boldsymbol{\mathit{q}}, S, \mathcal{P})\,. \] for all $i\in[\tO{\varepsilon^{-2}}]$, where each call takes $\tO{m}$. We conclude with a total runtime of \[ \tO{m\varepsilon^{-2} + |Z| \beta^{-2} \varepsilon^{-2}} \,. \] \noindent {\bf $\textsc{Solve}()$}: This operation performs the main task of the locator, which is to detect congested edges. We will do that by using the approximate demand projections that we have been maintaining. We remind that the congestion vector we are trying to approximate to $O(\varepsilon)$ additive accuracy is \begin{align*} \boldsymbol{\mathit{\rho}}^* = \delta \sqrt{\boldsymbol{\mathit{r}}}g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}}\boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,. \end{align*} We will first reduce the problem of finding the entries of $\boldsymbol{\mathit{\rho}}^*$ with magnitude $\geq \Omega(\varepsilon)$, to the problem of computing an $O(\varepsilon)$-additive approximation to \[ v_i^* = \delta \cdot \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{p}}^0}{\sqrt{\boldsymbol{\mathit{r}}^0}}\right)\right\rangle \] for all $i\in[\tO{\varepsilon^{-2}}]$, where $\widetilde{SC}$ is the approximate Schur complement maintained in $\textsc{DynamicSC}$. Then, we will see how to approximate $v_i^*$ to additive accuracy $O(\varepsilon)$ using the demand projection data structures. 
First of all, note that, by definition of $g(\boldsymbol{\mathit{s}}) = \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\boldsymbol{\mathit{r}}}$,
\[ \left\|\delta \sqrt{\boldsymbol{\mathit{r}}} g(\boldsymbol{\mathit{s}})\right\|_\infty \leq \delta \leq \varepsilon\,, \]
so this term can be ignored. Using Lemma~\ref{lem:non-projected-demand-contrib}, we get that
\[ \delta \left\|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ (\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}) - \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})))\right\|_\infty \leq \delta \cdot \tO{\beta^{-2}} \leq \varepsilon / 2 \,. \]
This means that the entries of the vector
\[ \delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{C}(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})) \]
that have magnitude $\leq \varepsilon$ do not correspond to the $\Omega(\varepsilon)$-congested edges that we are looking for. Now, we set $T = |C \backslash C^0|$, where $C^0$ was the congestion reduction subset during the last call to $\textsc{BatchUpdate}$, and apply Lemma~\ref{lem:old_projection_approximate}. This shows that
\[ \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^{old} - \boldsymbol{\mathit{\pi}}^{C}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\right) } \leq \tO{\alpha^{1/2}\beta^{-2}} \cdot T \,.\]
Therefore, if we define
\[ \boldsymbol{\mathit{\rho}} = -\delta\boldsymbol{\mathit{R}}^{-1/2}\boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{old}\,, \]
we conclude that
\[ \left\|\boldsymbol{\mathit{\rho}} - \boldsymbol{\mathit{\rho}}^*\right\|_\infty \leq O(\varepsilon) + \delta \left\|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{\pi}}^{old} - \boldsymbol{\mathit{\pi}}^{C}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\right) \right\|_\infty \leq O(\varepsilon) + \delta T \cdot \tO{\alpha^{1/2} \beta^{-2}} \leq O(\varepsilon)\,, \]
where we used the fact that $T \leq \frac{\varepsilon}{\widehat{\eps} \alpha^{1/2}} \leq \frac{\varepsilon}{\delta \beta^{-2} \alpha^{1/2}}$. Therefore it suffices to estimate $\boldsymbol{\mathit{\rho}}$ up to $O(\varepsilon)$-additive accuracy. Now, note that, by definition, no edge $e\in E\backslash S$ is $\varepsilon / \alpha$-important with respect to $\boldsymbol{\mathit{r}}^0$ and $C^0$.
By using Lemma~\ref{st_projection1_energy}, for each such edge we get \begin{align*} & \delta \left|\boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{old}\right|_e\\ & \leq \delta \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^{C^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)}\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\pi}}^{old})}\\ & \leq \delta \alpha \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^0}\left(\boldsymbol{\mathit{\pi}}^{C^0}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)}\sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}^0}(\boldsymbol{\mathit{\pi}}^{old})}\\ & \leq \delta \alpha \cdot \frac{\varepsilon}{\alpha} \cdot O(\sqrt{m})\\ & = O(\varepsilon) \,, \end{align*} where we also used the fact that $\mathcal{E}_{\boldsymbol{\mathit{r}}^0}(\boldsymbol{\mathit{\pi}}^{C^0,\boldsymbol{\mathit{r}}^0}(g(\boldsymbol{\mathit{s}}^0))) \leq O(\mathcal{E}_{\boldsymbol{\mathit{r}}^0}(g(\boldsymbol{\mathit{s}})))$. Therefore it suffices to approximate \[ \boldsymbol{\mathit{\rho}}' = \delta \boldsymbol{\mathit{I}}_S \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{old} \,.\] Note that here we can replace $\boldsymbol{\mathit{L}}^+$ by $\begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}}\end{pmatrix} \widetilde{SC}^+$ where $\widetilde{SC} \approx_{1+\varepsilon} SC$ and only lose another additive $\varepsilon$ error, as \begin{align*} & \delta \left|\left\langle \mathbf{1}_e, \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}^{old}\right\rangle - \delta \left\langle \mathbf{1}_e, \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC}\\ \boldsymbol{\mathit{I}} \end{pmatrix}\widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}\right\rangle\right|\\ & = \delta \left|\left\langle \boldsymbol{\mathit{\pi}}^{C}\left(\boldsymbol{\mathit{B}}^\top \frac{\mathbf{1}_e}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \left(SC^+ - \widetilde{SC}^+\right) \boldsymbol{\mathit{\pi}}^{old}\right\rangle\right|\\ & \leq O(\delta \varepsilon \cdot \sqrt{m})\\ & \leq O(\varepsilon) \,, \end{align*} where we used the fact that \[ (1-\varepsilon) SC \preceq \widetilde{SC} \preceq (1+\varepsilon) SC \Rightarrow -O(\varepsilon) \widetilde{SC}^+ \preceq SC^+ - \widetilde{SC}^+\preceq O(\varepsilon) \widetilde{SC}^+ \,.\] and that \begin{align*} \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^{old}\right)} & \leq \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\right)} + \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left(\boldsymbol{\mathit{\pi}}^{old} - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\right)}\\ & \leq O(m) + \tO{\alpha^{1/2} \beta^{-2}} \cdot T\\ & \leq O(m)\,, \end{align*} where we used the fact that $T = \frac{\varepsilon}{\widehat{\eps} \alpha^{1/2}} \leq \frac{\varepsilon}{\delta \beta^{-2}\alpha^{1/2}} \leq \frac{\sqrt{m}}{\beta^{-2}\alpha^{1/2}} $. 
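The block identity behind replacing $\boldsymbol{\mathit{L}}^+$ by the harmonic extension composed with $\widetilde{SC}^+$, namely $\begin{pmatrix}-\boldsymbol{\mathit{L}}_{CF}\boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}}\end{pmatrix} \boldsymbol{\mathit{L}} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1}\boldsymbol{\mathit{L}}_{FC}\\ \boldsymbol{\mathit{I}}\end{pmatrix} = SC$, can be checked numerically; the following toy sketch (unrelated to the actual data structures) verifies it on a $6$-cycle.
\begin{verbatim}
import numpy as np

# Toy check of the identity P^T L P = SC(L, C) for P = [-L_FF^{-1} L_FC; I],
# on the Laplacian of a 6-cycle with F = {0,1,2} and C = {3,4,5}.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian
F, C = [0, 1, 2], [3, 4, 5]
L_FF, L_FC = L[np.ix_(F, F)], L[np.ix_(F, C)]
L_CF, L_CC = L[np.ix_(C, F)], L[np.ix_(C, C)]
SC = L_CC - L_CF @ np.linalg.solve(L_FF, L_FC)       # Schur complement onto C
P = np.vstack([-np.linalg.solve(L_FF, L_FC), np.eye(len(C))])
assert np.allclose(P.T @ L @ P, SC)
\end{verbatim}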
Now, we will use the sketching lemma (Lemma 5.1, \cite{gao2021fully} v2), which shows that in order to find all entries of \[ \boldsymbol{\mathit{I}}_S \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \begin{pmatrix} -\boldsymbol{\mathit{L}}_{FF} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}}\end{pmatrix}\widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \] with magnitude $\Omega(\varepsilon)$, it suffices to compute the inner products \begin{align*} & \delta \left\langle \boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}, \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}} \end{pmatrix} \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}\right\rangle \\ & = \delta \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle \end{align*} for $i\in[\tO{\varepsilon^{-2}}]$, up to additive accuracy \[ \varepsilon \cdot \left\|\delta \boldsymbol{\mathit{I}}_S \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}} \end{pmatrix} \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\|_2^{-1} \geq \Omega(\varepsilon) \,, \] where we used the fact that \begin{align*} & \left\|\delta \boldsymbol{\mathit{I}}_S \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}} \end{pmatrix} \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\|_2^2\\ & = \delta^2 \langle \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}, \begin{pmatrix}-\boldsymbol{\mathit{L}}_{CF} \boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}} \end{pmatrix} \boldsymbol{\mathit{L}} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}} \end{pmatrix} \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \rangle\\ & = \delta^2 \left\langle \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}, \begin{pmatrix}-\boldsymbol{\mathit{L}}_{CF} \boldsymbol{\mathit{L}}_{FF}^{-1} & \boldsymbol{\mathit{I}} \end{pmatrix} \begin{pmatrix}\boldsymbol{\mathit{L}}_{FF} & \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{L}}_{CF} & \boldsymbol{\mathit{L}}_{CC}\end{pmatrix} \begin{pmatrix}-\boldsymbol{\mathit{L}}_{FF}^{-1} \boldsymbol{\mathit{L}}_{FC} \\ \boldsymbol{\mathit{I}} \end{pmatrix} \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle\\ & = \delta^2 \left\langle \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}, SC \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle\\ & \leq 2 \delta^2 \left\langle \boldsymbol{\mathit{\pi}}^{old}, \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle\\ & \leq O(\delta^2 m)\\ & = O(1)\,. \end{align*} Now, for the second part of the proof, we would like to compute $\boldsymbol{\mathit{v}}$ such that $\left\|\boldsymbol{\mathit{v}}-\boldsymbol{\mathit{v}}^*\right\|_\infty \leq O(\varepsilon)$, where we remind that \[ \boldsymbol{\mathit{v}}^* = \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle \,. 
\]
Note that we already have estimates $\widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ given by $\textsc{DP}^i$ for all $i\in[\tO{\varepsilon^{-2}}]$. We obtain these estimates by calling
\[ \textsc{DP}^i.\textsc{Output}()\,, \]
each call taking time $O(\beta m)$. By the guarantees of Definition~\ref{def:demand_projector}, with high probability we have
\begin{align*} & \delta \left| \left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) -\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle \right| \leq \widehat{\eps} \sqrt{\alpha} T\,, \end{align*}
where we used the fact that
\[ \mathcal{E}_{\boldsymbol{\mathit{r}}}(\delta \cdot \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old}) \leq O(1)\,. \]
Now, since by definition $\textsc{BatchUpdate}$ is called every $\frac{\varepsilon}{\widehat{\eps}\alpha^{1/2}}$ calls to $\textsc{Update}$, we have $T \leq \frac{\varepsilon}{\widehat{\eps}\alpha^{1/2}}$ and so
\begin{align*} & \delta \left| \left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) -\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}^i_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle \right| \leq \varepsilon\,. \end{align*}
This means that, running the algorithm from (Lemma 5.1, \cite{gao2021fully} v2), we can obtain an edge set of size $\tO{\varepsilon^{-2}}$ that contains all edges such that $\left|\rho_e^*\right| \geq c \cdot \varepsilon$ for some constant $c > 0$. By rescaling $\varepsilon$ to get the right constant, we obtain all edges such that $\left|\rho_e^*\right| \geq \varepsilon / 2$ with high probability. The runtime is dominated by the time to get $\widetilde{SC}$ and apply its inverse, and is $\tO{\beta m \varepsilon^{-2}}$.
\paragraph{Success probability} We will argue that $\mathcal{L}$ uses $\textsc{DynamicSC}$ and the $\textsc{DP}^i$ as an oblivious adversary. First of all, note that no randomness is injected into the inputs of $\textsc{DynamicSC}$, as they all come from the inputs of $\mathcal{L}$. Regarding $\textsc{DP}^i$, note that its only output is given by the call to $\textsc{DP}^i.\textsc{Output}$. However, this output is only used to estimate the inner product
\[ \left\langle \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S^i}{\sqrt{\boldsymbol{\mathit{r}}}}\right), \widetilde{SC}^+ \boldsymbol{\mathit{\pi}}^{old} \right\rangle\,, \]
from which we obtain the set of congested edges, which is directly returned from $\mathcal{L}$. Thus, it does not influence the state of $\mathcal{L}$, $\textsc{DynamicSC}$, or any future inputs.
\end{proof}
\section{Deferred Proofs from Section~\ref{sec:demand_projection}}
\subsection{Proof of Lemma~\ref{st_projection1}}
\label{proof_st_projection1}
\begin{proof}
Let $\mathcal{P}_v(u)$ be a random walk that starts from $u$ and stops when it hits $v$.
\begin{align*} p_v^{C\cup\{v,w\}}(u) & = \Pr\left[\mathcal{P}_v(u)\cap C = \emptyset \text{ and } w\notin \mathcal{P}_v(u)\right]\\ & = \Pr\left[\mathcal{P}_v(u)\cap C = \emptyset\right] \cdot \Pr\left[w\notin \mathcal{P}_v(u) \ |\ \mathcal{P}_v(u)\cap C = \emptyset \right]\\ & = p_v^{C\cup\{v\}}(u) \cdot \Pr\left[w\notin \mathcal{P}_v(u) \ |\ \mathcal{P}_v(u)\cap C = \emptyset \right] \end{align*} Consider new resistances $\widehat{r}$, where $\widehat{r}_e = r_e$ for all $e\in E$ not incident to $C$ and $\widehat{r}_e = \infty$ for all $e\in E$ incident to $C$. Also, let $\widehat{\mathit{p}}$ be the hitting probability function for these new resistances. It is easy to see that \[ \Pr\left[w\notin \mathcal{P}_v(u) \ |\ \mathcal{P}_v(u)\cap C = \emptyset \right] = \widehat{\mathit{p}}_v^{\{v,w\}}(u)\,. \] Therefore, we have \begin{align*} p_v^{C\cup\{v,w\}}(u) & = p_v^{C\cup\{v\}}(u) \cdot \widehat{\mathit{p}}_v^{\{v,w\}}(u)\,. \end{align*} Now we will bound $\widehat{\mathit{p}}_v^{\{v,w\}}(u)$. Let $\psi$ be electrical potentials for pushing $1$ unit of flow from $v$ to $w$ with resistances $\widehat{r}$ and let $f$ be the associated electrical flow. We have that \begin{align} \left|\psi_u - \psi_w\right| = |f_e| \widehat{r}_e \leq \widehat{r}_e = r_e \label{1} \end{align} (because $|f_e| \leq 1$ and $e$ is not incident to $C$) and \begin{align} \left|\psi_v - \psi_w\right| = \widehat{R}_{eff}(v,w) \geq R_{eff}(v,w) \label{2} \end{align} Additionally, by well known facts that connect electrical potential embeddings with random walks, we have that \[ \psi_u = \psi_w + \widehat{\mathit{p}}_v^{\{v,w\}}(u) (\psi_v - \psi_w)\,, \] or equivalently \[ \widehat{\mathit{p}}_v^{\{v,w\}}(u) = \frac{\psi_u - \psi_w}{\psi_v - \psi_w} \,. \] Using (\ref{1}) and (\ref{2}), this immediately implies that \begin{align*} \widehat{\mathit{p}}_v^{\{v,w\}}(u) \leq \frac{r_e}{R_{eff}(v,w)}\,. \end{align*} So we have proved that \begin{align*} p_v^{C\cup\{v,w\}}(u) \leq p_v^{C\cup\{v\}}(u) \frac{r_e}{R_{eff}(v,w)} \end{align*} and symmetrically \begin{align*} p_v^{C\cup\{v,u\}}(w) \leq p_v^{C\cup\{v\}}(w) \frac{r_e}{R_{eff}(v,u)}\,. \end{align*} Now, let's look at $\pi_v^{C\cup\{v\}}(B^\top \mathbf{1}_e) = p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w)$. Note that \[ p_v^{C\cup\{v\}}(u) = p_v^{C\cup\{v,w\}}(u) + p_w^{C\cup\{v,w\}}(u) p_v^{C\cup\{v\}}(w) \] which we re-write as \[ p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w) = p_v^{C\cup\{v,w\}}(u) - (1-p_w^{C\cup\{v,w\}}(u)) p_v^{C\cup\{v\}}(w)\leq p_v^{C\cup\{v,w\}}(u)\,. \] Symmetrically, \[ p_v^{C\cup\{v\}}(w) - p_v^{C\cup\{v\}}(u) \leq p_v^{C\cup\{v,u\}}(w)\,. \] From these we conclude that \begin{align*} \left|\pi_v^{C\cup\{v\}}(B^\top \mathbf{1}_e)\right| & = \left|p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w)\right| \\ & \leq \max\left\{p_v^{C\cup\{v,w\}}(u), p_v^{C\cup\{v,u\}}(w)\right\}\\ & \leq \max\left\{p_v^{C\cup\{v\}}(u)\cdot \frac{r_e}{R_{eff}(v,w)}, p_v^{C\cup\{v\}}(w)\cdot\frac{r_e}{R_{eff}(v,u)}\right\}\\ & \leq (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \max\left\{\frac{r_e}{R_{eff}(v,w)}, \frac{r_e}{R_{eff}(v,u)}\right\}\,, \end{align*} which, after dividing by $\sqrt{r_e}$ gives \begin{align*} \left|\pi_v^{C\cup\{v\}}(B^\top \mathbf{1}_e)\right| & \leq (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \cdot \max\left\{\frac{\sqrt{r_e}}{R_{eff}(v,w)}, \frac{\sqrt{r_e}}{R_{eff}(v,u)}\right\}\,. 
\end{align*} \end{proof} \subsection{Proof of Lemma~\ref{estimate1}} \label{proof_estimate1} \begin{proof} For each $u\in V \backslash C$ and $e\in S'$ with $u\in e$, we generate $Z$ random walks $P^1(u),\dots,P^Z(u)$ from $u$ to $C\cup\{v\}$. We set \begin{align*} \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) &= \sum\limits_{e=(u,w)\in S'} \sum\limits_{z=1}^Z \frac{1}{Z} \frac{q_e}{\sqrt{r_e}} \left(1_{\{v \in P^z(u)\}} - 1_{\{v\in P^z(w)\}}\right) \\ &= \sum\limits_{e=(u,w)\in S'} \sum\limits_{z=1}^Z (X_{e,u,z} - X_{e,w,z}) \,, \end{align*} where we have set $X_{e,u,z} = \frac{1}{Z} \frac{q_e}{\sqrt{r_e}} 1_{\{v\in P^z(u)\}} $ and $ X_{e,w,z} = -\frac{1}{Z} \frac{q_e}{\sqrt{r_e}} 1_{\{v\in P^z(w)\}}$. Note that $\underset{P^z(u)}{\mathbb{E}}[X_{e,u,z}] = \frac{1}{Z} \frac{q_e}{\sqrt{r_e}} p_v^{C\cup\{v\}}(u)$ and $\underset{P^z(w)}{\mathbb{E}}[X_{e,w,z}] = -\frac{1}{Z} \frac{q_e}{\sqrt{r_e}} p_v^{C\cup\{v\}}(w)$. This implies that our estimate is unbiased, as \[ \mathbb{E}\left[\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right] = \sum\limits_{e=(u,w)\in S'} \frac{q_e}{\sqrt{r_e}} (p_v^{C\cup\{v\}}(u) - p_v^{C\cup\{v\}}(w)) = \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \,.\] We now need to show that our estimate is concentrated around the mean. To apply the concentration bound in Lemma~\ref{conc1}, we need the following bounds: \[ \sum\limits_{e=(u,w)\in S'} \sum\limits_{z=1}^Z \left(|\mathbb{E}[X_{e,u,z}]| + |\mathbb{E}[X_{e,w,z}]|\right) = \sum\limits_{e=(u,w)\in S'} \frac{|q_e|}{\sqrt{r_e}} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) := E\] \[ \underset{\substack{e=(u,w)\in S'\\ z\in[Z]}}{\max} \max\{|X_{e,u,z}|, |X_{e,w,z}|\} \leq \max_{e\in S'} \frac{1}{Z\sqrt{r_e}} := M \,.\] So now for any $t\in [0,E]$ we have \begin{align*} & \Pr\left[\left|\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top\frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}})\right| > t\right] \\ & \leq 2 \exp\left(-\frac{t^2}{6EM}\right)\\ & = 2 \exp\left(-\frac{Z t^2}{6\sum\limits_{e=(u,w)\in S'} \frac{|q_e|}{\sqrt{r_e}} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w))\max_{e\in S'} \frac{1}{\sqrt{r_e}} }\right)\\ & \leq 2 \exp\left(-\frac{Z t^2}{6\sum\limits_{e=(u,w)\in S'} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \max_{e\in S'} \frac{1}{r_e} }\right)\\ & \leq 2 \exp\left(-\frac{Z t^2 c^2 R_{eff}(C,v)}{6 \sum\limits_{e=(u,w)\in S'} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) }\right)\\ & \leq 2 \exp\left(-Z t^2 c^2 R_{eff}(C,v) / \tO{\beta^{-2}}\right)\\ & \leq \frac{1}{n^{100}}\,, \end{align*} where the last inequality follows by setting $Z = \tO{\frac{\log n \log \frac{1}{\beta}}{\delta_1'^2}}$ and $t = \frac{\delta_1'}{\beta c \sqrt{R_{eff}(C,v)}}$. 
Note that we have used the fact that $R_{eff}(C,v) \leq r_e / c^2$ for all $e\in S'$, as well as the congestion reduction property (Definition~\ref{def:cong_red}) \[ \sum\limits_{e=(u,w)\in E\backslash E(C)} (p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)) \leq \tO{1/\beta^2} \,.\] \end{proof} \subsection{Proof of Lemma~\ref{estimate1_final}} \label{proof_estimate1_final} \begin{proof} In order to compute $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S}}{\sqrt{\boldsymbol{\mathit{r}}}})$ we use Lemma~\ref{estimate1} with demand $\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}$ and error parameter $\delta_1' > 0$, where \[ \{e\in S\ |\ R_{eff}(C,v) \leq r_e / (2c^2) \} \subseteq S' \subseteq \{e\in S\ |\ R_{eff}(C,v) \leq r_e / c^2 \}\,, \] and $c > 0$ will be defined later. Note that such a set $S'$ can be trivially computed given our effective resistance estimate ${\widetilde{R}}_{eff}(C,v) \approx_{2} R_{eff}(C,v)$. However, algorithmically we do not directly compute $S'$, but instead find its intersection with the edges from which a sampled random walk ends up at $v$. (Using the congestion reduction property of $C$, this can be done in $\tO{\delta_1'^{-2} \beta^{-2} \log n\log\frac{1}{\beta}}$ time just by going through all random walks that contain $v$.) Now, Lemma~\ref{estimate1} guarantees that \begin{align*} \left| \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \leq \frac{\delta_1'}{\beta c\sqrt{R_{eff}(C,v)}} \end{align*} given access to $O(\delta'^{-2} \log n \log \frac{1}{\beta})$ random walks for each $u\in V\backslash C$, $e\in S'$ with $u\in e$. Then, we set $\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}) := \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}})$, and we have that \begin{equation} \begin{aligned} & \left| \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \\ & \leq \left|\widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| + \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \\ & \leq \frac{\delta_1'}{\beta c \sqrt{R_{eff}(C,v)}} + \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right|\,. 
\end{aligned} \label{approx1} \end{equation} Now, to bound the second term, we use Lemma~\ref{st_projection1}, which gives \begin{align*} \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| & \leq \sum\limits_{e=(u,w)\in S\backslash S'} \left(p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)\right) \frac{\sqrt{r_e}}{R_{eff}(v,e)}\,. \end{align*} Now, note that for each $e\in S\backslash S'$, $e$ is close to $C$, but $v$ is far from $C$, so $R_{eff}(v,e)$ should be large. Specifically, by Lemma~\ref{lem:multi_effective_resistance} we have $R_{eff}(v,e) \geq \frac{1}{2} \min\left\{R_{eff}(v,u), R_{eff}(v,w)\right\}$, and by the triangle inequality \[ \min\{R_{eff}(v,u), R_{eff}(v,w) \} \geq R_{eff}(C,v) - \max\{R_{eff}(C,u), R_{eff}(C,w)\} \geq R_{eff}(C,v) - 2 R_{eff}(C,e) \,. \] By the fact that $e$ is $\gamma$-important and that \[ e \notin S' \supseteq \{e\in S\ |\ R_{eff}(C,v) \leq r_e / (2c^2) \}\,, \] we have $R_{eff}(C,e) \leq r_e / \gamma^2 \leq 2 c^2 R_{eff}(C,v) / \gamma^2$, so \begin{align*} \frac{\sqrt{r_e}}{R_{eff}(v,e)} & \leq \frac{1}{1/2 - 2c^2 / \gamma^2} \frac{\sqrt{r_e}}{R_{eff}(C,v)}\\ & \leq \frac{c}{1/2 - 2c^2 / \gamma^2} \frac{1}{\sqrt{R_{eff}(C,v)}}\,. \end{align*} By using the congestion reduction property (Definition~\ref{def:cong_red}), we obtain \begin{align} \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S'}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| & \leq \frac{c}{1/2 - 2c^2 / \gamma^2} \frac{1}{\sqrt{R_{eff}(C,v)}} \tO{\frac{1}{\beta^2}} \,. \label{eq:far_edges_projection} \end{align} Setting $c = \min\{\delta_1 / \tO{\beta^{-2}}, \gamma / 4\}$ and $\delta_1' = \beta c \cdot \delta_1 / 2$, (\ref{approx1}) becomes \begin{align*} & \left| \widetilde{\mathit{\pi}}_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \leq \frac{\delta_1}{\sqrt{R_{eff}(C,v)}} \,. \end{align*} Also, the number of random walks needed for each valid pair $(u,e)$ is \[ \tO{\delta_1'^{-2} \log n \log \frac{1}{\beta}} = \tO{\delta_1^{-2} \beta^{-2} c^{-2} \log n \log \frac{1}{\beta}} = \tO{\left(\delta_1^{-4} \beta^{-6} + \delta_1^{-2} \beta^{-2} \gamma^{-2}\right) \log n \log \frac{1}{\beta}}\]\,. For the last part of the lemma, we let $S'' = \{e\in S\ |\ R_{eff}(C,v) \leq r_e / \left(\gamma/4\right)^2\}$ and write \begin{align*} & \left| \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \\ & \leq \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S''}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| + \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S''}}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right| \,. 
\end{align*} For the first term, \begin{align*} \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_{S''}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| & \leq \sum\limits_{e=(u,w)\in S''} \left(p_v^{C\cup\{v\}}(u) + p_v^{C\cup\{v\}}(w)\right) \frac{1}{\sqrt{r_e}}\\ & \leq \frac{1}{(\gamma/4) \sqrt{R_{eff}(C,v)}} \tO{\frac{1}{\beta^2}}\,, \end{align*} and for the second term we have already proved in (\ref{eq:far_edges_projection}) (after replacing $c$ by $\gamma/4$) that \begin{align*} \left|\pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S - \boldsymbol{\mathit{q}}_{S''}}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| & \leq \frac{\gamma/4}{\sqrt{R_{eff}(C,v)}} \tO{\frac{1}{\beta^2}}\,. \end{align*} Putting these together, we conclude that \begin{align*} \left| \pi_v^{C\cup\{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right| \leq \frac{1}{\gamma \sqrt{R_{eff}(C,v)}}\cdot \tO{\frac{1}{\beta^2}} \,. \end{align*} \end{proof} \subsection{Proof of Lemma~\ref{conc2}} \label{proof_conc2} \begin{proof} For any $I\subseteq \mathbb{R}$, we define $F_I = \{i\in[n]\ |\ |\bar{\phi}_i| \in I\}$. For some $0 < a < b$ to be defined later, we partition $[n]$ as \[ [n] = F_{I_0} \cup F_{I_1}\cup \dots\cup F_{I_K}\cup F_{I_{K+1}} \,,\] where $I_0 = [0,a)$, $I_{K+1} = [b,\infty)$, and $I_1,\dots,I_K$ is a partition of $[a,b)$ into $K = O(\log \frac{b}{a})$ intervals such that for all $k\in[K]$ we have $\Phi_k := \underset{i\in F_k}{\max}\, |\bar{\phi}_i| \leq 2\cdot \underset{i\in F_k}{\min}\, |\bar{\phi}_i|$. A union bound gives \begin{align*} \Pr\left[\left|\langle \widetilde{\boldsymbol{\mathit{\pi}}} - \boldsymbol{\mathit{\pi}}, \boldsymbol{\mathit{\bar{\phi}}} \rangle\right| > t\right] \leq \sum\limits_{k=0}^{K+1} \Pr\left[\left|\sum\limits_{i\in I_k} (\widetilde{\mathit{\pi}}_i - \pi_i) \bar{\phi}_i\right| > t / (K+2)\right]\,. \end{align*} We first examine $I_0$ and $I_{K+1}$ separately. Note that \begin{align*} \left|\sum\limits_{i\in F_{I_0}} (\widetilde{\mathit{\pi}}_i - \pi_i) \bar{\phi}_i\right| \leq \|\widetilde{\boldsymbol{\mathit{\pi}}} - \boldsymbol{\mathit{\pi}}\|_1 a \leq 2 a \end{align*} and \begin{align*} \left|\sum\limits_{i\in F_{I_{K+1}}} (\widetilde{\mathit{\pi}}_i - \pi_i) \bar{\phi}_i\right| \leq \sum\limits_{i\in F_{I_{K+1}}} \widetilde{\mathit{\pi}}_i\left|\bar{\phi}_i\right| + \sum\limits_{i\in F_{I_{K+1}}} \pi_i \left|\bar{\phi}_i\right| \leq \frac{1}{b} \sum\limits_{i\in F_{I_{K+1}}} |\widetilde{\mathit{\pi}}_i - \pi_i| \bar{\phi}_i^2 \leq \frac{1}{b}\sum\limits_{i\in F_{I_{K+1}}} \widetilde{\mathit{\pi}}_i |\bar{\phi}_i| + \frac{\left\|\bar{\phi}\right\|_{\boldsymbol{\mathit{\pi}},2}^2}{b} \end{align*} But note that by picking $b \geq \max\left\{\frac{(K+2) Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}})}{t}, \sqrt{Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}}) \cdot n^{101}}\right\}$, we have $\frac{Var_{\boldsymbol{\mathit{\pi}}}(\bar{\phi})}{b} \leq t / (K+2)$ and also for any $i\in F_{K+1}$ we have $\pi_i \leq \frac{Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}})}{b^2} \leq \frac{1}{n^{101}}$. This means that $\Pr[\widetilde{\mathit{\pi}}_i \neq 0] \leq \frac{1}{n^{101}}$, and so by union bound \[ \Pr\left[\sum\limits_{i\in F_{I_{K+1}}} \widetilde{\mathit{\pi}}_i |\bar{\phi}_i| \neq 0\right] \leq \frac{1}{n^{100}}\,.\] Now, we proceed to $F_1,\dots,F_K$. 
We draw $Z$ samples $x_1,\dots,x_Z$ from $\boldsymbol{\mathit{\pi}}$. Then, we also define the following random variables for $z\in [Z]$ and $i\in[n]$: \[ X_{z,i} = \begin{cases}1 & \text{ if $x_z = i$} \\ 0 & \text{ otherwise }\end{cases}\] for $i\in[n]$ and \[ Y_{z,k} = \frac{1}{Z} \sum\limits_{i\in F_k} X_{z,i} \bar{\phi}_i \] This allows us to write $\sum\limits_{i\in F_k} \widetilde{\mathit{\pi}}_i \bar{\phi}_i = \sum\limits_{z=1}^Z Y_{z,k}$. Fix $k\in[K]$. We will apply Lemma~\ref{conc1} on the random variable $\sum\limits_{z=1}^Z Y_{z,k}$. We first compute \begin{align*} \sum\limits_{z=1}^Z |\mathbb{E}[Y_{z,k}]| = \left|\sum\limits_{i\in F_k} \pi_i \bar{\phi}_i\right| \leq \sum\limits_{i\in F_k} \pi_i |\bar{\phi}_i|:= E_k \end{align*} and \begin{align*} \underset{z\in[Z]}{\max}\, |Y_{z,k}| \leq \frac{\Phi_k}{Z} := M_k\,. \end{align*} Therefore we immediately have $E_kM_k \leq \frac{2}{Z} \sum\limits_{i\in F_k} \pi_i \bar{\phi}_i^2 \leq \frac{2}{Z} \cdot Var_{\pi}(\bar{\phi}) $. By Lemma~\ref{conc1}, \begin{align*} & \Pr\left[\left|\sum\limits_{z=1}^Z Y_{z,k} - \mathbb{E}\left[\sum\limits_{z=1}^Z Y_{z,k}\right]\right| > t / (K+2)\right] \\ & \leq 2 \exp\left(-\frac{t^2}{6E_kM_k (K+2)^2}\right)\\ & \leq 2 \exp\left(-\frac{Z t^2}{12\cdot Var_{\pi}(\bar{\phi}) (K+2)^2}\right)\,.\\ \end{align*} Summarizing, and using the fact that $K = \tO{\log (n\cdot Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}}) / t^2)}$, we get \[\Pr\left[\left|\langle \widetilde{\boldsymbol{\mathit{\pi}}} - \boldsymbol{\mathit{\pi}}, \boldsymbol{\mathit{\bar{\phi}}}\rangle\right| > t\right] \leq \tO{\frac{1}{n^{100}}} + 2\tO{\log \left(n \cdot Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}}) / t^2\right)}\exp\left(- \frac{Z t^2}{12 \cdot\tO{Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\bar{\phi}}}) \log^2 n}} \right)\,.\] \end{proof} \subsection{Proof of Lemma~\ref{lem:variance}} \label{proof_lem_variance} \begin{proof} Let $S_0 = \emptyset$ and and for each $k\in\mathbb{N}$ let \[S_k = \{i\in[n]\backslash S_{k-1}\ :\ \phi_i^2 \leq 2^{k+1} R_{eff}(C,v)\}\,.\] Fix some $k \geq 2$. Note that $\phi_i^2 > 2^k R_{eff}(C,v)$ for all $i\in S_k$, implying $\frac{2^{k} R_{eff}(C,v)}{R_{eff}(S_k,v)} < E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}}) \leq 1$, and so $R_{eff}(S_k,v) > 2^k R_{eff}(C,v) \geq 4 R_{eff}(C,v)$. As \[ R_{eff}(C,v) \geq \frac{1}{4} \min\{R_{eff}(S_k,v), R_{eff}(C\backslash S_k,v)\} > \min\{R_{eff}(C,v), \frac{1}{4} R_{eff}(C\backslash S_k,v)\} \,,\] we have $R_{eff}(C\backslash S_k, v) < 4 R_{eff}(C,v)$. This implies that $\left\|\pi_{S_k}^C(\mathbf{1}_v)\right\|_1 \leq \frac{R_{eff}(C\backslash S_k, v)}{R_{eff}(S_k, v)} < \frac{1}{2^{k-2}}$. So we conclude that $Var_{\boldsymbol{\mathit{\pi}}}(\boldsymbol{\mathit{\phi}}) = \sum\limits_{i\in S_k} \pi_i \phi_i^2 \leq \frac{1}{2^{k-2}} \cdot 2^{k+1} R_{eff}(C,v) = 8 R_{eff}(C,v)$. \end{proof} \subsection{Proof of Lemma~\ref{lem:insert1}} \label{proof_lem_insert1} \begin{proof} The first part of the statement is given by applying Lemma~\ref{estimate1_final}, and we see that it requires $\tO{\delta_1^{-4} \beta^{-6} + \delta_1^{-2} \beta^{-2} \gamma^{-2}}$ random walks for each $u\in V\backslash C$ and $e\in E\backslash E(C)$ with $u\in e$. 
For the second part we use the fact that the change in the demand projection after inserting $v$ into $C$ is given by \[ \boldsymbol{\mathit{\pi}}^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \boldsymbol{\mathit{\pi}}^{C}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) = \pi_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\cdot (\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v))\,, \] and therefore we can estimate this update via \[ \widetilde{\mathit{\pi}}_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\cdot (\mathbf{1}_v - \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v))\,. \] where $\widetilde{\mathit{\pi}}_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)$ is the estimate we computed using Lemma~\ref{estimate1_final} and $\widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v)$ is obtained by applying Lemma~\ref{estimate2}. Let us show that this estimation indeed introduces only a small amount of error. For any $\boldsymbol{\mathit{\phi}}$, such that $E_r(\boldsymbol{\mathit{\phi}}) \leq 1$, we can write \begin{align*} &\left| \left\langle \widetilde{\mathit{\pi}}_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\cdot (\mathbf{1}_v - \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v)) - \pi_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\cdot (\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)) , \boldsymbol{\mathit{\phi}} \right\rangle \right| \\ &\leq \left| \left\langle \pi_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\cdot (\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v) - \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v)) , \boldsymbol{\mathit{\phi}} \right\rangle \right|\\ &+ \left| \left\langle \left( \widetilde{\mathit{\pi}}_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) - \pi_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)\right)\cdot (\mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)) , \boldsymbol{\mathit{\phi}} \right\rangle \right|\\ &+ \left| \left\langle \left(\widetilde{\mathit{\pi}}_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right)- \pi_v^{C\cup \{v\}}\left(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}\right) \right)\cdot (\boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v) - \widetilde{\boldsymbol{\mathit{\pi}}}^C(\mathbf{1}_v)) , \boldsymbol{\mathit{\phi}} \right\rangle \right|\,. \end{align*} At this point we can bound these quantities using Lemmas~\ref{estimate1_final} and~\ref{estimate2}. It is important to notice that they require that $S$ is a set of $\gamma$-important edges, for some parameter $\gamma$. Our congestion reduction subset $C$ keeps increasing due to vertex insertions. 
This, however, means that effective resistances between any vertex in $V\setminus C$ and $C$ can only decrease, and therefore the set of important edges can only increase. Thus we are still in a valid position to apply these lemmas. Using $E_{\boldsymbol{\mathit{r}}}(\boldsymbol{\mathit{\phi}})\leq 1$, which allows us to write: \[ \langle \mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v)), \boldsymbol{\mathit{\phi}} \rangle \leq \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \mathbf{1}_v - \boldsymbol{\mathit{\pi}}^C(\mathbf{1}_v) \right) = R_{eff}(v,C)\,, \] we can continue to upper bound the error by: \begin{align*} &\frac{\tO{\gamma^{-1} \beta^{-2}}}{\sqrt{R_{eff}(C,v)}}\cdot \delta_2 \sqrt{R_{eff}(C,v)} + \frac{\delta_1}{\sqrt{R_{eff}(C,v)}} \cdot \sqrt{R_{eff}(C,v)} + \frac{\delta_1}{\sqrt{R_{eff}(C,v)}} \cdot \delta_2 \sqrt{R_{eff}(C,v)}\\ &= \delta_2 \cdot \tO{\gamma^{-1} \beta^{-2}} + \delta_1 + \delta_1\delta_2 \,. \end{align*} Setting $\delta_1 = \widehat{\eps} / 2$ and $\delta_2 = {\widehat{\eps} \beta^2 \gamma}/\tO{1}$, we conclude that w.h.p. each operation introduces at most $\widehat{\eps}$ additive error in the maintained estimate for $\left\langle \widetilde{\boldsymbol{\mathit{\pi}}}^C(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}}),\boldsymbol{\mathit{\phi}} \right\rangle$. Per Lemma~\ref{estimate1_final}, estimating one coordinate of the demand projection requires \[ \tO{\delta_1^{-4} \beta^{-6} + \delta_1^{-2} \beta^{-2} \gamma^{-2}} = \tO{\widehat{\eps}^{-4} \beta^{-6} + \widehat{\eps}^{-2} \beta^{-2} \gamma^{-2}}\ \] random walks, and estimating $\widetilde{\boldsymbol{\mathit{\pi}}}^C(\boldsymbol{\mathit{B}}^\top \frac{\boldsymbol{\mathit{q}}_S}{\sqrt{\boldsymbol{\mathit{r}}}})$, per Lemma~\ref{estimate2}, requires \[ \tO{\delta_2^{-2} } = \tO{\widehat{\eps}^{-2} \beta^{-4} \gamma^{-2}} \] random walks. This concludes the proof. \end{proof} \section{The \textsc{Checker} Data Structure} \label{sec:checker} \begin{theorem}[Theorem 3, \cite{gao2021fully}] There is a \textsc{Checker} data structure supporting the following operations with the given runtimes against oblivious adversaries, for parameters $0 < \beta_{\textsc{Checker}},\varepsilon < 1$ such that $\beta_{\textsc{Checker}} \geq \tOm{\varepsilon^{-1/2}/m^{1/4}}$. \begin{itemize} \item{\textsc{Initialize}$(\boldsymbol{\mathit{f}}, \varepsilon, \beta_{\textsc{Checker}})$: Initializes the data structure with slacks $\boldsymbol{\mathit{s}}^+ = \boldsymbol{\mathit{u}} - \boldsymbol{\mathit{f}}$, $\boldsymbol{\mathit{s}}^- = \boldsymbol{\mathit{f}}$, and resistances $\boldsymbol{\mathit{r}} = \frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}$. Runtime: $\tO{m\beta_{\textsc{Checker}}^{-4} \varepsilon^{-4}}$. } \item{\textsc{Update}$(e,\boldsymbol{\mathit{f}}')$: Set $s_e^+ = u_e - f_e'$, $s_e^- = f_e'$, and $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$. Runtime: Amortized $\tO{\beta_{\textsc{Checker}}^{-2} \varepsilon^{-2}}$. } \item{\textsc{TemporaryUpdate}$(e,\boldsymbol{\mathit{f}}')$: Set $s_e^+ = u_e - f_e'$, $s_e^- = f_e'$, and $r_e = \frac{1}{(s_e^+)^2} + \frac{1}{(s_e^-)^2}$. Runtime: Worst case $\tO{(K \beta_{\textsc{Checker}}^{-2} \varepsilon^{-2})^2}$, where $K$ is the number of $\textsc{TemporaryUpdate}$s that have not been rolled back using $\textsc{Rollback}$. All $\textsc{TemporaryUpdate}$s should be rolled back before the next call to $\textsc{Update}$. 
} \item{\textsc{Rollback}$()$: Rolls back the last $\textsc{TemporaryUpdate}$ if it exists. The runtime is the same as the original operation. } \item{\textsc{Check}$(e, \boldsymbol{\mathit{\pi}}_{old})$: Returns $\widetilde{f}_e$ such that $\sqrt{r_e}\left|\widetilde{f}_e - \widetilde{f}_e^*\right| \leq \varepsilon$, where \[ \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \left(\boldsymbol{\mathit{B}}^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \right)^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,, \] for $\delta = 1/\sqrt{m}$. Additionally, a vector $\boldsymbol{\mathit{\pi}}_{old}$ that is supported on $C$ such that \[ \mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}_{old} - \boldsymbol{\mathit{\pi}}^{C} \left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) \right) \leq \varepsilon^2 m / 4 \] is provided, where $C$ is the vertex set of the dynamic sparsifier in the $\textsc{DynamicSC}$ that is maintained internally. Runtime: Worst case $\tO{\left(\beta_{\textsc{Checker}} m + (K\beta_{\textsc{Checker}}^{-2} \varepsilon^{-2})^2\right) \varepsilon^{-2} }$, where $K$ is the number of $\textsc{TemporaryUpdate}$s that have not been rolled back. Additionally, the output of \textsc{Check}$(e)$ is independent of any previous calls to \textsc{Check}. } \end{itemize} Finally, all calls to $\textsc{Check}$ return valid outputs with high probability. The total number of \textsc{Update}s and \textsc{TemporaryUpdate}s that have not been rolled back should always be $O(\beta_{\textsc{Checker}} m)$. \label{thm:checker} \end{theorem} This theorem is from~\cite{gao2021fully}. The only difference is in the guarantee of $\textsc{Check}$. We will now show how it can be implemented. Let \[ \boldsymbol{\mathit{\widetilde{f}}}^* = \delta g(\boldsymbol{\mathit{s}}) - \delta \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\,. \] Let $\textsc{DynamicSC}$ be the underlying Schur complement data structure. We first add the endpoints $u,w$ of $e$ as terminals by calling \[ \textsc{DynamicSC}.\textsc{TemporaryAddTerminals}(\{u,w\}) \] so that the new Schur complement is on the vertex set $C' = C\cup\{u,w\}$. Then, we set \[ \boldsymbol{\mathit{\phi}} = -\widetilde{SC}^+ \boldsymbol{\mathit{\pi}}_{old}\] and $\widetilde{f}_e = (\phi_u - \phi_w) / \sqrt{r_e}$, where $\widetilde{SC}$ is the output of $\textsc{DynamicSC}.\widetilde{SC}()$. Equivalently, note that \[ \widetilde{f}_e = \delta \cdot \mathbf{1}_e^\top \boldsymbol{\mathit{R}}^{-1} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \boldsymbol{\mathit{\pi}}_{old}\,.\] We will show that $\sqrt{r_e}\left|\widetilde{f}_e - \widetilde{f}_e^*\right| \leq \varepsilon$. 
We write \begin{align*} & \sqrt{r_e} \left|\widetilde{f}_e - \widetilde{f}_e^*\right| \\ & \leq \left\|\delta \sqrt{\boldsymbol{\mathit{r}}} g(\boldsymbol{\mathit{s}})\right\|_\infty + \left\|\delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}}) - \boldsymbol{\mathit{\pi}}^{C'}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right)\right)\right\|_\infty\\ & + \left\|\delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left( \boldsymbol{\mathit{\pi}}^{C'}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) \right)\right\|_\infty + \left\|\delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}_{old}\right)\right\|_\infty\,. \end{align*} For the first term, \[ \left\|\delta \sqrt{\boldsymbol{\mathit{r}}} g(\boldsymbol{\mathit{s}})\right\|_\infty = \left\|\delta \frac{\frac{1}{\boldsymbol{\mathit{s}}^+} - \frac{1}{\boldsymbol{\mathit{s}}^-}}{\sqrt{\frac{1}{(\boldsymbol{\mathit{s}}^+)^2} + \frac{1}{(\boldsymbol{\mathit{s}}^-)^2}}}\right\|_\infty \leq \delta \leq \varepsilon / 10\,.\] Now, by the fact that $C'$ is a $\beta_{\textsc{Checker}}$-congestion reduction subset by definition in $\textsc{DynamicSC}$, Lemma~\ref{lem:non-projected-demand-contrib} immediately implies that the second term is $\leq \delta \cdot \tO{\beta_{\textsc{Checker}}^{-2}} \leq \varepsilon/10$. For the third term, we apply Lemma~\ref{lem:old_projection_approximate}, which shows that \begin{align*} & \left\|\delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left( \boldsymbol{\mathit{\pi}}^{C'}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) \right)\right\|_\infty\\ & \leq \delta \cdot \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}}\left( \boldsymbol{\mathit{\pi}}^{C'}\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) \right)}\\ & \leq \delta \cdot \tO{\beta_{\textsc{Checker}}^{-2}}\\ & \leq \varepsilon / 10\,, \end{align*} as the resistances don't change and we only have two terminal insertions from $C$ to $C'$. Finally, the fourth term is \[\left\|\delta \boldsymbol{\mathit{R}}^{-1/2} \boldsymbol{\mathit{B}} \boldsymbol{\mathit{L}}^+ \left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}_{old}\right)\right\|_\infty \leq \delta \cdot \sqrt{\mathcal{E}_{\boldsymbol{\mathit{r}}} \left(\boldsymbol{\mathit{\pi}}^C\left(\boldsymbol{\mathit{B}}^\top g(\boldsymbol{\mathit{s}})\right) - \boldsymbol{\mathit{\pi}}_{old}\right)} \leq \delta \cdot \varepsilon \sqrt{m} / 2 = \varepsilon / 2\,. \] We conclude that $\sqrt{r_e}\left|\widetilde{f}_e - \widetilde{f}_e^*\right| \leq \varepsilon$. Finally, we call $\textsc{DynamicSC}.\textsc{Rollback}$ to undo the terminal insertions. 
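As a rough illustration of the potential-based computation of $\widetilde{f}_e$ (with dense linear algebra standing in for the actual solver calls, and the scaling by $\delta$ and sign conventions following the formulas above):
\begin{verbatim}
import numpy as np

# Hypothetical sketch of Check(e, pi_old) after both endpoints of e have been
# temporarily added as terminals: solve against the approximate Schur
# complement and read off the potential drop across e.  SC_tilde is the matrix
# returned by DynamicSC and idx maps terminals of C' to its rows/columns.
def check_edge(SC_tilde, pi_old, idx, e, r_e):
    phi = -np.linalg.pinv(SC_tilde) @ pi_old   # a Laplacian solver in practice
    u, w = e
    return (phi[idx[u]] - phi[idx[w]]) / np.sqrt(r_e)
\end{verbatim}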
The runtime of this operation is dominated by the call to $\textsc{DynamicSC}.\widetilde{SC}()$, which takes time $\tO{(\beta_{\textsc{Checker}} m + (K\beta_{\textsc{Checker}}^{-2} \varepsilon^{-2})^2)\varepsilon^{-2}}$. \end{document}
arXiv
Do Curtmola et al.'s IND-CKA1/2 security definitions protect against search pattern leakage?

In the article Searchable Symmetric Encryption: Improved Definitions and Efficient Constructions, Curtmola et al. propose adaptive and non-adaptive (indistinguishability and simulator-based) security definitions for searchable encryption schemes, conventionally called IND-CKA1 and IND-CKA2. My question is: Do the IND-CKA1/2 security definitions guarantee that the search pattern is not leaked (i.e., that an attacker cannot distinguish whether two issued trapdoors are generated with the same keywords)?

In their article, they mention that [...] the security notion achieved for SSE is that nothing is leaked beyond the access pattern and the search pattern [...]

Further, in their article, Bösch et al. state that Curtmola et al. review existing security definitions for searchable encryption and propose new indistinguishability and simulation-based definitions that address the shortcomings of the existing definitions. At the same time they loosen the character of SSE by allowing the leakage of a user's search pattern.

So it seems pretty obvious that the definition should not guarantee search pattern hiding. Nevertheless, taking a look at the IND-CKA2 definition:

$\mathbf{\mathrm{Ind}}^{*}_{\mathrm{SSE},\mathcal{A}}(k) \\
\;K\leftarrow \mathrm{Gen}(1^k)\\
\;b\overset{\$}{\leftarrow}\{0,1\}\\
\;(\mathrm{st}_{\mathcal{A}},\mathbf{D}_{0},\mathbf{D}_{1})\leftarrow\mathcal{A}_{0}(1^k)\\
\;(I_{b},\mathbf{c}_{b})\leftarrow\mathrm{Enc}_{K}(\mathbf{D}_{b})\\
\;(\mathrm{st}_{\mathcal{A}},w_{0,1},w_{1,1})\leftarrow\mathcal{A}_{1}(\mathrm{st}_\mathcal{A},I_{b})\\
\;t_{b,1}\leftarrow\mathrm{Trpdr}_{K}(w_{b,1})\\
\;\mbox{for }2\le i\le q,\\
\quad(\mathrm{st}_{\mathcal{A}},w_{0,i},w_{1,i})\leftarrow\mathcal{A}_{i}(\mathrm{st}_{\mathcal{A}},I_{b},c_{b},t_{b,1},\ldots,t_{b,i-1})\\
\quad t_{b,i}\leftarrow\mathrm{Trpdr}_{K}(w_{b,i})\\
\;\mbox{let }\mathbf{t}_{b}=(t_{b,1},\ldots,t_{b,q})\\
\;b'\leftarrow \mathcal{A}_{q+1}(\mathrm{st}_{\mathcal{A}},I_{b},\mathbf{c}_{b},\mathbf{t}_{b})\\
\;\mbox{if }b'=b\mbox{, output }1\\
\;\mbox{otherwise output }0$

one sees that any algorithm able to distinguish whether two trapdoors encode the same words or not directly breaks IND-CKA2, which would mean that IND-CKA2 protects against search pattern leakage. For instance, and very informally, in the first trapdoor query $\mathcal{A}$ can set $w_{0,1}=w_{1,1}=w_{1}$ for some fixed $w_{1}$ and receive $t_{b,1}=\mathrm{Trpdr}_{K}(w_{1})$. In the second query, it can issue $w_{0,2}=w_{1}$ and $w_{1,2}\neq w_{1}$. Then $t_{b,2}$ either encodes $w_{1}$ or some other word, and $\mathcal{A}$ can guess $b$ by guessing whether $t_{b,1}$ and $t_{b,2}$ encode the same word ($b=0$) or not ($b=1$). Continuing in this fashion, since $\mathcal{A}$ breaks the "search pattern challenge", $\mathcal{A}$ has a non-negligible advantage in distinguishing $b$.

The condition $\tau(\mathbf{D}_{0}, w_{0,1},\ldots,w_{0,q})=\tau(\mathbf{D}_{1}, w_{1,1},\ldots,w_{1,q})$ rules out the possibility that $\mathcal{A}$ issues the keywords as I pointed out in the question, since the output of $\tau$ includes the search pattern matrix defined on page 9. This has the effect of weakening the definition to allow search pattern leakage. As a note, the stronger definition in Shen et al. [1] does not include a restriction of this type, and so it captures the search pattern protection property.
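To make the trace condition concrete, here is a tiny illustration (my own notation and code, not taken from either paper): the search pattern of a query sequence records which queries repeat a keyword, and the attack sketched in the question needs left/right keyword sequences with different search patterns, which the trace-equality condition forbids.

    def search_pattern(words):
        # q x q matrix with a 1 wherever two queries use the same keyword
        return [[int(a == b) for b in words] for a in words]

    left, right = ["w1", "w1"], ["w1", "w2"]
    print(search_pattern(left) == search_pattern(right))  # False, so this query pair is disallowed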
"Predicate Privacy in Encryption Systems." In Theory of Cryptography, edited by Omer Reingold, 5444:457–73. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009. Not the answer you're looking for? Browse other questions tagged security-definition searchable-encryption or ask your own question. Perfect security definitions Proof that IND$-CPA implies IND-CPA? Difference left-or-right CPA security, IND-CPA security Are the definitions of IND-CCA secure and of IND-CCA secure under standard model identical? IND-CPA security of CTR mode security against CPA attack? Does GCM (or GHASH) only provide 64-bit security against forgeries? Comparison of security definitions for signatures
CommonCrawl
Bulletin of the London Mathematical Society (1) Combinatorics, Probability and Computing (1) By Brittany L. Anderson-Montoya, Heather R. Bailey, Carryl L. Baldwin, Daphne Bavelier, Jameson D. Beach, Jeffrey S. Bedwell, Kevin B. Bennett, Richard A. Block, Deborah A. Boehm-Davis, Corey J. Bohil, David B. Boles, Avinoam Borowsky, Jessica Bramlett, Allison A. Brennan, J. Christopher Brill, Matthew S. Cain, Meredith Carroll, Roberto Champney, Kait Clark, Nancy J. Cooke, Lori M. Curtindale, Clare Davies, Patricia R. DeLucia, Andrew E. Deptula, Michael B. Dillard, Colin D. Drury, Christopher Edman, James T. Enns, Sara Irina Fabrikant, Victor S. Finomore, Arthur D. Fisk, John M. Flach, Matthew E. Funke, Andre Garcia, Adam Gazzaley, Douglas J. Gillan, Rebecca A. Grier, Simen Hagen, Kelly Hale, Diane F. Halpern, Peter A. Hancock, Deborah L. Harm, Mary Hegarty, Laurie M. Heller, Nicole D. Helton, William S. Helton, Robert R. Hoffman, Jerred Holt, Xiaogang Hu, Richard J. Jagacinski, Keith S. Jones, Astrid M. L. Kappers, Simon Kemp, Robert C. Kennedy, Robert S. Kennedy, Alan Kingstone, Ioana Koglbauer, Norman E. Lane, Robert D. Latzman, Cynthia Laurie-Rose, Patricia Lee, Richard Lowe, Valerie Lugo, Poornima Madhavan, Leonard S. Mark, Gerald Matthews, Jyoti Mishra, Stephen R. Mitroff, Tracy L. Mitzner, Alexander M. Morison, Taylor Murphy, Takamichi Nakamoto, John G. Neuhoff, Karl M. Newell, Tal Oron-Gilad, Raja Parasuraman, Tiffany A. Pempek, Robert W. Proctor, Katie A. Ragsdale, Anil K. Raj, Millard F. Reschke, Evan F. Risko, Matthew Rizzo, Wendy A. Rogers, Jesse Q. Sargent, Mark W. Scerbo, Natasha B. Schwartz, F. Jacob Seagull, Cory-Ann Smarr, L. James Smart, Kay Stanney, James Staszewski, Clayton L. Stephenson, Mary E. Stuart, Breanna E. Studenka, Joel Suss, Leedjia Svec, James L. Szalma, James Tanaka, James Thompson, Wouter M. Bergmann Tiest, Lauren A. Vassiliades, Michael A. Vidulich, Paul Ward, Joel S. Warm, David A. Washburn, Christopher D. Wickens, Scott J. Wood, David D. Woods, Motonori Yamaguchi, Lin Ye, Jeffrey M. Zacks Edited by Robert R. Hoffman, Peter A. Hancock, University of Central Florida, Mark W. Scerbo, Old Dominion University, Virginia, Raja Parasuraman, George Mason University, Virginia, James L. Szalma, University of Central Florida Book: The Cambridge Handbook of Applied Perception Research Published online: 05 July 2015, pp xi-xiv On the Diameters of Commuting Graphs Arising from Random Skew-Symmetric Matrices PETER HEGARTY, DMITRII ZHELEZOV Journal: Combinatorics, Probability and Computing / Volume 23 / Issue 3 / May 2014 Print publication: May 2014 We present a two-parameter family $(G_{m,k})_{m, k \in \mathbb{N}_{\geq 2}}$ , of finite, non-abelian random groups and propose that, for each fixed k, as m → ∞ the commuting graph of Gm,k is almost surely connected and of diameter k. We present heuristic arguments in favour of this conjecture, following the lines of classical arguments for the Erdős–Rényi random graph. As well as being of independent interest, our groups would, if our conjecture is true, provide a large family of counterexamples to the conjecture of Iranmanesh and Jafarzadeh that the commuting graph of a finite group, if connected, must have a bounded diameter. Simulations of our model yielded explicit examples of groups whose commuting graphs have all diameters from 2 up to 10. 
TWO-GROUPS IN WHICH AN AUTOMORPHISM INVERTS PRECISELY HALF THE ELEMENTS
PETER HEGARTY, DESMOND MacHALE
Journal: Bulletin of the London Mathematical Society / Volume 30 / Issue 2 / March 1998
We classify all finite 2-groups $G$ for which the maximum proportion of elements inverted by an automorphism of $G$ is a half. These groups constitute 10 isoclinism families.
Efficient single image dehazing by modifying the dark channel prior
Sebastián Salazar-Colores1, Juan-Manuel Ramos-Arreguín ORCID: orcid.org/0000-0002-2604-96922, Jesús-Carlos Pedraza-Ortega2 & J Rodríguez-Reséndiz2
Outdoor images can be degraded by particles in the air that absorb and scatter light. The resulting degradation produces contrast attenuation, blurring, and pixel distortion, which lead to low visibility. This limits the efficiency of computer vision systems such as target tracking, surveillance, and pattern recognition. In this paper, we propose a fast and effective method based on a modification of the computation of the dark channel, which significantly reduces the artifacts that appear in the restored images when the ordinary dark channel is used. According to our experimental results, our method produces better results than some state-of-the-art methods in both efficiency and restoration quality. The processing times measured in our tests show that the method is suitable for high-resolution images and real-time video processing.
The presence of environmental disturbances such as haze and smog gives outdoor images and videos undesirable characteristics that affect the ability of computer vision systems to detect patterns and to perform efficient feature selection and classification. These characteristics are caused by the decrease in contrast and the color shift originated by the presence of suspended particles in the air. Hence, the task of removing haze, fog, and smog (dehazing) without compromising the image information takes on special relevance. Therefore, developing new and better dehazing methods is essential to improve the performance of systems such as surveillance [1], traffic monitoring [2], and self-driving vehicles [3].
This problem has been studied extensively in the literature with two main approaches: methods that use multiple images [4] and methods that use just a single image [1]. Within the single-image approach, relevant results include those obtained by Tan et al. [5], Fattal [6], and Tarel et al. [7]; the main drawbacks of these methods are the processing time they require and the fact that they are not based on solid physical models. The most studied method in the literature is the one presented by He et al. [8], where the dark channel prior (DCP) is introduced. The DCP is a simple but effective approach in most cases, although it produces artifacts around regions where the intensity changes abruptly. Usually, a refinement stage is necessary to eliminate these artifacts, which increases the processing time [1, 9]. To get around this problem, He et al. [8] use a soft-matting process, Gibson et al. [10] propose a DCP method based on the median operator, Zhu et al. [11] introduce a linear color attenuation prior, and Ren et al. [12] use a deep multiscale neural network.
This paper presents a fast novel method in which a modified dark channel is introduced, improving the quality of the depth estimates of the image elements and significantly reducing the artifacts generated when the traditional dark channel is used. Unlike most state-of-the-art methods, the proposed modification of the dark channel makes a refinement stage unnecessary, which has a positive impact on the simplicity and speed of the dehazing process.
Experimental results demonstrate the effectiveness of the proposed method; when compared with three state-of-the-art methods, the proposed method achieves a higher restoration quality and requires significantly less time. The paper is organized as follows. In Section 2, the image degradation model and the dark channel prior are discussed. The proposed method is presented in Section 3. In Section 4, experimental results and their analysis are shown. The conclusions are presented in Section 5.
Based on the atmospheric optics model [1], the formation of a pixel in an RGB digital image I can be described as:
$$ I(x,y)=J(x,y)t(x,y)+A(1-t(x,y)), $$
where x,y is a pixel position, I(x,y)=(IR(x,y),IG(x,y), IB(x,y)) is the observed RGB pixel, and A=(AR,AG,AB) is the global RGB environmental airlight. t(x,y) is the transmission of the scattered light which, in a homogeneous medium, can be described as:
$$ t(x,y)=e^{-\beta d(x,y)}, $$
where β is a constant associated with the weather condition, and d(x,y) is the scene depth at every position x,y of I. Finally, J(x,y)=(JR(x,y),JG(x,y),JB(x,y)) is the value of the pixel at position x,y of an image J which contains the information of the scene without degradation. Then, to recover J(x,y), Eq. 1 can be expressed as:
$$ J(x,y)=\frac{I(x,y)-A}{t(x,y)}+A $$
The difficulty in recovering the image J lies in the fact that both t and A are unknown. In [8], a very useful tool for computing the unknown variables is presented: the dark channel (DC). The DC is defined as:
$$ {I}^{dark}(x,y)={\underset{c\in (R,G,B)}{{\min }}}\,\left({\underset{z\in \Omega (x,y)}{{\min }}}\,{{I}^{c}}(z)\right), $$
where Ω(x,y) is a squared window of size l×l defined as:
$$ \Omega(x,y)=I^{c}(x-k,y-k) $$
where \(k=- \left\lfloor {\frac{l}{2}}\right\rfloor,...,\left\lfloor {\frac{l}{2}} \right\rfloor\), \(k\in {\mathbb{Z}}\); in this paper, the size l of Ω(x,y) used is 15. The dark channel prior (DCP) consists of the following statement: in a non-sky region, the dark channel of a haze-free region has a low value, i.e.:
$$ {{I}^{dark}}(x,y)\to 0 $$
To compute t(x,y), in [8], Eq. 1 is normalized by the airlight A:
$$ \frac{I(x,y)}{A}=\frac{J(x,y)}{A}t(x,y)+1-t(x,y) $$
Applying the dark channel to both sides of Eq. 7:
$$ \begin{aligned} &\underset{c\in (R,G,B)}{\min}\,\left(\underset{z\in \Omega (x,y)}{\min}\,{\frac{I(x,y)}{A}}(z)\right)=\\ &\underset{c\in (R,G,B)}{\min}\,\left(\underset{z\in \Omega (x,y)}{\min}\,{\frac{J(x,y)}{A}}(z)\right)\\ &\qquad \qquad \qquad t(x,y)+1-t(x,y), \end{aligned} $$
since J(x,y) is the haze-free image:
$$ \begin{aligned} J^{dark}(x,&y)=\\ & \underset{c\in(R,G,B)}{\min }\,\left(\underset{z\in \Omega (x,y)}{\min}\,{\frac{J(x,y)}{A}}(z)\right)=0 \end{aligned} $$
Substituting Eq. 9 in Eq. 8:
$$ \begin{aligned} \underset{c\in (R,G,B)}{\min}\,\left(\underset{z\in \Omega (x,y)}{\min}\,{\frac{I(x,y)}{A}}(z)\right)= 1-t(x,y), \end{aligned} $$
then the relation between the dark channel and the transmission t is:
$$ \begin{aligned} t(x,y)&= 1-w\underset{c\in (R,G,B)}{\min}\, \left(\underset{z\in \Omega (x,y)}{\min}\,{\frac{I(x,y)}{A}}(z)\right)\\ &= 1-w{{I}^{dark}}(x,y), \end{aligned} $$
where w=[0...1] is a parameter that establishes the recovery level; in [8], the value used was w=0.95. In this paper, the best value of w for our method was determined empirically to be 0.85.
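To make Eqs. 4 and 11 concrete, the following is a minimal sketch (not the authors' released code) of the classic dark channel and the transmission estimate using NumPy/SciPy; the window size l = 15 and the weight w = 0.85 follow the values stated above, while the assumption that the input is an RGB array scaled to [0, 1] is ours.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, l=15):
    """Classic dark channel of Eq. 4 for an HxWx3 image I in [0, 1]."""
    per_pixel_min = I.min(axis=2)                  # minimum over the color channels
    return minimum_filter(per_pixel_min, size=l)   # minimum over the l x l window Omega(x, y)

def transmission(I, A, l=15, w=0.85):
    """Transmission map of Eq. 11; A is the (R, G, B) airlight."""
    normalized = I / np.asarray(A, dtype=float)    # I(x, y) / A, channel-wise
    return 1.0 - w * dark_channel(normalized, l)
```

The two minima of Eq. 4 commute, so taking the per-pixel channel minimum first and then the windowed minimum gives the same result as the order written in the equation.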
In [8], the airlight A is considered constant throughout the image and is estimated by first selecting the 0.01% brightest pixels of the map generated when the dark channel is computed. Among the selected pixels, the one with the highest intensity in the input image I is chosen, and its value is assigned to A. If the dark channel prior is used directly to restore an image, the results obtained have artifacts around the edges of the image, as shown in Fig. 1.
Result of recovering an image using the dark channel prior. a Hazy input. b Dark channel map. c Result
To avoid or reduce the artifacts, the literature proposes the addition of a transmission refinement stage, as shown in Fig. 2. The refinement stage increases the processing time requirements of the method, so avoiding it is important. In this paper, the modification of the dark channel makes the refinement stage unnecessary.
Flowchart of the method based on the dark channel prior proposed in [8]
The proposed method
The modified dark channel
To illustrate the cause of the artifacts generated when the dark channel prior is applied, Fig. 3 shows an analysis of the consequences of using the dark channel directly to restore the image. In Fig. 3a, the input image I is displayed with two windows Ω1(p1) and Ω2(p2) of size l=3, which are centered at the pixels p1 and p2, respectively. Whereas the window Ω1(p1) is contained in a homogeneous area, the window Ω2(p2) lies in a region near (relative to the size of Ω2) an edge. Figure 3b shows the expected dark channel, where the pixels p1 and p2 have different values since they belong to different regions. Figure 3c shows the dark channel obtained using Eq. 4, where the value of pixel p1 is correctly estimated; however, pixel p2 has a lower value than expected, because at least one element of the window Ω2(p2) is lower than the pixel p2. This is the cause of the artifacts near the edges when the image J(x,y) is recovered (Fig. 3d).
Analysis of the classic DCP algorithm. a Input image. b Expected dark channel. c Obtained dark channel. d Recovered image using c
In order to reduce the artifacts, this paper proposes a novel approach to incorporate the values obtained from the dark channel. In the proposed approach, Idark is initially a one-channel image with the same size as I in which all the elements Idark(x,y) are zero. We define α as a square window of size l in which all the elements have the value of Idark(x,y) computed according to Eq. 4. Then:
$$ \begin{aligned} &I^{dark}_{\left(x-\lfloor l/2 \rfloor... x+\lfloor l/2 \rfloor,y-\lfloor l/2 \rfloor... y+\lfloor l/2 \rfloor \right)}=\\ &\text{pixel-wise} \max (\alpha_{(1...l),(1...l)},\\ &\qquad\qquad I^{dark}_{(x-\lfloor l/2 \rfloor... x+\lfloor l/2 \rfloor,y-\lfloor l/2 \rfloor... y+\lfloor l/2 \rfloor)}) \end{aligned} $$
In Fig. 4, a comparison of the information used for any pixel (x,y) between the classic DC and the modified DC, for l=3, is displayed.
Analysis of the modified DC. a Input image. b Information on which the classic DC algorithm is based. c Information on which the proposed DC algorithm is based. d Recovered image using c
The modified dark channel is described in Algorithm 1. Figure 5 shows an example of the results obtained using the modified DC, where the artifacts have been greatly reduced in comparison with the classic DC results presented in Fig. 1.
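Algorithm 1 is not reproduced here, but the update of Eq. 12 can be sketched directly (our own illustration, with the same assumptions as the previous snippet): every pixel propagates its classic dark-channel value to its whole l x l window through a pixel-wise maximum.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def modified_dark_channel(I, l=15):
    """Modified dark channel of Eq. 12 for an HxWx3 image I in [0, 1]."""
    dc = minimum_filter(I.min(axis=2), size=l)   # classic dark channel, Eq. 4
    h, w = dc.shape
    r = l // 2
    out = np.zeros_like(dc)                      # Idark starts as all zeros
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # alpha is an l x l window filled with dc[y, x]; Eq. 12 keeps the
            # pixel-wise maximum of alpha and the current Idark values.
            out[y0:y1, x0:x1] = np.maximum(out[y0:y1, x0:x1], dc[y, x])
    return out
```

Because alpha depends only on the classic dark channel and not on the order of the updates, the double loop is equivalent (up to boundary handling) to a single maximum filter applied to the classic dark channel, e.g. scipy.ndimage.maximum_filter(dc, size=l), which is the faster formulation one would use in practice.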
The pixel-wise maximum operation in the proposed modified DC makes it possible to retain information about the previous assignments to Idark in the neighborhood of any pixel (x,y). In heterogeneous regions (near edges), a more robust and precise DC estimate is obtained because the underestimated values are reduced by the pixel-wise maximum operation. In homogeneous regions (far from edges), the impact of the pixel-wise maximum operation is practically negligible because in these regions the neighboring Idark values are quite similar.
Result of recovering an image using the modified dark channel prior. a Hazy input. b Dark channel map. c Result
Analysis of the relation between image size and restoration performance
The DC is a consequence of an empirical observation on outdoor images. Since in the classic and modified DC the size l of Ω(x,y) is constant but the sizes of the input images are not, we analyze the relation between the size of I and the performance of the dehazing task. To perform the analysis, two versions of the proposed method were developed. The first one is based on the method shown in Fig. 2, where the computation of the classic DC is substituted by the modified DC. The second one is a variation shown in Algorithm 2, where the variables t and A are computed on a resized version of I called Inr, where nr is the new resolution. Next, t is resized to the original size of I. Finally, the image is recovered using the scattering model presented in Eq. 1. For the experimental tests, a dataset of 100 images was created using the Middlebury Stereo Datasets [13, 14], in which the haze was simulated with random values of t and A. The metric used to measure the error was the Peak Signal-to-Noise Ratio (PSNR). The resolutions nr tested were 320×240, 600×400, 640×480, 800×600, and 1280×960. The results are shown in Table 1.
Table 1 Comparison in terms of PSNR over 100 synthetic images with different resolutions
According to the PSNR results in Table 1, the quality of the image restored by our method is not strongly affected by the resolution used to compute A and t; furthermore, at resolution 600×400 the PSNR value is higher while less processing time is required. For these reasons, in this paper the tests are carried out with an nr value of 600×400.
In order to have a reference framework for the performance of the proposed method, a comparison was made against four state-of-the-art methods: the classic DCP method with a soft-matting refinement stage [8], the method that uses a median filter to refine the transmission [10], an approach using an additional prior known as the linear color attenuation prior [11], and a method that uses a deep neural network [12]. Tests were done using 22 images acquired from two datasets commonly used in the literature, [15] and [13, 14], in which the degradations were simulated with random values of t and A. A quantitative analysis was performed using the peak signal-to-noise ratio (PSNR) [16] and the Structural Similarity Index (SSIM) [17]. The PSNR is a quantitative measure of the restoration quality between the restored image J and the target image K, and it is defined as:
$$ \text{PSNR}=10 \cdot {\log}_{10}\left(\frac{ma{x^{2}_{I}}}{\frac{1}{n}{\sum\limits_{(x,y)}^{}{(J(x,y)-{{K}}(x,y))^{2}}}}\right), $$
where (x,y) is a pixel position, n is the number of pixels of the images J and K, and \(ma{x_{I}}\) is the maximum possible value of the images J and K, in this case 255.
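The experimental setup just described (Algorithm 2 and Eq. 13) can be sketched as follows; this is our own reconstruction under the stated parameters (nr = 600×400, l = 15, w = 0.85), and details such as the interpolation used for resizing and the lower bound on t are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, zoom

def dehaze(I, nr=(400, 600), l=15, w=0.85, t_min=0.1):
    """I: HxWx3 RGB image in [0, 1]; nr = (rows, cols) used to estimate A and t."""
    def mdc(img):  # modified dark channel: channel/window minimum, then window maximum
        return maximum_filter(minimum_filter(img.min(axis=2), size=l), size=l)

    small = zoom(I, (nr[0] / I.shape[0], nr[1] / I.shape[1], 1), order=1)
    dark = mdc(small)
    # Airlight: brightest input pixel among the top 0.01% of the dark channel.
    n = max(1, int(dark.size * 1e-4))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    cand = small.reshape(-1, 3)[idx]
    A = cand[np.argmax(cand.sum(axis=1))]

    t_small = 1.0 - w * mdc(small / A)                       # Eq. 11 with the modified DC
    t = zoom(t_small, (I.shape[0] / t_small.shape[0],
                       I.shape[1] / t_small.shape[1]), order=1)
    t = np.clip(t, t_min, 1.0)                               # avoid division by values near 0
    return np.clip((I - A) / t[..., None] + A, 0.0, 1.0)     # recovery J = (I - A)/t + A (Eq. 3)

def psnr(J, K, max_i=255.0):
    """PSNR of Eq. 13 between a restored image J and a target image K."""
    mse = np.mean((J.astype(float) - K.astype(float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```

Note that the sketch plugs the modified dark channel into Eq. 11 to obtain t; the paper does not spell this step out explicitly, so this is our reading of the method.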
The Structural Similarity (SSIM) Index is based on a perception model and combines three comparison aspects:
$$ SSIM=\left[l, c, s\right], $$
where l is the luminance comparison, c is the contrast comparison, and s is the structure comparison.
The tests were conducted on a computer with a Core i5-2400 processor at 3.10 GHz with 12 GB of RAM using Matlab 2018a.
Subjective analysis
Figure 6 shows the employed dataset and the results generated by the implemented methods; it is visible that the proposed method presents higher contrast and brightness than the other methods.
Haze removal results for synthetic images. a Input images. b Results by He et al. [8], c Results by Gibson et al. [10], d Results by Zhu et al. [11], e Results by Ren et al. [12], f Our method
Objective analysis
The PSNR and SSIM metrics were applied to the dataset. Figure 6 presents the dataset with the results of the compared algorithms. Table 2 shows that, according to the SSIM index, our algorithm achieves an average of 0.81. Table 3 shows an average PSNR value of 18.5 dB. The restoration quality of our method is only slightly outperformed by the method of He et al. [8]. Table 4 shows that our approach outperforms the other methods in processing time; in particular, our proposed method is at least 20 times faster than the method of He et al. [8].
Table 2 Comparative analysis using the Structural Similarity Index Measure (SSIM)
Table 3 Comparative analysis using the peak signal-to-noise ratio (PSNR) (in dB)
Table 4 Comparative analysis of time processing performance (in seconds)
This paper introduces an innovative method which uses a variant of the dark channel that greatly reduces the recurrent artifacts that appear when the classic dark channel is used. The experimental results of the quantitative analysis show that the proposed algorithm generates competitive results against four state-of-the-art algorithms without the need for a refinement stage. Because the proposed method has no refinement stage and, additionally, uses a scaled image to compute the variables t and A, it is faster than the state-of-the-art methods. The processing time of the algorithm makes its application to high-resolution images and real-time video possible.
DC: Dark channel
DCP: Dark channel prior
PSNR: Peak signal-to-noise ratio
SSIM: Structural Similarity Index
C. Chengtao, Z. Qiuyu, L. Yanhua, in Control and Decision Conference (CCDC), Qingdao, China, 1. A survey of image dehazing approaches, (2015), pp. 3964–3969. https://doi.org/10.1109/CCDC.2015.7162616.
S. Kim, S. Park, K. Choi, in Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), vol. 1. A system architecture for real time traffic monitoring in foggy video (Mokpo, South Korea, 2015), pp. 1–4. https://doi.org/10.1109/FCV.2015.7103720.
X. Ji, J. Cheng, J. Bai, T. Zhang, M. Wang, in International Congress on Image and Signal Processing (CISP), vol. 1. Real-time enhancement of the image clarity for traffic video monitoring systems in haze (Dalian, China, 2014), pp. 11–15. https://doi.org/10.1109/CISP.2014.7003741.
Y. Y. Schechner, S. G. Narasimhan, S. K. Nayar, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 16. Instant dehazing of images using polarization (Kauai, USA, 2001), p. 325.
R. T. Tan, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1. Visibility in bad weather from a single image (IEEE, Anchorage, 2008), pp. 1–8.
R. Fattal, in ACM Transactions on Graphics (TOG), vol. 27.
Single image dehazing (New York, USA, 2008), pp. 72–1729. https://doi.org/10.1145/1360612.1360671.
J.-P. Tarel, N. Hautiere, in IEEE International Conference on Computer Vision (ICCV), vol. 1. Fast visibility restoration from a single color or gray level image (IEEE, Kyoto, 2009), pp. 2201–2208.
K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010). https://doi.org/10.1109/TPAMI.2010.168.
V. Sahu, M. M. S. Vinkey Sahu, A survey paper on single image dehazing. Int. J. Recent Innov. Trends Comput. Commun. (IJRITCC). 3(2), 85–88 (2015).
K. B. Gibson, D. T. Võ, T. Q. Nguyen, An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process.: Publ. IEEE Sig. Process. Soc. 21(2), 662–73 (2012). https://doi.org/10.1109/TIP.2011.2166968.
Q. Zhu, J. Mai, L. Shao, A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 24(11), 3522–3533 (2015). https://doi.org/10.1109/TIP.2015.2446191.
W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, M.-H. Yang, in Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, ed. by B. Leibe, J. Matas, N. Sebe, and M. Welling. Single image dehazing via multi-scale convolutional neural networks (Springer, Cham, 2016), pp. 154–169. https://doi.org/10.1007/978-3-319-46475-6_10.
D. Scharstein, R. Szeliski, in Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on. High-accuracy stereo depth maps using structured light, (2003). https://doi.org/10.1109/CVPR.2003.1211354.
D. Scharstein, C. Pal, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Learning conditional random fields for stereo, (2007). https://doi.org/10.1109/CVPR.2007.383191.
M. Sulami, I. Glatzer, R. Fattal, M. Werman, in IEEE International Conference on Computational Photography (ICCP), 1. Automatic recovery of the atmospheric light in hazy images (Santa Clara, USA, 2014), pp. 1–11. https://doi.org/10.1109/ICCPHOT.2014.6831817.
Q. Huynh-Thu, M. Ghanbari, Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44(13), 800–801 (2008). https://doi.org/10.1049/el:20080522.
R. Dosselmann, X. Yang, A comprehensive assessment of the structural similarity index. SIViP. 5(1), 81–91 (2011). https://doi.org/10.1007/s11760-009-0144-1.
Sebastian Salazar-Colores would especially like to thank CONACYT (National Council of Science and Technology) for the financial support given for his doctoral studies. This work was supported in part by the National Council on Science and Technology (CONACYT), Mexico, under Scholarship 285651.
Facultad de Informática, Universidad Autónoma de Querétaro, Av. de las Ciencias S/N, Juriquilla, Querétaro, 76230, México
Sebastián Salazar-Colores
Facultad de Ingeniería, Universidad Autónoma de Querétaro, Cerro de las Campanas s/n, Querétaro, 76010, México
Juan-Manuel Ramos-Arreguín, Jesús-Carlos Pedraza-Ortega & J Rodríguez-Reséndiz
All authors took part in the discussion of the work described in this paper. All authors read and approved the final manuscript. Correspondence to Juan-Manuel Ramos-Arreguín.
We used publicly available data in order to illustrate and test our methods: the datasets used were acquired from [15] and from [13, 14], which can be found at http://www.cs.huji.ac.il/~raananf/projects/atm_light/results/ and http://vision.middlebury.edu/stereo/data/ respectively.
Sebastián Salazar-Colores received his B. S. degree in Computer Science from Universidad Autónoma Benito Juárez de Oaxaca and his M. S. degree in Electrical Engineering from the Universidad de Guanajuato. He is a Ph.D. candidate in Computer Science at the Universidad Autónoma de Querétaro. His research interests are image processing and computer vision.
Juan-Manuel Ramos-Arreguín received his M. S. degree in Electrical Engineering, option Instrumentation and Digital Systems, from the University of Guanajuato and the Ph.D. in Mechatronics Science from Centro de Ingeniería y Desarrollo Industrial (Engineering and Industrial Development Center). Since 2009 he has been part of the Engineering Department at the UAQ, where he works as Researcher and Lecturer. His research interests include mechatronics and embedded systems.
Jesus-Carlos Pedraza-Ortega received his M. S. degree in Electrical Engineering, option Instrumentation and Digital Systems, from the University of Guanajuato, and the Ph.D. in Mechanical Engineering from the University of Tsukuba, Japan. Since 2008 he has been part of the Engineering Department at the Autonomous University of Queretaro (UAQ), where he works as Researcher and Lecturer. His research interests include computer vision, image processing, 3D object reconstruction using structured fringe patterns, modeling, and simulation.
Juvenal R. Reséndiz obtained his Ph.D. degree at Querétaro State University, México, in 2010. He has been a professor at the same institution since 2008. He was a visiting professor at West Virginia University in 2012. He has received several awards for his contributions to the development of education technology. He has worked on industrial and academic automation projects for 15 years. Currently, he is the chair of the Engineering Automation Program and of the Master in Automation at the same institution. He is the Director of Technologic Link of all Querétaro State University. He belongs to the Mexican Academy of Sciences, the National Research Academy in México, and seven associations regarding engineering issues. He is the IEEE Querétaro Section past President.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Salazar-Colores, S., Ramos-Arreguín, J., Pedraza-Ortega, J. et al. Efficient single image dehazing by modifying the dark channel prior. J Image Video Proc. 2019, 66 (2019). doi:10.1186/s13640-019-0447-2
Defogging, Dehazing, Image enhancement, Single-image dehazing
\begin{definition}[Definition:Conjugate (Group Theory)/Element/Also known as] Some sources refer to the '''conjugate''' of $x$ as the '''transform''' of $x$. Some sources refer to '''conjugacy''' as '''conjugation'''. Category:Definitions/Conjugacy \end{definition}
\begin{document} \title{An Exponential Time Parameterized Algorithm for \textsf{Planar Disjoint Paths}\footnote{A preliminary version of this paper appeared in the proceedings of {\em STOC 2020}.}} \author{ Daniel Lokshtanov\thanks{University of California, Santa Barbara, USA. \texttt{[email protected]}} \and Pranabendu Misra\thanks{Max Planck Institute for Informatics, Saarbrucken, Germany. \texttt{[email protected]}} \and Michal Pilipczuk\thanks{Institute of Informatics, University of Warsaw, Poland. \texttt{[email protected]}} \and Saket Saurabh\thanks{The Institute of Mathematical Sciences, HBNI, Chennai, India. \texttt{[email protected]}} \and Meirav Zehavi\thanks{Ben-Gurion University, Beersheba, Israel. \texttt{[email protected]}} } \maketitle \begin{abstract} In the \textsf{Disjoint Paths}\ problem, the input is an undirected graph $G$ on $n$ vertices and a set of $k$ vertex pairs, $\{s_i,t_i\}_{i=1}^k$, and the task is to find $k$ pairwise vertex-disjoint paths such that the $i$'th path connects $s_i$ to $t_i$. In this paper, we give a parameterized algorithm with running time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$ for \textsf{Planar Disjoint Paths}{}, the variant of the problem where the input graph is required to be planar. Our algorithm is based on the unique linkage/treewidth reduction theorem for planar graphs by Adler et al.~[JCTB 2017], the algebraic cohomology based technique of Schrijver~[SICOMP 1994] and one of the key combinatorial insights developed by Cygan et al.~[FOCS 2013] in their algorithm for {\sf Disjoint Paths} on directed planar graphs. To the best of our knowledge our algorithm is the first parameterized algorithm to exploit that the treewidth of the input graph is small in a way completely different from the use of dynamic programming. \end{abstract} \pagestyle{plain} \setcounter{page}{1} \section{Introduction}\label{sec:intro} In the \textsf{Disjoint Paths}\ problem, the input is an undirected graph $G$ on $n$ vertices and a set of $k$ pairwise disjoint vertex pairs, $\{s_i,t_i\}_{i=1}^k$, and the task is to find $k$ pairwise vertex-disjoint paths connecting $s_i$ to $t_i$ for each $i\in\{1,\ldots,k\}$. The \textsf{Disjoint Paths}\ problem is a fundamental routing problem that finds applications in VLSI layout and virtual circuit routing, and has a central role in Robertson and Seymour’s Graph Minors series. We refer to surveys such as~\cite{frank1990packing,schrijver2003combinatorial} for a detailed overview. The \textsf{Disjoint Paths}\ problem was shown to be NP-complete by Karp (who attributed it to Knuth) in a followup paper~\cite{karp1975computational} to his initial list of 21 NP-complete problems~\cite{DBLP:conf/coco/Karp72} . It remains NP-complete even if $G$ is restricted to be a grid \cite{lynch1975equivalence, kramer1984complexity}. On directed graphs, the problem remains NP-hard even for $k=2$~\cite{DBLP:journals/tcs/FortuneHW80}. For undirected graphs, Perl and Shiloach~\cite{DBLP:journals/jacm/PerlS78} designed a polynomial time algorithm for the case where $k=2$. Then, the seminal work of Robertson and Seymour~\cite{DBLP:journals/jct/RobertsonS95b} showed that the problem is polynomial time solvable for every fixed $k$. In fact, they showed that it is {\em fixed parameter tractable (FPT)} by designing an algorithm with running time $f(k)n^3$. The currently fastest parameterized algorithm for \textsf{Disjoint Paths}\ has running time $h(k)n^2$~\cite{DBLP:journals/jct/KawarabayashiKR12}. 
However, all we know about $h$ and $f$ is that they are computable functions. That is, we still have no idea about what the running time dependence on $k$ really is. Similarly, the problem appears difficult in the realm of approximation, where one considers the optimization variant of the problem where the aim is to find disjoint paths connecting as many of the $\{s_i,t_i\}$ pairs as possible. Despite substantial efforts, the currently best known approximation algorithm remains a simple greedy algorithm that achieves approximation ratio $\mathcal{O}(\sqrt n)$. The \textsf{Disjoint Paths}\ problem has received particular attention when the input graph is restricted to be planar~\cite{DBLP:journals/jct/AdlerKKLST17,ding1992disjoint,DBLP:journals/siamcomp/Schrijver94,DBLP:conf/focs/CyganMPP13}. Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17} gave an algorithm for \textsf{Disjoint Paths}\ on planar graphs (\textsf{Planar Disjoint Paths}) with running time $2^{2^{\mathcal{O}(k)}}n^2$, giving at least a concrete form for the dependence of the running time on $k$ for planar graphs. Schrijver~\cite{DBLP:journals/siamcomp/Schrijver94} gave an algorithm for \textsf{Disjoint Paths}\ on {\em directed} planar graphs with running time $n^{\mathcal{O}(k)}$, in contrast to the NP-hardness for $k=2$ on general directed graphs. Almost 20 years later, Cygan et al.~\cite{DBLP:conf/focs/CyganMPP13} improved over the algorithm of Schrijver and showed that {\sf Disjoint Paths} on directed planar graphs is FPT by giving an algorithm with running time $2^{2^{\mathcal{O}(k^2)}} n^{\mathcal{O}(1)}$. The \textsf{Planar Disjoint Paths}\ problem is well-studied also from the perspective of approximation algorithms, with a recent burst of activity~\cite{ChuzhoyK15,ChuzhoyKL16,ChuzhoyKN17,ChuzhoyKN18,DBLP:conf/icalp/ChuzhoyKN18}. Highlights of this work include an approximation algorithm with factor $\mathcal{O}(n^{9/19} \log ^{\mathcal{O}(1)}n)$~\cite{ChuzhoyKL16} and, under reasonable complexity-theoretic assumptions, hardness of approximating the problem within a factor of $2^{\Omega( \frac{1}{(\log \log n)^2} )}$~\cite{ChuzhoyKN18}. In this paper, we consider the parameterized complexity of \textsf{Planar Disjoint Paths}{}.~Prior~to~our work, the fastest known algorithm was the $2^{2^{\mathcal{O}(k)}}n^2$ time algorithm of Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17}. Double exponential dependence on $k$ for a natural problem on planar graphs is something of an outlier--the majority of problems that are FPT on planar graphs enjoy running times of the form $2^{\mathcal{O}(\sqrt{k}\ \mathrm{polylog}\ \!k)} n^{\mathcal{O}(1)}$ (see, e.g.,~\cite{DBLP:journals/jacm/DemaineFHT05,DBLP:conf/focs/FominLMPPS16,DBLP:journals/jacm/FominLS18,DBLP:conf/soda/KleinM14,DBLP:journals/talg/PilipczukPSL18}). This, among other reasons (discussed below), led Adler~\cite{AdlerOpen13} to pose as an open problem in GROW 2013\footnote{The conference version of~\cite{DBLP:journals/jct/AdlerKKLST17} appeared in 2011, before~\cite{AdlerOpen13}. The document~\cite{AdlerOpen13} erroneously states the open problem for \textsf{Disjoint Paths}\ instead of for \textsf{Planar Disjoint Paths}{}---that \textsf{Planar Disjoint Paths}{} is meant is evident from the statement that a $2^{2^{\mathcal{O}(k)}}n^{\mathcal{O}(1)}$ time algorithm is known. } whether \textsf{Planar Disjoint Paths}{} admits an algorithm with running time $2^{k^{\mathcal{O}(1)}}n^{\mathcal{O}(1)}$.
By integrating tools with origins in algebra and topology, we resolve this problem in the affirmative. In particular, we prove the following. \begin{theorem}\label{thm:main} The {\rm {\textsf{Planar Disjoint Paths}}} problem is solvable in time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$.\footnote{In fact, towards this we implicitly design a $w^{\mathcal{O}(k)}$-time algorithm, where $w$ is the treewidth of the input~graph.} \end{theorem} In addition to its value as a stand-alone result, our algorithm should be viewed as a piece of an on-going effort of many researchers to make the Graph Minor Theory of Robertson and Seymour algorithmically efficient. The graph minors project abounds with powerful algorithmic and structural results, such as the algorithm for \textsf{Disjoint Paths}{}~\cite{DBLP:journals/jct/RobertsonS95b}, {\sf Minor Testing}~\cite{DBLP:journals/jct/RobertsonS95b} (given two undirected graphs, $G$ and $H$ on $n$ and $k$ vertices, respectively, the goal is to check whether $G$ contains $H$ as a minor), the structural decomposition~\cite{DBLP:journals/jct/RobertsonS03a} and the Excluded Grid Theorem~\cite{DBLP:journals/jct/RobertsonST94}. Unfortunately, all of these results suffer from such bad hidden constants and dependence on the parameter $k$ that they have gotten their own term--``galactic algorithms''~\cite{Lipton2013}. It is the hope of many researchers that, in time, algorithms and structural results~from~Graph Minors can be made more algorithmically efficient, perhaps even practically applicable. Substantial progress has been made in this direction; examples include the simpler decomposition theorem of Kawarabayashi and Wollan~\cite{DBLP:conf/stoc/KawarabayashiW11}, the faster algorithm for computing the structural decomposition of Grohe et al.~\cite{DBLP:conf/soda/GroheKR13}, the improved unique linkage theorem of Kawarabayashi and Wollan~\cite{DBLP:conf/stoc/KawarabayashiW10}, the linear excluded grid theorem on minor free classes of Demaine and Hajiaghayi~\cite{DBLP:journals/combinatorica/DemaineH08}, paving the way for the theory of Bidimensionality~\cite{DBLP:journals/jacm/DemaineFHT05}, and the polynomial grid minor theorem of Chekuri and Chuzhoy~\cite{DBLP:journals/jacm/ChekuriC16}. The algorithm for \textsf{Disjoint Paths}\ is a cornerstone of the entire Graph Minor Theory, and a vital ingredient in the $g(k)n^3$-time algorithm for {\sf Minor Testing}. Therefore, {\em efficient} algorithms for \textsf{Disjoint Paths}\ and {\sf Minor Testing} are necessary and crucial ingredients in an algorithmically efficient Graph Minors theory. This makes obtaining $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithms for \textsf{Disjoint Paths}{} and {\sf Minor Testing} a tantalizing and challenging~goal. Theorem~\ref{thm:main} is a necessary basic step towards achieving this goal---a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithm for \textsf{Disjoint Paths}{} on general graphs also has to handle planar inputs, and it is easy to give a reduction from \textsf{Planar Disjoint Paths}\ to {\sf Minor Testing} in such a way that a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithm for {\sf Minor Testing} would imply a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithm for \textsf{Planar Disjoint Paths}{}.
In addition to being a necessary step in the formal sense, there is strong evidence that an efficient algorithm for the planar case will be useful for the general case as well---indeed the algorithm for \textsf{Disjoint Paths}{} of Robertson and Seymour~\cite{DBLP:journals/jct/RobertsonS95b} relies on topology and essentially reduces the problem to surface-embedded graphs. Thus, an efficient algorithm for \textsf{Planar Disjoint Paths}{} represents a speed-up of the base case of the algorithm for \textsf{Disjoint Paths}{} of Robertson and Seymour. Coupled with the other recent advances~\cite{DBLP:journals/jacm/ChekuriC16,DBLP:journals/jacm/DemaineFHT05,DBLP:journals/combinatorica/DemaineH08,DBLP:conf/soda/GroheKR13,DBLP:conf/stoc/KawarabayashiW10,DBLP:conf/stoc/KawarabayashiW11}, this gives some hope that $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithms for {\sf Disjoint Paths} and {\sf Minor Testing} may be within reach. \paragraph{Known Techniques and Obstacles in Designing a $2^{{\sf poly}(k)}$ Algorithm.} All known algorithms for both {\sf Disjoint Paths} and {\sf Planar Disjoint Pat}hs have the same high level structure. In particular, given a graph $G$ we distinguish between the cases of $G$ having ``small'' or ``large'' treewidth. In case the treewidth is large, we distinguish between two further cases: either $G$ contains a ``large'' clique minor or it does not. This results in the following case distinctions. \begin{enumerate} \setlength{\itemsep}{-2pt} \item {\bf Treewidth is small.} Let the treewidth of $G$ be $w$. Then, we use the known dynamic programming algorithm with running time $2^{\mathcal{O}(w \log w)}n^{\mathcal{O}(1)}$~\cite{scheffler1994practical} to solve the problem. It is important to note that, assuming the Exponential Time Hypothesis (ETH), there is no algorithm for \textsf{Disjoint Paths}\ running in time $2^{o(w \log w)}n^{\mathcal{O}(1)}$~\cite{DBLP:journals/siamcomp/LokshtanovMS18}, nor an algorithm for {\sf Planar Disjoint Paths} running in time $2^{o(w)}n^{\mathcal{O}(1)}$~\cite{DBLP:journals/tcs/BasteS15}. \item {\bf Treewidth is large and $G$ has a large clique minor.} In this case, we use the good routing property of the clique to find an irrelevant vertex and delete it without changing the answer to the problem. Since this case will not arise for graphs embedded on a surface or for planar graphs, we do not discuss it in more detail. \item {\bf Treewidth is large and $G$ has no large clique minor .} Using a fundamental structure theorem for minors called the ``flat wall theorem'', we can conclude that $G$ contains a large planar piece of the graph and a vertex $v$ that is sufficiently insulated in the middle of it. Applying the unique linkage theorem~\cite{DBLP:journals/jct/RobertsonS12} to this vertex, we conclude that it is irrelevant and remove it. For planar graphs, one can use the unique linkage theorem of Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17}. In particular, we use the following result: \begin{quote} Any instance of \textsf{Disjoint Paths}\ consisting of a planar graph with treewidth at least $82 k^{3/2}2^k$ and $k$ terminal pairs contains a vertex $v$ such that every solution to \textsf{Disjoint Paths}\ can be replaced by an equivalent one whose paths avoid $v$. \end{quote} This result says that if the treewidth of the input planar graph is (roughly) $\Omega(2^k)$, then we can find an irrelevant vertex and remove it. 
A natural question is whether we can guarantee an irrelevant vertex even if the treewidth is $\Omega({\sf poly}(k))$. Adler and Krause~\cite{DBLP:journals/corr/abs-1011-2136} exhibited a planar graph $G$ with $k+1$ terminal pairs such that $G$ contains a $(2^k + 1) \times (2^k + 1)$ grid as a subgraph, \textsf{Disjoint Paths}\ on this input has a unique solution, and the solution uses all vertices of $G$; in particular, {\em no vertex of $G$ is irrelevant}. This implies that the irrelevant vertex technique can only guarantee a treewidth of $\Omega(2^k)$, even if the input~graph~is~planar. \end{enumerate} \noindent Combining items (1) and (3), we conclude that the known methodology for \textsf{Disjoint Paths}\ can only guarantee an algorithm with running time $2^{2^{\mathcal{O}(k)}}n^2$ for \textsf{Planar Disjoint Paths}. Thus, a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$-time algorithm for \textsf{Planar Disjoint Paths}\ appears to require entirely new ideas. As this obstacle was known to Adler et al.~\cite{AdlerOpen13}, it is likely to be the main motivation for Adler to pose the existence of a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$ time algorithm for \textsf{Planar Disjoint Paths}\ as an open problem. \paragraph{Our Methods.} Our algorithm is based on a novel combination of two techniques that do not seem to give the desired outcome when used on their own. The first ingredient is the treewidth reduction theorem of Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17} that proves that given an instance of \textsf{Planar Disjoint Paths}, the treewidth can be brought down to $2^{\mathcal{O}(k)}$ (explained in item (3) above). This by itself is sufficient for an FPT algorithm (this is what Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17} do), but as explained above, it seems hopeless that it will bring a $2^{{\sf poly}(k)}n^{\mathcal{O}(1)}$-time algorithm. We circumvent the obstacle by using an algorithm for a more difficult problem with a worse running time, namely, Schrijver's $n^{\mathcal{O}(k )}$-time algorithm for {\sf Disjoint Paths} on directed planar graphs~\cite{DBLP:journals/siamcomp/Schrijver94}. Schrijver\rq{}s algorithm has two steps: a ``guessing'' step where one (essentially) guesses the homology class of the solution paths, and then a surprising homology-based algorithm that, given a homology class, finds a solution in that class (if one exists) in polynomial time. Our key insight is that for \textsf{Planar Disjoint Paths}, if the instance that we are considering has been reduced according to the procedure of Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17}, then we only need to iterate over $2^{\mathcal{O}(k^2)}$ homology classes in order to find the homology class of a solution, if one exists. The proof of this key insight is highly non-trivial, and builds on a cornerstone ingredient of the recent FPT algorithm of Cygan et al.~\cite{DBLP:conf/focs/CyganMPP13} for {\sf Disjoint Paths} on directed planar graphs. To the best of our knowledge, this is the first algorithm that finds the exact solution to a problem that exploits that the treewidth of the input graph is small in a way that is different from doing dynamic programming. A technical overview of our methods will appear in the next section. In our opinion, a major strength of the paper is that it breaks not only a barrier in running time, but also a longstanding methodological barrier. 
Since there are many algorithms that use the irrelevant vertex technique in some way, there is reasonable hope that they could benefit from the methods developed in this work. We remark that we have made no attempt to optimize the polynomial factor in this paper. Doing that, and in particular achieving linear dependency on $n$ while keeping the dependency on $k$ single-exponential, is the natural next question for future research. In particular, this might require ``opening up'' the black boxes that we use, whose naive analysis yields a large polynomial dependency on $n$, but there is no reason to believe that it cannot be made linear---most likely, this will require extensive independent work on these particular ingredients. Having both the best dependency on $k$ and the best dependency on $n$ simultaneously may be critical to achieve a practical exact algorithm for large-scale instances. \setlength{\belowcaptionskip}{-17pt} \begin{figure} \caption{Flow at a vertex and its reduction.} \label{fig:flowStitch} \end{figure} \section{Overview}\label{sec:overview} \noindent{\bf Homology.} In this overview, we explain our main ideas in an {\em informal} manner. Our starting point is Schrijver's view \cite{DBLP:journals/siamcomp/Schrijver94} of a collection of ``non-crossing'' (but possibly not vertex- or even edge-disjoint) sets of walks as flows. To work with flows (defined immediately), we deal with directed graphs. (In this context, undirected graphs are treated as directed graphs by replacing each edge by two parallel arcs of opposite directions.) Specifically, we denote an instance of \textsf{Directed Planar Disjoint Paths}\ as a tuple $(D,S,T,g,k)$ where $D$ is a directed plane graph, $S,T\subseteq V(D)$, $k=|S|$ and $g: S\rightarrow T$ is bijective. Then, a {\em solution} is a set ${\cal P}$ of pairwise vertex-disjoint directed paths in $D$ containing, for each vertex $s\in S$, a path directed from~$s$~to~$g(s)$. In the language of flows, each arc of $D$ is assigned a word with letters in $T\cup T^{-1}$ (that is, we treat the set of vertices $T$ also as an alphabet), where $T^{-1}=\{t^{-1}: t\in T\}$. This collection of words is denoted by $(T \cup T^{-1})^*$; we let $1$ denote the empty word. A word is {\em reduced} if, for all $t\in T$, the letters $t$ and $t^{-1}$ do not appear consecutively. Then, a {\em flow} is an assignment of reduced words to arcs that satisfies two constraints. First, when we concatenate the words assigned to the arcs incident to a vertex $v\notin S\cup T$ in clockwise order, where words assigned to ingoing arcs are reversed and their letters negated, the result (when reduced) is the empty word $1$ (see Fig.~\ref{fig:flowStitch}). This is an algebraic interpretation of the standard flow-conservation constraint. Second, when we do the same operation with respect to a vertex $v\in S\cup T$, then when the vertex is in $S$, the result is $g(s)$ (rather than the empty word), and when it is in $T$, the result is $t$. There is a natural association of flows to solutions: for every $t\in T$, assign the letter $t$ to all arcs used by the path from $g^{-1}(t)$ to $t$. Roughly speaking, Schrijver proved that if a flow $\phi$ is given along with the instance $(D,S,T,g,k)$, then in {\em polynomial time} we can either find a solution or determine that there is no solution ``similar to $\phi$''.
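For concreteness, consider the simplest possible situation, using only the definitions above: a single terminal pair with $g(s)=t$ and a solution consisting of one directed path from $s$ to $t$. The associated flow assigns the one-letter word $t$ to every arc of this path and the empty word $1$ to every other arc. At each internal vertex of the path, the clockwise concatenation receives $t$ from the outgoing path arc and $t^{-1}$ from the ingoing path arc (which is reversed and negated), while all other incident arcs contribute nothing; the concatenation therefore reduces to $1$, as the conservation constraint requires. At $s$, the only contribution is the letter $t=g(s)$ itself. We next make the notion of ``similarity'' between flows precise.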
Specifically, two flows are {\em homologous} (which is the notion of similarity) if one can be obtained from the other by a {\em set} of ``face operations'' defined as follows. \begin{definition}\label{def:homologyOverview} Let $D$ be a directed plane graph with outer face $f$, and denote the set of faces of $D$ by $\cal F$. Two flows $\phi$ and $\psi$ are {\em homologous} if there exists a function $h: {\cal F}\rightarrow (T\cup T^{-1})^*$ such that {\em (i)} $h(f)=1$, and {\em (ii)} for every arc $e\in A(D)$, $h(f_1)^{-1}\cdot \phi(e)\cdot h(f_2)=\psi(e)$ where $f_1$ and $f_2$ are the faces at the left-hand side and the right-hand side of $e$, respectively. \end{definition} Then, a slight modification of Schrijver's theorem~\cite{DBLP:journals/siamcomp/Schrijver94} readily gives the following corollary. \begin{corollary}\label{prop:schOverview} There is a polynomial-time algorithm that, given an instance $(D,S,T,g,$ $k)$ of \textsf{Directed Planar Disjoint Paths}, a flow $\phi$ and a subset $X\subseteq A(D)$, either finds a solution of $(D-X,S,T,g,k)$ or decides that there is no solution of it such that the ``flow associated with it'' and $\phi$ are homologous in $D$. \end{corollary} \begin{figure} \caption{Two different ways of extracting a walk from a flow.} \label{fig13} \end{figure} \noindent{\bf Discrete Homotopy and Our Objective.} While the language of flows and homology can be used to phrase our arguments, it also makes them substantially longer and somewhat obscure because it brings rise to multiple technicalities. For example, different sets of non-crossing walks may correspond to the same flow (see Fig.~\ref{fig13}). Instead, we define a notion of {\em discrete homotopy}, inspired by (standard) homotopy. Specifically, we deal only with collections of non-crossing {\em and edge-disjoint} walks, called {\em weak linkages}. Then, two weak linkages are {\em discretely homotopic} if one can be obtained from the other by using ``face operations'' that push/stretch its walks across faces and keep them non-crossing and edge-disjoint (see Fig.~\ref{fig0203}). More precisely, discrete homotopy is an equivalence relation that consists of three face operations, whose precise definition (not required to understand this overview) can be found in Section \ref{sec:discreteHomotopy}. We note that the order in which face operations are applied is important in discrete homotopy (unlike homology)---we cannot stretch a walk across a face if no walk passes its boundary, but we can execute operations that will move a walk to that face, and then stretch it. In Section \ref{sec:discreteHomotopy}, we translate Corollary \ref{prop:schOverview} to discrete homotopy (and undirected graphs) to derive the following~result. \begin{lemma}\label{lem:discreteHomotopyOverview} There is a polynomial-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, a weak linkage $\cal W$ in $G$ and a subset $X\subseteq E(G)$, either finds a solution of $(G-X,S,T,g,k)$ or decides that no solution of it is discretely homotopic to $\cal W$ in $G$. \end{lemma} \begin{figure} \caption{Moving a walk of a weak linkage (in blue) onto the Steiner tree (the walk in purple) with ``face operations''(e.g. a sub-path of the blue path is pushed giving the green sub-path).} \label{fig0203} \end{figure} In light of this result, our objective is reduced to the following task. 
\begin{quote} \framebox{ \begin{minipage}{0.85\textwidth} Compute a collection of weak linkages such that if there exists a solution, then there also exists a solution ({\em possibly a different one!}) that is discretely homotopic to one of the weak linkages in our collection. To prove Theorem \ref{thm:main}, the size of the collection should be upper bounded by $2^{\mathcal{O}(k^2)}$. \end{minipage} } \end{quote} \noindent{\bf Key Player: Steiner Tree.} A key to the proof of our theorem is a very careful construction (done in three steps in Section \ref{sec:steiner}) of a so-called {\em Backbone Steiner tree}. We use the term Steiner tree to refer to any tree in the {\em radial completion} of $G$ (the graph obtained by placing a vertex on each face and making it adjacent to all vertices incident to the face) whose set of leaves is precisely $S\cup T$. In the first step, we consider an arbitrary Steiner tree as our Steiner tree $R$. Having $R$ at hand, we have a more focused goal: we will zoom into weak linkages that are ``pushed onto $R$'', and we will only generate such weak linkages to construct our collection. Informally, a weak linkage is {\em pushed onto $R$} if all of the edges used by all of its walks are {\em parallel to} edges of $R$. We do not demand that the edges belong to $R$, because then the goal described immediately cannot be achieved---instead, we make $4n+1$ parallel copies of each edge in the radial completion (the number $4n+1$ arises from considerations in the ``pushing process''), and then impose the above weaker demand. Now, our goal is to show that, if there exists a solution, then there also exists one that can be pushed onto $R$ by applying face operations (in discrete homotopy) so that it becomes {\em identical} to one of the weak linkages in our collection (see Fig.~\ref{fig0203}). At this point, one remark is in place. Our Steiner tree $R$ is a subtree of the radial completion of $G$ rather than $G$ itself. Thus, if there exists a solution discretely homotopic to one of the weak linkages that we generate, it might not be a solution in $G$. We easily circumvent this issue by letting the set $X$ in Lemma \ref{lem:discreteHomotopyOverview} contain all ``fake'' edges. \noindent{\bf Partitioning a Weak Linkage Into Segments.} For the sake of clarity, before we turn to present the next two steps taken to construct $R$, we begin with the (non-algorithmic) part of the proof where we analyze a (hypothetical) solution towards pushing it onto $R$. Our most basic notion in this analysis is that of a {\em segment}, defined as follows (see Fig.~\ref{fig0408}). \begin{definition}\label{def:segmentOverview} For a walk $W$ in the radial completion of $G$ that is edge-disjoint from $R$, a {\em segment} is a maximal subwalk of $W$ that does not ``cross'' $R$. \end{definition} \begin{figure} \caption{Segments arising from the crossings of a walk with the Steiner tree.} \label{fig0408} \end{figure} Let $\ensuremath{\mathsf{Seg}}(W)$ denote the set of segments of $W$. Clearly, $\ensuremath{\mathsf{Seg}}(W)$ is a partition of $W$. Ideally, we would like to upper bound the number of segments of (all the paths of) a solution by $2^{\mathcal{O}(k^2)}$. However, this will not be possible because, while $R$ is easily seen to have only $\mathcal{O}(k)$ vertices of degree $1$ or at least $3$, it can have ``long'' maximal degree-2 paths which can give rise to numerous segments (see Fig.~\ref{fig0408}). 
To be more concrete, we say that a maximal degree-2 path of $R$ is {\em long} if it has more than $2^{ck}$ vertices (for some constant $c$), and it is {\em short} otherwise. Then, as the paths of a solution are vertex disjoint, the following observation is immediate. \begin{observation}\label{obs:shortPathsOverview} Let $\cal P$ be a solution. Then, its number of segments that have at least one endpoint on a short path, or a vertex of degree other than $2$, of $R$, is upper bounded by $2^{\mathcal{O}(k)}$. \end{observation} To deal with segments crossing only long paths, several new ideas are required. In what follows, we first explain how to handle segments going across different long paths, whose number {\em can} be bounded (unlike some of the other types of segments we will encounter). \noindent{\bf Segments Between Different Long Paths.} To deal with such segments, we modify $R$ (in the second step of its construction). For each long path $P$ with endpoints $u$ and $v$, we will compute two minimum-size vertex sets, $S_u$ and $S_v$, such that $S_u$ separates (i.e., intersects all paths with one endpoint in each of the two specified subgraphs) the following subgraphs in the radial completion of $G$: {\em (i)} the subtree of $R$ that contains $u$ after the removal of a vertex $u_1$ of $P$ that is ``very close'' to $u$, and {\em (ii)} the subtree of $R$ that contains $v$ after the removal of a vertex $u_2$ that is ``close'' to $u$. The condition satisfied by $S_v$ is symmetric (i.e. $u$ and $v$ switch their roles; see Fig.~\ref{fig05}). Here, ``very close'' refers to distance $2^{c_1k}$ and ``close'' refers to distance $2^{c_2k}$ on the path, for some constants $c_1<c_2$. Let $u'$ and $v'$ be the vertices of $P$ in the intersection with the separators $S_u$ and $S_v$ respectively. (The selection of $u'$ not to be $u$ itself is of use in the third modification of $R$.) \begin{figure} \caption{Separators and flows for a long maximal degree-2 path $P$ in $R$.} \label{fig05} \end{figure} To utilize these separators, we need their sizes to be upper bounded by $2^{\mathcal{O}(k)}$. For our initial $R$, such small separators may not exist. However, the modification we present now will guarantee their existence. Specifically, we will ensure that $R$ does not have any {\em detour}, which roughly means that each of its maximal degree-2 paths is a shortest path connecting the two subtrees obtained once it is removed. More formally, we define a detour as follows (see Fig.~\ref{fig:undetour}). \begin{figure} \caption{Detours in the Steiner tree.} \label{fig:undetour} \end{figure} \begin{definition}\label{def:detourOverview} A {\em detour} in $R$ is a pair of vertices $u,v\in V_{\geq 3}(R)\cup V_{=1}(R)$ (i.e. the non-degree $2$ vertices in R) that are endpoints of a maximal degree-2 path $L$ of $R$, and a path $P$ in the radial completion of $G$, such that {\em (i)} $P$ is shorter than $L$, {\em (ii)} one endpoint of $P$ belongs to the component of $R-V(L)\setminus\{u,v\}$ containing $u$, and {\em (iii)} one endpoint of $P$ belongs to the component of $R-V(L)\setminus\{u,v\}$ containing $v$. \end{definition} By repeatedly ``short-cutting'' $R$, a process that terminates in a linear number of steps, we obtain a new Steiner tree $R$ with no detour. Now, if the separator $S_u$ is large, then there is a large number of vertex-disjoint paths that connect the two subtrees separated by $S_u$, and all of these paths are ``long'', namely, of length at least $2^{c_2k}-2^{c_1k}$. 
Based on a result by Bodlaender et al.~\cite{DBLP:journals/jacm/BodlaenderFLPST16} (whose application requires working in the radial completion of $G$ rather than $G$ itself), we show that the existence of these paths implies that the treewidth of $G$ is large. Thus, if the treewidth of $G$ were small, all of our separators would have also been small. Fortunately, to guarantee this, we just need to invoke the following known result in a preprocessing step: \begin{proposition}[\cite{DBLP:journals/jct/AdlerKKLST17}]\label{prop:twReductionOverview} There is a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, outputs an equivalent instance $(G',S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ where $G'$ is a subgraph of $G$ whose treewidth is upper bounded by $2^{ck}$ for some constant $c$. \end{proposition} Having separators of size $2^{\mathcal{O}(k)}$, because segments going across different long paths must intersect these separators (or have an endpoint at distance $2^{\mathcal{O}(k)}$ in $R$ from some endpoint of a maximal degree-2 path), we immediately deduce the following. \begin{observation}\label{obs:goingAcrossDiffPathsOverview} Let $\cal P$ be a solution. Then, its number of segments that have one endpoint on one long path, and a second endpoint on a different long path, is upper bounded by $2^{\mathcal{O}(k)}$. \end{observation} \noindent{\bf Segments with Both Endpoints on the Same Long Path.} We are now left with segments both of whose endpoints belong to the same long path, which have two different kinds of behavior: they may or may not {\em spiral} around $R$, where spiraling means that the two endpoints of the segment belong to different ``sides'' of the path (see Fig.~\ref{fig0408} and Fig.~\ref{fig:rollbackSpirals}). By making sure that at least one vertex in $S\cup T$ is on the outer face of the radial completion of $G$, we ensure that the cycle formed by any non-spiraling segment together with the subpath of $R$ connecting its two endpoints does not enclose all of $S\cup T$; specifically, we avoid having to deal with segments such as the one in Fig.~\ref{fig07}. While it is tempting to try to devise face operations that transform a spiraling segment into a non-spiraling one, this is not always possible. In particular, if the spiral ``captures'' a path $P$ (of a solution), then when $P$ and the spiral are pushed onto $R$, the spiral is not reduced to a simple path between its endpoints, but to a walk that ``flanks'' $P$. Due to such scenarios, dealing with spirals (whose number we are not able to upper bound) requires special attention. Before we turn to this task, let us consider the non-spiraling segments. \noindent{\bf Non-Spiraling Segments.} To achieve our main goal, we aim to push a (hypothetical) solution onto $R$ so that only a few parallel copies of each edge are used. Now, we argue that non-spiraling segments do not pose a real issue in this context. To see this, consider a less refined partition of a solution where some non-spiraling segments are ``grouped'' as follows (see Fig.~\ref{fig0408}).
\begin{figure} \caption{A bad segment that contains all of $S \cup T$ in its cycle.} \label{fig07} \end{figure} \begin{definition}\label{def:usefulSubwalkWeaklyOverview} A subwalk of a walk $W$ is a {\em preliminary group} of $W$ if either {\em (i)} it has endpoints on two different maximal degree-2 paths of $R$ or an endpoint in $V_{=1}(R)\cup V_{\geq 3}(R)$ or it is spiraling, or {\em (ii)} it is the union of an inclusion-wise maximal collection of segments not of type {\em (i)}. \end{definition} The collection of preliminary groups of $W$ is denoted by $\ensuremath{\mathsf{PreSegGro}}(W)$. Clearly, it is a partition of $W$. For a weak linkage $\cal W$, $\ensuremath{\mathsf{PreSegGro}}({\cal W})=\bigcup_{W\in{\cal W}}\ensuremath{\mathsf{PreSegGro}}(W)$. Then, \begin{observation}\label{obs:usefulSubwalkWeaklyOverview} Let $\cal W$ be a weak linkage. The number of type-(ii) preliminary groups in $\ensuremath{\mathsf{PreSegGro}}({\cal W})$ is at most $1$ plus the number of type-(i) preliminary groups in $\ensuremath{\mathsf{PreSegGro}}({\cal W})$. \end{observation} Roughly speaking, a type-(i) preliminary group is easily pushed onto $R$ so that it becomes merely a simple path (see Fig.~\ref{fig0408}). Thus, by Observation \ref{obs:usefulSubwalkWeaklyOverview}, all type-(ii) preliminary groups of a solution in total do not give rise to the occupation of more than $x+1$ copies of an edge, where $x$ is the number of type-(i) preliminary groups. \noindent{\bf Rollback Spirals and Winding Number.} Unfortunately, the number of spirals can be huge. Nevertheless, we can pair-up {\em some} of them so that they will ``cancel'' each other when pushed onto $R$ (see Fig.~\ref{fig:rollbackSpirals}), thereby behaving like a type-(ii) preliminary group. Intuitively, we pair-up two spirals of a walk if one of them goes from the left-side to the right-side of the path, the other goes from the right-side to the left-side of the same path, and ``in between'' them on the walk, there are only type-(ii) preliminary groups and spirals that have already been paired-up. We refer to paired-up spirals as {\em rollback spirals}. (Not all spirals can be paired-up in this manner.) This gives rise to the following strengthening of Definition \ref{def:usefulSubwalkWeaklyOverview}. \begin{definition}\label{def:usefulSubwalkOverview} A subwalk of a walk $W$ is called a {\em group} of $W$ if either {\em (i)} it is a non-spiral type-(i) preliminary group, or {\em (ii)} it is the union of an inclusion-wise maximal collection of segments not of type {\em (i)} (i.e., all endpoints of the segments in the group are internal vertices of the same maximal degree-2 path of $R$). The {\em potential} of a group is (roughly) $1$ plus its number of non-rollback~spirals. \end{definition} Now, rather than upper bounding the total number of spirals, we only need to upper bound the number of non-rollback spirals. To this end, we use the notion of {\em winding number} (in Section \ref{sec:winding}), informally defined as follows. Consider a solution $\cal P$, a path $Q\in {\cal P}$, and a long path $P$ of $R$ with separators $S_u$ and $S_v$. As $S_u$ and $S_v$ are minimal separators in a triangulated graph (the radial completion is triangulated), they are cycles, and as at least one vertex in $T$ belongs to the outer face, they form a ring (see Fig.~\ref{fig:windingPaths}). 
Each maximal subpath of $Q$ that lies inside this ring can either {\em visit} the ring, which means that both its endpoints belong to the same separator, or {\em cross} the ring, which means that one of its endpoints belongs to $S_u$ and the other to $S_v$ (see Fig.~\ref{fig:windingPaths}). Then, the (absolute value of the) {\em winding number} of a crossing subpath is the number of times it ``winds around'' $P$ inside the ring (see Fig.~\ref{fig:windingPaths}). At least intuitively, it should be clear that winding numbers and non-rollback spirals are related. In particular, each ring can only have $2^{\mathcal{O}(k)}$ visitors and crossing subpaths (because the size of each separator is $2^{\mathcal{O}(k)}$), and we only have $\mathcal{O}(k)$ rings to deal with. Thus, it is possible to show that if the winding number of every crossing subpath is upper bounded by $2^{\mathcal{O}(k)}$, then the total number of non-rollback spirals is upper bounded by $2^{\mathcal{O}(k)}$ as well. The main tool we employ to bound the winding number of every crossing path is the following known result (rephrased to simplify the overview). \begin{figure} \caption{Rollback spirals.} \label{fig:rollbackSpirals} \end{figure} \begin{proposition}[\cite{DBLP:conf/focs/CyganMPP13}]\label{prop:ring-reroutingOverview} Let $G$ be a graph embedded in a ring with a crossing path $P$. Let $\mathcal{P}$ and $\mathcal{Q}$ be two collections of vertex-disjoint crossing paths of the same size. (A path in $\mathcal{P}$ can intersect a path in $\mathcal{Q}$, but not another path in $\mathcal{P}$.) Then, $G$ has a collection of crossing paths $\mathcal{P}'$ such that {\em (i)} for every path in $\mathcal{P}$, there is a path in $\mathcal{P}'$ with the same endpoints and vice versa, and {\em (ii)} the maximum difference between (the absolute value of) the winding numbers with respect to $P$ of any path in $\mathcal{P}'$ and any path in $\mathcal{Q}$ is at most $6$. \end{proposition} To see the utility of Proposition~\ref{prop:ring-reroutingOverview}, suppose momentarily that none of our rings has visitors. Then, if we could ensure that for each of our rings, there is a collection $\mathcal{Q}$ of vertex-disjoint paths of {\em maximum size} such that the winding number of each path in $\mathcal{Q}$ is a constant, Proposition \ref{prop:ring-reroutingOverview} would have the following implication: if there is a solution, then we can modify it within each ring to obtain another solution such that each crossing subpath of each of its paths will have a constant winding number (under the supposition that the rings are disjoint, which we will deal with later in the overview); see Fig.~\ref{fig:windingPaths}. Our situation is more complicated due to the existence of visitors---we need to ensure that the replacement $\mathcal{P}'$ does not intersect them. On a high level, this situation is dealt with by first showing how to ensure that visitors do not ``go too deep'' into the ring on either side of it. Then, we consider an ``inner ring'' where visitors do not exist, on which we can apply Proposition \ref{prop:ring-reroutingOverview}. Afterwards, we are able to bound the winding number of each crossing path by $2^{\mathcal{O}(k)}$ (but not by a constant) in the (normal) ring.
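To convey a rough sense of the counting behind the claim above, here is an informal back-of-the-envelope computation; it is only meant to illustrate the orders of magnitude (the constants hidden in the $\mathcal{O}$-notation are left unspecified, and the formal argument appears in Section \ref{sec:winding}). Charging, roughly, each non-rollback spiral to one winding of some crossing subpath in some ring, the total number of non-rollback spirals is at most
\[
\mathcal{O}(k)\cdot 2^{\mathcal{O}(k)}\cdot 2^{\mathcal{O}(k)}\;=\;2^{\mathcal{O}(k)},
\]
where the first factor accounts for the number of rings, the second for the number of crossing subpaths per ring, and the third for the bound on the winding number of each crossing subpath.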
\begin{figure} \caption{A solution winding in a ring (top), and the ``unwinding'' of it (bottom).} \label{fig:windingPaths} \end{figure} \noindent{\bf Modifying $R$ within Rings.} To ensure the existence of the aforementioned collection $\mathcal{Q}$ for each ring, we need to modify $R$. To this end, consider a long path $P$ with separators $S_u$ and $S_v$, and let $P'$ be the subpath of $P$ inside the ring defined by the two separators. We compute a maximum-sized collection of vertex-disjoint paths $\ensuremath{\mathsf{Flow}}(u,v)$ such that each of them has one endpoint in $S_u$ and the other in $S_v$.\footnote{This flow has an additional property: there is a tight collection $\mathcal{C}(u,v)$ of concentric cycles separating $S_u$ and $S_v$ such that paths in $\ensuremath{\mathsf{Flow}}(u,v)$ do not ``oscillate'' too much between any two cycles in the collection. Such a maximum flow is said to be \emph{minimal} with respect to $\mathcal{C}(u,v)$.} Then, we prove a result that roughly states the following. \begin{lemma} There is a path $P^\star$ in the ring defined by $S_u$ and $S_v$ with the same endpoints as $P'$ that crosses each path in $\ensuremath{\mathsf{Flow}}(u,v)$ at most once. Moreover, $P^\star$ is computable in linear time. \end{lemma} Having $P^\star$ at hand, we replace $P'$ by $P^\star$. This is done for every maximal degree-2 path, and thus we complete the construction of $R$. However, at this point, it is not clear why, after we perform these replacements, the separators considered earlier remain separators, or even why we still have a tree. Roughly speaking, a scenario as depicted in Fig.~\ref{fig:badRings} can potentially happen. To show that this is not the case, it suffices to prove that there cannot exist a vertex that belongs to two different rings. Towards that, we apply another preprocessing operation: we ensure that the radial completion of $G$ does not have $2^{ck}$ (for some constant $c$) {\em concentric cycles} that contain no vertex in $S\cup T$ by using another result by Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17}. Informally, a sequence of concentric cycles is a sequence of vertex-disjoint cycles where each one of them is contained inside the next one in the sequence. Having no such sequences, we prove the following. \begin{lemma}\label{lem:closeToROverview} Let $R'$ be any Steiner tree. For every vertex $v$, there exists a vertex in $V(R')$ whose distance to $v$ (in the radial completion of $G$) is at most $2^{ck}$ for some constant $c$. \end{lemma} \begin{figure} \caption{The green and blue rings intersect, which can create cycles in $R$ when~replacing~paths.} \label{fig:badRings} \end{figure} To see intuitively why this lemma is correct, note that if $v$ were ``far'' from $R'$ in the {\em radial completion of $G$}, then in $G$ itself $v$ is surrounded by a large sequence of concentric cycles that contain no vertex in $S\cup T$. Having Lemma \ref{lem:closeToROverview} at hand, we show that if a vertex belongs to a certain ring, then it is ``close'' to at least one vertex of the restriction of $R$ to that ring. In turn, that means that if a vertex belongs to two rings, it can be used to exhibit a ``short'' path between one vertex in the restriction of $R$ to one ring and another vertex in the restriction of $R$ to the second ring. By choosing constants properly, this path is shown to exhibit a detour in $R$, and hence we reach a contradiction.
(In this argument, we use the fact that for every vertex $u$, towards the computation of the separator, we considered a vertex $u'$ of distance $2^{c_1k}$ from $u$---this subpath between $u$ and $u'$ is precisely the subpath that we will shortcut.) \noindent{\bf Pushing a Solution Onto $R$.} So far, we have argued that if there is a solution, then there is also one such that the sum of the potential of all of the groups of all of its paths is at most $2^{\mathcal{O}(k)}$. Additionally, we discussed the intuition for why this, in turn, implies the following result. \begin{lemma}\label{lem:finalOverview} If there is a solution $\cal P$, then there is a weak linkage pushed onto $R$ that is discretely homotopic to $\cal P$ and uses at most $2^{\mathcal{O}(k)}$ copies of every edge. \end{lemma} The formal proof of Lemma \ref{lem:finalOverview} (in Section \ref{sec:pushing}) is quite technical. On a high level, it consists of three phases. First, we push onto $R$ all {\em sequences} of the solution---that is, maximal subpaths that touch (but not necessarily cross) $R$ only at their endpoints. Second, we eliminate some U-turns of the resulting weak linkage (see Fig.~\ref{fig16}), as well as ``move through'' $R$ segments with both endpoints being internal vertices of the same maximal degree-2 path of $R$ and crossing it in opposing directions (called {\em swollen segments}). At this point, we are able to bound by $2^{\mathcal{O}(k)}$ the number of segments of the pushed weak linkage. Third, we eliminate all of the remaining U-turns, and show that then, the number of copies of each edge used must be at most $2^{\mathcal{O}(k)}$. We also modify the pushed weak linkage to be of a certain ``canonical form'' (see Section \ref{sec:pushing}). \noindent{\bf Generating a Collection of Pushed Weak Linkages.} In light of Lemma \ref{lem:finalOverview} and Proposition \ref{prop:schOverview}, it only remains to generate a collection of $2^{\mathcal{O}(k^2)}$ pushed weak linkages that includes all pushed weak linkages (of some canonical form) using at most $2^{\mathcal{O}(k)}$ copies of each edge. (This part, along with the preprocessing and the construction of $R$, constitutes the algorithmic part of~our~proof.) This part of our proof is essentially a technical modification and adaptation of the work of Schrijver \cite{DBLP:journals/siamcomp/Schrijver94} (though we need to be more careful to obtain the bound $2^{\mathcal{O}(k^2)}$). Thus, we only give a brief description of it in the overview. Essentially, we generate pairs of a {\em pairing} and a {\em template}: a pairing assigns, to each vertex $v$ of $R$ of degree $1$ or at least $3$, a set of pairs of edges incident to $v$ to indicate that copies of these edges are to be visited consecutively (by at least one walk of the weak linkage under construction); a template further specifies, for each of the aforementioned pairs of edges, how many times copies of these edges are to be visited consecutively (but not which copies are paired up). Clearly, there is a natural association of a pairing and a template to a pushed weak linkage.
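For intuition, we give a small toy example of these two objects; the concrete edges and numbers below are chosen by us purely for illustration and are not part of the formal development. Suppose that $v\in V_{\geq 3}(R)$ has exactly three incident edges $e_1,e_2,e_3$ in $R$. A pairing may assign to $v$ the set $\{\{e_1,e_2\},\{e_2,e_3\}\}$, recording that some walk of the weak linkage under construction visits a copy of $e_1$ and a copy of $e_2$ consecutively (at $v$), and likewise for $e_2$ and $e_3$. A template may then further specify that copies of $e_1$ and $e_2$ are visited consecutively, say, $3$ times, and copies of $e_2$ and $e_3$ are visited consecutively $2^{k}$ times, without committing to which of the parallel copies realize these consecutive visits.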
Further, we show that to generate all pairs of pairings and templates associated with the weak linkages we are interested in, we only need to consider pairings that in total have $\mathcal{O}(k)$ pairs and templates that assign numbers bounded by $2^{\mathcal{O}(k)}$ (because we deal with weak linkages using $2^{\mathcal{O}(k)}$ copies of each edge): \begin{lemma} There is a collection of $2^{\mathcal{O}(k^2)}$ pairs of pairings and templates that, for any canonical pushed weak linkage $\cal W$ using only $2^{\mathcal{O}(k)}$ copies of each edge, contains a pair (of a pairing and a template) ``compatible'' with $\cal W$. Further, such a collection is computable in~time~$2^{\mathcal{O}(k^2)}$. \end{lemma} \begin{figure} \caption{A walk going back and forth along a path of $R$, which gives rise to U-turns.} \label{fig16} \end{figure} Using somewhat more involved arguments (in Section \ref{sec:reconstruction}), we also prove the following. \begin{lemma} Any canonical pushed weak linkage is ``compatible'' with exactly one pair of a pairing and a template. Moreover, given a pair of a pairing and a template, if a canonical pushed weak linkage compatible with it exists, then it can be found in time polynomial in~its~size. \end{lemma} These two lemmas complete the proof: we can indeed generate a collection of $2^{\mathcal{O}(k^2)}$ pushed weak linkages containing all canonical pushed weak linkages using only $2^{\mathcal{O}(k)}$ copies of any edge. \section{Preliminaries}\label{sec:prelims} Let $A$ be a set of elements. A cyclic ordering $<$ on $A$ is an ordering $(a_0,a_1,\ldots,a_{|A|-1})$ of the elements in $A$. When we enumerate $A$ in {\em clockwise order} starting at $a_i\in A$, we refer to the ordering $a_i,a_{(i+1)\mod |A|},\ldots,a_{(i+|A|-1)\mod |A|}$, and when we enumerate $A$ in {\em counter-clockwise order} starting at $a_i\in A$, we refer to the ordering $a_i,a_{(i-1)\mod |A|},\ldots,a_{(i-|A|+1)\mod |A|}$. We consider two cyclic orderings of $A$ to be equivalent (i.e., equal up to cyclic shifts) if, for every $a_i\in A$, the enumeration of $A$ in clockwise order starting at $a_i$ produces the same sequence in both orderings. For a function $f: X\rightarrow Y$ and a subset $X'\subseteq X$, we denote the restriction of $f$ to $X'$ by $f|_{X'}$. \paragraph{Graphs.} Given an undirected graph $G$, we let $V(G)$ and $E(G)$ denote the vertex set and edge set of $G$, respectively. Similarly, given a directed graph (digraph) $D$, we let $V(D)$ and $A(D)$ denote the vertex set and arc set of $D$, respectively. Throughout the paper, we deal with graphs without self-loops but with parallel edges. Whenever it is not explicitly written otherwise, we deal with undirected graphs. Moreover, whenever $G$ is clear from context, denote $n=|V(G)|$. For a graph $G$ and a subset of vertices $U\subseteq V(G)$, the subgraph of $G$ induced by $U$, denoted by $G[U]$, is the graph on vertex set $U$ and edge set $\{\{u,v\}\in E(G): u,v\in U\}$. Additionally, $G-U$ denotes the graph $G[V(G)\setminus U]$. For a subset of edges $F\subseteq E(G)$, $G-F$ denotes the graph on vertex set $V(G)$ and edge set $E(G)\setminus F$. For a vertex $v\in V(G)$, the set of neighbors of $v$ in $G$ is denoted by $N_G(v)$, and for a subset of vertices $U\subseteq V(G)$, the open neighborhood of $U$ in $G$ is defined as $N_G(U)=\left(\bigcup_{v\in U}N_G(v)\right)\setminus U$.
Given three subsets of vertices $A,B,S\subseteq V(G)$, we say that $S$ {\em separates} $A$ from $B$ if $G-S$ has no path with an endpoint in $A$ and an endpoint in $B$. For two vertices $u,v\in V(G)$, the {\em distance} between $u$ and $v$ in $G$ is the length (number of edges) of the shortest path between $u$ and $v$ in $G$ (if no such path exists, then the distance is $\infty$), and it is denoted by $\ensuremath{\mathsf{dist}}_G(u,v)$; in case $u=v$, $\ensuremath{\mathsf{dist}}_G(u,v)=0$. For two subsets $A,B\subseteq V(G)$, define $\ensuremath{\mathsf{dist}}_G(A,B)=\min_{u\in A, v\in B}\ensuremath{\mathsf{dist}}_G(u,v)$. A {\em{linkage}} of order $k$ in $G$ is an ordered family $\mathcal{P}$ of $k$ vertex-disjoint paths in $G$. Two linkages $\mathcal{P}=(P_1,\ldots,P_k)$ and $\mathcal{Q}=(Q_1,\ldots,Q_k)$ are {\em{aligned}} if for all $i\in\{1,\ldots,k\}$, $P_i$ and $Q_i$ have the same endpoints. For a tree $T$ and $d\in\mathbb{N}$, let $V_{\geq d}(T)$ (resp.~$V_{=d}(T)$) denote the set of vertices of degree at least (resp.~exactly) $d$ in $T$. For two vertices $u,v\in V(T)$, the unique subpath of $T$ between $u$ and $v$ is denoted by $\ensuremath{\mathsf{path}}_T(u,v)$. We say that two vertices $u,v\in V(T)$ are {\em near each other} if $\ensuremath{\mathsf{path}}_T(u,v)$ has no internal vertex from $V_{\geq 3}(T)$, and call $\ensuremath{\mathsf{path}}_T(u,v)$ a {\em degree-2 path}. In case, additionally, $u,v\in V_{\geq 3}(T)\cup V_{=1}(T)$, $\ensuremath{\mathsf{path}}_T(u,v)$ is called a {\em maximal degree-2 path}. \paragraph{Planarity.} A planar graph is a graph that can be embedded in the Euclidean plane, that is, there exists a mapping from every vertex to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped to the endpoints of the corresponding edge, and all curves are disjoint except on their extreme points. A {\em plane graph} $G$ is a planar graph with a fixed embedding. Its faces are the regions bounded by the edges, including the outer infinitely large region. For every vertex $v\in V(G)$, we let $E_G(v)=(e_0,e_1,\ldots,e_{t-1})$ for $t\in \mathbb{N}$ where $e_0,e_1,\ldots,e_{t-1}$ are the edges incident to $v$ in clockwise order (the choice of which edge is $e_0$ is arbitrary). A planar graph $G$ is {\em triangulated} if the addition of any edge (not parallel to an existing edge) to $G$ results in a non-planar graph. A plane graph $G$ that is triangulated is $2$-connected, and each of its faces is a simple cycle that is a triangle or a cycle that consists of two parallel edges (when the graph is not simple). As we will deal with triangulated graphs, the following proposition will come in handy. \begin{proposition}[Proposition 8.2.3 in \cite{DBLP:books/daglib/0030489}]\label{prop:sepCycle} Let $G$ be a triangulated plane graph. Let $A,B\subseteq V(G)$ be disjoint subsets such that $G[A]$ and $G[B]$ are connected graphs. Then, for any minimal subset $S\subseteq V(G)\setminus (A\cup B)$ that separates $A$ from $B$, it holds that $G[S]$ is a cycle.\footnote{Here, the term cycle also refers to the degenerate case where $|S|=1$.} \end{proposition} The {\em radial graph} (also known as the {\em face-vertex incidence graph}) of a plane graph $G$ is the planar graph $G'$ whose vertex set consists of $V(G)$ and a vertex $v_f$ for each face $f$ of $G$, and whose edge set consists of an edge $\{u,v_f\}$ for every vertex $u\in V(G)$ and face $f$ of $G$ such that $u$ is incident to (i.e.~lies on the boundary of) $f$.
The {\em radial completion} of $G$ is the graph $G'$ obtained by adding the edges of $G$ to the radial graph of $G$. The graph $G'$ is planar, and we draw it on the plane so that its drawing coincides with that of $G$ with respect to $V(G)\cup E(G)$. Moreover, $G'$ is triangulated and, under the assumption that $G$ had no self-loops, $G'$ also has no self-loops (since all new edges in $G'$ have one endpoint in $V(G)$ and the other endpoint in $V(G') \setminus V(G)$). For a plane graph $G$, the \emph{radial distance} between two vertices $u$ and $v$ is one less than the minimum length of a sequence of vertices that starts at $u$ and ends at $v$, such that every two consecutive vertices in the sequence lie on a common face.\footnote{We follow the definition of radial distance that is given in \cite{DBLP:conf/soda/JansenLS14} in order to cite a result in that paper verbatim (see Proposition \ref{prop:concentricAtAllDists}). We remark that in \cite{DBLP:journals/jacm/BodlaenderFLPST16}, the definition of a radial distance is slightly different.} We denote the radial distance by ${\sf rdist}_G(u,v)$. This definition extends to subsets of vertices: for $X,Y \subseteq V(G)$, ${\sf rdist}_G(X,Y)$ is the minimum radial distance over all pairs of vertices in $X \times Y$. For any $t\in\mathbb{N}$, a sequence ${\cal C}=(C_1,C_2,\ldots,C_t)$ of $t$ cycles in a plane graph $G$ is said to be {\em concentric} if for all $i\in\{1,2,\ldots,t-1\}$, the cycle $C_i$ is drawn in the strict interior of $C_{i+1}$ (excluding the boundary, that is, $V(C_i)\cap V(C_{i+1})=\emptyset$). The {\em length} of ${\cal C}$ is $t$. For a subset of vertices $U\subseteq V(G)$, we say that $\cal C$ is $U$-free if no vertex of $U$ is drawn in the strict interior~of~$C_t$. \paragraph{Treewidth.} Treewidth is a measure of how ``treelike'' a graph is, formally defined as follows. \begin{definition}[{\bf Treewidth}]\label{def:treewidth} A \emph{tree decomposition} of a graph $G$ is a pair $(T,\beta)$ of a tree $T$ and $\beta:V(T) \rightarrow 2^{V(G)}$, such that \begin{enumerate} \itemsep0em \item\label{item:twedge} for any edge $\{x,y\} \in E(G)$ there exists a node $v \in V(T)$ such that $x,y \in \beta(v)$, and \item\label{item:twconnected} for any vertex $x \in V(G)$, the subgraph of $T$ induced by the set $T_x = \{v\in V(T): x\in\beta(v)\}$ is a non-empty tree. \end{enumerate} The {\em width} of $(T,\beta)$ is $\max_{v\in V(T)}\{|\beta(v)|\}-1$. The {\em treewidth} of $G$, denoted by $\mathtt{tw}(G)$, is the minimum width over all tree decompositions of $G$. \end{definition} The following proposition, due to Cohen-Addad et al.~\cite{DBLP:conf/stoc/Cohen-AddadVKMM16}, relates the treewidth of a plane graph to the treewidth of its radial completion. \begin{proposition}[Lemma 1.5 in ~\cite{DBLP:conf/stoc/Cohen-AddadVKMM16}]\label{prop:twRadial} Let $G$ be a plane graph, and let $H$ be its radial completion. Then, $\mathtt{tw}(H)\leq \frac{7}{2}\cdot \mathtt{tw}(G)$.\footnote{More precisely, Cohen-Addad et al.~\cite{DBLP:conf/stoc/Cohen-AddadVKMM16} prove that the {\em branchwidth} of $G$ is at most twice the branchwidth of $H$. Since the treewidth (plus 1) of a graph is lower bounded by its branchwidth and upper bounded by $\frac{3}{2}$ times its branchwidth (see \cite{DBLP:conf/stoc/Cohen-AddadVKMM16}), the proposition follows.} \end{proposition} \subsection{Homology and Flows}\label{sec:prelimsHomology} For an alphabet $\Sigma$, denote $\Sigma^{-1}=\{\alpha^{-1}: \alpha\in \Sigma\}$.
For a symbol $\alpha\in\Sigma$, define $(\alpha^{-1})^{-1}=\alpha$, and for a word $w=a_1a_2\cdots a_t$ over $\Sigma\cup \Sigma^{-1}$, define $w^{-1}=a_t^{-1}a_{t-1}^{-1}\cdots a_1^{-1}$ and $w^1=w$. The empty word (the unique word of length $0$) is denoted by $1$. We say that a word $w=a_1a_2\cdots a_t$ over $\Sigma\cup \Sigma^{-1}$ is {\em reduced} if there does not exist $i\in\{1,2,\ldots,t-1\}$ such that $a_i=a_{i+1}^{-1}$. We denote the (infinite) set of reduced words over $\Sigma\cup \Sigma^{-1}$ by ${\mathsf{RW}}(\Sigma)$. The {\em concatenation} $w\circ\widehat{w}$ of two words $w=a_1a_2\cdots a_t$ and $\widehat{w}=b_1b_2\cdots b_\ell$ is the word $w^\star=a_1a_2\cdots a_tb_1b_2\cdots b_\ell$. The {\em product} $w\cdot\widehat{w}$ of two words $w=a_1a_2\cdots a_t$ and $\widehat{w}=b_1b_2\cdots b_\ell$ in ${\mathsf{RW}}(\Sigma)$ is a word $w^\star$ defined as follows: \[w^\star=a_1a_2\cdots a_{t-r}b_{r+1}b_{r+2}\cdots b_\ell\] where $r$ is the largest integer in $\{0,1,\ldots,\min(t,\ell)\}$ such that, for every $i\in\{1,2,\ldots,r\}$, $b_i=a_{t+1-i}^{-1}$. Note that $w^\star$ is a reduced word, and the product operation is associative. The {\em reduction} of a word $w=a_1a_2\cdots a_t$ over $\Sigma\cup \Sigma^{-1}$ is the (reduced) word $w^\star=a_1\cdot a_2\cdots a_t$ (i.e., the iterated product of the single-letter words $a_1,a_2,\ldots,a_t$). \begin{definition}[{\bf Homology}]\label{def:homology} Let $D$ be a directed plane graph with outer face $f$, and denote the set of faces of $D$ by $\cal F$. Let $\Sigma$ be an alphabet. Two functions $\phi,\psi: A(D)\rightarrow {\mathsf{RW}}(\Sigma)$ are {\em homologous} if there exists a function $h: {\cal F}\rightarrow {\mathsf{RW}}(\Sigma)$ such that $h(f)=1$, and for every arc $e\in A(D)$, we have $h(f_1)^{-1}\cdot \phi(e)\cdot h(f_2)=\psi(e)$ where $f_1$ and $f_2$ are the faces at the left-hand side and the right-hand side of $e$, respectively. \end{definition} The following observation will be useful in later results. \begin{observation}\label{obs:homProp} Let $\alpha,\beta,\gamma: A(D) \rightarrow {\mathsf{RW}}(\Sigma)$ be three functions such that $\alpha,\beta$ and $\beta,\gamma$ are pairs of homologous functions. Then, $\alpha, \gamma$ is also a pair of homologous functions. \end{observation} \begin{proof} Let $f$ and $g$ be the functions witnessing the homology of $\alpha,\beta$ and of $\beta,\gamma$, respectively. Then, the function $h$ that maps every face $f'\in{\cal F}$ to the product $f(f')\cdot g(f')$ witnesses the homology of $\alpha,\gamma$. Indeed, $h$ maps the outer face to $1\cdot 1=1$, and for every arc $e\in A(D)$ with faces $f_1$ and $f_2$ at its left-hand side and right-hand side, respectively, we have $h(f_1)^{-1}\cdot \alpha(e)\cdot h(f_2)=g(f_1)^{-1}\cdot f(f_1)^{-1}\cdot \alpha(e)\cdot f(f_2)\cdot g(f_2)=g(f_1)^{-1}\cdot \beta(e)\cdot g(f_2)=\gamma(e)$. \end{proof} Towards the definition of flow, we denote an instance of \textsf{Directed Planar Disjoint Paths}\ by a tuple $(D,S,T,g,k)$ where $D$ is a directed plane graph, $S,T\subseteq V(D)$, $k=|S|$ and $g: S\rightarrow T$. We assume that $g$ is bijective because otherwise the given instance is a \textsf{No}-instance. A {\em solution} of an instance $(D,S,T,g,k)$ of \textsf{Directed Planar Disjoint Paths}\ is a set ${\cal P}$ of pairwise vertex-disjoint directed paths in $D$ that contains, for every vertex $s\in S$, a path directed from $s$ to $g(s)$. When we write ${\mathsf{RW}}(T)$, we treat $T$ as an alphabet---that is, every vertex in $T$ is treated as a symbol. \begin{definition}[{\bf Flow}]\label{def:flow} Let $(D,S,T,g,k)$ be an instance of \textsf{Directed Planar Disjoint Paths}. Let $\phi: A(D)\rightarrow {\mathsf{RW}}(T)$ be a function.
For any vertex $v\in V(D)$, denote the concatenation $\phi(e_1)^{\epsilon_1}\circ\phi(e_2)^{\epsilon_2}\cdots\phi(e_r)^{\epsilon_r}$ by $\ensuremath{\mathsf{conc}}(v)$, where $e_1,e_2,\ldots,e_r$ are the arcs incident to $v$ in clockwise order where the first arc $e_1$ is chosen arbitrarily, and for each $i\in\{1,2,\ldots,r\}$, $\epsilon_i=1$ if $v$ is the head of $e_i$ and $\epsilon_i=-1$ if $v$ is the tail of $e_i$. Then, the function $\phi$ is a {\em flow} if:\footnote{We note that there is a slight technical difference between our definition and the definition in Section~3.1 in~\cite{DBLP:journals/siamcomp/Schrijver94}. There, a flow must put only a single symbol ($v$ or $v^{-1}$ in $T \cup T^{-1}$) on the arcs incident to vertices in $S \cup T$.} \begin{enumerate} \item For every vertex $v\in V(D)\setminus(S\cup T)$, the reduction of $\ensuremath{\mathsf{conc}}(v)$ is $1$. \item For every vertex $v\in S$, {\em (i)} $\ensuremath{\mathsf{conc}}(v)=a_0a_1\ldots a_{\ell-1}$ is a word of length $\ell\geq 1$, and {\em (ii)} there exists $i\in\{0,1,\ldots,\ell-1\}$ such that the reduction of $a_ia_{(i+1)\mod \ell}\cdots a_{(i+\ell-1)\mod \ell}$ equals $a_i$, where $a_i=g(v)$ if the arc $e$ associated with $a_i$ has $v$ as its tail, and $a_i=g(v)^{-1}$ otherwise. \item For every vertex $v\in T$, {\em (i)} $\ensuremath{\mathsf{conc}}(v)=a_0a_1\ldots a_{\ell-1}$ is a word of length $\ell\geq 1$, and {\em (ii)} there exists $i\in\{0,1,\ldots,\ell-1\}$ such that the reduction of $a_ia_{(i+1)\mod \ell}\cdots a_{(i+\ell-1)\mod \ell}$ equals $a_i$, where $a_i=v$ if the arc $e$ associated with $a_i$ has $v$ as its head, and $a_i=v^{-1}$ otherwise. \end{enumerate} \end{definition} In the above definition, the conditions on the reduction of $\ensuremath{\mathsf{conc}}(v)$ for each vertex $v \in V(D)$ are called \emph{flow conservation} constraints. Informally speaking, these constraints resemble ``usual'' flow conservation constraints and ensure that every two walks that carry two different symbols do not cross. The association between solutions to $(D,S,T,g,k)$ and flows is defined as follows. \begin{definition}\label{def:solToFlow} Let $(D,S,T,g,k)$ be an instance of \textsf{Directed Planar Disjoint Paths}. Let ${\cal P}$ be a solution of $(D,S,T,g,k)$. The {\em flow $\phi: A(D)\rightarrow {\mathsf{RW}}(T)$ associated with $\cal P$} is defined as follows. For every arc $e\in A(D)$, define $\phi(e)=1$ if there is no path in $\cal P$ that traverses $e$, and $\phi(e)=t$ otherwise, where $t\in T$ is the end-vertex of the (unique) path in $\cal P$ that traverses $e$. \end{definition} The following proposition, due to Schrijver \cite{DBLP:journals/siamcomp/Schrijver94}, also holds for the above definition of flow. \begin{proposition}[Proposition 5 in \cite{DBLP:journals/siamcomp/Schrijver94}]\label{prop:homology} There exists a polynomial-time algorithm that, given an instance $(D,S,T,g,k)$ of \textsf{Directed Planar Disjoint Paths}\ and a flow $\phi$, either finds a solution of $(D,S,T,g,k)$ or determines that there is no solution of $(D,S,T,g,k)$ such that the flow associated with it and $\phi$ are homologous. \end{proposition} We need a slightly more general version of this proposition because we will work with an instance of \textsf{Directed Planar Disjoint Paths}\ where $D$ contains some ``fake'' edges that emerge when we consider the radial completion of the input graph---the edges added to the graph in order to attain its radial completion are considered to be fake.
\begin{corollary}\label{cor:homology} There exists a polynomial-time algorithm that, given an instance $(D,S,T,g,k)$ of \textsf{Directed Planar Disjoint Paths}, a flow $\phi$ and a subset $X\subseteq A(D)$, either finds a solution of $(D-X,S,T,g,k)$ or determines that there is no solution of $(D-X,S,T,g,k)$ such that the flow associated with it and $\phi$ are homologous.\footnote{Note that $\phi$ and homology concern $D$ rather than $D-X$.} \end{corollary} \begin{proof} Given $(D,S,T,g,k)$, $\phi$ and $X$, the algorithm constructs an equivalent instance $(D',S,T,g,$ $k)$ of \textsf{Directed Planar Disjoint Paths}\ and a flow $\phi'$ as follows. Each arc $(u,v) \in X$ is replaced by a new vertex $w$ and two new arcs (whose drawing coincides with the former drawing of $(u,v)$), $(u,w)$ and $(v,w)$, and we define $\phi'(u,w) = \phi(u,v)$ and $\phi'(v,w) = \phi(u,v)^{-1}$. For all other arcs $a \in A(D) \cap A(D')$, $\phi'(a) = \phi(a)$. It is immediate to verify that $\phi'$ is a flow in $D'$, and that $(D',S,T,g,k)$ admits a solution whose associated flow is homologous to $\phi'$ if and only if $(D,S,T,g,k)$ admits a solution that is disjoint from $X$ and whose associated flow is homologous to $\phi$. Indeed, any solution of one of these instances is also a solution of the other one. Now, we apply Proposition~\ref{prop:homology} to either obtain a solution of $(D',S,T,g,k)$ or conclude that no solution of $(D',S,T,g,k)$ whose associated flow is homologous to $\phi'$ exists. \end{proof} \section{Preprocessing to Obtain a Good Instance}\label{sec:preprocessing} We denote an instance of \textsf{Planar Disjoint Paths}\ similarly to an instance of \textsf{Directed Planar Disjoint Paths}\ (in Section \ref{sec:prelims}) except that now the graph is denoted by $G$ rather than $D$ to stress the fact that it is undirected. Formally, an instance of \textsf{Planar Disjoint Paths}\ is a tuple $(G,S,T,g,k)$ where $G$ is a plane graph, $S,T\subseteq V(G)$, $k=|S|$ and $g: S\rightarrow T$ is bijective. Moreover, we say that $(G,S,T,g,k)$ is {\em nice} if every vertex in $S\cup T$ has degree $1$ and $S\cap T=\emptyset$. The vertices in $S\cup T$ are called {\em terminals}. Let $H_G$ be the radial completion of $G$. We choose a plane embedding of $H_G$ so that one of the terminals, $t^\star\in T$, will lie on the outer face.\footnote{This can be ensured by starting with an embedding of $H_G$ on a sphere, picking some face where $t^\star$ lies as the outer face, and then projecting the embedding onto the plane.} A {\em solution} of an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ is a set ${\cal P}$ of pairwise vertex-disjoint paths in $G$ that contains, for every vertex $s\in S$, a path with endpoints $s$ and~$g(s)$. The following proposition eliminates all long sequences of $S\cup T$-free concentric cycles. \begin{proposition}[\cite{DBLP:journals/jct/AdlerKKLST17}]\label{prop:concentric} There exists a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G,S,T,g,$ $k)$ of \textsf{Planar Disjoint Paths}, outputs an equivalent instance $(G',S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ where $G'$ is a subgraph of $G$ that has no sequence of $S\cup T$-free concentric cycles whose length is larger than $2^{ck}$ for some fixed constant $c\geq 1$. \end{proposition} Additionally, the following proposition reduces the treewidth of $G$. In fact, Proposition \ref{prop:concentric} was given by Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17} as a step towards the proof of the following proposition.
However, while the absence of a long sequence of concentric cycles implies that the treewidth is small, the converse does not hold (i.e.~small treewidth does not imply the absence of a long sequence of concentric cycles). Having small treewidth is required but not sufficient for our arguments; thus, we cite both propositions. \begin{proposition}[Lemma 10 in \cite{DBLP:journals/jct/AdlerKKLST17}]\label{prop:twReduction} There exists a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, outputs an equivalent instance $(G',S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ where $G'$ is a subgraph of $G$ whose treewidth is upper bounded by $2^{ck}$ for some fixed constant $c\geq 1$.\footnote{While the running time in Lemma 10 in \cite{DBLP:journals/jct/AdlerKKLST17} is stated to be $2^{2^{\mathcal{O}(k)}}n^2$, the proof is easily seen to imply the bound $2^{\mathcal{O}(k)}n^2$ in our statement. The reason why Adler et al.~\cite{DBLP:journals/jct/AdlerKKLST17} obtain a double-exponential dependence on $k$ when they solve \textsf{Planar Disjoint Paths}\ is {\em not} due to the computation that attains a tree decomposition of width $\mathtt{tw}=2^{\mathcal{O}(k)}$, but because, upon having such a tree decomposition, they solve the problem in time $2^{\mathcal{O}(\mathtt{tw})}n=2^{2^{\mathcal{O}(k)}}n$.} \end{proposition} The purpose of this section is to transform an arbitrary instance of \textsf{Planar Disjoint Paths}\ into a so-called ``good'' instance, defined as follows. \begin{definition}\label{def:goodInstance} An instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ is {\em good} if it is nice, at least one terminal $t^\star\in T$ belongs to the outer faces of both $G$ and its radial completion $H_G$, the treewidth of $G$ is upper bounded by $2^{ck}$, and $G$ has no $S\cup T$-free sequence of concentric cycles whose length is larger than $2^{ck}$. Here, $c\geq 1$ is the fixed constant equal to the maximum among the fixed constants in Propositions \ref{prop:concentric} and \ref{prop:twReduction}. \end{definition} Towards this transformation, note that given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ and a terminal $v\in S$, we can add to $G$ a degree-1 vertex $u$ adjacent to $v$, and replace $v$ by $u$ in $S$ and in the domain of $g$. This operation results in an equivalent instance of \textsf{Planar Disjoint Paths}. Furthermore, it does not increase the treewidth of $G$ (unless the treewidth of $G$ is $0$, in which case it increases to $1$). The symmetric operation can be done for any terminal $v\in T$. By repeatedly applying these operations, we can easily transform $(G,S,T,g,k)$ to an equivalent nice instance. Moreover, the requirement that at least one terminal $t^\star\in T$ belongs to the outer faces of both $G$ and its radial completion can be satisfied without loss of generality by drawing $G$ appropriately in the first place. Thus, we obtain the following corollary of Propositions \ref{prop:concentric} and~\ref{prop:twReduction}. Note that $k$ remains unchanged. \begin{corollary}\label{cor:twReduction} There exists a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, outputs an equivalent good instance $(G',S',T',g',k)$ of \textsf{Planar Disjoint Paths}\ where $|V(G')|=\mathcal{O}(|V(G)|)$. \end{corollary} We remark that our algorithm, presented in Section \ref{sec:algorithm}, will begin by applying Corollary~\ref{cor:twReduction}.
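To illustrate the pendant-terminal operation described above on a concrete (hypothetical) toy case: suppose $s\in S$ has degree $3$ in $G$ and $g(s)=t$. We add a new vertex $u$ together with the edge $\{u,s\}$, replace $s$ by $u$ in $S$, and set $g(u)=t$. Any solution path connecting $u$ and $t$ in the new instance must start with the edge $\{u,s\}$ and then continue as a path between $s$ and $t$ in $G$, and conversely, any path between $s$ and $t$ in $G$ extends to one between $u$ and $t$; since $u$ is a new vertex, this does not affect vertex-disjointness, which is why the two instances are equivalent, while the new terminal $u$ has degree $1$ as required by niceness.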
To simplify arguments ahead, it will be convenient to suppose that every edge in $H_G$ has $4|V(G)|+1=4n+1$ parallel copies. Thus, we slightly abuse the notation $H_G$ and use it to refer to $H_G$ enriched with such a number of parallel copies of each edge. For a pair of adjacent vertices $u,v\in V(H_G)$, we will denote the $4n+1$ parallel copies of edges between them by $e_{-2n},e_{-2n+1},\ldots,e_{-1},e_0,e_1,e_2,\ldots,e_{2n}$ where $e=\{u,v\}$, such that when the edges incident to $u$ (or $v$) are enumerated in cyclic order, the occurrences of $e_i$ and $e_{i+1}$ are consecutive for every $i\in\{-2n,-2n+1,\ldots,2n-1\}$, and $e_{-2n}$ and $e_{2n}$ are the outermost copies of $e$. Thus, for every $i\in\{-2n+1,-2n+2,\ldots,2n-1\}$, $e_i$ lies on the boundary of exactly two faces: the face bounded by $e_{i-1}$ and $e_i$, and the face bounded by $e_i$ and $e_{i+1}$. When the specification of the precise copy under consideration is immaterial, we sometimes simply use the notation $e$. \section{Discrete Homotopy}\label{sec:discreteHomotopy} The purpose of this section is to assert that rather than working with homology (Definition \ref{def:homology}) or the standard notion of homotopy, to obtain our algorithm it will suffice to work with a notion called {\em discrete homotopy}. Working with discrete homotopy will {\em substantially shorten and simplify} our proof, particularly Section \ref{sec:pushing}. Translation from discrete homotopy to homology is straightforward, thus readers are invited to skip the proofs in this section when reading the paper for the first time. We begin by defining the notion of a weak linkage. This notion is a generalization of a linkage (see Section \ref{sec:prelims}) that concerns walks rather than paths, and which permits the walks to intersect one another in vertices. Here, we only deal with walks that may repeat vertices but which do not repeat edges. Moreover, weak linkages concern walks that are {\em non-crossing}, a property defined as follows (see Fig.~\ref{fig:crossing}). \begin{figure} \caption{Crossing (left) and non-crossing (right) walks.} \label{fig:crossing} \end{figure} \begin{definition}[{\bf Non-Crossing Walks}]\label{def:nonCrossingWalks} Let $G$ be a plane graph, and let $W$ and $W'$ be two edge-disjoint walks in $G$. A {\em crossing} of $W$ and $W'$ is a tuple $(v,e,\widehat{e},e',\widehat{e}')$ where $e,\widehat{e}$ are consecutive in $W$, $e',\widehat{e}'$ are consecutive in $W'$, $v\in V(G)$ is an endpoint of $e,\widehat{e},e'$ and $\widehat{e}'$, and when the edges incident to $v$ are enumerated in clockwise order, then exactly one edge in $\{e',\widehat{e}'\}$ occurs between $e$ and $\wh{e}$. We say that $W$ is {\em self-crossing} if, either it has a repeated edge, or it has two edge-disjoint subwalks that are crossing. \end{definition} We remark that when we say that a collection of edge-disjoint walks is non-crossing, we mean that none of its walks is self-crossing and no pair of its walks has a crossing. \begin{definition}[{\bf Weak Linkage}]\label{def:weakLinkage} Let $G$ be a plane graph. A {\em{weak linkage}} in $G$ of order $k$ is an ordered family of $k$ edge-disjoint non-crossing walks in $G$. Two weak linkages ${\cal W}=(W_1,\ldots,W_k)$ and $\mathcal{Q}=(Q_1,\ldots,Q_k)$ are {\em{aligned}} if for all $i\in\{1,\ldots,k\}$, $W_i$ and $Q_i$ have the same endpoints. 
Given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, a weak linkage ${\cal W}$ in $G$ (or $H_G$) is {\em sensible} if its order is $k$ and for every terminal $s\in S$, ${\cal W}$ has a walk with endpoints $s$ and $g(s)$. \end{definition} The following observation is clear from Definitions \ref{def:nonCrossingWalks} and \ref{def:weakLinkage}. \begin{observation} Let $G$ be a plane graph, and let $\cal W$ be a weak linkage in $G$. Let $e_1,e_2$ and $e_3,e_4$ be two pairs of edges in $E({\cal W})$ that are all distinct and incident to a vertex $v$, such that there is some walk in $\cal W$ where $e_1,e_2$ are consecutive, and likewise for $e_3,e_4$. Then, in a clockwise enumeration of edges incident to $v$, the pairs $e_1,e_2$ and $e_3,e_4$ do not cross, that is, they do not occur as $e_1, e_3, e_2, e_4$ in clockwise order (including cyclic shifts). \end{observation} We now define the collection of operations applied to transform one weak linkage into another weak linkage aligned with it (see Fig.~\ref{fig:faceOp}). We remark that the face push operation is not required for our arguments, but we present it here to ensure that discrete homotopy defines an equivalence relation (in case it will find other applications that need this property). \begin{figure} \caption{Face operations.} \label{fig:faceOp} \end{figure} \begin{definition}[{\bf Operations in Discrete Homotopy}]\label{def:discreteHomotopyOperations} Let $G$ be a triangulated plane graph with a weak linkage ${\cal W}$, and a face $f$, other than the outer face, with boundary cycle $C$. Let~$W\!\in\! {\cal W}$. \begin{itemize} \item {\bf \textsf{Face Move}.} Applicable to $(W,f)$ if there exists a subpath $P$ of $C$ such that {\em (i)} $P$ is a subwalk of $W$, {\em (ii)} $1\leq |E(P)|\leq |E(f)|-1$, and {\em (iii)} no edge in $E(C)\setminus E(P)$ belongs to any walk in $\cal W$. Then, the face move operation replaces $P$ in $W$ by the unique subpath of $C$ between the endpoints of $P$ that is edge-disjoint from $P$. \item {\bf \textsf{Face Pull}.} Applicable to $(W,f)$ if $C$ is a subwalk $Q$ of $W$. Then, the face pull operation replaces $Q$ in $W$ by a single occurrence of the first vertex in $Q$. \item {\bf \textsf{Face Push}.} Applicable to $(W,f)$ if {\em (i)} no edge in $E(C)$ belongs to any walk in $\cal W$, and {\em (ii)} there exist two consecutive edges $e,e'$ in $W$ with common vertex $v\in V(C)$ (where $W$ visits $e$ first, and $v$ is visited between $e$ and $e'$) and an order (clockwise or counter-clockwise) to enumerate the edges incident to $v$ starting at $e$ such that the two edges of $E(C)$ incident to $v$ are enumerated between $e$ and $e'$, and for any pair of consecutive edges of $W'$ for all $W'\in{\cal W}$ incident to $v$, it does not hold that one is enumerated between $e$ and the two edges of $E(C)$ while the other is enumerated between $e'$ and the two edges of $E(C)$. Let $\widetilde{e}$ be the first among the two edges of $E(C)$ that is enumerated. Then, the face push operation replaces the occurrence of $v$ between $e$ and $e'$ in $W$ by the traversal of $C$ starting~at~$\widetilde{e}$. \end{itemize} \end{definition} We verify that the application of a single operation results in a weak linkage. \begin{observation} Let $G$ be a triangulated plane graph with a weak linkage ${\cal W}$, and a face $f$ that is not the outer face. Let $W\in {\cal W}$ be such that a discrete homotopy operation is applicable to $(W,f)$. Then, the result of the application is another weak linkage aligned with $\cal W$.
\end{observation} Then, discrete homotopy is defined as follows. \begin{definition}[{\bf Discrete Homotopy}]\label{def:discreteHomotopy} Let $G$ be a triangulated plane graph with weak linkages ${\cal W}$ and ${\cal W}'$. Then, $\cal W$ is {\em discretely homotopic} to ${\cal W}'$ if there exists a finite sequence of discrete homotopy operations such that when we start with $\cal W$ and apply the operations in the sequence one after another, every operation is applicable, and the final result is~${\cal W}'$. \end{definition} We verify that discrete homotopy gives rise to an equivalence relation. \begin{lemma} Let $G$ be a triangulated plane graph with weak linkages ${\cal W},{\cal W}'$ and ${\cal W}''$. Then, {\em (i)} $\cal W$ is discretely homotopic to itself, {\em (ii)} if ${\cal W}$ is discretely homotopic to ${\cal W}'$, then ${\cal W}'$ is discretely homotopic to ${\cal W}$, and {\em (iii)} if ${\cal W}$ is discretely homotopic to ${\cal W}'$ and ${\cal W}'$ is discretely homotopic to ${\cal W}''$, then $\cal W$ is discretely homotopic to ${\cal W}''$. \end{lemma} \begin{proof} Statement $(i)$ is trivially true. The proof of statement $(ii)$ is immediate from the observation that each discrete homotopy operation has a distinct inverse. Indeed, every face move operation is invertible by a face move operation (applied to the same walk and cycle). Additionally, every face pull operation is invertible by a face push operation (applied to the same walk and cycle), and vice versa. Hence, given the sequence of operations to transform $\cal W$ to $\cal W'$, say $\phi$, the sequence of operations to transform $\cal W'$ to $\cal W$ is obtained by first writing the operations of $\phi$ in reverse order and then inverting each of them. Finally, Statement $(iii)$ follows by considering the sequence of discrete homotopy operations obtained by concatenating the sequence of operations to transform $\cal W$ to $\cal W'$ and the sequence of operations to transform $\cal W'$ to $\cal W''$. \end{proof} Towards the translation of discrete homotopy to homology, we need to associate a flow with every weak linkage and thereby extend Definition \ref{def:solToFlow}. \begin{definition} Let $(D,S,T,g,k)$ be an instance of \textsf{Directed Planar Disjoint Paths}. Let ${\cal W}$ be a sensible weak linkage in $D$. The {\em flow $\phi: A(D)\rightarrow {\mathsf{RW}}(T)$ associated with $\cal W$} is defined as follows. For every arc $e\in A(D)$, define $\phi(e)=1$ if there is no walk in $\cal W$ that traverses $e$, and $\phi(e)=t$ otherwise where $t\in T$ is the end-vertex of the (unique) walk in $\cal W$ that traverses $e$. \end{definition} Additionally, because homology concerns directed graphs, we need the following notation. Given a graph $G$, we let $\vec{G}$ denote the directed graph obtained by replacing every edge $e\in E(G)$ by two arcs of opposite orientations with the same endpoints as $e$. Notice that $G$ and the underlying graph of $\vec{G}$ are not equal (in particular, the latter graph contains twice as many edges as the first one). Given a weak linkage $\cal W$ in $G$, the weak linkage in $\vec{G}$ that corresponds to $\cal W$ is the weak linkage obtained by replacing each edge $e$ in each walk in $\cal W$, traversed from $u$ to $v$, by the copy of $e$ in $\vec{G}$ oriented from $u$ to $v$. Now, we are ready to translate discrete homotopy to homology. \begin{lemma}\label{lem:discreteHomotopyToHomology} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}\ where $G$ is triangulated. 
Let ${\cal W}$ be a sensible weak linkage in $G$. Let ${\cal W}'$ be a weak linkage discretely homotopic to ${\cal W}$. Let $\widehat{\cal W}$ and $\widehat{\cal W}'$ be the weak linkages corresponding to ${\cal W}$ and ${\cal W}'$ in $\vec{G}$, respectively. Then, the flow associated with $\widehat{\cal W}$ is homologous to the flow associated with $\widehat{\cal W}'$. \end{lemma} \begin{proof} Let $\phi$ and $\psi$ be the flows associated with $\wh{\cal W}$ and $\wh{\cal W}'$, respectively, in $\vec{G}$. Consider a sequence $O_1, O_2, \ldots, O_\ell$ of discrete homotopy operations that, starting from ${\cal W}$, result in ${\cal W}'$. We prove the lemma by induction on $\ell$. Consider the case when $\ell = 1$. Then, the sequence contains only one discrete homotopy operation, which is a face move, face pull or face push operation. Let this operation be applied to a face $f$ and a walk ${W} \in {\cal W}$, where the walk ${W}$ goes from $s \in S$ to $g(s) \in T$. Let $C$ be the boundary cycle of $f$ in $G$, and let $\vec{C}$ denote the collection of arcs in $G$ obtained from the edges of $C$. After this discrete homotopy operation, we obtain a walk ${W}' \in {\cal W}'$, which differs from ${W}$ only in edges of $C$. All other walks are identical between ${\cal W}$ and ${\cal W}'$. Hence, $\wh{\cal W}$ and $\wh{\cal W}'$ differ in $\vec{G}$ only in a subset of $\vec{C}$. Then observe that the flows $\phi$ and $\psi$ are identical everywhere in $A(\vec{G})$ except for a subset of $\vec{C}$. More precisely, let $P = \vec{C}\cap \wh{W}$ and $P' = \vec{C} \cap \wh{W}'$. Then $\phi(e) = g(s)$ if $e \in P$ and $\phi(e) = 1$ if $e \in \vec{C} - P$; a similar statement holds for $\psi$ and $P'$. Furthermore, it is clear from the description of each of the discrete homotopy operations that $P$ and $P'$ have no common edges and $P \cup P'$ is the (undirected)\footnote{That is, the underlying undirected graph of $\vec{C}$ is a cycle.} cycle $\vec{C}$ in $\vec{G}$. It only remains to describe the homology between the flows $\phi$ and $\psi$, which is exhibited by a function $h$ on the faces of $\vec{G}$. Then $h$ assigns $1$ to all faces of $\vec{G}$ that lie in the exterior of $\vec{C}$, and $g(s)$ to all the faces that lie in the interior of $\vec{C}$. Note that $h$ assigns $1$ to the outer face of $\vec{G}$. It is easy to verify that $h$ is indeed a homology between $\psi$ and $\phi$, that is, for any arc $e \in A(\vec{G})$ it holds that $\psi(e) = h(f_1)^{-1} \cdot \phi(e) \cdot h(f_2)$, where $f_1$ and $f_2$ are the faces on the left and the right of $e$ with respect to its orientation. This proves the case where $\ell = 1$. Now for $\ell > 1$, consider the weak linkage $\wh{\cal W}^\star$ obtained from $\wh{\cal W}$ after applying the sequence $O_1, O_2, \ldots, O_{\ell - 1}$. Then by the induction hypothesis, we can assume that the flow associated with $\wh{\cal W}^\star$, say $\psi^\star$, is homologous to $\phi$. Further, applying $O_\ell$ to $\wh{\cal W}^\star$ gives us $\wh{\cal W}'$, and hence, by the argument for the case $\ell = 1$, the flows $\psi^\star$ and $\psi$ are homologous. Hence, by Observation~\ref{obs:homProp} the flows $\phi$ and $\psi$ are~homologous. \end{proof} Having Corollary \ref{cor:homology} and Lemma \ref{lem:discreteHomotopyToHomology} at hand, we prove the following lemma.
\begin{lemma}\label{lemma:discreteHomotopy} There exists a polynomial-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ where $G$ is triangulated, a sensible weak linkage $\cal W$ in $G$ and a subset $X\subseteq E(G)$, either finds a solution of $(G-X,S,T,g,k)$ or determines that no solution of $(G-X,S,T,g,k)$ is discretely homotopic to $\cal W$ in $G$. \end{lemma} \begin{proof} We first convert the given instance of \textsf{Planar Disjoint Paths}\ into an instance of \textsf{Directed Planar Disjoint Paths}\ as follows. We convert the graph $G$ into the digraph $\vec{G}$, as described earlier. Then we construct $\vec{X}$ from $X$ by picking the two arcs of opposite orientation for each edge in $X$. Then we convert the sensible weak linkage $\cal W$ into a weak linkage $\wh{\cal W}$ in $\vec{G}$. Finally, we obtain the flow $\phi$ in $\vec{G}$ associated with $\wh{\cal W}$. Next, we apply Corollary~\ref{cor:homology} to the instance $(\vec{G},S,T,g,k)$, $\vec{X}$ and $\phi$. Then either it returns a solution $\wh{\mathcal{P}}$ that is disjoint from $\vec{X}$, or it determines that there is no solution whose associated flow is homologous to $\phi$ and that is disjoint from $\vec{X}$. In the first case, $\wh{\mathcal{P}}$ can be easily turned into a solution $\mathcal{P}$ for the undirected input instance that is disjoint from $X$. In the second case, we can conclude that the undirected input instance has no solution that is discretely homotopic to $\cal W$. Indeed, if this were not the case, then consider a solution $\mathcal{P}$ to $(G-X, S,T,g,k)$ that is discretely homotopic to $\cal W$. Then we have a solution $\wh{\mathcal{P}}$ to the directed instance that is disjoint from $\vec{X}$. Hence, by Lemma~\ref{lem:discreteHomotopyToHomology}, the flow associated with $\wh{\mathcal{P}}$ is homologous to $\phi$, the flow associated with $\wh{\cal W}$. Hence, $\wh{\mathcal{P}}$ is a solution to the instance $(\vec{G},S,T,g,k)$ that is disjoint from $\vec{X}$ and whose flow is homologous to $\phi$. But this contradicts the output of Corollary~\ref{cor:homology}. \end{proof} As a corollary to this lemma, we derive the following result. \begin{corollary}\label{cor:discreteHomotopy} There exists a polynomial-time algorithm that, given an instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ and a sensible weak linkage $\cal W$ in $H_G$, either finds a solution of $(G,S,T,g,$ $k)$ or decides that no solution of $(G,S,T,g,k)$ is discretely homotopic to $\cal W$ in $H_G$. \end{corollary} \begin{proof} Consider the instance $(H_G,S,T,g,k)$ along with the set $X=E(H_G) \setminus E(G)$ of forbidden edges. We then apply Lemma~\ref{lemma:discreteHomotopy} to $(H_G,S,T,g,k)$, $X$ and $\cal W$ (note that $H_G$ is triangulated). If we obtain a solution to this instance, then it is also a solution in $G$ since it traverses only edges in $E(H_G)\setminus X = E(G)$. Else, we correctly conclude that there is no solution of $(H_G-X,S,T,g,k)$ (and hence also of $(G,S,T,g,k)$) that is discretely homotopic to $\cal W$~in~$H_G$. \end{proof} \section{Construction of the Backbone Steiner Tree}\label{sec:steiner} In this section, we construct a tree that we call a backbone Steiner tree $R=R^3$ in $H_G$. Recall that $H_G$ is the radial completion of $G$ enriched with $4|V(G)|+1$ parallel copies of each edge. These parallel copies will not be required during the construction of $R$, and therefore we will treat $H_G$ as having just one copy of each edge.
Hence, we can assume that $H_G$ is a simple planar graph, and then $|E(H_G)| = \mathcal{O}(n)$ where $n$ is the number of vertices in $G$. We denote $H=H_G$ when $G$ is clear from context. The tree $R$ will be proven to admit the following property: if the input instance is a \textsf{Yes}-instance, then it admits a solution ${\cal P}=(P_1,\ldots,P_k)$ that is discretely homotopic to a weak linkage ${\cal W}=(W_1,\ldots,W_k)$ in $H$ aligned with ${\cal P}$ that uses, for each edge of $R$, at most $2^{\mathcal{O}(k)}$ of its parallel copies, and uses no edge that is not parallel to an edge of $R$. We use the term Steiner tree to refer to any subtree of $H$ whose set of leaves is precisely $S\cup T$. To construct the backbone Steiner tree $R=R^3$, we start with an arbitrary Steiner tree $R^1$ in $H$. Then over several steps, we modify the tree to satisfy several useful properties. \subsection{Step I: Initialization} We initialize $R^1$ to be an arbitrarily chosen Steiner tree. Thus, $R^1$ is a subtree of $H$ such that $V_{=1}(R^1)=S\cup T$. The following observation is immediate from the definition of a Steiner tree. \begin{observation}\label{obs:leaIntSteiner} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R'$ be a Steiner tree. Then, $|V_{=1}(R')|=2k$ and $|V_{\geq 3}(R')|\leq 2k-1$. \end{observation} Before we proceed to the next step, we claim that every vertex of $H$ is, in fact, ``close'' to the vertex set of $R^1$. For this purpose, we need the following proposition by Jansen et al.~\cite{DBLP:conf/soda/JansenLS14}. \begin{proposition}[Proposition 2.1 in \cite{DBLP:conf/soda/JansenLS14}]\label{prop:concentricAtAllDists} Let $G$ be a plane graph with disjoint subsets $X,Y\subseteq V(G)$ such that $G[X]$ and $G[Y]$ are connected graphs and ${\sf rdist}_G(X,Y) = d \geq 2$. For any $r\in\{0,1,\ldots,d-1\}$, there is a cycle $C$ in $G$ such that all vertices $u\in V(C)$ satisfy ${\sf rdist}_G(X,\{u\}) = r$, and such that $V(C)$ separates $X$ and $Y$ in $G$. \end{proposition} Additionally, we need the following simple observation. \begin{observation}\label{obs:dist-eq-rdist-in-triangle-graph} Let $G$ be a triangulated plane graph. Then, for any pair of vertices $u,v \in V(G)$, $\ensuremath{\mathsf{dist}}_G(u,v) = {\sf rdist}_G(u,v)$. \end{observation} \begin{proof} Let ${\sf rdist}_G(u,v) = t$, and consider a sequence of vertices $u=x_1, x_2, \ldots, x_{t+1}=v$ that witnesses this fact---then, every two consecutive vertices in this sequence have a common face. Since $G$ is triangulated, we have that $\{x_i,x_{i+1}\} \in E(G)$ for every two consecutive vertices $x_i,x_{i+1}$, $1 \leq i \leq t$. Hence, $x_1, x_2, \ldots, x_{t+1}$ is a walk from $u$ to $v$ in $G$ with $t$ edges, and therefore $\ensuremath{\mathsf{dist}}_G(u,v) \leq {\sf rdist}_G(u,v)$. Conversely, let $\ensuremath{\mathsf{dist}}_G(u,v) = \ell$; then, there is a path with $\ell$ edges from $u$ to $v$ in $G$, which gives us a sequence of vertices $u=y_1, y_2, \ldots, y_{\ell+1} = v$ where each pair of consecutive vertices forms an edge in $G$. Since $G$ is planar, each such pair of consecutive vertices $y_i,y_{i+1}$, $1 \leq i \leq \ell$, must have a common face. Therefore, ${\sf rdist}_G(u,v) \leq \ensuremath{\mathsf{dist}}_G(u,v)$. \end{proof} It is easy to see that Observation \ref{obs:dist-eq-rdist-in-triangle-graph} is not true for general plane graphs. However, this observation will be useful for us because the graph $H$, where we construct the backbone Steiner tree, is triangulated.
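To see that the observation may indeed fail without the triangulation assumption, consider the following small example (given here only for illustration): let the plane graph consist of a single cycle on $2\ell$ vertices, and let $u$ and $v$ be two antipodal vertices of the cycle. Then $\ensuremath{\mathsf{dist}}_G(u,v)=\ell$, whereas ${\sf rdist}_G(u,v)=1$, since $u$ and $v$ lie on a common face (both the inner and the outer face of the cycle).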
We now present the promised claim, whose proof is based on Proposition \ref{prop:concentricAtAllDists}, Observation \ref{obs:dist-eq-rdist-in-triangle-graph} and the absence of long sequences of $S\cup T$-free concentric cycles in good instances. Here, recall that $c$ is the fixed constant in Corollary~\ref{cor:twReduction}. We remark that, for the sake of clarity, throughout the paper we denote some natural numbers whose value depends on $k$ by notations of the form $\alpha_{\mathrm{subscript}}(k)$ where the subscript of $\alpha$ hints at the use of the value. \begin{lemma}\label{lem:closeToR} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R'$ be a Steiner tree. For every vertex $v\in V(H)$, it holds that $\ensuremath{\mathsf{dist}}_H(v,V(R'))\leq \alpha_{\mathrm{dist}}(k):=4\cdot 2^{ck}$. \end{lemma} \begin{proof} Suppose, by way of contradiction, that $\ensuremath{\mathsf{dist}}_H(v^\star,V(R'))>\alpha_{\mathrm{dist}}(k)$ for some vertex $v^\star\in V(H)$. Since $H$ is the (enriched) radial completion of $G$, it is triangulated. By Observation~\ref{obs:dist-eq-rdist-in-triangle-graph}, ${\sf rdist}_H(u,v) = \ensuremath{\mathsf{dist}}_H(u,v)$ for any pair of vertices $u,v \in V(H)$. Thus, ${\sf rdist}_H(v^\star,V(R'))>\alpha_{\mathrm{dist}}(k)$. By Proposition \ref{prop:concentricAtAllDists}, for any $r\in\{0,1,\ldots,\alpha_{\mathrm{dist}}(k)\}$, there is a cycle $C_r$ in $H$ such that all vertices $u\in V(C_r)$ satisfy ${\sf rdist}_H(v^\star,u)={\sf dist}_H(v^\star,u) = r$, and such that $V(C_r)$ separates $\{v^\star\}$ and $V(R')$ in $H$. In particular, these cycles must be pairwise vertex-disjoint, and each one of them contains either {\em (i)} $v^\star$ in its interior (including the boundary) and $V(R')$ in its exterior (including the boundary), or {\em (ii)} $v^\star$ in its exterior (including the boundary) and $V(R')$ in its interior (including the boundary). We claim that only case (i) is possible. Indeed, suppose by way of contradiction that $C_i$, for some $r\in\{0,1,\ldots,\alpha_{\mathrm{dist}}(k)\}$, contains $v^\star$ in its exterior and $V(R')$ in its interior. Because the outer face of $H$ contains a terminal $t^\star\in T$ and $t^\star\in V(R')$, we derive that $t^\star\in V(C_i)$. Thus, ${\sf rdist}_H(v^\star,t^\star)= i\leq \alpha_{\mathrm{dist}}(k)$. However, because $t^\star\in V(R')$, this is a contradiction to the supposition that $\ensuremath{\mathsf{dist}}_H(v^\star,V(R'))>\alpha_{\mathrm{dist}}(k)$. Thus, our claim holds true. From this, we conclude that ${\cal C}=(C_0,C_1,\ldots,C_{\alpha_{\mathrm{dist}}(k)})$ is a $V(R')$-free sequence of concentric cycles in $H$. Since $S\cup T\subseteq V(R')$, it is also $S\cup T$-free. Consider some odd integer $r\in\{1,2,\ldots,\alpha_{\mathrm{dist}(k)}\}$. Note that every vertex $u\in V(C_r)$ that does not belong to $V(G)$ lies in some face $f$ of $G$, and that the two neighbors of $u$ in $C_r$ must belong to the boundary of $f$ (by the definition of radial completion). Moreover, each of the vertices on the boundary of $f$ is at distance (in $H$) from $u$ that is the same, larger by one or smaller by one, than the distance of $f$ from $u$, and hence none of these vertices can belong to any $C_i$ for $i>r+1$ as well as $i<r+1$. 
For every $r\in\{1,2,\ldots,\alpha_{\mathrm{dist}}(k)-1\}$ such that $r\mod 3=1$, define $C'_r$ as some cycle contained in the closed walk obtained from $C_r$ by replacing every vertex $u\in V(C_r)\setminus V(G)$, with neighbors $x,y$ on $C_r$, by a path from $x$ to $y$ on the boundary of the face of $G$ that corresponds to $u$. In this manner, we obtain an $S\cup T$-free sequence of concentric cycles in $G$ whose length is at least $2^{ck}$. However, this contradicts the supposition that $(G,S,T,g,k)$ is good. \end{proof} \subsection{Step II: Removing Detours} In this step, we modify the Steiner tree to ensure that there exist no ``shortcuts'' via vertices outside the Steiner tree. This property will be required in subsequent steps to derive additional properties of the Steiner tree. To formulate this, we need the following definition (see Fig.~\ref{fig:undetour}). \begin{definition}[{\bf Detours in Trees}]\label{def:detour} A subtree $T$ of a graph $G$ {\em has a detour} if there exist two vertices $u,v\in V_{\geq 3}(T)\cup V_{=1}(T)$ that are near each other, and a path $P$ in $G$, such that \begin{enumerate} \item $P$ is shorter than $\ensuremath{\mathsf{path}}_T(u,v)$, and \item one endpoint of $P$ belongs to the connected component of $T-V(\ensuremath{\mathsf{path}}_T(u,v))\setminus\{u,v\}$ that contains $u$, and the other endpoint of $P$ belongs to the connected component of $T-V(\ensuremath{\mathsf{path}}_T(u,v))\setminus\{u,v\}$ that contains $v$. \end{enumerate} Such vertices $u,v$ and path $P$ are said to {\em witness the detour}. Moreover, if $P$ has no internal vertex from $(V(T)\setminus V(\ensuremath{\mathsf{path}}_T(u,v)))\cup\{u,v\}$ and its endpoints do not belong to $V_{=1}(T)\setminus\{u,v\}$, then $u,v$ and $P$ are said to {\em witness the detour compactly}. \end{definition} We compute a witness for a detour as follows. Note that this lemma also implies that, if there exists a detour, then there exists a compact witness rather than an arbitrary one. \begin{lemma}\label{lem:computeDetour} There exists an algorithm that, given a good instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ and a Steiner tree $R'$, determines in time $\mathcal{O}(k^2 \cdot n)$ whether $R'$ has a detour. In case the answer is positive, it returns $u,v$ and $P$ that witness the detour compactly. \end{lemma} \begin{proof} Let $Q = \ensuremath{\mathsf{path}}_{R'}(u,v) - \{u,v\}$ for some two vertices $u,v\in V_{\geq 3}(T)\cup V_{=1}(T)$ that are near each other. Then, $R' - V(Q)$ contains precisely two connected components: $R'_u$ and $R'_v$ that contain $u$ and $v$, respectively. Consider a path $P$ of minimum length between vertices $x \in V(R'_u)$ and $y \in V(R'_v)$ in $H$, over all choices of $x$ and $y$. Further, we choose $P$ so that contains as few vertices of $(S \cup T) \setminus \{u,v\}$ as possible. Suppose that $|E(P)| \leq |E(\ensuremath{\mathsf{path}}_{R'}(u,v))| - 1$. Then, we claim that $P$ is a compact detour witness. To prove this claim, we must show that $(i)$~no internal vertex of $P$ lies in $(V(R')\setminus V(\ensuremath{\mathsf{path}}_{R'}(u,v)))\cup\{u,v\} = V(R'_u) \cup V(R'_v)$, and $(ii)$~the endpoints of $P$ do not lie in $V_{=1}(R') \setminus \{u,v\} = (S \cup T) \setminus \{u,v\}$. The first property follows directly from the choice of $P$. 
Indeed, if $P$ were a path from $x \in V(R'_u)$ to $y \in V(R'_v)$, which contained an internal vertex $z \in V(R'_u)$, then the subpath $P'$ of $P$ with endpoints $z$ and $y$ is a strictly shorter path from $V(R'_u)$ to $V(R'_v)$ (the symmetric argument holds when $z \in V(R'_v)$). For the second property, we give a proof by contradiction. To this end, suppose that some terminal $w \in (S \cup T) \setminus \{u,v\}$ belongs to $P$. Necessarily, $w \in V(R'_u)\cup V(R'_v)$ (by the definition of a Steiner tree). Without loss of generality, suppose that $w \in V(R'_u)$. By the first property, $w$ must be an endpoint of $P$. Let $z \in V(R'_v)$ be the other endpoint of $P$. Because the given instance is good, $w$ has degree $1$ in $G$, thus we can let $n(w)$ denote its unique neighbor in $G$. Observe that $w$ lies on only one face of $G$, which contains both $w$ and $n(w)$. Hence, $w$ is adjacent to exactly two vertices in $H$: $n(w)$ and a vertex $f(w)\in V(H)\setminus V(G)$. Furthermore, $\{n(w),f(w)\} \in E(H)$, i.e.~$w,n(w)$ and $f(w)$ form a triangle in $H$. Thus, $P$ contains exactly one of $n(w)$ or $f(w)$ (otherwise, we can obtain a strictly shorter path connecting $w$ and the other endpoint of $P$ that contradicts the choice of $P$). Let $a(w)\in\{n(w),f(w)\}$ denote the neighbor of $w$ in $P$, and note that, by the first property, $a(w)\not\in V(R'_u)$. Note that it may be the case that $a(w) = z$. Since $w$ is a leaf of $R'$, exactly one of $n(w)$ and $f(w)$ is adjacent to $w$ in $R'$, and we let $b(w)$ denote this vertex. Because $w\neq u$, we have that $V(R'_u)$ contains but is not equal to $\{w\}$, and therefore $b(w) \in V(R'_u)$. In turn, by the first property, this means that $a(w) \neq b(w)$ (because otherwise $a(w) \neq z$ and hence it is an internal vertex of $P$, which cannot belong to $V(R'_u)$). Because $w, a(w)$ and $b(w)$ form a triangle in $H$, we obtain a path $P'\neq P$ in $H$ by replacing $w$ with $b(w)$ in $P$. Observe that $P'$ connects the vertex $b(w) \in V(R'_u)$ to the vertex $z \in V(R'_v)$. Furthermore, because $|E(P')| = |E(P)|$, and $P'$ contains strictly fewer vertices of $(S \cup T) \setminus \{u,v\}$ compared to $P$, we contradict the choice of $P$. Therefore, $P$ also satisfies the second property, and we conclude that $u,v,P$ compactly witness a detour in~$R'$. We now show that a compact detour in $R'$ can be computed in $\mathcal{O}(k^2 \cdot n)$ time. First, observe that if there is a detour witnessed by some $u,v$ and $P$, then $u,v \in V_{\geq 3}(R') \cup V_{=1}(R')$. By Observation \ref{obs:leaIntSteiner}, $|V_{\geq 3}(R') \cup V_{=1}(R')| \leq 4k$. Therefore, there are at most $16k^2$ choices for the vertices $u$ and $v$. We consider each choice, and test if there is detour for it in linear time as follows. Fix a choice of distinct vertices $u,v \in V_{\geq 3}(R') \cup V_{=1}(R')$, and check if they are near each other in $R'$ in $O(|V(R)|)$ time by validating that each internal vertex of $\ensuremath{\mathsf{path}}_{R'}(u,v)$ has degree $2$. If they are not near each other, move on to the next choice. Otherwise, consider the path $Q = \ensuremath{\mathsf{path}}_{R'}(u,v) - \{u,v\}$, and the trees $R'_u$ and $R'_v$ of $R'-V(Q)$ that contain $u$ and $v$, respectively. Now, consider the graph $\widetilde{H}$ derived from $H$ by first deleting $(V(Q) \cup S \cup T) \setminus \{u,v\}$ and then introducing a new vertex $r$ adjacent to all vertices in $V(R'_u)$. We now run a breadth first search (BFS) from $r$ in $\widetilde{H}$. 
This step takes $\mathcal{O}(n)$ time since $|E(\widetilde{H})| = \mathcal{O}(n)$ (because $H$ is planar). From the BFS-tree, we can easily compute a shortest path $P$ between a vertex $x \in V(R'_u)$ and a vertex $y \in V(R'_v)$. Observe that $V(P) \cap (S \cup T) \subseteq \{u,v\}$ by the construction of $\widetilde{H}$. If $|E(P)| < |E(\ensuremath{\mathsf{path}}_{R'}(u,v))|$, then we output $u,v,P$ as a compact witness of a detour in $R'$. Else, we move on to the next choice of $u$ and $v$. If we fail to find a witness for all choices of $u$ and $v$, then we output that $R'$ has no detour. Observe that the total running time of this process is bounded by $\mathcal{O}(k^2 \cdot n)$. This concludes the proof. \end{proof} Accordingly, as long as $R^1$ has a detour, compactly witnessed by some vertices $u,v$ and a path $P$, we modify it as follows: we remove the edges and the internal vertices of $\ensuremath{\mathsf{path}}_{R^1}(u,v)$, and add the edges and the internal vertices of $P$. We refer to a single application of this operation as {\em undetouring} $R^1$. For a single application, because we consider compact witnesses rather than arbitrary ones, we have the following observation. \begin{observation}\label{obs:undetour} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}\ with a Steiner tree $R'$. The result of undetouring $R'$ is another Steiner tree with fewer edges than $R'$. \end{observation} \begin{proof} Consider a compact detour witness $u,v,P$ of $R'$. Then, $|E(P)| < |E(\ensuremath{\mathsf{path}}_{R'}(u,v))|$. Let $Q = \ensuremath{\mathsf{path}}_{R'}(u,v) - \{u,v\}$, and let $R'_u$ and $R'_v$ be the two trees of $R' - V(Q)$ that contain $u$ and $v$, respectively. Consider the graph $\tilde{R}$ obtained from $(R' - V(Q)) \cup P$ by iteratively removing any leaf vertex that does not lie in $S \cup T$. We claim that the graph $\widetilde{R}$ that result from undetouring $R'$ (with respect to $u,v,P$) is a Steiner tree with strictly fewer edges than $R'$. Clearly, $\widetilde{R}$ is connected because $P$ reconnects the two trees $R'_u$ and $R'_v$ of $R' - V(Q)$. Further, as $P$ contains no internal vertex from $(V(R')\setminus V(\ensuremath{\mathsf{path}}_{R'}(u,v)))\cup\{u,v\} = V(R'_u) \cup V(R'_v)$, and $R'_u$ and $R'_v$ are trees, $\widetilde{R}$ is cycle-free. Additionally, all the vertices in $S \cup T$ are present in $\widetilde{R}$ by construction and they remain leaves due to the compactness of the witness. Hence, $\widetilde{R}$ is a Steiner tree in $G$. Because $|E(P)| < |E(\ensuremath{\mathsf{path}}_{R'}(u,v))|$, it follows that $\widetilde{R}$ contains fewer edges than $R'$. \end{proof} Initially, $R^1$ has at most $n-1$ edges. Since every iteration decreases the number of edges (by Observation \ref{obs:undetour}) and can be performed in time $\mathcal{O}(k^2 \cdot n)$ (by Lemma \ref{lem:computeDetour}), we obtain the following result. \begin{lemma}\label{obs:undetourExhaustive} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}\ with a Steiner tree $R'$. An exhaustive application of the operation undetouring $R'$ can be performed in time $\mathcal{O}(k^2 \cdot n^2)$, and results in a Steiner tree that has no detour. \end{lemma} We denote the Steiner tree obtained at the end of Step II by $R^2$. \subsection{Step III: Small Separators for Long Paths} We now show that any two parts of $R^2$ that are ``far'' from each other can be separated by small separators in $H$. 
This is an important property used in the following sections to show the existence of a ``nice'' solution for the input instance. Specifically, we consider a ``long'' maximal degree-2 path in $R^2$ (which has no short detours in $H$), and show that there are two separators of small cardinality, each ``close'' to one end-point of the path. The main idea behind the proof of this result is that, if it were false, then the graph $H$ would have had large treewidth (see Proposition~\ref{prop:radialDisTw}), which contradicts that $H$ has bounded treewidth (by Corollary~\ref{cor:twReduction}). We first define the threshold that determines whether a path is long or short. \begin{definition}[{\bf Long Paths in Trees}]\label{def:longPath} Let $G$ be a graph with a subtree $T$. A subpath of $T$ is {\em $k$-long} if its length is at least $\alpha_{\mathrm{long}}(k):= 10^4\cdot 2^{ck}$, and {\em $k$-short} otherwise. \end{definition} As $k$ will be clear from context, we simply use the terms long and short. Towards the computation of two separators for each long path, we also need to define which subsets of $V(R^2)$ we would like to separate. \begin{definition}[{\bf $P'_u,P''_u,A_{R^2,P,u}$ and $B_{R^2,P,u}$}] Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour. For any long maximal degree-2 path $P$ of $R^2$ and for each endpoint $u$ of $P$, define $P'_u$, $P''_u$ and $A_{R^2,P,u},B_{R^2,P,u}\subseteq V(R^2)$ as follows. \begin{itemize} \item $P'_u$ (resp.~$P''_u$) is the subpath of $P$ consisting of the $\alpha_{\mathrm{pat}}(k):=100\cdot 2^{ck}$ (resp.~$\alpha_{\mathrm{pat}}(k)/2 = 50\cdot 2^{ck}$) vertices of $P$ closest to $u$. \item $A_{R^2,P,u}$ is the union of $V(P''_u)$ and the vertex set of the connected component of $R^2-(V(P'_u)\setminus \{u\})$ containing $u$. \item $B_{R^2,P,u}=V(R^2)\setminus(A_{R^2,P,u}\cup V(P'_u))$. \end{itemize} \end{definition} \begin{figure} \caption{Separators and flows for long degree-2 paths.} \label{fig:treeflow1} \end{figure} For each long maximal degree-2 path $P$ of $R^2$ and for each endpoint $u$ of $P$, we compute a ``small'' separator $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ as follows. Let $A=A_{R^2,P,u}$ and $B=B_{R^2,P,u}$. Then, compute a subset of $V(H)\setminus (A\cup B)$ of minimum size that separates $A$ and $B$ in $H$, and denote it by $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ (see Fig.~\ref{fig:treeflow1}). Since $A\cap B=\emptyset$ and there is no edge between a vertex in $A$ and a vertex in $B$ (because $R^2$ has no detours), such a separator exists. Moreover, it can be computed in time $\mathcal{O}(n|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|)$: contract each set among $A$ and $B$ into a single vertex and then obtain a minimum vertex $s-t$ cut by using Ford-Fulkerson algorithm. To argue that the size of $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ is upper bounded by $2^{\mathcal{O}(k)}$, we make use of the following proposition due to Bodlaender et al.~\cite{DBLP:journals/jacm/BodlaenderFLPST16}. \begin{proposition}[Lemma 6.11 in \cite{DBLP:journals/jacm/BodlaenderFLPST16}]\label{prop:radialDisTw} Let $G$ be a plane graph, and let $H$ be its radial completion. Let $t\in\mathbb{N}$. 
Let $C,Z,C_1,Z_1$ be disjoint subsets of $V(H)$ such that \begin{enumerate}\setlength\itemsep{0em} \item $H[C]$ and $H[C_1]$ are connected graphs, \item $Z$ separates $C$ from $Z_1\cup C_1$ and $Z_1$ separates $C\cup Z$ from $C_1$ in $H$, \item $\ensuremath{\mathsf{dist}}_{H}(Z,Z_1)\geq 3t+4$, and \item $G$ contains $t+2$ pairwise internally vertex-disjoint paths with one endpoint in $C\cap V(G)$ and the other endpoint in $C_1\cap V(G)$. \end{enumerate} Then, the treewidth of $G[V(M)\cap V(G)]$ is larger than $t$ where $M$ is the union of all connected components of $H\setminus (Z\cup Z_1)$ having at least one neighbor in $Z$ and at least one neighbor in~$Z_1$. \end{proposition} Additionally, the following immediate observation will come in handy. \begin{observation}\label{obs:radialRadial} Let $G$ be a plane graph. Let $H$ be the radial completion of $G$, and let $H'$ be the radial completion of $H$. Then, for all $u,v\in V(H)$, $\ensuremath{\mathsf{dist}}_H(u,v)\leq \ensuremath{\mathsf{dist}}_{H'}(u,v)$. \end{observation} \begin{proof} Note that $H$ is triangulated. Thus, for all $u,v\in V(H)$ and path $P$ in $H'$ between $u$ and $v$, we can obtain a path between $u$ and $v$ whose length is not longer than the length of $P$ by replacing each vertex $w\in V(H')\setminus V(H)$ by at most one vertex of the boundary of the face in $H$ that $w$ represents. \end{proof} We now argue that $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ is small. \begin{lemma}\label{lem:sepSmall} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ be an endpoint of $P$. Then, $|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|\leq \alpha_{\mathrm{sep}}(k):=\frac{7}{2}\cdot 2^{ck}+2$. \end{lemma} \begin{proof} Denote $P'=P'_u,P''=P''_u,A=A_{R^2,P,u}$ and $B=B_{R^2,P,u}$. Recall that $H$ is the radial completion of $G$ (enriched with parallel edges), and let $H'$ denote the radial completion of $H$. Towards an application of Proposition \ref{prop:radialDisTw}, define $C=A$, $C_1=B$, $Z=N_{H'}(C)$, $Z_1=N_{H'}(C_1)$ and $t=\frac{7}{2}\cdot 2^{ck}$. Since $R^2$ is a subtree of $H$, it holds that $H[C]$ and $H[C_1]$ are connected, and therefore $H'[C]$ and $H'[C_1]$ are connected as well. From the definition of $Z$ and $Z_1$, it is immediate that $Z$ separates $C$ from $Z_1\cup C_1$ and $Z_1$ separates $C\cup Z$ from $C_1$ in $H$. Clearly, $C\cap C_1=\emptyset$, $C\cap Z=\emptyset$, and $C_1\cap Z_1=\emptyset$. We claim that, in addition, $Z\cap C_1=\emptyset$, $Z_1\cap C=\emptyset$ and $Z\cap Z_1=\emptyset$. To this end, it suffices to show that $\ensuremath{\mathsf{dist}}_{H'}(Z,Z_1)\geq 3t+4$. Indeed, because $Z=N_{H'}(C)$ and $Z_1=N_{H'}(C_1)$, we have that each inequality among $Z\cap C_1\neq\emptyset$, $Z_1\cap C=\emptyset$ and $Z\cap Z_1=\emptyset$, implies that $\ensuremath{\mathsf{dist}}_{H'}(Z,Z_1)\leq 2$. Lastly, we show that $\ensuremath{\mathsf{dist}}_{H'}(Z,Z_1)\geq 3t+4$. As $\ensuremath{\mathsf{dist}}_{H'}(C,C_1)\leq \ensuremath{\mathsf{dist}}_{H}(Z,Z_1)+2$, it suffices to show that $\ensuremath{\mathsf{dist}}_{H'}(C,C_1)\geq 3t+6$. Because $C\cup C_1\subseteq V(H)$, Observation \ref{obs:radialRadial} implies that $\ensuremath{\mathsf{dist}}_{H'}(C,C_1)\geq\ensuremath{\mathsf{dist}}_{H}(C,C_1)$. Hence, it suffices to show that $\ensuremath{\mathsf{dist}}_{H}(C,C_1)\geq 3t+6$. 
However, $\ensuremath{\mathsf{dist}}_{H}(C,C_1)\geq |E(P')|-|E(P'')|$ since otherwise we obtain a contradiction to the supposition that $R^2$ has no detour. This means that $\ensuremath{\mathsf{dist}}_{H}(C,C_1)\geq \alpha_{\mathrm{pat}}(k)/2-1\geq 3t+6$ as required. Recall that $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ is a subset of $V(H)\setminus (C\cup C_1')$ of minimum size that separates $C$ and $C_1$ in $H$. We claim that $|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|\leq \alpha_{\mathrm{sep}}(k)$. Suppose, by way of contradiction, that $|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|>\alpha_{\mathrm{sep}}(k)=t+2$. By Menger's theorem, the inequality $|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|>\alpha_{\mathrm{sep}}(k)$ implies that $H$ contains $t+2$ pairwise internally vertex-disjoint paths with one endpoint in $C\subseteq V(H)$ and the other endpoint in $C_1\subseteq V(H)$. From this, we conclude that all of the conditions in the premise of Proposition \ref{prop:radialDisTw} are satisfied. Thus, the treewidth of $H[V(M)\cap V(H)]$ is larger than $t$ where $M$ is the union of all connected components of $H'\setminus (Z\cup Z_1)$ having at least one neighbor in $Z$ and at least one neighbor in~$Z_1$. However, $H[V(M)\cap V(H)]$ is a subgraph of $H$, which means that the treewidth of $H$ is also larger than $t$. By Proposition \ref{prop:twRadial}, this implies that the treewidth of $G$ is larger than $2^{ck}$. This contradicts the supposition that $(G,S,T,g,k)$ is good. From this, we conclude that $|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|\leq \alpha_{\mathrm{sep}}(k)$. \end{proof} Recall that $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ is computable in time $\mathcal{O}(n|\ensuremath{\mathsf{Sep}}_{R^2}(P,u)|)$. Thus, by Lemma \ref{lem:sepSmall}, we obtain the observation below. We remark that the reason we had to argue that the separator is small is not due to this observation, but because the size bound will be crucial in later sections. \begin{observation}\label{obs:sepTime} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ be an endpoint of $P$. Then, $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ can be computed in time $2^{\mathcal{O}(k)}n$. \end{observation} Moreover, we have the following immediate consequence of Proposition \ref{prop:sepCycle}. \begin{observation}\label{obs:sepIsCycle} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ be an endpoint of $P$. Then, $H[\ensuremath{\mathsf{Sep}}_{R^2}(P,u)]$ is a cycle. \end{observation} \subsection{Step IV: Internal Modification of Long Paths} In this step, we replace the ``middle'' of each long maximal degree-2 path $P = \ensuremath{\mathsf{path}}_{R^2}(u,v)$ of $R^2$ by a different path $P^\star$. This ``middle'' is defined by the two separators obtained in the previous step. Let us informally explain the reason behind this modification. In Section \ref{sec:winding} we will show that, if the given instance $(G,S,T,g,k)$ admits a solution (which is a collection of disjoint paths connecting $S$ and $T$), then it also admits a ``nice'' solution that ``spirals'' only a few times around parts of the constructed Steiner tree. 
This requirement is crucial, since it is only such solutions $\cal P$ that are discretely homotopic to weak linkages ${\cal W}$ in $H$ aligned with ${\cal P}$ that use at most $2^{\mathcal{O}(k)}$ edges parallel to those in $R$, and none of the edges not parallel to those in $R$. To ensure the existence of nice solutions, we show how an arbitrary solution can be rerouted to avoid too many spirals. This rerouting requires a collection of vertex-disjoint paths between $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ and $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$ which itself does not spiral around the Steiner tree. The replacement of $P$ by $P^\star$ in the Steiner tree, described below, will ensure this property. To describe this modification, we first need to assert the statement in the following simple lemma, which partitions every long maximal degree-2 path $P$ of $R^2$ into three parts (see Fig.~\ref{fig:treeflow1}). \begin{lemma}\label{lem:threeParts} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree with no detour, and $P$ be a long maximal degree-2 path of $R^2$ with endpoints $u$ and $v$. Then, there exist vertices $u'=u'_P\in\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cap V(P)$ and $v'=v'_P\in\ensuremath{\mathsf{Sep}}_{R^2}(P,v)\cap V(P)$ such that: \begin{enumerate} \item The subpath $P_{u,u'}$ of $P$ with endpoints $u$ and $u'$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, and $\alpha_{\mathrm{pat}}(k)/2\leq |V(P_{u,u'})|\leq \alpha_{\mathrm{pat}}(k)$. Additionally, the subpath $P_{v,v'}$ of $P$ with endpoints $v$ and $v'$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, and $\alpha_{\mathrm{pat}}(k)/2\leq |V(P_{v,v'})|\leq \alpha_{\mathrm{pat}}(k)$. \item Let $P_{u',v'}$ be the subpath of $P$ with endpoints $u'$ and $v'$. Then, $P=P_{u,u'}-P_{u',v'}-P_{v',v}$. \end{enumerate} \end{lemma} \begin{proof} We first prove that there exists a vertex $u'\in\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cap V(P)$ such that the subpath $P_{u,u'}$ of $P$ between $u$ and $u'$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. To this end, let $P'=P'_u$, and let $\widetilde{P}$ denote the subpath of $P$ that consists of the $\alpha_{\mathrm{pat}}(k)+1$ vertices of $P$ that are closest to $u$. Let $A=A_{R^2,P,u}$ and $B=B_{R^2,P,u}$. Recall that $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\subseteq V(H)\setminus (A\cup B)$ separates $A$ and $B$ in $H$. Since $\widetilde{P}$ is a path with the endpoint $u$ in $A$ and the other endpoint in $B$, it follows that $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cap V(\widetilde{P})\neq\emptyset$. Accordingly, let $u'$ denote the vertex of $P'$ closest to $u$ that belongs to $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$. Then, $u'\in\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cap V(P)$ and the subpath $P_{u,u'}$ of $P$ between $u$ and $u'$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$. As the number of vertices of $P_{u,u'}$ is between those of $P''_u$ and $P'$, the inequalities $\alpha_{\mathrm{pat}}(k)/2\leq |V(P_{u,u'})|\leq \alpha_{\mathrm{pat}}(k)$ follow. It remains to argue that $P_{u,u'}$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. 
Because $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)\subseteq V(H)\setminus (A_{R^2,P,v}\cup B_{R^2,P,v})$, the only vertices of $P$ that $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$ can possibly contain are the $\alpha_{\mathrm{pat}}(k)$ vertices of $P$ that are closest to $v$. Since $P$ is long, none of these vertices belongs to $P'$, and hence $P_{u,u'}$ (which is a subpath of $P'$) has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. Symmetrically, we derive the existence of a vertex $v'\in\ensuremath{\mathsf{Sep}}_{R^2}(P,v)\cap V(P)$ such that the subpath $P_{v,v'}$ of $P$ between $v$ and $v'$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. Lastly, we prove that $P=P_{u,u'}-P_{u',v'}-P_{v',v}$. Since $P_{u,u'},P_{u',v'}$ and $P_{v',v}$ are subpaths of $P$ such that $V(P)=V(P_{u,u'})\cup V(P_{u',v'})\cup V(P_{v,v'})$, it suffices to show that {\em (i)} $V(P_{u,u'})\cap V(P_{u',v'})=\{u'\}$, {\em (ii)} $V(P_{v,v'})\cap V(P_{u',v'})=\{v'\}$, and {\em (iii)} $V(P_{u,u'})\cap V(P_{v,v'})=\emptyset$. Because $P$ is long and $|V(P_{u,u'})|,|V(P_{v,v'})|\leq \alpha_{\mathrm{pat}}(k)$, it is immediate that item {\em (iii)} holds. For item {\em (i)}, note that $V(P_{u,u'})\cap V(P_{u',v'})$ can be a strict superset of $\{u'\}$ only if $P_{u',v'}$ is a subpath of $P_{u,u'}$; then, $v'\in V(P_{u,u'})$, which means that $P_{u,u'}$ has an internal vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$ and results in a contradiction. Thus, item {\em (i)} holds. Symmetrically, item {\em (ii)} holds as well. \end{proof} In what follows, when we use the notation $u'_P$, we refer to the vertex in Lemma \ref{lem:threeParts}. Before we describe the modification, we need to introduce another notation and make an immediate observation based on this notation. \begin{definition}[{\bf $\widetilde{A}_{R^2,P,u}$}] Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ be an endpoint of $P$. Then, $\widetilde{A}_{R^2,P,u}=(V(P_{u,u'_P})\setminus\{u'_P\})\cup A_{R^2,p,u}$. \end{definition} \begin{observation}\label{obs:separateComps} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, and $P$ be a long maximal degree-2 path of $R^2$ with endpoints $u$ and $v$. Then, there exists a single connected component $C_{R^2,P,u}$ in $H-(\ensuremath{\mathsf{Sep}}_{R^2}(u,P)\cup\ensuremath{\mathsf{Sep}}_{R^2}(v,P))$ that contains $\widetilde{A}_{R^2,P,u}$ and a different single connected component $C_{R^2,P,v}$ in $H-(\ensuremath{\mathsf{Sep}}_{R^2}(u,P)\cup\ensuremath{\mathsf{Sep}}_{R^2}(v,P))$ that contains $\widetilde{A}_{R^2,P,v}$. \end{observation} We proceed to describe the modification. For brevity, let $S_u = \ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ and $S_v = \ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. Recall that there is a terminal $t^\star\in T$ that lies on the outer face of $H$ (and $G$). By Observation \ref{obs:sepIsCycle}, $S_v$ and $S_v$ induce two cycles in $H$, and $t^\star$ lies in the exterior of both these cycles. Assume w.l.o.g.~that $u$ lies in the interior of both $S_u$ and $S_v$, while $v$ lies in the exterior of both $S_u$ and $S_v$. Then, $S_u$ belongs to the strict interior of $S_v$. We construct a sequence of concentric cycles between $S_u$ and $S_v$ as follows. 
\begin{lemma}\label{lemma:CC_cons} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree with no detour, and $P$ be a long maximal degree-2 path of $R^2$ with endpoints $u$ and $v$. Let $S_u = \ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ and $S_v = \ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, where $S_u$ lies in the strict interior of $S_v$. Then, there is a sequence of concentric cycles $\mathcal{C}(u,v) = (C_1,C_2,\ldots,C_p)$ in $G$ of length $p\geq 100 \alpha_{\rm sep}(k)$ such that $S_u$ is in the strict interior of $C_1$ in $H$, $S_v$ is in the strict exterior of $C_p$ in $H$, and there is a path $\eta$ in $H$ with one endpoint $v_0\in S_u$ and the other endpoint $v_{p+1}\in S_v$, such that the intersection of $V(\eta)$ with $V(G)\cup S_u\cup S_v$ is $\{v_0, v_1, \ldots, v_{p+1}\}$ for some $v_i \in V(C_i)$ for every $i\in\{1,\ldots,p\}$. Furthermore, $\mathcal{C}(u,v)$ can be computed in linear time. \end{lemma} \begin{proof} Towards the computation of $\mathcal{C}(u,v)$, delete all vertices that lie in the strict interior of $S_u$ or in the strict exterior of $S_v$, as well as all vertices of $V(H) \setminus (V(G)\cup S_u\cup S_v)$. Denote the resulting graph by $G^{+}_{u,v}$, and note that it has a plane embedding in the ``ring'' defined by $H[S_u]$ and $H[S_v]$. Observe that $S_u,S_v\subseteq V(G^{+}_{u,v})$, where $S_v$ defines the outer face of the embedding of $G^{+}_{u,v}$. Thus, any cycle in this graph that separates $S_u$ and $S_v$ must contain $S_u$ in its interior and $S_v$ in its exterior. Furthermore, $G_{u,v} = G^{+}_{u,v} - V(H)$ is an induced subgraph of $G$, which consists of all vertices of $G$ that, in $H$, lie in the strict exterior of $S_u$ and in the strict interior of $S_v$ simultaneously, or lie in $S_v \cup S_v$. In particular, any cycle of $G_{u,v}$ is also a cycle in $G$. Now, $\mathcal{C}(u,v)$ is computed as follows. Start with an empty sequence, and the graph $G_{u,v} - (S_u \cup S_v)$. As long as there is a cycle in the current graph such that all vertices of $S_u$ are in the strict interior of $C$ with respect to $H$, remove vertices of degree at most $1$ in the current graph until no such vertices remain, and append the outer face of the current graph as a cycle to the constructed sequence. It is clear that this process terminates in linear time, and that by the above discussion, it constructs a sequence of concentric cycles $\mathcal{C}(u,v) = (C_1,C_2,\ldots,C_p)$ in $G$ such that $S_u$ is in the strict interior of $C_1$ in $H$, $S_v$ is in the strict exterior of $C_p$ in $H$. To assert the existence of a path $\eta$ in $H$ with one endpoint $v_0\in S_u$ and the other endpoint $v_{p+1}\in S_v$, such that the intersection of $V(\eta)$ with $V(G)\cup S_u\cup S_v$ is $\{v_0,v_1, \ldots, v_{p+1}\}$ for some $v_i \in V(C_i)$ for every $i\in\{1,\ldots,p\}$, we require the following claim. \begin{claim}\label{claim:CC_cons} Let $C_{p+1}=H[S_v]$. For every $i\in \{1,2,\ldots,p\}$ and every vertex $w\in V(C_i)$, there exists a vertex $w'\in V(C_{i+1})$ such that $w$ and $w'$ lie on a common face in $G^{+}_{u,v}$. Moreover, there exist vertices $w\in S_u$ and $w'\in V(C_1)$ that lie on a common face in $G^{+}_{u,v}$. \end{claim} \noindent{\em Proof of Claim \ref{claim:CC_cons}.} Consider $i\in \{1,2,\ldots,p\}$ and a vertex $w\in V(C_i)$. We claim that ${\mathsf{rdist}}(w,V(C_{i+1})) \leq 1$, i.e.~there must be $w' \in V(C_{i+1})$ such that $w,w'$ have a common face in $G^{+}_{u,v}$. 
By way of contradiction, suppose that ${\mathsf{rdist}}(w,V(C_{i+1}))\geq 2$. Then, by Proposition~\ref{prop:concentricAtAllDists}, there is a cycle $C$ that separates $w$ and $V(C_{i+1})$ in $G^{+}_{u,v}$ such that ${\mathsf{rdist}}(w,w'') = 1$ for every vertex $w'' \in V(C)$. Here, $w$ lies in the strict interior of $C$, and $C$ lies in the strict interior of $C_{i+1}$. Further, $C$ is vertex disjoint from $C_{i+1}$, since ${\mathsf{rdist}}(w, w') \geq 2$ for every $w' \in V(C_{i+1})$. Now, consider the outer face of $G[V(C_i)\cup V(C)]$. By the construction of $C_i$, this outer face must be $C_i$. However, $w\in V(C_i)$ cannot belong to it, hence we reach a contradiction. For the second part, we claim that ${\mathsf{rdist}}(S_u,V(C_{1})) \leq 1$, i.e.~there must be $w\in S_u$ and $w' \in V(C_{1})$ such that $w,w'$ have a common face in $G^{+}_{u,v}$. By way of contradiction, suppose that ${\mathsf{rdist}}(S_u,V(C_{1}))\geq 2$. Then, by Proposition~\ref{prop:concentricAtAllDists}, there is a cycle $C$ that separates $S_u$ and $V(C_{1})$ in $G^{+}_{u,v}$ such that ${\mathsf{rdist}}(S_u,w'') = 1$ for every vertex $w'' \in V(C)$. Further, $C$ is vertex disjoint from $C_{1}$, since ${\mathsf{rdist}}(S_u, w') \geq 2$ for every $w' \in V(C_{1})$. However, this is a contradiction to the termination condition of the construction of $\mathcal{C}(u,v)$. $\diamond$ Having this claim, we construct $\eta$ as follows. Pick vertices $v_0\in S_u$ and $v_1\in V(C_1)$ that lie on a common face in $G^{+}_{u,v}$. Then, for every $i\in\{2,\ldots,p+1\}$, pick a vertex $v_i\in V(C_i)$ such that $v_{i-1}$ and $v_i$ lie on a common face in $G^{+}_{u,v}$. Thus, for every $i\in\{0,1,\ldots,p\}$, we have that $v_i$ and $v_{i+1}$ are either adjacent in $H$ or there exists a vertex $u_i\in V(H)\setminus V(G)$ such that $u_i$ is adjacent to both $v_i$ and $v_{i+1}$. Because $\mathcal{C}(u,v) = (C_1,C_2,\ldots,C_p)$ is a sequence of concentric cycles in $G$ such that $S_u$ is in the strict interior of $C_1$ and $S_v$ is in the strict exterior of $C_p$, the $u_i$'s are distinct. Thus, $\eta=v_0-u_0-v_1-u_1-v_2-u_2-\cdots-v_p-u_p-v_{p+1}$, where undefined $u_i$'s are dropped, is a path as required. Finally, we argue that $p\geq 100\cdot \alpha_{\rm sep}(k)$. Note that $100 \alpha_{\rm sep}(k)=100(\frac{7}{2}\cdot 2^{ck} + 2) \leq 400 \cdot 2^{ck}$, thus it suffices to show that $p\geq 400 \cdot 2^{ck}$. To this end, we obtain a lower bound on the radial distance between $S_u$ and $S_v$ in $G^{+}_{u,v}$. Recall that $|S_u|,|S_v| \leq \alpha_{\rm sep}(k) = \frac{7}{2}\cdot 2^{ck} + 2 \leq 4\cdot 2^{ck}$. Let $P = \ensuremath{\mathsf{path}}_{R^2}(u,v)$, and recall that its length is at least $\alpha_{\rm long}(k) = 10^4 2^{ck}$. Since $R^2$ has not detour, $P$ is a shortest path in $H$ between $u$ and $v$, thus for any two vertices in $V(P)$, the subpath of $P$ between them is a shortest path between them. Now, recall the vertices $u'= u'_P,v'= v'_P$ (defined in Lemma~\ref{lem:threeParts}), and denote the subpath between them by $P'$. By construction, $|E(P')| \geq \alpha_{\rm long}(k) - 2 \cdot \alpha_{\rm pat}(k) = (10^4 - 200)\cdot 2^{ck}$. We claim that the radial distance between $S_u$ and $S_v$ in $G^{+}_{u,v}$ is at least $|E(P')|/2 - |S_u| - |S_v|$. 
Suppose not, and consider a sequence of vertices in $G^{+}_{u,v}$ that witnesses this fact: $x_1, x_2, x_3, \ldots, x_{p-1}, x_p\in V(G^{+}_{u,v})$ where $x_1 \in S_u$, $x_p \in S_v$ , $p < |E(P')|/2 - |S_v| - |S_u|$, and every two consecutive vertices lie on a common face. Consider a shortest such sequence, which visits each face of $G^{+}_{u,v}$ at most once. In particular, $x_1$ and $x_p$ are the only vertices of $S_u\cup S_v$ in this sequence. Then, we can extend this sequence on both sides to derive another sequence of vertices of $G^{+}_{u,v}$ starting at $u'$ and ending at $v'$ such that the prefix of the new sequence is a path in $G^{+}_{u,v}[S_u]$ from $u'$ to $x_1$, the midfix is $x_1, x_2, x_3, \ldots, x_{p-1}, x_p$, and the suffix is a path in $G^{+}_{u,v}[S_v]$ from $x_p$ to $v'$. Further, the length of the new sequence of vertices is smaller than $|E(P')|/2$. Hence, the radial distance between $u'$ and $v'$ in $H'=H[V(G)\cup S_u\cup S_v]$ (the graph derived from $G^{+}_{u,v}$ by reintroducing the vertices of $G$ that lie inside $S_u$ or outside $S_v$) is smaller than $|E(P')|/2$, and let it be witnessed by a sequence $Q[u',v']$. As $H$ is the radial completion of $G$, observe that $Q[u',v']$ gives rise to a path $Q$ in $H$ between $u'$ and $v'$ of length smaller than $|E(P')|$. However, then $u',v'$ and $Q$ witness a detour in $R^2$, which is a contradiction. Hence, the radial distance between $S_u$ and $S_v$ in $G^{+}_{u,v}$ is at least \begin{align*} &~~ \frac{(10^4 - 200)}{2}\cdot 2^{ck} - (|S_u| + |S_v|) \\ \geq &~~ 4900\cdot 2^{ck} - 2\alpha_{\rm sep}(k) \\ = &~~ 4900 \cdot 2^{ck} - (7 \cdot 2^{ck} + 4) ~~ \geq ~~ 400 \cdot 2^{ck}. \end{align*} Now, observe that $S_u$ and $S_v$ are connected sets in $G^{+}_{u,v}$ and $S_v$ forms the outer-face of $G^{+}_{u,v}$. Then, by Proposition~\ref{prop:concentricAtAllDists}, we obtain a collection of at least $400 \cdot 2^{ck}$ disjoint cycles in $G^{+}_{u,v}$, where each cycle separates $S_u$ and $S_v$. Note that these cycles are disjoint from $S_u \cup S_v$, and hence they lie in $G$. Moreover, each of them contains $S_u$ in its strict interior, and $S_v$ in its strict exterior. Thus, it is clear that the sequence $\mathcal{C}(u,v)$ computed above must contain at least $400 \cdot 2^{ck} > 100 \alpha_{\rm sep}(k)$ cycles. \end{proof} Recall the graph $G_{u,v}$, which is an induced subgraph of $G$, which consists of all vertices of $G$ that, in $H$, lie in the strict exterior of $S_u$ and in the strict interior of $S_v$ simultaneously, or lie in $S_v \cup S_v$. With $\mathcal{C}(u,v)$ at hand, we compute a maximum size collection of disjoint paths from $S_u$ to $S_v$ in $G_{u,v}$ that minimizes the number of edges it traverses outside $E(\mathcal{C}(u,v))$. In this observation, the implicit assumption that $\ell\leq \alpha_{\rm sep}(k)$ is justified by Lemma \ref{lem:sepSmall}. \begin{observation}\label{obs:disjPTime} Let the maximum flow between $S_u\cap V(G)$ and $S_v\cap V(G)$ in $G_{u,v}$ be $\ell\leq \alpha_{\rm sep}(k)$. Given the sequence $\mathcal{C}(u,v)$ of Lemma \ref{lemma:CC_cons}, a collection $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ of $\ell$ vertex-disjoint paths in $G_{u,v}$ from $S_u\cap V(G)$ to $S_v\cap V( G)$ that minimizes $|E(\ensuremath{\mathsf{Flow}}_{R^2}(u,v)) \setminus E(\mathcal{C}(u,v))|$ is computable in time $\mathcal{O}(n^{3/2} \log^3 n)$. \end{observation} \begin{proof} We determine $\ell\leq \alpha_{\rm sep}(k)$ in time $2^{\mathcal{O}(k)}n$ by using Ford-Fulkerson algorithm. 
Next, we define a weight function $w$ on $E(G_{u,v})$ as follows: $$ w(e) = \begin{cases} 0 & \text{if $e \in E(\mathcal{C}(u,v))$}\\ 1 & \text{otherwise} \end{cases} $$ We now compute a minimum cost flow between $S_u\cap V(G)$ and $S_v\cap V(G)$ of value $\ell$ in $G_{u,v}$ under the weight function $w$. This can be done in time $O(n^{3/2} \log^3 n)$ by \cite[Theorem 1]{CK12}, as the cost of such a flow is bounded by $O(n)$. Clearly, the result is a collection $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ of $\ell$ vertex-disjoint paths from $S_u\cap V(G)$ to $S_v\in V(G)$ minimizing $|E(\ensuremath{\mathsf{Flow}}_{R^2}(u,v)) \setminus E(\mathcal{C}(u,v))|$. \end{proof} Having $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ at hand, we proceed to find a certain path between $u'_P$ and $v'_P$ that will be used to replace $P_{u'_P,v'_P}$. We remark that all vertices and edges of $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ lie between $S_u$ and $S_v$ in the plane embedding of $H$. The definition of this path is given by the following lemma, and the construction will make it intuitively clear that the paths in $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ do not ``spiral around'' the Steiner tree once we replace $P_{u'_P,v'_P}$ with $P_{u'_P,v'_P}^\star$. Note that in the lemma, we consider a path $P$ in $H$, while the paths in $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ are in $G$. \begin{lemma}\label{lem:pathThroughFlow} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree with no detour, and $P$ be a long maximal degree-2 path of $R^2$, with endpoints $u$ and $v$. Let $u'=u'_P$ and $v'=v'_P$. Then, there exists a path $P^\star_{u',v'}$ in $H-(V(C_{R^2,P,u})\cup V(C_{R^2,P,u}))$ between $u'$ and $v'$ with the following property: there do not exist three vertices $x,y,z\in V(P^\star_{u',v'})$ such that {\em (i)} $\ensuremath{\mathsf{dist}}_{P^\star_{u',v'}}(u',x)<\ensuremath{\mathsf{dist}}_{P^\star_{u',v'}}(u',y)<\ensuremath{\mathsf{dist}}_{P^\star_{u',v'}}(u',z)$, and {\em (ii)} there exist a path in $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ that contains $x$ and $z$ and a different path in $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ that contains $y$. Moreover, such a path $P^\star_{u',v'}$ can be computed in time $\mathcal{O}(n)$. \end{lemma} \begin{proof} Let $P^\star_{u',v'}$ be a path in $H-(V(C_{R^2,P,u})\cup V(C_{R^2,P,u}))$ between $u'$ and $v'$ that minimizes the number of paths $Q\in \ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ for which there exist at least one triple $x,y,z\in V(P^\star_{u',v'})$ that has the two properties in the lemma and $x,z\in V(Q)$. Due to the existence of $P_{u',v'}$, such a path $P^\star_{u',v'}$ exists. We claim that this path $P^\star_{u',v'}$ has no triple $x,y,z\in V(P^\star_{u',v'})$ that has the two properties in the lemma. Suppose, by way of contradiction, that our claim is false, and let $x,y,z\in V(P^\star_{u',v'})$ be a triple that has the two properties in the lemma. Note that, when traversed from $u'$ to $v'$, $P^\star_{u',v'}$ first visits $x$, then visits $y$ and afterwards visits $z$. Let $Q'$ be the path in $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ that contains $x$ and $z$. Let $x'$ and $z'$ be the first and last vertices of $Q'$ that are visited by $P^\star_{u',v'}$. Then, replace the subpath of $P^\star_{u',v'}$ between $x'$ and $z'$ by the subpath of $Q'$ between $x'$ and $z'$. 
This way we obtain a path $P'$ in $H-(V(C_{R^2,P,u})\cup V(C_{R^2,P,u}))$ between $u'$ and $v'$ for which there exist fewer paths $Q\in \ensuremath{\mathsf{Flow}}_{R^2}(u,v)$, when compared to $P^\star_{u',v'}$, for which there exists at least one triple $x,y,z\in V(P')$ that has the two properties in the lemma and such that $x,z\in V(Q)$. As we have reached a contradiction, we conclude that our initial claim is correct. While the proof is existential, it can clearly be turned into a linear-time algorithm. \end{proof} The following is a direct corollary of the above lemma. \begin{corollary}\label{cor:pathThroughFlow} For each path $Q \in \ensuremath{\mathsf{Flow}}_{R^2}(u,v)$, there are at most two edges in $P^\star_{u',v'}$ such that one endpoint of the edge lies in $V(Q)$ and the other lies in $V(G) \setminus V(Q)$. \end{corollary} Having Lemma \ref{lem:pathThroughFlow} at hand, we modify $R^2$ as follows: for every long maximal degree-2 path $P$ of $R^2$ with endpoints $u$ and $v$, replace $P_{u'_P,v'_P}$ by $P^\star_{u'_P,v'_P}$. Denote the result of this modification by $R=R^3$. We refer to $R^3$ as a {\em backbone Steiner tree}. Let us remark that the backbone Steiner tree $R^3$ is always accompanied by the separators $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ and $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, and the collection $\ensuremath{\mathsf{Flow}}_{R^2}(u,v)$ for every long maximal degree-2 path $P=\ensuremath{\mathsf{path}}_{R^2}(u,v)$ of $R^2$. These separators and flows will play crucial role in our algorithm. In the following subsection, we will prove that $R^3$ is indeed a Steiner tree, and in particular it is a tree. Additionally and crucially, we will prove that the separators computed previously remain separators. Let us first conclude the computational part by stating the running time spent so far. From Lemma \ref{obs:undetourExhaustive}, Observations \ref{obs:sepTime} and \ref{obs:disjPTime}, and by Lemma \ref{lem:pathThroughFlow}, we have the following result. \begin{lemma}\label{lem:goodSteinerTreeComputeTime} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Then, a backbone Steiner tree $R$ can be computed in time $2^{\mathcal{O}(k)}n^{3/2} \log^3 n$. \end{lemma} \subsection{Analysis of $R^3$ and the Separators $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$} Having constructed the backbone Steiner tree $R^3$, we turn to analyse its properties. Among other properties, we show that useful properties of $R^2$ also transfer to $R^3$. We begin by proving that the two separators of each long maximal degree-2 path $P$ of $R^2$ partition $V(H)$ into five ``regions'', and that the vertices in each region are all close to the subtree of $R$ that (roughly) belongs to that region. Specifically, the regions are $V(C_{R^2,P,u})$, $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$, $V(C_{R^2,P,v})$, $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, and $V(H)\setminus \big(V(C_{R^2,P,u})\cup V(C_{R^2,P,v})\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v) \big)$, and our claim is as follows. \begin{lemma}\label{lem:regionsClose} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ and $v$ be its endpoints. \begin{enumerate} \item\label{item:threeRegionsClose1} For all $w\in \ensuremath{\mathsf{Sep}}_{R^2}(P,u)$, it holds that $\ensuremath{\mathsf{dist}}_H(w,u'_P)\leq \alpha_{\mathrm{sep}}(k)$. 
\item\label{item:threeRegionsClose2} For all $w\in \ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, it holds that $\ensuremath{\mathsf{dist}}_H(w,v'_P)\leq \alpha_{\mathrm{sep}}(k)$. \item\label{item:threeRegionsClose3} For all $w\in V(C_{R^2,P,u})$, it holds that $\ensuremath{\mathsf{dist}}_H(w,\widetilde{A}_{R^2,P,u}\cup\{u'_P\})\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. \item\label{item:threeRegionsClose4} For all $w\in V(C_{R^2,P,v})$, it holds that $\ensuremath{\mathsf{dist}}_H(w,\widetilde{A}_{R^2,P,v}\cup\{v'_P\})\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. \item\label{item:threeRegionsClose5} For all $w\in V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v})\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v))$, it holds that $\ensuremath{\mathsf{dist}}_H(w,V(R^2)\setminus (\widetilde{A}_{R^2,P,u}\cup\widetilde{A}_{R^2,P,v}))\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. \end{enumerate} \end{lemma} \begin{proof} First, note that Conditions \ref{item:threeRegionsClose1} and \ref{item:threeRegionsClose2} follow directly from Lemma \ref{lem:sepSmall} and Observation \ref{obs:sepIsCycle}. For Condition \ref{item:threeRegionsClose3}, consider some vertex $w\in V(C_{R^2,P,u})$. By Lemma \ref{lem:closeToR}, $\ensuremath{\mathsf{dist}}_H(w,V(R^2))\leq \alpha_{\mathrm{dist}}(k)$. Thus, there exists a path $Q$ in $H$ with $w$ as one endpoint and the other endpoint $x$ in $V(R^2)$ such that the length of $Q$ is at most $\alpha_{\mathrm{dist}}(k)$. In case $x\in \widetilde{A}_{R^2,P,u}\cup\{u'_P\}$, we have that $\ensuremath{\mathsf{dist}}_H(w,\widetilde{A}_{R^2,P,u})\leq \alpha_{\mathrm{dist}}(k)$, and hence the condition holds. Otherwise, by the definition of $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$, the path $Q$ must traverse at least one vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$. Thus, $\ensuremath{\mathsf{dist}}_H(w,\ensuremath{\mathsf{Sep}}_{R^2}(P,u))\leq \alpha_{\mathrm{dist}}(k)$. Combined with Condition \ref{item:threeRegionsClose1}, we derive that $\ensuremath{\mathsf{dist}}_H(w,\widetilde{A}_{R^2,P,u}\cup\{u'_P\})\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. The proof of Condition \ref{item:threeRegionsClose4} is symmetric. The proof of Condition \ref{item:threeRegionsClose5} is similar. Consider some vertex $w\in V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v})\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v))$. As before, there exists a path $Q$ in $H$ with $w$ as one endpoint and the other endpoint $x$ in $V(R^2)$ such that the length of $Q$ is at most $\alpha_{\mathrm{dist}}(k)$. In case $x\in \widetilde{A}_{R^2,P,u}\cup\{u'_P\}$, we are done. Otherwise, the path $Q$ must traverse at least one vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$. Specifically, if $x\in V(C_{R^2,P,u})$, then it must traverse at least one vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$, and otherwise $x\in V(C_{R^2,P,v})$ and it must traverse at least one vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$. Combined with Conditions \ref{item:threeRegionsClose1} and \ref{item:threeRegionsClose2}, we derive that $\ensuremath{\mathsf{dist}}_H(w,V(R^2)\setminus (\widetilde{A}_{R^2,P,u}\cup\widetilde{A}_{R^2,P,v}))\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. \end{proof} An immediate corollary of Lemma \ref{lem:regionsClose} concerns the connectivity of the ``middle region'' as follows. (This corollary can also be easily proved directly.) 
\begin{corollary}\label{cor:connectivityMidRegion} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, $P$ be a long maximal degree-2 path of $R^2$, and $u$ and $v$ be its endpoints. Then, $H[V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))]$ is a connected graph. \end{corollary} \begin{proof} By Lemma \ref{lem:regionsClose} and the definition of $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$ and $\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$, for every vertex in $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$, the graph $H$ has a path from that vertex to some vertex in $\ensuremath{\mathsf{Sep}}_{R^2}(P,u)\cup\ensuremath{\mathsf{Sep}}_{R^2}(P,v)$ that lies entirely in $H[V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))]$. Thus, the corollary follows from Observation \ref{obs:sepIsCycle}. \end{proof} Next, we utilize Lemma \ref{lem:regionsClose} and Corollary \ref{cor:connectivityMidRegion} to argue that the ``middle regions'' of different long maximal degree-2 paths of $R^2$ are distinct. Recall that we wish to reroute a given solution to be a solution that ``spirals'' only a few times around the Steiner tree. This lemma allows us to independently reroute the solution in each of these ``middle regions''. In fact, we prove the following stronger statement concerning these regions. The idea behind the proof of this lemma is that if it were false, then $R^2$ admits a detour, which is contradiction. \begin{figure} \caption{Illustration of Lemma~\ref{lem:distinctMidRegions}.} \label{fig:treeflow2} \end{figure} \begin{lemma}\label{lem:distinctMidRegions} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour. Additionally, let $P$ and $\widehat{P}$ be two distinct long maximal degree-2 paths of $R^2$. Let $u$ and $v$ be the endpoints of $P$, and $\widehat{u}$ and $\widehat{v}$ be the endpoints of $\widehat{P}$. Then, one of the two following conditions holds: \begin{itemize}\setlength\itemsep{0em} \item $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v})) \subseteq V(C_{R^2,\widehat{P},\widehat{u}})$. \item $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v})) \subseteq V(C_{R^2,\widehat{P},\widehat{v}})$. \end{itemize} \end{lemma} \begin{proof} We first prove that the intersection of $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ with $V(H)\setminus (V(C_{R^2,\widehat{P},\widehat{u}})\cup V(C_{R^2,\widehat{P},\widehat{v}}))$ is empty. To this end, suppose by way of contradiction that there exists a vertex $w$ in this intersection. By Lemma \ref{lem:regionsClose}, the inclusion of $w$ in both $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ and $V(H)\setminus (V(C_{R^2,\widehat{P},\widehat{u}})\cup V(C_{R^2,\widehat{P},\widehat{v}}))$ implies that the two following inequalities are satisfied: \begin{itemize}\setlength\itemsep{0em} \item $\ensuremath{\mathsf{dist}}_H(w,V(R^2)\setminus (\widetilde{A}_{R^2,P,u}\cup\widetilde{A}_{R^2,P,v}))\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. \item $\ensuremath{\mathsf{dist}}_H(w,V(R^2)\setminus (\widetilde{A}_{R^2,\widehat{P},\widehat{u}}\cup\widetilde{A}_{R^2,\widehat{P},\widehat{v}}))\leq \alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)$. 
\end{itemize} From this, we derive the following inequality: \[\ensuremath{\mathsf{dist}}_H(V(R^2)\setminus (\widetilde{A}_{R^2,P,u}\cup\widetilde{A}_{R^2,P,v}),V(R^2)\setminus (\widetilde{A}_{R^2,\widehat{P},\widehat{u}}\cup\widetilde{A}_{R^2,\widehat{P},\widehat{v}}))\leq 2(\alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k)).\] In particular, this means that there exist vertices $x\in V(P_{u'_P,v'_P})$ and $y\in V(\widehat{P}_{\widehat{u}'_{\widehat{P}},\widehat{v}'_{\widehat{P}}})$ and a path $Q$ in $H$ between them whose length is at most $2(\alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k))$. Note that the unique path in $R^2$ between $x$ and $y$ traverses exactly one vertex in $\{u,v\}$. Suppose w.l.o.g.~that this vertex is $u$. Then, consider the walk $W$ (which might be a path) obtained by traversing $P$ from $v$ to $x$ and then traversing $Q$ from $x$ to $y$. Now, notice that \[\begin{array}{ll} |E(P)|-|E(W)| & \geq |E(P'_{u,u'_P})|-2(\alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k))\\ & \geq \alpha_{\mathrm{pat}}(k)/2-2(\alpha_{\mathrm{dist}}(k)+\alpha_{\mathrm{sep}}(k))\\ & = 50\cdot 2^{ck} - 2(4\cdot 2^{ck} + \frac{7}{2}\cdot 2^{ck}+2) > 0. \end{array}\] Here, the inequality $|E(P'_{u,u'_P})|\geq \alpha_{\mathrm{pat}}(k)/2$ followed from Lemma \ref{lem:threeParts}. As $|E(P)|-|E(W)|>0$, we have that $u,v$ and any subpath of the walk $W$ between $u$ and $v$ witness that $R^2$ has a detour (see Fig.~\ref{fig:treeflow2}). This is a contradiction, and hence we conclude that the intersection of $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ with $V(H)\setminus (V(C_{R^2,\widehat{P},\widehat{u}})\cup V(C_{R^2,\widehat{P},\widehat{v}}))$ is empty. Having proved that the intersection is empty, we know that \[V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v})) \subseteq V(C_{R^2,\widehat{P},\widehat{u}}) \cup V(C_{R^2,\widehat{P},\widehat{v}}).\] Thus, it remains to show that $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ cannot contain vertices from both $V(C_{R^2,\widehat{P},\widehat{u}})$ and $V(C_{R^2,\widehat{P},\widehat{v}})$. Since $H[V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))]$ is a connected graph (by Corollary \ref{cor:connectivityMidRegion}), if $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ contains vertices from both $V(C_{R^2,\widehat{P},\widehat{u}})$ and $V(C_{R^2,\widehat{P},\widehat{v}})$, then it must also contain at least one vertex from $\ensuremath{\mathsf{Sep}}_{R^2}(\widehat{P},\widehat{u})\cup\ensuremath{\mathsf{Sep}}_{R^2}(\widehat{P},\widehat{v})\subseteq V(H)\setminus (V(C_{R^2,\widehat{P},\widehat{u}})\cup V(C_{R^2,\widehat{P},\widehat{v}}))$, which we have already shown to be impossible. Thus, the proof is complete. \end{proof} We are now ready to prove that $R^3$ is a Steiner tree. \begin{lemma}\label{lem:R*Steiner} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ be a Steiner tree that has no detour, and $R^3$ be the subgraph constructed from $R^2$ in Step IV. Then, $R^3$ is a Steiner tree with the following properties. \begin{itemize} \item $R^3$ has the same set of vertices of degree at least $3$ as $R^2$. \item Every short maximal degree-2 path $P$ of $R^2$ is a short maximal degree-2 path of $R^3$. \item For every long maximal degree-2 path $P$ of $R^2$ with endpoints $u$ and $v$, the paths $P_{u,u'_P}$ and $P_{v,v'_P}$ are subpaths of the maximal degree-2 path of $R^3$ with endpoints $u$ and $v$. 
\end{itemize} \end{lemma} \begin{proof} To prove that $R^3$ is a Steiner tree, we only need to show that $R^3$ is acyclic. Indeed, the construction of $R^3$ immediately implies that it is connected and has the same set of degree-1 vertices as $R^2$, which, together with the assertion that $R^3$ is acyclic, implies that it is a Steiner tree. The other properties in the lemma are immediate consequences of the construction of $R^3$. By its construction, to show that $R^3$ is acyclic, it suffices to prove two conditions: \begin{itemize} \item For every long maximal degree-2 path $P$ of $R^2$ with endpoints $u$ and $v$, it holds that $V(P^\star_{u'_P,v'_P})\cap (V(R^2)\setminus V(P_{u'_P,v'_P}))=\emptyset$. \item For every two distinct long maximal degree-2 paths $P$ and $\widehat{P}$ of $R^2$, it holds that $V(P^\star_{u'_P,v'_P})$ $\cap V(\widehat{P}^\star_{\widehat{u}',\widehat{v}'})=\emptyset$, where $u$ and $v$ are the endpoints of $P$, $\widehat{u}$ and $\widehat{v}$ are the endpoints of $\widehat{P}$, $u'=u'_P, v'=v'_P, \widehat{u}'=\widehat{u}'_{\widehat{P}}$ and $\widehat{v}'=\widehat{v}'_{\widehat{P}}$. \end{itemize} The first condition follows directly from the fact that $P^\star_{u'_P,v'_P}$ is a path in $H-(V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ while $V(R^2)\setminus V(P_{u'_P,v'_P})\subseteq V(C_{R^2,P,u})\cup V(C_{R^2,P,v})$. For the second condition, note that Lemma \ref{lem:distinctMidRegions} implies that $V(H)\setminus (V(C_{R^2,P,u})\cup V(C_{R^2,P,v}))$ $\subseteq V(C_{R^2,\widehat{P},\widehat{u}})\cup V(C_{R^2,\widehat{P},\widehat{v}})$. Thus, we have that $V(P^\star_{u'_P,v'_P})\subseteq V(C_{R^2,\widehat{P},\widehat{u}}) \cup V(C_{R^2,\widehat{P},\widehat{v}})$. However, $V(\widehat{P}^\star_{\widehat{u}',\widehat{v}'})\cap (V(C_{R^2,\widehat{P},\widehat{u}}) \cup V(C_{R^2,\widehat{P},\widehat{v}}))=\emptyset$, and hence $V(P^\star_{u'_P,v'_P})\cap V(\widehat{P}^\star_{\widehat{u}',\widehat{v}'})=\emptyset$. \end{proof} We remark that $R^3$ might have detours. (These detours are restricted to $P_{u'_P,v'_P}$ for some long path $P=\ensuremath{\mathsf{path}}_{R^3}(u,v)$ in $R^3$.) However, what is important for us is that we can still use the same small separators as before. To this end, we first define the appropriate notations, in particular since later we would like to address objects corresponding to $R^3$ directly (without referring to $R^2$). The validity of these notations follows from Lemma \ref{lem:R*Steiner}. Recall that $u'_P$ and $P_{u,u'_P}$ refer to the vertex and path in Lemma \ref{lem:threeParts}. \begin{definition}[{\bf Translating Notations of $R^2$ to $R^3$}]\label{lem:translateRtoR*} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^2$ and $R^3$ be the Steiner trees constructed in Steps II and IV. For any long maximal degree-2 path $\widehat{P}$ of $R^3$ and for each endpoint $u$ of $\widehat{P}$: \begin{itemize} \item Define $\widehat{P}_{R^2}$ as the unique (long maximal degree-2) path in $R^2$ with the same endpoints~as~$\widehat{P}$. \item Let $P=\widehat{P}_{R^2}$. Then, denote $u'_{\widehat{P}}=u'_P$, $\widehat{P}_{u,u'_{\widehat{P}}}=P_{u,u'_P}$ and $\ensuremath{\mathsf{Sep}}_{R^3}(\widehat{P},u)=\ensuremath{\mathsf{Sep}}_{R^2}(P,u)$. \item Define $A^\star_{R^3,\widehat{P},u}$ as the union of $V(\widehat{P}_{u,u'_{\widehat{P}}})$ and the vertex set of the connected component of $R^3-(V(\widehat{P}_{u,u'_{\widehat{P}}})\setminus \{u\})$ containing $u$.
\item Define $B^\star_{R^3,\widehat{P},u}=V(R^3)\setminus A^\star_{R^3,\widehat{P},u}$. \end{itemize} \end{definition} In the context of Definition \ref{lem:translateRtoR*}, note that by Lemma \ref{lem:threeParts}, $u'\in\ensuremath{\mathsf{Sep}}_{R^3}(\widehat{P},u)\cap V(\widehat{P})$ where $u'=u'_{\widehat{P}}$, $\widehat{P}_{u,u'}$ is the subpath of $\widehat{P}$ between $u$ and $u'$, $\widehat{P}_{u,u'}$ has no internal vertex from $\ensuremath{\mathsf{Sep}}_{R^3}(\widehat{P},u)\cup\ensuremath{\mathsf{Sep}}_{R^3}(\widehat{P},v)$, and $\alpha_{\mathrm{pat}}(k)/2\leq |V(\widehat{P}_{u,u'})|\leq \alpha_{\mathrm{pat}}(k)$. Additionally, note that $A^\star_{R^3,\widehat{P},u}$ might {\em not} be equal to $A_{R^2,P,u}$ where $P=\widehat{P}_{R^2}$. When $R^3$ is clear from context, we omit it from the subscripts. Now, let us argue why, in a sense, we can still use the same small separators as before. Recall that a backbone Steiner tree is a Steiner tree constructed in Step IV. \begin{lemma}\label{lem:separatorsUnchanged} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R^3$ be a backbone Steiner tree. Additionally, let $\widehat{P}$ be a long maximal degree-2 path of $R^3$, and $u$ be an endpoint of $\widehat{P}$. Then, $\ensuremath{\mathsf{Sep}}(\widehat{P},u)$ separates $A^\star_{\widehat{P},u}$ and $B^\star_{\widehat{P},u}$ in $H$. \end{lemma} \begin{figure} \caption{Illustration of Lemma~\ref{lem:separatorsUnchanged}.} \label{fig:treeflow3} \end{figure} \begin{proof} Denote $P=\widehat{P}_{R^2}$ where $R^2$ is the Steiner tree computed in Step II to construct $R^3$. Then, $\ensuremath{\mathsf{Sep}}(\widehat{P},u)$ separates $V(C_{R^2,P,v})$ and $V(C_{R^2,P,u})$ in $H$. Thus, to prove that $\ensuremath{\mathsf{Sep}}(\widehat{P},u)$ separates $A^\star_{\widehat{P},u}$ and $B^\star_{\widehat{P},u}$ in $H$, it suffices to show that $A^\star_{\widehat{P},u}\subseteq V(C_{R^2,P,u})$ and $A^\star_{\widehat{P},v}\subseteq V(C_{R^2,P,v})$ (because $B^\star_{\widehat{P},u}\setminus A^\star_{\widehat{P},v}\subseteq V(P^\star_{u'_P,v'_P})$ and $V(P^\star_{u'_P,v'_P})\cap V(C_{R^2,P,u})=\emptyset$ by the construction of $P^\star_{u'_P,v'_P}$). We only prove that $A^\star_{\widehat{P},u}\subseteq V(C_{R^2,P,u})$. The proof of the other containment is symmetric. Clearly, $A_{R^2,P,u}\cap A^\star_{\widehat{P},u}\subseteq V(C_{R^2,P,u})$. Thus, due to Lemma \ref{lem:R*Steiner}, to show that $A^\star_{\widehat{P},u}\subseteq V(C_{R^2,P,u})$, it suffices to show the following claim: For every long maximal degree-2 path $\widetilde{P}$ of $R^2$ whose vertex set is contained in $V(C_{R^2,P,u})$, it holds that the vertex set of $\widetilde{P}^\star_{\widetilde{u}'_{\widetilde{P}},\widetilde{v}'_{\widetilde{P}}}$ (computed by Lemma \ref{lem:pathThroughFlow}) is contained in $V(C_{R^2,P,u})$ as well, where $\widetilde{u}$ and $\widetilde{v}$ are the endpoints of $\widetilde{P}$. We refer the reader to Fig.~\ref{fig:treeflow3} for an illustration of this statement. For the purpose of proving it, consider some long maximal degree-2 path $\widetilde{P}$ of $R^2$ whose vertex set is contained in $V(C_{R^2,P,u})$. By Lemma \ref{lem:distinctMidRegions}, we know that either $V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}})) \subseteq V(C_{R^2,P,u})$ or $V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}})) \subseteq V(C_{R^2,P,v})$.
Moreover, by the definition of $\widetilde{P}^\star_{\widetilde{u}'_{\widetilde{P}},\widetilde{v}'_{\widetilde{P}}}$, its vertex set is contained in $V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}}))$. Thus, to conclude the proof, it remains to rule out the possibility that $V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}})) \subseteq V(C_{R^2,P,v})$. For this purpose, recall that we chose $\widetilde{P}$ such that $V(\widetilde{P})\subseteq V(C_{R^2,P,u})$, and note that $V(\widetilde{P})\cap (V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}})))\neq\emptyset$ (as the internal vertices of $\widetilde{P}_{\widetilde{u}'_{\widetilde{P}},\widetilde{v}'_{\widetilde{P}}}$ lie in this set). Because $V(C_{R^2,P,u})\cap V(C_{R^2,P,v})=\emptyset$, we derive that the containment $V(H)\setminus (V(C_{R^2,\widetilde{P},\widetilde{u}})\cup V(C_{R^2,\widetilde{P},\widetilde{v}})) \subseteq V(C_{R^2,P,v})$ is indeed impossible. \end{proof} \subsection{Enumerating Parallel Edges with Respect to $R^3$}\label{sec:enumParallel} Recall that $H$ is enriched with $4n+1$ parallel copies of each edge of the (standard) radial completion of $G$. While the copies did not play a role in the construction of $R$, they will be important in how we relate a solution of the given instance of \textsf{Planar Disjoint Paths}\ to a weak linkage in $H$. We remind that for a pair of adjacent vertices $u,v\in V(H)$, we denoted the $4n+1$ parallel copies of edges between them by $e_{-2n},e_{-2n+1},\ldots,e_{-1},e_0,e_1,e_2,\ldots,e_{2n}$ where $e=\{u,v\}$, such that when the edges incident to $u$ (or $v$) are enumerated in cyclic order, the occurrences of $e_i$ and $e_{i+1}$ are consecutive (that is, $e_i$ appears immediately before $e_{i+1}$ or vice versa) for every $i\in\{-2n,-2n+1,\ldots,2n-1\}$, and $e_{-2n}$ and $e_{2n}$ are the outermost copies of $e$. We say that such an embedding is {\em valid}. Now, we further refine the embedding of $H_G$ (so that it remains valid)---notice that for each edge, there are two possible ways to order its copies in the embedding so that it satisfies the condition above. Here, for the edges of $R$, we specify a particular choice among these two ways of embedding their copies. Towards this, for a vertex $v \in V(R)$, let $\wh{E}_R(v) = \{ e \in E_H(v) \mid e \text{ is parallel to an edge } e' \in E(R) \}$. (The set $E_H(v)$ contains all edges in $E(H)$ incident to $v$.) \begin{figure} \caption{The clockwise / anti-clockwise enumeration of parallel edges with respect to $R$.} \label{fig:enumParallel} \end{figure} For the definition of the desired embedding, we remind that any tree can be properly colored in two colors (that is, every vertex is assigned a color different than the colors of its neighbors), and that in such a coloring, for every vertex, all the neighbors of the vertex get the same color. We let ${\sf color}: V(R) \rightarrow \{ \text{red}, \text{green} \}$ be some such coloring of $R$. Then, we embed parallel copies such that for every $v \in V(R)$, the following conditions hold (see Fig.~\ref{fig:enumParallel}). \begin{itemize} \item If ${\sf color}(v) = \text{red}$, then when we enumerate $\wh{E}_R(v)$ in {\em clockwise} order, for every $e \in E_R(v)$, the $4n+1$ copies of $e$ are enumerated in this order: $e_{-2n}, e_{-2n+1}, \ldots, e_0, \ldots, e_{2n}$. We let ${\sf order}_v$ denote such an enumeration starting with an edge indexed $-2n$.
\item If ${\sf color}(v) = \text{green}$, then when we enumerate $\wh{E}_R(v)$ in {\em counter-clockwise} order, for every $e \in E_R(v)$, the $4n+1$ copies of $e$ are enumerated in this order: $e_{-2n}, e_{-2n+1}, \ldots, e_0, \ldots, e_{2n}$. We let ${\sf order}_v$ denote such an enumeration starting with an edge indexed $-2n$. \end{itemize} Let us observe that the above scheme is well defined. \begin{observation}\label{obs:enumParallelTime} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}\ with a backbone Steiner tree $R$. Then, there is a valid embedding of $H$ such that, for every $v\in V(R)$, the enumeration ${\sf order}_v$ is well defined with respect to some proper coloring ${\sf color}: V(R) \rightarrow \{ \text{red}, \text{green} \}$. Furthermore, such an embedding can be computed in time $\mathcal{O}(n^2)$. \end{observation} \begin{proof} Consider any edge $e=\{u,v\} \in E(R)$. Since ${\sf color}$ is a proper coloring of $R$, $u$ and $v$ get different colors under ${\sf color}$. Let us assume that ${\sf color}(u) = \text{red}$ and ${\sf color}(v) = \text{green}$. Then the parallel copies of $e$ are enumerated in clockwise order in ${\sf order}_u$ and in counter-clockwise order in ${\sf order}_v$. Hence, the two enumerations prescribe the same planar order of the copies of $e$, and by construction the resulting embedding is valid. Finally, to bound the time required to obtain such an embedding, observe that it can be obtained by starting with any arbitrary embedding of $H$ and then renaming the edges. Since the total number of edges in $E(H)$ (including parallel copies) is at most $\mathcal{O}(n^2)$, this can be done in $\mathcal{O}(n^2)$ time. \end{proof} From now on, we assume that $H$ is embedded in a way so that the enumerations ${\sf order}_v$ are well defined. We also remind that $R$ only contains the $0$-th copies of edges in $H$. Finally, we have the following observation. \begin{observation}\label{obs:enumParallelEdges} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}\ with a backbone Steiner tree $R$. For every $v \in V(R)$, ${\sf order}_v$ is an enumeration of $\wh{E}_R(v)$ in either clockwise or counter-clockwise order around $v$ (with a fixed start). Further, for any pair $e,e' \in E_R(v)$ such that $e$ occurs before $e'$ in ${\sf order}_v$, the edges $e_0,e_1,\ldots, e_{2n}$ occur before $e'_0,e'_1,\ldots, e'_{2n}$. \end{observation} \section{Existence of a Solution with Small Winding Number} \label{sec:winding} In this section we show that if the given instance admits a solution, then it admits a ``nice solution''. The precise definition of nice will be in terms of ``winding number'' of the solution, which counts the number of times the solution spirals around the backbone Steiner tree. Our goal is to show that there is a solution of small winding number. \subsection{Rings and Winding Numbers} Towards the definition of a ring, let us remind that $H$ is the triangulated plane multigraph obtained by introducing $4n+1$ parallel copies of each edge to the radial completion of the input graph $G$. Hence, each face of $H$ is either a triangle or a 2-cycle. \begin{definition}[{\bf Ring}]\label{def:ring} Let $I_\mathrm{in}$, $I_\mathrm{out}$ be two disjoint cycles in $H$ such that the cycle $I_\mathrm{in}$ is drawn in the strict interior of the cycle $I_\mathrm{out}$.
Then, ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ is the plane subgraph of $H$ induced by the set of vertices that are either in $V(I_\mathrm{in}) \cup V(I_\mathrm{out})$ or drawn between $I_\mathrm{in}$ and $I_\mathrm{out}$ (i.e. belong to the exterior of $I_\mathrm{in}$ and the interior of $I_\mathrm{out}$). \end{definition} We call $I_\mathrm{in}$ and $I_\mathrm{out}$ the \emph{inner} and \emph{outer interfaces} of ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$. We also say that this ring is induced by $I_\mathrm{in}$ and $I_\mathrm{out}$. Recall the notion of self-crossing walks defined in Section~\ref{sec:discreteHomotopy}. Unless stated otherwise, all walks considered here are \emph{not self-crossing}. A walk $\alpha$ in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ is {\em{traversing}} the ring if one of its endpoints lies in $I_\mathrm{in}$ and the other lies in $I_\mathrm{out}$. A walk $\alpha$ is \emph{visiting} the ring if both its endpoints lie in $I_\mathrm{in}$ or both lie in $I_\mathrm{out}$; moreover $\alpha$ is an \emph{inner visitor} if both its endpoints lie in $I_\mathrm{in}$, and otherwise it is an \emph{outer visitor}. \begin{definition}[{\bf Orienting Walks}]\label{def:orientCurve} Fix an arbitrary ordering of all vertices in $I_\mathrm{in}$ and another one for all vertices in $I_\mathrm{out}$. Then for a walk $\alpha$ in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ with endpoints in $V(I_\mathrm{in}) \cup V(I_\mathrm{out})$, orient $\alpha$ from one endpoint to another as follows. If $\alpha$ is a traversing walk, then orient it from its endpoint in $I_\mathrm{in}$ to its endpoint in $I_\mathrm{out}$. If $\alpha$ is a visiting walk, then both its endpoints lie either in $I_\mathrm{in}$ or in $I_\mathrm{out}$; then, orient $\alpha$ from its smaller endpoint to its greater endpoint. \end{definition} Observe that if $\alpha$ is a traversing path in the ring, then the orientation of $\alpha$ also defines its \emph{left-side} and \emph{right-side}. These are required for the following definition. \begin{definition}[{\bf Winding Number of a Walk w.r.t. a Traversing Path}] Let $\alpha$ be a walk in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ with endpoints in $V(I_\mathrm{in}) \cup V(I_\mathrm{out})$, and let $\beta$ be a traversing path in this ring, such that $\alpha$ and $\beta$ are edge disjoint. The \emph{winding number, $\overline{\sf WindNum}(\alpha,\beta)$, of $\alpha$ with respect to $\beta$} is the signed number of crossings of $\alpha$ with respect to $\beta$. That is, while walking along $\alpha$ (according to the orientation in Definition~\ref{def:orientCurve}), for each intersection of $\alpha$ and $\beta$ record $+1$ if $\alpha$ crosses $\beta$ from left to right, $-1$ if $\alpha$ crosses $\beta$ from right to left, and $0$ if $\alpha$ does not cross $\beta$. Then, the winding number $\overline{\sf WindNum}(\alpha,\beta)$ is the sum of the recorded numbers. \end{definition} Observe that if $\alpha$ and $\beta$ are edge-disjoint traversing paths, then both $\overline{\sf WindNum}(\alpha, \beta)$ and $\overline{\sf WindNum}(\beta,\alpha)$ are well defined. We now state some well-known properties of the winding number. We sketch a proof of these properties in Appendix~\ref{sec:app:wn}, using homotopy. \begin{proposition}\label{prop:wn-prop} Let $\alpha$, $\beta$ and $\gamma$ be three edge-disjoint paths traversing ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$.
Then, \begin{itemize} \item[(i)] $\overline{\sf WindNum}(\beta,\gamma)=-\overline{\sf WindNum}(\gamma,\beta)$. \item[(ii)] $\Big| \left| \overline{\sf WindNum}(\alpha,\beta) - \overline{\sf WindNum}(\alpha,\gamma) \right| - \left|\overline{\sf WindNum}(\beta,\gamma) \right| \Big| \leq 1$. \end{itemize} \end{proposition} We say that ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ is \emph{rooted} if it is equipped with some fixed path $\eta$ that is traversing it, called the \emph{reference path} of this ring. In a rooted ring ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$, we measure all winding numbers with respect to $\eta$, hence we shall use the shorthand $\overline{\sf WindNum}(\alpha)=\overline{\sf WindNum}(\alpha,\eta)$ when $\eta$ is implicit or clear from context. Here, we implicitly assume that the walk $\alpha$ is edge disjoint from $\eta$. This requirement will always be met by the following assumptions: $(i)$ $H$ is a plane multigraph where we have $4n+1$ parallel copies of every edge, and we assume that the reference path $\eta$ consists of only the $0$-th copy $e_0$;\; and $(ii)$~whenever we consider the winding number of a walk $\alpha$, it will be edge-disjoint from the reference path $\eta$, as it will not contain the $0$-th copy of any edge. (In particular, the walks of the (weak) linkages that we consider will always satisfy this property.) Note that any visitor walk in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ with both endpoints in $I_\mathrm{in}$ is discretely homotopic to a segment of $I_\mathrm{in}$, and similarly for $I_\mathrm{out}$. Thus, we derive the following observation. \begin{observation}\label{obs:vis_wn} Let $\alpha$ be a visitor in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$. Then, $|\overline{\sf WindNum}(\alpha)| \leq 1$. \end{observation} Recall the notion of a weak linkage defined in Section~\ref{sec:discreteHomotopy}, which is a collection of edge-disjoint non-crossing walks. When we use the term \emph{weak linkage of order $k$ in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$}, we refer to a weak linkage such that each walk has both endpoints in $V(I_{\mathrm{in}})\cup V(I_{\mathrm{out}})$. For brevity, we abuse the term `weak linkage' to mean a weak linkage in a ring when it is clear from context. Note that every walk in a weak linkage $\mathcal{P}$ is an inner visitor, or an outer visitor, or a traversing walk. This partitions $\mathcal{P}$ into $\mathcal{P}_\mathrm{in}, \mathcal{P}_\mathrm{out}, \mathcal{P}_\mathrm{traverse}$. A weak linkage is {\em{traversing}} if it consists only of traversing walks. Assuming that ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ is rooted, we define the {\em{winding number}} of a traversing weak linkage $\mathcal{P}=\{P_1,\ldots,P_k\}$ as $\overline{\sf WindNum}(\mathcal{P})=\overline{\sf WindNum}(P_1)$. Recall that any two walks in a weak linkage are non-crossing.
Then as observed in~\cite[Observation 4.4]{DBLP:conf/focs/CyganMPP13},\footnote{This inequality also follows from the second property of Proposition~\ref{prop:wn-prop} by setting $\alpha$ to be the reference path, $\beta = P_1$ and $\gamma = P_i$ and noting that $\overline{\sf WindNum}(\beta,\gamma) = 0$.} $$|\overline{\sf WindNum}(P_i)-\overline{\sf WindNum}(\mathcal{P})|\leq 1\qquad\textrm{for all }i=1,\ldots,k.$$ The above definition is extended to any weak linkage $\mathcal{P}$ in the ring as follows: if there is no walk in $\mathcal{P}$ that traverses the ring, then $\overline{\sf WindNum}(\mathcal{P}) = 0$, otherwise $\overline{\sf WindNum}(\mathcal{P}) = \overline{\sf WindNum}(\mathcal{P}_\mathrm{traverse})$. Note that two aligned weak linkages $\mathcal{P}$ and $\mathcal{Q}$ in the ring may have different winding numbers (with respect to any reference path). Replacing a linkage $\mathcal{P}$ with an aligned linkage $\mathcal{Q}$ having a ``small'' winding number will be the main focus of this section. Lastly, we define a labeling of pairs of edges of a walk from which its winding number can be recovered (this relation is made explicit in the observation that follows). \begin{definition}\label{def:spiralLabel} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}, and $H$ be the radial completion of $G$. Let $\alpha$ be a (not self-crossing) walk in $H$, and let $\beta$ be a path in $H$ such that $\alpha$ and $\beta$ are edge disjoint. Let us fix (arbitrary) orientations of $\alpha$ and $\beta$, and define the left and right side of the path $\beta$ with respect to its orientation. The \emph{labeling} ${\sf label}_\beta^\alpha$ assigns to each ordered pair of consecutive edges $(e,e') \in E_H(\alpha) \times E_H(\alpha)$, where $e$ occurs before $e'$ when traversing $\alpha$ according to its orientation, a label in $\{-1, 0, +1\}$ with respect to $\beta$, defined as follows. \begin{itemize} \item The pair $(e,e')$ is labeled $+1$ if $e$ is on the left of $\beta$ while $e'$ is on the right of $\beta$. \item Else, $(e,e')$ is labeled $-1$ if $e$ is on the right while $e'$ is on the left of $\beta$. \item Otherwise, $e$ and $e'$ are on the same side of $\beta$ and $(e,e')$ is labeled $0$. \end{itemize} \end{definition} Note that in the above labeling only pairs of consecutive edges may get a non-zero label, depending on how they cross $\beta$. For the ease of notation, we extend the above labeling function to all ordered pairs of edges in $\alpha$ (including pairs of non-consecutive edges), by labeling them $0$. Then we have the following observation, when we restrict $\alpha$ to ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ and set $\beta$ to be the reference path of this ring. \begin{observation}\label{obs:spiralLabel} Let $\alpha$ be a (not self-crossing) walk in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ with reference path $\eta$. Then $|\overline{\sf WindNum}(\alpha, \eta)| = |\sum_{(e,e') \in E(\alpha) \times E(\alpha)} {\sf label}_\eta^\alpha(e,e')|$. \end{observation} \subsection{Rerouting in a Ring} We now address the question of rerouting a solution to reduce its winding number with respect to the backbone Steiner tree. As a solution is a linkage in the graph $G$ (i.e., a collection of vertex-disjoint paths), we first show how to reroute linkages within a ring. In the later subsections, we will apply this to reroute a solution in the entire plane graph. We remark that from now onwards, our results are stated and proved only for linkages (rather than weak linkages).
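Before proceeding, we give, as an aside, a minimal computational sketch of the labeling-based view of winding numbers from Definition~\ref{def:spiralLabel} and Observation~\ref{obs:spiralLabel}. The sketch is purely illustrative and is not part of the algorithm developed in this paper: it assumes that a walk $\alpha$ is represented simply by the sequence of sides (left or right of the oriented reference path $\eta$) on which its consecutive edges lie, and the function names below are hypothetical choices made only for this example.
\begin{verbatim}
def crossing_labels(sides):
    # sides[i] is 'L' or 'R': the side of the oriented reference path eta
    # on which the i-th edge of the walk alpha lies.
    labels = []
    for s, s_next in zip(sides, sides[1:]):
        if s == 'L' and s_next == 'R':
            labels.append(+1)   # alpha crosses eta from left to right
        elif s == 'R' and s_next == 'L':
            labels.append(-1)   # alpha crosses eta from right to left
        else:
            labels.append(0)    # both edges on the same side: no crossing
    return labels

def winding_number_magnitude(sides):
    # |WindNum(alpha, eta)|: the absolute value of the signed sum of the
    # labels of consecutive edge pairs, as in Observation obs:spiralLabel.
    return abs(sum(crossing_labels(sides)))

# Example: the edges of alpha lie on sides L, R, R, L, R of eta.
# The consecutive-pair labels are +1, 0, -1, +1, so |WindNum| = 1.
assert winding_number_magnitude(['L', 'R', 'R', 'L', 'R']) == 1
\end{verbatim}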
We define a \emph{linkage of order $k$ in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$} as a collection of $k$ vertex-disjoint paths in $G$ such that each of these paths belongs to ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ and its endpoints belong to $V(I_\mathrm{in}) \cup V(I_\mathrm{out})$. As before, we simply use the term `linkage' when the ring is clear from context. We will use the following proposition proved by Cygan et al.~\cite{DBLP:conf/focs/CyganMPP13} using earlier results of Ding et al.~\cite{ding1992disjoint}. Its statement has been rephrased to be compatible with our notation. \begin{proposition}[Lemma 4.8 in~\cite{DBLP:conf/focs/CyganMPP13}]\label{prop:ring-rerouting} Let ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ be a rooted ring in $H$ and let $\mathcal{P}$ and $\mathcal{Q}$ be two traversing linkages of the same order in this ring. Then, there exists a traversing linkage $\mathcal{P}'$ in this ring that is aligned with $\mathcal{P}$ and such that $|\overline{\sf WindNum}(\mathcal{P}')-\overline{\sf WindNum}(\mathcal{Q})|\leq 6$. \end{proposition} The formulation of~\cite{DBLP:conf/focs/CyganMPP13} concerns directed paths in directed graphs and assumes a fixed pattern of in/out orientations of paths that is shared by the linkages $\mathcal{P},\mathcal{Q}$ and $\mathcal{P}'$. The undirected case (as expressed above) can be reduced to the directed one by replacing every undirected edge in the graph by two oppositely-oriented arcs with the same endpoints, and asking for any orientation pattern (say, all paths should go from $I_\mathrm{in}$ to $I_\mathrm{out}$). Moreover, the setting itself is somewhat more general, where rings and reference paths are defined by curves and (general) homotopy. \paragraph*{Rings with Concentric Cycles.} Let $\mathcal{C} = (C_1, C_2, \ldots, C_p)$ be a concentric sequence of cycles in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ (that is, $C_{i}$ is in the strict interior of $C_{i+1}$ for $i \in \{1, 2, \ldots, p-1\}$). If $I_\mathrm{in}$ is in the strict interior of $C_1$ and $C_p$ is in the strict interior of $I_\mathrm{out}$, then we say that $\mathcal{C}$ is \emph{encircling}. An encircling concentric sequence $\mathcal{C}$ in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ is {\em{tight}} if every $C \in \mathcal{C}$ is a cycle in $G$, and there exists a path $\eta$ in $H$ traversing ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ such that the set of internal vertices of $\eta$ contains exactly $|\mathcal{C}|$ vertices of $V(G)$, one on each cycle in $\mathcal{C}$. Let us fix one such encircling tight sequence in the ring ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ along with the path $\eta$ witnessing the tightness. Then, we set the path $\eta$ as the reference path of the ring. Here, we assume w.l.o.g. that $\eta$ contains only the $0$-th copy of each of the edges comprising it. Any paths or linkages that we subsequently consider will not use the $0$-th copy of any edge, and hence their winding numbers (with respect to $\eta$) will be well-defined. This is because they arise from $G$, and when we consider them in $H$, we choose a `non-$0$-th' copy out of the $4n+1$ copies of any (required) edge. A linkage $\mathcal{P}$ in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ is {\em{minimal}} with respect to $\mathcal{C}$ if among the linkages aligned with $\mathcal{P}$, it minimizes the total number of edges traversed that do not lie on the cycles of $\mathcal{C}$. The following proposition is essentially Lemma~3.7 of~\cite{DBLP:conf/focs/CyganMPP13}.
\begin{proposition}\label{lem:shallow-visitors} Let $G$ be a plane graph with radial completion $H$. Let ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ be a rooted ring in $H$. Suppose $|I_\mathrm{in}|,|I_\mathrm{out}|\leq \ell$, for some integer $\ell$. Further, let $\mathcal{C}=(C_1,\ldots,C_p)$ be an encircling tight concentric sequence of cycles in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$. Finally, let $\mathcal{P}$ be a linkage in ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ that is minimal with respect to $\mathcal{C}$. Then, every inner visitor of $\mathcal{P}$ intersects fewer than $10\ell$ of the first cycles in the sequence $(C_1,\ldots,C_p)$, while every outer visitor of $\mathcal{P}$ intersects fewer than $10\ell$ of the last cycles in this sequence. \end{proposition} A proof of this proposition can be obtained by first ordering the collection of inner and outer visitors by their `distance' from the inner and outer interfaces, respectively, and by the `containment' relation between the cycles that they form together with the interfaces. This gives a partial order on the set of inner visitors and the set of outer visitors. Then, if the proposition does not hold for $\mathcal{P}$, the above ordering and containment relation can be used to reroute these paths along a suitable cycle. This will contradict the minimality of $\mathcal{P}$, since the rerouted linkage is aligned with it but uses strictly fewer edges outside of $\mathcal{C}$. The main result of this section can now be formulated as follows. (Its formulation and proof idea are based on Lemma~8.31 and Theorem~6.45 of~\cite{DBLP:conf/focs/CyganMPP13}.) \begin{lemma}\label{lemma:winding} Let $G$ be a plane graph with radial completion $H$. Let ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ be a ring in $H$. Suppose that $|I_\mathrm{in}|,|I_\mathrm{out}|\leq \ell$ for some integer $\ell$, and that in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ there is an encircling tight concentric sequence of cycles $\mathcal{C}$ of size larger than $40\ell$. Let $\eta$ be a traversing path in the ring witnessing the tightness of $\mathcal{C}$, and fix $\eta$ as the reference path. Finally, let $\mathcal{P}=\mathcal{P}_{\mathrm{traverse}}\uplus \mathcal{P}_{\mathrm{visitor}}$ be a linkage in $G$, where $\mathcal{P}_{\mathrm{traverse}}$ is a traversing linkage comprising the paths of $\mathcal{P}$ traversing ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$, while $\mathcal{P}_{\mathrm{visitor}}=\mathcal{P}\setminus \mathcal{P}_{\mathrm{traverse}}$ consists of the paths whose endpoints both lie in $V(I_\mathrm{in})$ or both lie in $V(I_\mathrm{out})$. Further, suppose that $\mathcal{P}$ is minimal with respect to $\mathcal{C}$. Then, for every traversing linkage $\mathcal{Q}$ in $G$ that is minimal with respect to $\mathcal{C}$ such that every path in $\mathcal{Q}$ is edge-disjoint from $\eta$ and $|\mathcal{Q}| \geq |\mathcal{P}_{\mathrm{traverse}}|$, there is a traversing linkage $\mathcal{P}_{\mathrm{traverse}}'$ in $G$ such that \begin{enumerate}[(a)] \item $\mathcal{P}_{\mathrm{traverse}}'$ is aligned with $\mathcal{P}_{\mathrm{traverse}}$, \item the paths of $\mathcal{P}_{\mathrm{traverse}}'$ are disjoint from the paths of $\mathcal{P}_{\mathrm{visitor}}$, and \item $|\overline{\sf WindNum}(\mathcal{P}_{\mathrm{traverse}}')-\overline{\sf WindNum}(\mathcal{Q})|\leq 60\ell+6$. \end{enumerate} \end{lemma} \begin{proof} Let $\mathcal{C}=(C_1,\ldots,C_p)$, where $p >40\ell$.
Recall that $\mathcal{C}$ is a collection of cycles in $G$, and the path $\eta$ that witnesses the tightness of $\mathcal{C}$ contains $|\mathcal{C}|$ vertices of $V(G)$, one on each cycle of $\mathcal{C}$. Let $v_i$ denote the vertex where $\eta$ intersects the cycle $C_i \in \mathcal{C}$ for all $i \in \{1,2,\ldots, p\}$. Since $\mathcal{P}$ is minimal with respect to $\mathcal{C}$, Proposition~\ref{lem:shallow-visitors} implies that the paths in $\mathcal{P}_{\mathrm{visitor}}$ do not intersect any of the cycles $C_{10\ell},C_{10\ell+1},\ldots,C_{p-10\ell+1}$ (note that since $p>40\ell$, this sequence of cycles is non-empty). Call a vertex $x \in V({\sf Ring}(I_\mathrm{in}, I_\mathrm{out}))$ in the ring \emph{non-separated} if there exists a path from $x$ to $C_{10 \ell}$ whose set of internal vertices is disjoint from $\bigcup_{P \in \mathcal{P}_\mathrm{visitor}} V(P)$. Otherwise, we say that the vertex $x$ is \emph{separated}. Observe that every path in $\mathcal{P}_\mathrm{traverse}$ is disjoint from the paths in $\mathcal{P}_\mathrm{visitor}$ and intersects $C_{10\ell}$, hence all vertices on the paths of $\mathcal{P}_{\mathrm{traverse}}$ are non-separated. Let $X$ denote the set of all non-separated vertices in the ring, and consider the graph $H[X]$. Observe that $H[X]$ is an induced subgraph of ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$, since it is obtained from ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ by deleting the separated vertices. Further, observe that $H[X]$ is a ring of $H$. Indeed, the inner interface of $H[X]$ is the cycle $\wh{I}_\mathrm{in}$ obtained as follows: let $\mathcal{P}_\mathrm{in}$ be the set of inner visitors in $\mathcal{P}$; then, $\wh{I}_\mathrm{in}$ is the outer face of the plane graph $H[ V(I_\mathrm{in}) \cup\,\, \bigcup_{P \in \mathcal{P}_\mathrm{in}} V(P)]$. It is easy to verify that all vertices on $\wh{I}_\mathrm{in}$ are non-separated, and any vertex of ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$ that lies in the strict interior of this cycle is separated. We then symmetrically obtain the outer interface $\wh{I}_\mathrm{out}$ of $H[X]$ from the set $\mathcal{P}_\mathrm{out}$ of outer visitors of $\mathcal{P}$. Then, $H[X] = {\sf Ring}(\wh{I}_\mathrm{in}, \wh{I}_\mathrm{out})$. Here, $\wh{I}_\mathrm{in}$ is composed alternately of subpaths of $I_\mathrm{in}$ and inner visitors from $\mathcal{P}_\mathrm{visitor}$, and symmetrically for $\wh{I}_\mathrm{out}$. Note that the paths of $\mathcal{P}_\mathrm{traverse}$ are completely contained in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$ and they all traverse this ring. Thus, $\mathcal{P}_\mathrm{traverse}$ can be regarded also as a traversing linkage in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$. While $\mathcal{P}_\mathrm{traverse}$ may have a different winding number in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$ than in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$, the difference is ``small'' as we show below. (Note that the two winding numbers in the following claim are computed in two different rings.) \begin{claim}\label{cl:P-trim} Let $P$ be a path in $G$ that is disjoint from all paths in $\mathcal{P}_\mathrm{visitor}$, such that $P$ belongs to ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ and traverses it. Then, $P$ also belongs to ${\sf Ring}(\wh{I}_\mathrm{in}, \wh{I}_\mathrm{out})$, and $|\overline{\sf WindNum}(P,\eta)-\overline{\sf WindNum}(P,\wh{\eta})|\leq 20\ell$. 
\footnote{Here $\wh{\eta}$ is the reference path of ${\sf Ring}(\wh{I}_\mathrm{in}, \wh{I}_\mathrm{out})$, which is a subpath of $\eta$ in this ring.} \end{claim} \begin{proof} Since $P$ traverses ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$, it must intersect the cycle $C_{10\ell}$. Therefore, as $P$ is disjoint from $\mathcal{P}_{\mathrm{visitor}}$, all vertices in $V(P)$ are non-separated. Hence, $P$ is present in ${\sf Ring}(\wh{I}_\mathrm{in}, \wh{I}_\mathrm{out})$. Next, observe that there are at most $20\ell$ vertices of $G$ that are visited by $\eta$ but not visited by $\wh{\eta}$; indeed, these are vertices in the intersection of $\eta$ with $I_\mathrm{in}, I_\mathrm{out}$ and the first and last $10\ell -1$ cycles of $\mathcal{C}$. It follows that any path in $G$ has at most $20\ell$ more crossings with $\eta$ than with $\wh{\eta}$. Since each such crossing contributes $+1$ or $-1$ to the winding number of $P$ with respect to $\eta$, the winding numbers of $P$ with respect to $\eta$ and $\wh{\eta}$ differ by at most $20\ell$. \renewcommand\qedsymbol{$\diamond$}\end{proof} We now turn our attention to the linkage $\mathcal{Q}$. In essence, our goal is to show that every path in $\mathcal{Q}$ can be ``trimmed'' to a path traversing ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$ such that their winding numbers are not significantly different. First, however, we prove that the paths in $\mathcal{Q}$ cannot ``oscillate'' too much in ${\sf Ring}(I_\mathrm{in},I_\mathrm{out})$, based on the supposition that $\mathcal{Q}$ is minimal with respect to $\mathcal{C}$. \begin{claim}\label{cl:no-oscillators} Let $Q\in \mathcal{Q}$, and let $u\in V(Q)$ be a vertex that also lies on an inner visitor from $\mathcal{P}_\mathrm{visitor}$. Then, the prefix of $Q$ between its endpoint on $I_\mathrm{in}$ and $u$ does not intersect the cycle $C_{20\ell}$. \end{claim} \begin{figure} \caption{Illustration of Claim~\ref{cl:no-oscillators}.} \label{fig:claim-osc} \end{figure} \begin{proof} Suppose, for the sake of contradiction, that the considered prefix contains some vertex $v$ that lies on $C_{20\ell}$. Since $u$ lies on an inner visitor $P \in \mathcal{P}_\mathrm{visitor}$ and Proposition~\ref{lem:shallow-visitors} states that an inner visitor cannot intersect $C_{10\ell}$, we infer that on the infix of $Q$ between $v$ and $u$ there exists a vertex that lies on the intersection of $Q$ and $C_{10\ell}$. Let $a$ be the first such vertex. Similarly, on the prefix of $Q$ from its endpoint on $I_\mathrm{in}$ to $v$ there exists a vertex that lies on the intersection of $Q$ and $C_{10\ell}$. Let $b$ be the last such vertex. Then the whole infix of $Q$ between $a$ and $b$ does not intersect $C_{10\ell}$ internally (see Fig.~\ref{fig:claim-osc}), and hence, apart from endpoints, completely lies in the exterior of $C_{10\ell}$. Call this infix $Q^\star$. Now consider ${\sf Ring}(C_{10\ell},I_\mathrm{out})$, the ring induced by $I_\mathrm{out}$ and $C_{10\ell}$. Moreover, consider the graph $G'$ obtained from $G$ by removing all vertices that are not in ${\sf Ring}(C_{10\ell},I_\mathrm{out})$ and edges that are not in the strict interior of ${\sf Ring}(C_{10\ell},I_\mathrm{out})$; in particular, the edges of $C_{10\ell}$ are removed, but the vertices are not. Note that $G'$ is a subgraph of ${\sf Ring}(C_{10\ell},I_\mathrm{out})$. Finally, let $\mathcal{C}'=\mathcal{C}\setminus \{C_1,C_2,\ldots,C_{10\ell}\}$; then, $\mathcal{C}'$ is an encircling tight sequence of concentric cycles in ${\sf Ring}(C_{10\ell}, I_\mathrm{out})$.
Let $\mathcal{Q}'$ be the linkage in $G$ obtained by restricting paths of $\mathcal{Q}$ to $G'$. Here, a path in $\mathcal{Q}$ may break into several paths in $\mathcal{Q}'$ (that are its maximal subpaths contained in $G'$). Since $\mathcal{Q}$ is minimal with respect to $\mathcal{C}$, it follows that $\mathcal{Q}'$ is minimal with respect to $\mathcal{C}'$. Now, observe that $Q^\star$ belongs to $\mathcal{Q}'$, hence it is an inner visitor of ${\sf Ring}(C_{10\ell},I_\mathrm{out})$. However, $Q^\star$ intersects the first $10\ell$ concentric cycles $C_{10\ell+1},\ldots,C_{20\ell}$ in the family $\mathcal{C}'$, which contradicts Proposition~\ref{lem:shallow-visitors}. \renewcommand\qedsymbol{$\diamond$}\end{proof} Clearly, an analogous claim holds for outer visitors and the cycle $C_{p-20\ell+1}$. We now proceed to our main claim about the restriction of $\mathcal{Q}$ to ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$. \begin{claim}\label{cl:Q-trim} For every path $Q\in \mathcal{Q}$, there exists a subpath $\wh{Q}$ of $Q$ that traverses ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$ and such that $|\overline{\sf WindNum}(\wh{Q},\wh{\eta})-\overline{\sf WindNum}(Q,\eta)|\leq 40\ell$. \end{claim} \begin{proof} We think of $Q$ as oriented from its endpoint on $I_\mathrm{in}$ to its endpoint on $I_\mathrm{out}$. Let $a$ be the last vertex on $Q$ that lies on $\wh{I}_\mathrm{in}$ and $b$ be the first vertex on $Q$ that lies on $\wh{I}_\mathrm{out}$. Further, let $Q_\mathrm{in}$ be the prefix of $Q$ from its start to $a$, and $Q_\mathrm{out}$ be the suffix of $Q$ from $b$ to its end. By Claim~\ref{cl:no-oscillators}, $Q_\mathrm{in}$ is entirely contained in the ring induced by $I_\mathrm{in}$ and $C_{20\ell}$. By the claim analogous to Claim~\ref{cl:no-oscillators}, $Q_\mathrm{out}$ is entirely contained in the ring induced by $I_\mathrm{out}$ and $C_{p-20\ell+1}$. Since $p>40\ell$, it follows that $Q_\mathrm{in}$ and $Q_\mathrm{out}$ are disjoint, and in particular $a$ appears before $b$ on $Q$. Let $\wh{Q}$ be the infix of $Q$ between $a$ and $b$. Then, $\wh{Q}$ is a path in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$ that traverses this ring, so it suffices to check that $|\overline{\sf WindNum}(\wh{Q},\wh{\eta})-\overline{\sf WindNum}(Q,\eta)|\leq 40\ell$. Observe that every crossing of $Q$ and $\eta$ that is not a crossing of $\wh{Q}$ and $\wh{\eta}$ has to occur on either $Q_\mathrm{in}$ or $Q_\mathrm{out}$. However, $Q_\mathrm{in}$ and $Q_\mathrm{out}$ can have at most $40\ell$ vertices in common with $\eta$, because these must be among the intersections of $\eta$ with cycles $C_1,\ldots,C_{20\ell},C_{p-20\ell+1},\ldots,C_p$, of which there are $40\ell$. Each such crossing can contribute $+1$ or $-1$ to the difference between the winding numbers $\overline{\sf WindNum}(\wh{Q},\wh{\eta})$ and $\overline{\sf WindNum}(Q,\eta)$, hence the difference between these winding numbers is at most $40\ell$. \renewcommand\qedsymbol{$\diamond$}\end{proof} For every path $Q\in \mathcal{Q}$, fix the path $\wh{Q}$ provided by Claim~\ref{cl:Q-trim}, and let $\wh{\mathcal{Q}} \subseteq \{\wh{Q} \mid Q\in \mathcal{Q}\}$ be such that $|\wh{\mathcal{Q}}| = |\mathcal{P}_\mathrm{traverse}|$ (such a subset exists since $|\mathcal{Q}| \geq |\mathcal{P}_\mathrm{traverse}|$). Then, $\wh{\mathcal{Q}}$ is a traversing linkage in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$.
Apply Proposition~\ref{prop:ring-rerouting} to the linkages $\mathcal{P}_\mathrm{traverse}$ and $\wh{\mathcal{Q}}$ in ${\sf Ring}(\wh{I}_\mathrm{in},\wh{I}_\mathrm{out})$, yielding a linkage $\mathcal{P}_\mathrm{traverse}'$ that is aligned with $\mathcal{P}_\mathrm{traverse}$ and such that $$|\overline{\sf WindNum}(\mathcal{P}_\mathrm{traverse}',\wh{\eta})-\overline{\sf WindNum}(\wh{\mathcal{Q}},\wh{\eta})|\leq 6.$$ Clearly, by construction we have that the paths of $\mathcal{P}_\mathrm{traverse}'$ are disjoint from the paths of $\mathcal{P}_\mathrm{visitor}$. Furthermore, the paths in $\mathcal{P}_{\mathrm{traverse}}'$ traverse ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ since they are aligned with $\mathcal{P}_\mathrm{traverse}$ (i.e. they have the same endpoints). Finally, by Claim~\ref{cl:Q-trim} we have $$|\overline{\sf WindNum}(\wh{\mathcal{Q}},\wh{\eta})-\overline{\sf WindNum}(\mathcal{Q},\eta)|\leq 40\ell,$$ and by Claim~\ref{cl:P-trim} (applied to paths in $\mathcal{P}'_\mathrm{traverse}$) we have $$|\overline{\sf WindNum}(\mathcal{P}_\mathrm{traverse}',\eta)-\overline{\sf WindNum}(\mathcal{P}_\mathrm{traverse}',\wh{\eta})|\leq 20\ell.$$ By the above we conclude that $$|\overline{\sf WindNum}(\mathcal{P}_\mathrm{traverse}',\eta)-\overline{\sf WindNum}(\mathcal{Q},\eta)|\leq 60\ell+6,$$ which completes the proof. \end{proof} \subsection{Rings of the Backbone Steiner Tree} \label{sec:STrings} Based on the results in the previous subsections, we proceed to show that if the given instance admits a solution, then it also admits a solution of small winding number. Recall the backbone Steiner tree $R$ constructed in Section~\ref{sec:steiner}. Let $P = \ensuremath{\mathsf{path}}_{R}(u,v)$ be a long maximal degree-2 path in $R$, where $u,v \in V_{=1}(R) \cup V_{\geq 3}(R)$, and assume without loss of generality (under the supposition that we are given a {\sf Yes} instance) that the subtree of $R - (V(P)\setminus \{u,v\})$ containing $v$ also contains the terminal $t^\star \in T$ lying on the outer face of $H$. Recall the (minimal) separators $S_u = \ensuremath{\mathsf{Sep}}_R(P,u)$ and $S_v = \ensuremath{\mathsf{Sep}}_R(P,v)$ in $H$. Hence $H[S_u]$ and $H[S_v]$ form two cycles in $H$, and $H[S_u]$ is contained in the strict interior of $H[S_v]$. Further, recall that $|S_u|,|S_v| \leq \alpha_{\rm sep}(k)$. Consider the ring \emph{induced} by $H[S_u]$ and $H[S_v]$, i.e. ${\sf Ring}(S_u,S_v) := {\sf Ring}(H[S_u], H[S_v])$. Let $V(S_u,S_v)$ denote the set of all vertices (in $V(H)$) that lie in this ring, including those in $S_u$ and $S_v$. Then, ${\sf Ring}(S_u,S_v) = H[V(S_u,S_v)]$. Note that by definition it contains $\ensuremath{\mathsf{Sep}}_R(P,u)$ and $\ensuremath{\mathsf{Sep}}_R(P,v)$. Let $G_{u,v}$ denote the restriction of this graph to $G$, i.e. $G_{u,v} = G[V(G) \cap V(S_u,S_v)]$. Additionally, recall that there are two distinct vertices $u'$ and $v'$ in $P$ that lie in $S_u$ and $S_v$, respectively, such that $P = \ensuremath{\mathsf{path}}_R(u,u') {-} \ensuremath{\mathsf{path}}_R(u',v') {-} \ensuremath{\mathsf{path}}_R(v',v)$ (by Lemma~\ref{lem:threeParts} and Definition~\ref{lem:translateRtoR*}). Lastly, we remind that $A^\star_{R,P,u}$ and $A^\star_{R,P,v}$ are the two components of $R - (V(\ensuremath{\mathsf{path}}_R(u',v'))\setminus \{u',v'\})$ that contain $u$ and $v$, respectively (Definition~\ref{lem:translateRtoR*}). Then, the following observation is immediate.
\begin{observation} The path $\ensuremath{\mathsf{path}}_R(u',v')$ is contained in ${\sf Ring}(S_u,S_v)$ and it traverses ${\sf Ring}(S_u,S_v)$ from $u' \in S_u$ to $v' \in S_v$. Moreover, $A^\star_{R,P,u} - \{u'\}$ is contained in the bounded region of ${{\mathbb R}^2} - H[S_u]$, and similarly $A^\star_{R,P,v} - \{v'\}$ is contained in the unbounded region of ${{\mathbb R}^2} - H[S_v]$. \end{observation} Now, recall the encircling tight sequence of concentric cycles $\mathcal{C}(u,v)$ and the path witnessing its tightness, which we denote by $\eta(u,v)$ (constructed in Lemma~\ref{lemma:CC_cons}). We assume that $\eta(u,v)$ consists of only the $0$-th copies of the edges comprising it, and fix $\eta(u,v)$ as the reference path of the ring ${\sf Ring}(S_u,S_v)$. Moreover, observe that the subpath $\ensuremath{\mathsf{path}}_R(u',v')$ of $R$ also traverses ${\sf Ring}(S_u,S_v)$. We assume w.l.o.g. that $\ensuremath{\mathsf{path}}_R(u',v')$ consists of only the $0$-th copy of the edges comprising it. This will allow us to later define winding numbers with respect to $\ensuremath{\mathsf{path}}_{R}(u',v')$ in ${\sf Ring}(S_u,S_v)$. Note that $\eta(u,v)$ and $\ensuremath{\mathsf{path}}_R(u',v')$ may be different paths with common edges, and further they may not be paths in $G$. Let us first argue that we can consider the rings corresponding to each of the long degree-2 paths in $R$ independently. To this end, consider another long maximal degree-2 path in $R$ different from $P$, denoted by $\ensuremath{\mathsf{path}}_{R}(\wh{u},\wh{v})$, where $\wh{u},\wh{v} \in V_{=1}(R) \cup V_{\geq 3}(R)$ and $t^\star$ lies in the subtree containing $\wh{v}$ in $R - (V(\ensuremath{\mathsf{path}}_{R}(\wh{u},\wh{v}))\setminus \{\wh{u},\wh{v}\})$. Let $S_{\wh{u}} = \ensuremath{\mathsf{Sep}}_R(\ensuremath{\mathsf{path}}_{R}(\wh{u},\wh{v}),\wh{u})$ and $S_{\wh{v}} = \ensuremath{\mathsf{Sep}}_R(\ensuremath{\mathsf{path}}_{R}(\wh{u},\wh{v}),\wh{v})$. The ring ${\sf Ring}(S_{\wh{u}},S_{\wh{v}})$ is defined symmetrically to ${\sf Ring}(S_u,S_v)$ above. The following corollary follows from Lemma~\ref{lem:distinctMidRegions}. \begin{corollary}\label{cor:rings-pairwise-disjoint} The rings ${\sf Ring}(S_u,S_v)$ and ${\sf Ring}(S_{\wh{u}}, S_{\wh{v}})$ have no common vertices. \end{corollary} Let $t$ be the number of pairs of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ such that the path between them in $R$ is long. As above, we have a ring corresponding to each of these paths. Then, we partition the plane graph $H$ into $t+1$ regions, one for each of the $t$ rings of the long maximal degree-2 paths in $R$, and the remainder of $H$ that is not contained in any of these rings. \begin{lemma}\label{lemma:R-outside-rings} Let $\{u_1,v_1\}, \{u_2,v_2\}, \ldots, \{u_t,v_t\}$ denote pairs of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ such that $\ensuremath{\mathsf{path}}_R(u_i,v_i)$ is a long maximal degree-2 path in $R$ for all $i \in \{1,2,\ldots t\}$. Then, the corresponding rings ${\sf Ring}(S_{u_1}, S_{v_1}), {\sf Ring}(S_{u_2}, S_{v_2}), \ldots, {\sf Ring}(S_{u_t},S_{v_t})$ are pairwise disjoint. Further, the number of vertices of $R$ lying outside this collection of rings, $\big | V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i}) \big|$, is upper bounded by $\alpha_{\rm nonRing}(k) = 10^4 k \cdot 2^{ck}$. \end{lemma} \begin{proof} The first statement follows from Corollary~\ref{cor:rings-pairwise-disjoint} applied to each pair of rings.
For the second statement, consider the vertices in $V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i})$. Each such vertex either belongs to a short degree-2 path between a pair of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$, or it is a vertex that lies on a maximal degree-2 path of $R$ at distance at most $\alpha_{\rm pat}(k) = 100 \cdot 2^{ck}$ in $R$ from $V_{=1}(R) \cup V_{\geq 3}(R)$. Recall that there are fewer than $4k$ vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ by Observation~\ref{obs:leaIntSteiner}, and thus at most $4k-2$ maximal degree-2 paths in $R$. Therefore, the number of vertices in $V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i})$ is upper bounded by $(4k -2) \cdot \max\{ 2\alpha_{\mathrm{pat}}(k), \alpha_{\mathrm{long}}(k) \}\;\leq 10^4 k \cdot 2^{ck}$. \end{proof} \subsection{Solutions with Small Winding Number} Having established all the required definitions in the previous subsections, we are ready to exhibit a solution with a small winding number. Consider a ring, say ${\sf Ring}(S_u, S_v)$ with an encircling tight concentric sequence of cycles $\mathcal{C}(u,v)$ and reference path $\eta(u,v)$. Consider a linkage $\mathcal{P}$ in $G$, and let $\mathcal{P}(u,v) = \mathcal{P}[V(S_u,S_v) \cap V(\mathcal{P})]$. Observe that $\mathcal{P}(u,v)$ is a linkage in ${\sf Ring}(S_u,S_v)$, and its endpoints lie in $(S_u \cup S_v) \cap V(G)$. Therefore, $\mathcal{P}(u,v)$ is a flow from $S_u \cap V(G)$ to $S_v \cap V(G)$ in $G_{u,v}$. Assume w.l.o.g. that the paths in $\mathcal{P}(u,v)$ use the $1$-st copy of each edge in $H$. \begin{definition} Let $u,v \in V_{=1}(R) \cup V_{\geq 3}(R)$ be a pair of near vertices such that $\ensuremath{\mathsf{path}}_R(u,v)$ is a long maximal degree-2 path in $R$. Let $\mathcal{P}$ be a path system in $G$. Then, the \emph{winding number of $\cal P$ in ${\sf Ring}(S_u,S_v)$} is defined as ${\sf WindNum}(\mathcal{P}, {\sf Ring}(S_u,S_v)) = \max_{P \in \mathcal{P}(u,v)}\left|\overline{\sf WindNum}(P, \ensuremath{\mathsf{path}}_R(u',v'))\right|$. \end{definition} We remark that the notation above differs from our earlier use of $\overline{\sf WindNum}(\cdot)$ in the choice of the reference path in ${\sf Ring}(S_u,S_v)$. For $\overline{\sf WindNum}(\cdot)$, the path $\eta(u,v)$ was the reference path, whereas for ${\sf WindNum}(\cdot, {\sf Ring}(S_u,S_v))$, we choose $\ensuremath{\mathsf{path}}_R(u',v')$ as the reference path. We will now apply Lemma~\ref{lemma:winding} in each ring of the form ${\sf Ring}(S_{u_i}, S_{v_i})$ to obtain a solution of bounded winding number in that ring. Towards this, fix one pair $(u,v) := (u_i,v_i)$ for some $1 \leq i \leq t$. We argue that $\mathcal{C}(u,v)$ and $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ are suitable for the roles of $\mathcal{C}$ and $\cal Q$ in the premise of Lemma~\ref{lemma:winding}. Recall that $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ is a collection of vertex-disjoint paths in the subgraph $G_{u,v}$ of $G$, and assume w.l.o.g.~that $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ uses only $2$-nd copies of edges in $H$. \begin{observation}\label{obs:pre-rerouting} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Let $u,v$ be a pair of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ such that $\ensuremath{\mathsf{path}}_R(u,v)$ is a long degree-2 path in $R$. Then, the following hold.
\begin{itemize} \item[(i)] $\mathcal{C}(u,v)$ is an encircling tight sequence of concentric cycles in ${\sf Ring}(S_u,S_v)$ where each cycle lies in $G_{u,v}$, $S_u$ is in its strict interior and $S_v$ is in its strict exterior. Further, $\mathcal{C}(u,v)$ contains at least $40 \alpha_{\rm sep}(k)$ cycles. \item[(ii)] $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ is a maximum flow between $S_u \cap V(G)$ and $S_v \cap V(G)$ in $G_{u,v}$ that is minimal with respect to $\mathcal{C}(u,v)$. \item[(iii)] Each path $Q \in \ensuremath{\mathsf{Flow}}_R(u,v)$ traverses ${\sf Ring}(S_u,S_v)$, and $|\overline{\sf WindNum}(\ensuremath{\mathsf{path}}_R(u',v'), Q)| \leq 1$. \end{itemize} \end{observation} \begin{proof} The first statement directly follows from the construction of $\mathcal{C}(u,v)$ (see Lemma~\ref{lemma:CC_cons}). Similarly, the second statement directly follows from the construction of $\ensuremath{\mathsf{Flow}}_R(u,v)$ using $\mathcal{C}(u,v)$ (see Observation~\ref{obs:disjPTime}). Additionally, the construction implies that each path $Q \in \ensuremath{\mathsf{Flow}}_{R}(u,v)$ traverses ${\sf Ring}(S_u,S_v)$. For the second part of the third statement, first note that any $Q \in \ensuremath{\mathsf{Flow}}_R(u,v)$ and $\ensuremath{\mathsf{path}}_R(u',v')$ are two edge-disjoint traversing paths in ${\sf Ring}(S_u,S_v)$. Hence, $\overline{\sf WindNum}(\ensuremath{\mathsf{path}}_R(u',v'), Q)$ is well defined. Since, for any $Q \in \ensuremath{\mathsf{Flow}}_R(u,v)$, there are at most two edges in $\ensuremath{\mathsf{path}}_{R}(u',v')$ with only one endpoint in $V(Q)$ (by Corollary~\ref{cor:pathThroughFlow}), the absolute value of the signed sum of crossings between these two paths is upper bounded by $1$, i.e., $|\overline{\sf WindNum}(Q, \ensuremath{\mathsf{path}}_R(u',v'))| \leq 1$. \end{proof} Finally, we are ready to prove the existence of a solution of small winding number. \begin{lemma}\label{lemma:nice-solution} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Let $\{u_1,v_1\}, \{u_2,v_2\}, \allowbreak \ldots, \{u_t,v_t\}$ be the pairs of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ such that $\ensuremath{\mathsf{path}}_R(u_i,v_i)$ is a long maximal degree-2 path in $R$ for all $i \in \{1,2,\ldots t\}$. Let ${\sf Ring}(S_{u_1}, S_{v_1}), {\sf Ring}(S_{u_2}, S_{v_2}), \allowbreak \ldots, {\sf Ring}(S_{u_t},S_{v_t})$ be the corresponding rings. Then, there is a solution $\mathcal{P}^\star$ to $(G,S,T,g,k)$ such that $|{\sf WindNum}(\mathcal{P}^\star, {\sf Ring}(S_{u_i},S_{v_i}))| \leq \alpha_{\rm winding}(k)$ for all $i \in \{1,2,\ldots t\}$, where $\alpha_{\rm winding}(k) = 60 \alpha_{\rm sep}(k) + 11 < 300 \cdot 2^{ck}$. \end{lemma} \begin{proof} Consider a solution $\mathcal{P}$ to $(G,S,T,g,k)$. Fix a pair $(u,v) := (u_i,v_i)$ for some $i \in \{1,2, \ldots t\}$. Recall that $|S_u|,|S_v| \leq \ell$ where $\ell = \alpha_{\rm sep}(k)$. Consider ${\sf Ring}(S_u,S_v)$, and the linkage $\mathcal{P}(u,v)$ in $G$. Our goal is to modify $\mathcal{P}(u,v)$ to obtain another linkage $\mathcal{P}'(u,v)$ that is aligned with it and has a small winding number with respect to $\ensuremath{\mathsf{path}}_R(u',v')$. Recall that by Observation~\ref{obs:pre-rerouting}, we have an encircling tight sequence of concentric cycles $\mathcal{C}(u,v)$ in ${\sf Ring}(S_u,S_v)$ that contains at least $40 \ell$ cycles. Let $\eta(u,v)$ be the path in this ring witnessing the tightness of $\mathcal{C}(u,v)$.
Further, recall the linkage $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ between $S_u$ and $S_v$ in the subgraph $G_{u,v}$ of $G$ (the restriction of $G$ to ${\sf Ring}(S_u,S_v)$), and that $\ensuremath{\mathsf{Flow}}_{R}(u,v)$ is minimal with respect to $\mathcal{C}(u,v)$. We can assume w.l.o.g. that $\mathcal{P}(u,v)$ is minimal with respect to $\mathcal{C}(u,v)$. Otherwise, there is another solution $\wh{\mathcal{P}}$ such that it is identical to $\mathcal{P}$ in $G - V(S_u,S_v)$ and $\wh{\mathcal{P}}(u,v)$ is a minimal linkage with respect to $\mathcal{C}(u,v)$ (that is aligned with $\mathcal{P}(u,v)$). Then, we can consider $\wh{\mathcal{P}}$ instead of $\mathcal{P}$. Let $\mathcal{P}_\mathrm{traverse}(u,v)$ be the set of traversing paths in $\mathcal{P}(u,v)$. Since $\mathcal{P}_\mathrm{traverse}(u,v)$ is a flow between $S_u$ and $S_v$ in $G_{u,v}$ and $\ensuremath{\mathsf{Flow}}_R(u,v)$ is a maximum flow, clearly $|\ensuremath{\mathsf{Flow}}_{R}(u,v)| \geq |\mathcal{P}_\mathrm{traverse}(u,v)|$. Now, apply Lemma~\ref{lemma:winding} to $\mathcal{P}(u,v)$, $\mathcal{C}(u,v)$ and $\ensuremath{\mathsf{Flow}}_R(u,v)$ in ${\sf Ring}(S_u,S_v)$. We thus obtain a linkage $\mathcal{P}'_\mathrm{traverse}(u,v)$ disjoint from $\mathcal{P}_\mathrm{visitor}(u,v)$ that is aligned with $\mathcal{P}_\mathrm{traverse}(u,v)$. Hence, $\mathcal{P}'(u,v) = \mathcal{P}_\mathrm{visitor}(u,v) \cup \mathcal{P}'_\mathrm{traverse}(u,v)$ is a linkage in $G$ aligned with $\mathcal{P}(u,v)$. Assume w.l.o.g. that $\mathcal{P}'(u,v)$ uses the $3$-rd copy of each edge in $H$. Let us now consider the winding number of $\mathcal{P}'_\mathrm{traverse}(u,v)$ with respect to $\ensuremath{\mathsf{path}}_R(u',v')$. By Lemma~\ref{lemma:winding}(c), $|\overline{\sf WindNum}(\mathcal{P}'_\mathrm{traverse}(u,v)) - \overline{\sf WindNum}(\ensuremath{\mathsf{Flow}}_{R}(u,v))| \leq 60 \ell + 6$. Now, note that for any path $P' \in \mathcal{P}'_\mathrm{traverse}(u,v)$, $|\overline{\sf WindNum}(P') - \overline{\sf WindNum}(\mathcal{P}'_\mathrm{traverse}(u,v))| \leq 1$ (recall that the winding number of any path in a traversing linkage differs from that of the linkage by at most $1$). Similarly, for any path $Q \in \ensuremath{\mathsf{Flow}}_{R}(u,v)$, $|\overline{\sf WindNum}(\ensuremath{\mathsf{Flow}}_{R}(u,v)) - \overline{\sf WindNum}(Q)| \leq 1$. Therefore, it follows that for any two paths $P' \in \mathcal{P}'_\mathrm{traverse}(u,v)$ and $Q \in \ensuremath{\mathsf{Flow}}_{R}(u,v)$, $|\overline{\sf WindNum}(P') - \overline{\sf WindNum}(Q)| \leq 60\ell + 8$. Recall that we chose $\eta(u,v)$ as the reference path of ${\sf Ring}(S_u,S_v)$ in the above expression. Hence, we may rewrite it as $|\overline{\sf WindNum}(P', \eta(u,v)) - \overline{\sf WindNum}(Q, \eta(u,v))| \leq 60 \ell + 8$. Note that $P', Q$ and $\eta(u,v)$ are three edge-disjoint paths traversing ${\sf Ring}(S_u,S_v)$. Hence $\overline{\sf WindNum}$ is well defined in ${\sf Ring}(S_u,S_v)$ for any pair of them. By Proposition~\ref{prop:wn-prop}, $|\overline{\sf WindNum}(P',Q)| \leq |\overline{\sf WindNum}(P', \eta(u,v)) - \overline{\sf WindNum}(Q, \eta(u,v))| + 1$. We have so far established that for any $P' \in \mathcal{P}'_\mathrm{traverse}(u,v)$ and $Q \in \ensuremath{\mathsf{Flow}}_{R}(u,v)$, $|\overline{\sf WindNum}(P',Q)| \leq 60\ell +9$. Now, consider $\ensuremath{\mathsf{path}}_{R}(u',v')$ and recall that for any $Q \in \ensuremath{\mathsf{Flow}}_{R}(u,v)$, $|\overline{\sf WindNum}(\ensuremath{\mathsf{path}}_R(u',v'), \allowbreak Q)| \leq 1$ by Observation~\ref{obs:pre-rerouting}.
Furthermore, $\ensuremath{\mathsf{path}}_R(u',v')$ uses the $0$-th copies of edges in $H$. Hence, for each $P' \in \mathcal{P}'_\mathrm{traverse}(u,v)$, $\overline{\sf WindNum}(P',\ensuremath{\mathsf{path}}_R(u',v'))$ is well defined in ${\sf Ring}(S_u,S_v)$, and by Proposition~\ref{prop:wn-prop}, $|\overline{\sf WindNum}(P', \allowbreak \ensuremath{\mathsf{path}}_R(u',v'))| \leq |\overline{\sf WindNum}(P',Q) - \overline{\sf WindNum}(\ensuremath{\mathsf{path}}_R(u',v'),Q)| + 1 \leq 60\ell + 11$. Finally, consider the paths in $\mathcal{P}'_\mathrm{visitor} = \mathcal{P}_\mathrm{visitor}$. For each $P \in \mathcal{P}_\mathrm{visitor}$, the absolute value of $\overline{\sf WindNum}(P, \ensuremath{\mathsf{path}}_R(u',v'))$ is bounded by $1$ (by Observation~\ref{obs:vis_wn}). Hence, we conclude that $|{\sf WindNum}(\mathcal{P}', {\sf Ring}(S_u,S_v))| \leq 60 \alpha_{\rm sep}(k) + 11 < 300 \cdot 2^{ck}$. \end{proof} \newcommand{\ensuremath{\mathsf{SegGro}}}{\ensuremath{\mathsf{SegGro}}} \newcommand{\ensuremath{\mathsf{Potential}}}{\ensuremath{\mathsf{Potential}}} \section{Pushing a Solution onto the Backbone Steiner Tree}\label{sec:pushing} In this section, we push a linkage that is a solution of small winding number onto the backbone Steiner tree to construct a ``pushed weak linkage'' with several properties that will make its reconstruction (in Section \ref{sec:reconstruction}) possible. Let us recall that we have an instance $(G,S,T,g,k)$ and $H$ is the radial completion of $G$ enriched with $4n+1$ parallel copies of each edge. Then we construct a backbone Steiner tree $R$ (Section~\ref{sec:steiner}), which uses the $0$-th copy of each edge. Formally, a pushed weak linkage is defined as follows. \begin{definition}[{\bf Pushed Weak Linkage}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. A weak linkage $\cal W$ in $H$ is {\em pushed (onto $R$)} if $E({\cal W})\cap E(R)=\emptyset$ and every edge in $E({\cal W})$ is parallel to some edge of $R$. \end{definition} In what follows, we first define the properties we would like the pushed weak linkage to satisfy. Additionally, we partition any weak linkage into segments that give rise to a potential function whose maintenance will be crucial while pushing a solution of small winding number onto $R$. Afterwards, we show how to push the solution, simplify it, and analyze the result. We remark that the simplification (after having pushed the solution) will be done in two stages. \subsection{Desired Properties and Potential of Weak Linkages} The main property we would like a pushed weak linkage to satisfy is to use only ``few'' parallel copies of any edge. This quantity is captured by the following definition of multiplicity, which we will eventually upper bound by a small function of $k$. \begin{definition}[{\bf Multiplicity of Weak Linkage}] Let $H$ be a plane graph. Let $\cal W$ be a weak linkage in $H$. Then, the {\em multiplicity} of $\cal W$ is the maximum, across all edges $e$ in $H$, of the number of edges parallel to $e$ that belong to $E({\cal W})$. \end{definition} Towards bounding the multiplicity of the pushed weak linkage we construct (and also as an important ingredient on its own in the reconstruction in Section \ref{sec:reconstruction}), we need to repeatedly eliminate U-turns in the weak linkage we deal with. Here, U-turns are defined as follows.
\begin{definition}[{\bf U-Turn in Weak Linkage}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$. Then, a {\em U-turn} in $\cal W$ is a pair of parallel edges $\{e,e'\}$ visited consecutively by some walk $W\in{\cal W}$ such that the strict interior of the cycle formed by $e$ and $e'$ does not contain the first or last edge of any walk in $\cal W$. We say that $\cal W$ is {\em U-turn-free} if it does not have any U-turn. \end{definition} Still, having a pushed weak linkage of low multiplicity and no U-turns does not suffice for faithful reconstruction due to ambiguity in which edge copies are being used. This ambiguity will be dealt with by the following definition. \begin{definition}[{\bf Canonical Weak Linkage}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$ pushed onto $R$. Then, $\cal W$ is {\em canonical} if $(i)$ for every edge $e_i\in E({\cal W})$, $i\geq 1$, and $(ii)$ if $e_i\in E({\cal W})$, then all the parallel edges $e_j$, for $1 \leq j < i$, are also~in~$E({\cal W})$. \end{definition} For brevity, we say that a weak linkage $\cal W$ in $H$ is {\em simplified} if it is sensible, pushed onto $R$, canonical, U-turn-free and has multiplicity upper bounded by $\alpha_{\rm mul}(k):=2\alpha_{\rm potential}(k)$, where $\alpha_{\rm potential}(k)=2^{\mathcal{O}(k)}$ will be defined precisely in Lemma \ref{lem:solPotential}. For the process that simplifies a pushed weak linkage, we will maintain a property that requires multiplicity at most $2n$ as well as a relaxation of canonicity. This property is defined as follows. \begin{definition}[{\bf Extremal Weak Linkage}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$ that is pushed onto $R$. Then, $\cal W$ is {\em extremal} if its multiplicity is at most $2n$ and for any two parallel edges $e_i,e_j\in E({\cal W})$ where $i\geq 1$ and $j\leq -1$, we have $(i-1)+|j+1|\geq 2n$. \end{definition} Additionally, we will maintain the following property. \begin{definition}[{\bf Outer-Terminal Weak Linkage}] Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$. Then, $\cal W$ is outer-terminal if it uses exactly one edge incident to $t^\star$. \end{definition} \paragraph{Segments, Segment Groups and Potential.} To analyze the ``complexity'' of a weak linkage, we partition it into segments and segment groups, and then associate a potential function with it based on this partition. Intuitively, a segment of a walk is a maximal subwalk that does not cross $R$ (see Fig.~\ref{fig:segment}). Formally, it is defined as follows. \begin{figure} \caption{Segments, segment groups and their labeling.} \label{fig:segment} \end{figure} \begin{definition}[{\bf Segment}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A {\em crossing} of $W$ with $R$ is a crossing $(v,e,\widehat{e},e',\widehat{e}')$ of $W$ and some path in $R$.\footnote{The path might not be a maximal degree-2 path, thus $(v,e,\widehat{e},e',\widehat{e}')$ may concern a vertex $v\in V_{\geq 3}(R)$.} Then, a {\em segment} of $W$ is a maximal subwalk of $W$ that has no crossings with $R$.
Let $\ensuremath{\mathsf{Seg}}(W)$ denote the set\footnote{Because we deal with walks that do not repeat edges, $\ensuremath{\mathsf{Seg}}(W)$ is necessarily a set rather than a multiset.} of segments of $W$. \end{definition} Recall that $R$ only contains $0$-copies of edges; hence, we can ensure that we deal with walks that are edge-disjoint from $R$ by avoiding the usage of $0$-copies. Towards the definition of potential for a weak linkage, we group segments together as follows (see Fig.~\ref{fig:segment}). \begin{definition}[{\bf Segment Group}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A {\em segment group} of $W$ is a maximal subwalk $W'$ of $W$ such that either {\em (i)} $\ensuremath{\mathsf{Seg}}(W')\subseteq \ensuremath{\mathsf{Seg}}(W)$ and all of the endpoints of all of the segments in $\ensuremath{\mathsf{Seg}}(W')$ are internal vertices of the same maximal degree-2 path of $R$, or {\em (ii)} $W'\in\ensuremath{\mathsf{Seg}}(W)$ and the two endpoints of $W'$ are not internal vertices of the same maximal degree-2 path in $R$.\footnote{That is, the two endpoints of $W'$ are internal vertices in different maximal degree-2 paths in $R$ or at least one endpoint of $W'$ is a vertex in $V_{=1}(R)\cup V_{\geq 3}(R)$.} The set of segment groups of $W$ is denoted by $\ensuremath{\mathsf{SegGro}}(W)$. \end{definition} Observe that the set of segments, as well as the set of segment groups, defines a partition of a walk. We define the ``potential'' of a segment group based on its winding number in the ring that corresponds to its path (in case it is a long path where a ring is defined). To this end, recall the labeling function in Definition \ref{def:spiralLabel}. Note that the labeling is defined for any two walks irrespective of the existence of a ring. \begin{definition}[{\bf Potential of Segment Group}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$ and whose endpoints are in $V_{=1}(R)$. Let $W'\in\ensuremath{\mathsf{SegGro}}(W)$. If $|\ensuremath{\mathsf{Seg}}(W')|=1$, then the {\em potential} of $W'$, denoted by $\ensuremath{\mathsf{Potential}}(W')$, is defined to be $1$. Otherwise, it is defined as follows. \[\ensuremath{\mathsf{Potential}}(W') = 1+|\sum_{(e,e')\in E(W^\star) \times E(W^\star)}{\sf label}_P^{W'}(e,e')|,\] where $W^\star$ is the walk obtained from $W'$ by adding two edges to $W'$---the edge consecutive to the first edge of $W'$ in $W$ and the edge consecutive to the last edge of $W'$ in $W$, and $P$ is the maximal degree-2 path of $R$ such that all of the endpoints of all of the segments in $\ensuremath{\mathsf{Seg}}(W')$ are its internal vertices. \end{definition} The potential of a segment group is well defined as we use the function ${\sf label}$ only for edges incident to internal vertices of the maximal degree-2 paths in $R$. For an example of the potential of a segment group, see Fig.~\ref{fig:segment}. Now, we generalize the notion of potential from segment groups to weak linkages as follows.
Then, the {\em potential} of $\cal W$ is \[\ensuremath{\mathsf{Potential}}({\cal W}) = \sum_{W'\in\ensuremath{\mathsf{SegGro}}({\cal W})}\ensuremath{\mathsf{Potential}}(W'),\] where $\ensuremath{\mathsf{SegGro}}({\cal W})=\bigcup_{W\in{\cal W}}\ensuremath{\mathsf{SegGro}}(W)$. \end{definition} To upper bound the potential of a solution of a small winding number, we first upper bound the number of segment groups. \begin{lemma}\label{lem:numSegGro} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a solution $\cal P$ to $(G,S,T,g,k)$ such that $|\ensuremath{\mathsf{SegGro}}({\cal P})|\leq \alpha_{\rm segGro}(k):=10^5 k \cdot 2^{ck} $. \end{lemma} \begin{proof} First, notice that for every path $P\in{\cal P}$, every segment group of $P$ whose endpoints are both internal vertices of some maximal degree-2 path of $R$ has the following property: it is neither a prefix nor a suffix of $P$ (when $P$ is oriented arbitrarily, say, from its endpoint in $S$ to its endpoint in $T$), and the segment groups that appear immediately before and after it necessarily satisfy that each of them does not have both of its endpoints being internal vertices of some maximal degree-2 path of $R$. Let $s$ denote the number of segment groups of $\cal P$ that do not have both of their endpoints being internal vertices of some maximal degree-2 path of $R$. Then, we have that $|\ensuremath{\mathsf{SegGro}}({\cal P})|\leq 2s-1$, and therefore to complete the proof, it suffices to show that $s\leq \alpha_{\rm segGro}(k)/2$. Let $\{u_1,v_1\}, \{u_2,v_2\}, \ldots, \{u_t,v_t\}$ denote the pairs of near vertices in $V_{=1}(R) \cup V_{\geq 3}(R)$ such that for each $i \in \{1,\ldots t\}$, $\ensuremath{\mathsf{path}}_R(u_i,v_i)$ is a long maximal degree-2 path in $R$. Let ${\sf Ring}(S_{u_1}, S_{v_1}),$ $ {\sf Ring}(S_{u_2}, S_{v_2}), \ldots, {\sf Ring}(S_{u_t},S_{v_t})$ be the corresponding rings. By Lemma \ref{lemma:R-outside-rings}, the number of vertices of $R$ lying outside these rings, $|V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i})|$, is upper bounded by $\alpha_{\rm nonRing}(k) = 10^4 k \cdot 2^{ck}$. We further classify each segment group $S$ of $\cal P$ that does not have both of its endpoints being internal vertices of some maximal degree-2 path of $R$ as follows. (We remark that, by the definition of a segment group, $S$ must consist of a single segment.) \begin{itemize}\setlength\itemsep{0em} \item $S$ has at least one endpoint in $V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i})$. Denote the number of such segment groups by $s_1$. \item $S$ has one endpoint in $V(S_{u_i}, S_{v_i})$ and another endpoint in $V(S_{u_j}, S_{v_j})$ for some $i,j\in\{1,\ldots,t\}$ such that $i\neq j$. Denote the number of such segment groups by $s_2$. \end{itemize} Then, $s_1+s_2=s$. Now, notice that the paths in $\cal P$ are pairwise vertex-disjoint, and the collection of segment groups of each path $P\in{\cal P}$ forms a partition of $P$. Thus, because $|V(R) \setminus \bigcup_{i=1}^t V(S_{u_i}, S_{v_i})|\leq \alpha_{\rm nonRing}(k)$ and each vertex in $V(R)$ is shared as an endpoint by at most two segment groups, we immediately derive that $s_1\leq \alpha_{\rm nonRing}(k)$. To bound $s_2$, note that each segment group of the second type traverses at least one vertex in $S_{u_i}\cup S_{v_i}$ for some $i\in\{1,\ldots,t\}$ (due to Lemma \ref{lem:separatorsUnchanged}). 
By Lemma \ref{lem:sepSmall}, for every $i\in\{1,\ldots,t\}$, $|S_{u_i}|,|S_{v_i}|\leq \alpha_{\mathrm{sep}}(k)=\frac{7}{2}\cdot 2^{ck}+2$. Moreover, by Observation \ref{obs:leaIntSteiner}, $t<4k$. Thus, we have that $s_2\leq 4\cdot 4k\cdot \alpha_{\mathrm{sep}}(k)$, where the multiplication by $4$ is done because two segment groups can share an endpoint and each maximal degree-2 path is associated with two separators. From this, we conclude that $s\leq 10^4 k \cdot 2^{ck} + 16k(\frac{7}{2}\cdot 2^{ck}+2) = 10^4 k\cdot 2^{ck} + 56k\cdot 2^{ck} + 32k \leq 5\cdot 10^4 k\cdot 2^{ck} = \alpha_{\rm segGro}(k)/2$. \end{proof} Now, based on Observation \ref{obs:spiralLabel} and Lemmas \ref{lemma:nice-solution} and \ref{lem:numSegGro}, we derive the existence of a solution with low potential (if there exists a solution). \begin{lemma}\label{lem:solPotential} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a solution $\cal P$ to $(G,S,T,g,k)$ such that $\ensuremath{\mathsf{Potential}}({\cal P})\leq \alpha_{\rm potential}(k):=(10^4\cdot 4^{ck}+1)\cdot \alpha_{\rm segGro}(k)$. \end{lemma} \begin{proof} Let $\cal Q$ be a solution with the property in Lemma \ref{lemma:nice-solution}. Let ${\cal S}$ denote the segment groups in $\ensuremath{\mathsf{SegGro}}({\cal Q})$ both of whose endpoints belong to the same maximal degree-2 path of $R$. For each segment group $S\in{\cal S}$, denote $\mathsf{labPot}(S)=|\sum_{(e,e')\in E(S) \times E(S)}{\sf label}_{P_S}^{S}(e,e')|$ where $P_S$ is the maximal degree-2 path of $R$ such that all of the endpoints of all of the segments in $\ensuremath{\mathsf{Seg}}(S)$ are its internal vertices (as in the definition of potential). Furthermore, denote $M=\max_{S\in{\cal S}}\mathsf{labPot}(S)$. Now, by the definition of potential, \[\ensuremath{\mathsf{Potential}}({\cal Q}) = |\ensuremath{\mathsf{SegGro}}({\cal Q})| + \sum_{S\in {\cal S}}\mathsf{labPot}(S)\leq (M+1)|\ensuremath{\mathsf{SegGro}}({\cal Q})|.\] By Lemma \ref{lem:numSegGro}, $|\ensuremath{\mathsf{SegGro}}({\cal Q})|\leq \alpha_{\rm segGro}(k)$. Thus, to complete the proof, it suffices to show that $M\leq 10^4\cdot 4^{ck}$. To this end, consider some segment group $S\in{\cal S}$. Notice that $\mathsf{labPot}(S)$ is upper bounded by the number of crossings of $S$ with $P_S$. Thus, if $P_S$ is short, it follows that $\mathsf{labPot}(S)<\alpha_{\mathrm{long}}(k) = 10^4\cdot 2^{ck}$. Now, suppose that $P_S$ is long, and let ${\sf Ring}(S_u,S_v)$ be the ring that corresponds to $P_S$. By the choice of $\cal Q$, $|{\sf WindNum}({\cal Q}, {\sf Ring}(S_{u},S_{v}))| \leq \alpha_{\rm winding}(k) < 300 \cdot 2^{ck}$. Let $\widehat{\cal A}$ be the collection of maximal subpaths of $S$ that are fully contained within ${\sf Ring}(S_{u},S_{v})$, and let $\widehat{P}_S$ be the maximal subpath of $P_S$ that is fully contained within ${\sf Ring}(S_{u},S_{v})$. Then, by Observation \ref{obs:spiralLabel}, \[\mathsf{labPot}(S) \leq |V(P_S)\setminus V(\widehat{P}_S)| + \sum_{\widehat{A}\in\widehat{\cal A}}|\overline{\sf WindNum}(\widehat{A}, \widehat{P}_S)|.\] By the definition of ${\sf WindNum}$, we have that $|\overline{\sf WindNum}(\widehat{A}, \widehat{P}_S)|\leq |{\sf WindNum}({\cal Q}, {\sf Ring}(S_{u},S_{v}))|$ for each $\widehat{A}\in\widehat{\cal A}$. Moreover, by Lemma \ref{lem:sepSmall}, $|\widehat{\cal A}|\leq |S_u|+|S_v| \leq 2\alpha_{\mathrm{sep}}(k)=7\cdot 2^{ck}+4$. Additionally, by Lemmas \ref{lem:threeParts} and \ref{lem:R*Steiner}, we have that $|V(P_S)\setminus V(\widehat{P}_S)|\leq 2\alpha_{\mathrm{pat}}(k)=200\cdot 2^{ck}$.
From this, \[\mathsf{labPot}(S) \leq 200\cdot 2^{ck} + (7\cdot 2^{ck}+4)\cdot 300 \cdot 2^{ck} = 2100\cdot 4^{ck} + 1400\cdot 2^{ck} \leq 10^4\cdot 4^{ck}.\] Thus, because the choice of $S$ was arbitrary, we conclude that $M\leq 10^4\cdot 4^{ck}$. \end{proof} \subsection{Pushing a Solution onto $R$} Let us now describe the process of pushing a solution onto $R$. To simplify this process, we define two ``non-atomic'' operations that encompass sequences of atomic operations in discrete homotopy. We remind that we only deal with walks that do not repeat~edges. \begin{definition}[{\bf Non-Atomic Operations in Discrete Homotopy}]\label{def:nonAtomicDiscreteHomotopy} Let $G$ be a triangulated plane graph with a weak linkage ${\cal W}$, and $C$ be a cycle\footnote{A pair of parallel edges is considered to be a cycle.} in $G$. Let $W\in {\cal W}$. \begin{itemize} \item {\bf Cycle Move.} Applicable to $(W,C)$ if there exists a subpath $Q$ of $C$ such that {\em (i)} $Q$ is a subpath of $W$, {\em (ii)} $1\leq |E(Q)|\leq |E(C)|-1$, {\em (iii)} no edge in $E(C)\setminus E(Q)$ belongs to any walk in $\cal W$, and {\em (iv)} no edge drawn in the strict interior of $C$ belongs to any walk in $\cal W$. Then, the cycle move operation replaces $Q$ in $W$ by the unique subpath of $C$ between the endpoints of $Q$ that is edge-disjoint from $Q$. \item {\bf Cycle Pull.} Applicable to $(W,C)$ if {\em (i)} $C$ is a subwalk $Q$ of $W$, and {\em (ii)} no edge drawn in the strict interior of $C$ belongs to any walk in $\cal W$. Then, the cycle pull operation replaces $Q$ in $W$ by a single occurrence of the first vertex in $Q$. \end{itemize} \end{definition} We now prove that the operations above are compositions of atomic operations. \begin{lemma}\label{lem:nonAtomicDiscreteHomotopy} Let $G$ be a triangulated plane graph with a weak linkage ${\cal W}$, and $C$ be a cycle in $G$. Let $W\in {\cal W}$ with a non-atomic operation applicable to $(W,C)$. Then, the result of the application is a weak linkage that is discretely homotopic to $\cal W$. \end{lemma} \begin{proof} We prove the claim by induction on the number of faces of $G$ in the interior of $C$. In the base case, where $C$ encloses only one face, the cycle move and cycle pull operations are precisely the face move and face pull operations, respectively, and therefore the claim holds. Now, suppose that $C$ encloses $i\geq 2$ faces and that the claim is correct for cycles that enclose at most $i-1$ faces. We consider several cases as follows. First, suppose that $C$ has a path $P$ fully drawn in its interior whose endpoints are two (distinct) vertices $u,v\in V(C)$, and whose internal vertices and all of its edges do not belong to $C$. (We remark that $P$ might consist of a single edge, and that edge might be parallel to some edge of $C$.) Now, notice that $P$ partitions the interior of $C$ into the interior of two cycles $C_1$ and $C_2$ that share only $P$ in common as follows: one cycle $C_1$ consists of one subpath of $C$ between $u$ and $v$ and the path $P$, and the other cycle $C_2$ consists of the second subpath of $C$ between $u$ and $v$ and the path $P$. Notice that $C_1$ encloses fewer faces than $C$, and so does $C_2$. At least one of these two cycles, say, $C_1$, contains at least one edge of $Q$. Then, the cycle move operation is applicable to $(W,C_1)$.
Indeed, let $\widehat{Q}$ be the subpath of $Q$ that is a subpath of $C_1$, and notice that $E(P)\subseteq E(C_1)\subseteq E(C)\cup E(P)$ and $E(P)\cap E({\cal W})=\emptyset$ (because the cycle move/pull operation is applicable to $(W,C)$). Therefore, $\widehat{Q}$ is a subpath of $W$, $1\leq |E(\widehat{Q})|\leq |E(C_1)|-1$, and no edge in $E(C_1)\setminus E(\widehat{Q})$ belongs to any walk in $\cal W$. Moreover, because $C_1$ belongs to the interior (including the boundary) of $C$, no edge drawn in the strict interior of $C_1$ belongs to any walk in $\cal W$. Now, notice that after the application of the cycle move operation for $(W,C_1)$, $C_2$ also has at least one edge used by the walk $W'$ into which $W$ was modified---in particular, $E(P)\subseteq E(W')$. Moreover, consider the subpath (or subwalk that is a cycle) $Q'$ of $W'$ that results from the replacement of $\widehat{Q}$ in $Q$ by the subpath of $C_1$ between the endpoints of $\widehat{Q}$ that does not belong to $W$. Then, $Q'$ traverses some subpath (possibly empty) of $C_1$ or $C_2$, then traverses $P$, and next traverses some other subpath of $C_1$ or $C_2$. So, the restriction of $Q'$ to $C_2$ is a non-empty path or cycle $Q^\star$ that is a subwalk of $W'$. Furthermore, because $C_2$ is drawn in the interior of $C$ and the cycle move/pull operation is applicable to $(W,C)$, we have that no edge of $E(C_2)\setminus E(Q^\star)$ or the strict interior of $C_2$ belongs to $E({\cal W})$. Thus, the cycle move/pull operation is applicable to $(W',C_2)$. Now, the result of the application of this operation is precisely the result of the application of the original cycle move or pull operation applicable to $(W,C)$. To see this, observe that the edges of $E(C)\setminus E(W)$ that occur in $C_1$ along with $E(P)$ have replaced the edges of $E(C)\cap E(W)$ that occur in $C_1$ in the first operation, and the edges of $E(C)\setminus E(W)$ that occur in $C_2$ have replaced the edges of $E(C)\cap E(W)$ that occur in $C_1$ along with $E(P)$ in the second operation. Thus, by the inductive hypothesis with respect to $(W,C_1)$ and $(W',C_2)$, and because discrete homotopy is transitive, the claim follows. Thus, it remains to prove that $C$ has a path $P$ fully drawn in its interior whose endpoints are two (distinct) vertices $u,v\in V(C)$, and whose internal vertices and all of its edges do not belong to $C$. In case $C$ has a chord (that is, an edge in $G$ between two vertices of $C$ that does not belong to $C$), then the chord is such a path $P$. Therefore, we now suppose that this is not the case. Then, $C$ does not contain in its interior an edge parallel to an edge of $C$. In turn, because $G$ is triangulated, when we consider some face $f$ in the interior of $C$ that contains an edge $e$ of $C$, this face must be a triangle. Moreover, the vertex of $f$ that is not incident to $e$ cannot belong to $C$, since otherwise we obtain a chord in $C$. Thus, the subpath (that consists of two edges) of $f$ between the endpoints of $e$ that does not contain $e$ is a path $P$ with the above-mentioned properties. \end{proof} In the process of pushing a solution onto $R$, we push parts of the solution one-by-one. We refer to these parts as sequences, defined as follows (see Fig.~\ref{fig:seq}). \begin{figure} \caption{A Sequence, its projecting cycle and a shrinking cycle.} \label{fig:seq} \end{figure} \begin{definition}[{\bf Sequence}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $W$ be a walk.
Then, a {\em sequence} of $W$ is a maximal subwalk of $W$ whose internal vertices (if any exist) do not belong to $R$ and which contains at least one edge that is not parallel to an edge of $R$. The set of sequences of $W$ is denoted by $\ensuremath{\mathsf{Seq}}(W)$. For a weak linkage $\cal W$, the set of sequences of $\cal W$ is defined as $\ensuremath{\mathsf{Seq}}({\cal W})=\bigcup_{W'\in{\cal W}}\ensuremath{\mathsf{Seq}}(W')$. \end{definition} Notice that the set of sequences of a walk does not necessarily form a partition of the walk because the walk can traverse edges parallel to the edges of $R$ and these edges do not belong to any sequence. Moreover, for sensible weak linkages, the endpoints of every sequence belong to $R$. To deal only with sequences that are paths or cycles, we need the following definition. \begin{definition}[{\bf Well-Behaved Weak Linkage}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. A weak linkage $\cal W$ is {\em well-behaved} if every sequence in $\ensuremath{\mathsf{Seq}}({\cal W})$ is a path or a cycle. \end{definition} When we push sequences one-by-one, we ensure that the current sequence to be pushed can be handled by the cycle move operation in Definition \ref{def:nonAtomicDiscreteHomotopy}. To this end, we define the notion of an innermost sequence, based on another notion called a projecting cycle (see Fig.~\ref{fig:seq}). We remark that this cycle will not necessarily be the one on which we apply a cycle move operation, since this cycle might contain in its interior edges of some walks of the weak linkage. \begin{definition}[{\bf Projecting Cycle}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage, and $S\in\ensuremath{\mathsf{Seq}}({\cal W})$. The {\em projecting cycle} of $S$ is the cycle $C$ formed by $S$ and the subpath $P$ of $R$ between the endpoints of $S$. Additionally, $\mathsf{Volume}(S)$ denotes the number of faces enclosed by the projecting cycle of~$S$. \end{definition} Now, we define the notion of an innermost sequence. \begin{definition}[{\bf Innermost Sequence}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage, and $S\in\ensuremath{\mathsf{Seq}}({\cal W})$. Then, $S$ is {\em innermost} if every edge in $E({\cal W})$ that is drawn in the interior of its projecting cycle is parallel to some edge of $R$. \end{definition} We now argue that, unless the set of sequences is empty, there must exist an innermost one. \begin{lemma}\label{lem:innermostSeqExists} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage such that $\ensuremath{\mathsf{Seq}}({\cal W})\neq\emptyset$. Then, there exists an innermost sequence in $\ensuremath{\mathsf{Seq}}({\cal W})$. \end{lemma} \begin{proof} Let $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ be a sequence that minimizes $\mathsf{Volume}(S)$. We claim that $S$ is innermost. Suppose, by way of contradiction, that this claim is false. Then, there exists an edge $e\in E({\cal W})$ that is drawn in the interior of the projecting cycle of $S$ and is not parallel to any edge of $R$. Thus, $e$ belongs to some sequence $S'\in\ensuremath{\mathsf{Seq}}({\cal W})$. Because $\cal W$ is well-behaved, $S$ and $S'$ are vertex-disjoint.
This implies that the projecting cycle of $S'$ is contained in the interior of the projecting cycle of $S$. Because $H$ is triangulated, this means that $\mathsf{Volume}(S')<\mathsf{Volume}(S)$, which is a contradiction to the choice of $S$. \end{proof} When we push the sequence onto $R$, we need to ensure that we have enough copies of each edge of $R$ to do so. To this end, we need the following definition. \begin{definition}[{\bf Shallow Weak Linkage}]\label{def:shallow} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage. Then, $\cal W$ is {\em shallow} if for every edge $e_0\in E(R)$, the following condition holds. Let $\ell$ (resp.~$h$) be the number of sequences $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ whose projecting cycle encloses $e_1$ (resp.~$e_{-1}$). Then, $e_i$ is not used by $\cal W$ for every $i\in\{-n,-n+1,\ldots,-n+\ell-1\}\cup \{0\}\cup\{n-h+1,n-h+2,\ldots,n\}$. \end{definition} To ensure that we make only cycle moves/pulls as in Definition \ref{def:nonAtomicDiscreteHomotopy}, we do not necessarily push the sequence at once, but gradually shrink the area enclosed by its projecting cycle.\footnote{Instead, we could have also always pushed a sequence at once by defining moves and pulls for closed walks, which we find somewhat more complicated to analyze formally.} \begin{definition}[{\bf Shrinking Cycle}]\label{def:shrinkCyc} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage, and $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ with an endpoint $v\in V(R)$. Then, a cycle $C$ in $H$ is a {\em shrinking cycle} for $(S,v)$ if it has no edge of $R$ in its interior and it can be partitioned into three paths where the first has at least one edge and the last has at most one edge: {\em (i)} a subpath $P_1$ of $S$ with $v$ as an endpoint; {\em (ii)} a subpath $P_2$ from the other endpoint $u$ of $P_1$ to a vertex on $R$ that consists only of edges drawn in the strict interior of the projecting cycle of $S$ and contains no vertex of $S$ apart from $u$; {\em (iii)} a subpath $P_3$ that has $v$ as an endpoint and whose edge (if one exists) is either not parallel to any edge of $R$ or it is the $i$-th copy of an edge parallel to some edge of $R$ for some $i\in\{-n+\ell-1,n-h+1\}$, where $\ell$ and $h$ are as in Definition \ref{def:shallow}. \end{definition} With respect to shrinking cycles, we prove two claims. First, we assert their existence. \begin{figure} \caption{An illustration of Lemma~\ref{lem:shrinkCycExists}} \label{fig:shrinkingCycle} \end{figure} \begin{lemma}\label{lem:shrinkCycExists} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible well-behaved weak linkage, and $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ with an endpoint $v\in V(R)$. Then, there exists a shrinking cycle for $(S,v)$. \end{lemma} \begin{proof} Let $e$ be an edge in $S$ incident to $v$ (if there are two such edges, when $S$ is a cycle, pick one of them arbitrarily), and denote the other endpoint of $e$ by $u$. Because $H$ is triangulated, $e$ belongs to the boundary $B$ of a face $f$ of $H$ in the interior of the projecting cycle of $S$ such that $B$ is a cycle that consists of only two or three edges.
If $B$ does not contain any vertex of $V(R)\cup V(S)$ besides $u$ and $v$, then it is clearly a shrinking cycle (see Fig.~\ref{fig:shrinkingCycle}). Thus, we now suppose that $B$ is a cycle on three edges whose third vertex, $w$, belongs to $V(R)\cup V(S)$. If $w\in V(S)$, then the cycle that consists of the subpath of $S$ from $v$ to $w$ and the edge in $E(B)$ between $v$ and $w$ is also clearly a shrinking cycle (see Fig.~\ref{fig:shrinkingCycle}). Thus, we now also suppose that $w\in V(R)$. We further distinguish between two cases. First, suppose that $w$ is not adjacent to $v$ on $R$. In this case, $B$ does not enclose any edge of $R$, nor any edge parallel to an edge of $R$. Moreover, $B$ can be partitioned into $P_1,P_2$ and $P_3$ that are each a single edge, where $P_1$ consists of the edge in $B$ between $v$ and $u$, $P_2$ consists of the edge in $B$ between $u$ and $w$, and $P_3$ consists of the edge in $B$ between $w$ and $v$, thereby complying with Definition \ref{def:shrinkCyc}. Thus, $B$ is a shrinking cycle for $(S,v)$. Now, suppose that $w$ is adjacent to $v$ on $R$. Then, define $P_1,P_2$ and $P_3$ similarly to before except that to $P_3$, we do not take the edge of $B$ between $v$ and $w$ but its parallel $i$-th copy where $i\in\{-n+\ell-1,n-h+1\}$ such that $\ell$ and $h$ are as in Definition \ref{def:shallow}. The choice of whether $i=-n+\ell-1$ or $i=n-h+1$ is made so that the cycle $B'$ consisting of $P_1,P_2$ and $P_3$ does not enclose any edge of $R$. (Such a choice necessarily exists; see~Fig.~\ref{fig:shrinkingCycle}.) \end{proof} Now, we prove that making a cycle move/pull operation using a shrinking cycle is valid and maintains some properties of weak linkages required for our analysis. \begin{lemma}\label{lem:shrinkCycMaintain} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible, well-behaved, shallow and outer-terminal weak linkage, and $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ be innermost with an endpoint $v\in V(R)\setminus\{t^\star\}$. Let $C$ be a shrinking cycle for $(S,v)$ that encloses as many faces as possible. Then, the cycle move/pull operation is applicable to $(W,C)$ where $W\in{\cal W}$ is the walk having $S$ as a sequence. Furthermore, the resulting weak linkage ${\cal W}'$ is sensible, well-behaved, shallow and outer-terminal, having the same potential as $\cal W$, and $\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W}')}\mathsf{Volume}(\widehat{S})<\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W})}\mathsf{Volume}(\widehat{S})$. \end{lemma} \begin{proof} We first argue that the cycle move/pull operation is applicable to $(W,C)$. Let $P_1,P_2$ and $P_3$ be the partition of $C$ in Definition \ref{def:shrinkCyc}. Note that because $S$ is innermost, its projecting cycle does not contain any edge in $E({\cal W})$ that is not parallel to some edge of $R$, and hence neither does $C$, as it is drawn in the interior (including the boundary) of the projecting cycle. Furthermore, the only edges parallel to an edge of $R$ that $C$ can enclose are those parallel to the edge $e_i$ of $P_3$ whose subscripts have absolute value larger than $|i|$. However, none of these edges belong to $\cal W$ because $i\in\{-n+\ell-1,n-h+1\}$ where $\ell$ and $h$ are as in Definition \ref{def:shallow}, and $\cal W$ is shallow. Lastly, note that $P_1$ is either $S$ (which might be a cycle) or a subpath of $S$, and hence it is a subwalk of $W$.
Thus, the cycle move or pull (depending on whether $P_1$ is a cycle) is applicable to $(W,C)$. Furthermore, the new walk $W'$ that results from the application is the modification of $W$ that replaces $P_1$ by the path consisting of $P_2$ and $P_3$. Because $\cal W$ is sensible and the endpoints of no walk in $\cal W$ are changed in ${\cal W}'$, we have that ${\cal W}'$ is sensible as well. Moreover, the vertices of $P_2$ are not used by any walk in $\cal W$ apart from $W'$ and only in its subwalk that traverses $P_2$, and therefore, as $\cal W$ is well-behaved, so is ${\cal W}'$. Additionally, note that $\cal W$ is shallow and that each edge belongs to at most as many projecting cycles of sequences in ${\cal W}'$ as it does in $\cal W$. Thus, if $P_3$ does not contain an edge (the only edge parallel to an edge of $R$ that might be used by ${\cal W}'$ but not by $\cal W$ is the edge $e_i$ of $P_3$, if it exists), it is immediate that ${\cal W}'$ is shallow. Now, suppose that $e_i$ exists. Let $b\in\{-1,1\}$ have the same sign as $i$. Recall that $i\in\{-n+\ell-1,n-h+1\}$ where $\ell$ and $h$ are as in Definition \ref{def:shallow}, thus to conclude that ${\cal W}'$ is shallow, we only need to argue that $e_b$ belongs to the interior of fewer projecting cycles of sequences in ${\cal W}'$ than it does in $\cal W$. However, this holds since the only difference between the sequences of $\cal W$ and ${\cal W}'$ is that the sequence $S$ occurs in $\cal W$ (and contains $e_b$ in the interior of its projecting cycle), but is transformed into (one or two) other sequences in ${\cal W}'$, and these new sequences, by the definition of $W'$, no longer contain $e_b$ in their projecting cycles. In this context, also note that the projecting cycles of the (one or two) new sequences enclose disjoint areas contained in the area enclosed by the projecting cycle of $S$, and the projecting cycles of the new sequences do not enclose the faces enclosed by $C$, but the projecting cycle of $S$ does enclose them. Thus, $\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W}')}\mathsf{Volume}(\widehat{S})<\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W})}\mathsf{Volume}(\widehat{S})$. It remains to show that ${\cal W}'$ is outer-terminal and that it has the same potential as $\cal W$. The second claim is immediate since $\cal W$ and ${\cal W}'$ have precisely the same crossings with $R$. For the first claim, note that since $\cal W$ is outer-terminal, it uses exactly one edge incident to $t^\star$. The only vertex of $R$ that can possibly be incident to more edges in $E({\cal W}')$ than in $E({\cal W})$ is the other endpoint, say, $w$, of the edge of $P_3$ in the case where $P_3$ contains an edge. So, suppose that $P_3$ does contain an edge and that $w=t^\star$, else we are done. Since $t^\star$ is a leaf of $R$ that belongs to the boundary of the outer-face of $H$, it cannot be enclosed in the strict interior of the projecting cycle of $S$ and therefore it must be a vertex of $S$. However, this together with the maximality of the number of faces enclosed by the shrinking cycle $C$ implies that $C$ is equal to the projecting cycle of $S$. Thus, by the definition of $W'$, the only difference between the edges incident to $t^\star$ in ${\cal W}$ compared to ${\cal W}'$ is that in $\cal W$ it is incident to an edge of $S$, while in ${\cal W}'$ it is incident to the edge of $P_3$. In particular, this means that ${\cal W}'$ has exactly one edge incident to $t^\star$ and therefore it is outer-terminal.
\end{proof} Having Lemmas \ref{lem:solPotential}, \ref{lem:innermostSeqExists}, \ref{lem:shrinkCycExists} and \ref{lem:shrinkCycMaintain} at hand, we are ready to push a solution onto $R$. Since this part is only required to be existential rather than algorithmic, we give a simpler proof by contradiction rather than an explicit process to push the solution. Notice that once the solution has already been pushed, rather than using the notion of shallowness, we only demand multiplicity at most $2n$. \begin{lemma}\label{lem:pushSequencesFinal} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a sensible outer-terminal weak linkage $\cal W$ in $H$ that is pushed onto $R$, has multiplicity at most $2n$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, and satisfies $\ensuremath{\mathsf{Potential}}({\cal W})\leq \alpha_{\rm potential}(k)$. \end{lemma} \begin{proof} By Lemma \ref{lem:solPotential}, there exists a solution $\cal P$ to $(G,S,T,g,k)$ such that $\ensuremath{\mathsf{Potential}}({\cal P})\leq \alpha_{\rm potential}(k)$. Because the paths in $\cal P$ are pairwise vertex-disjoint and $\cal P$ has a path where $t^\star$ is an endpoint, it is clear that $\cal P$ is a sensible, well-behaved, shallow and outer-terminal weak linkage. Since $\cal P$ is discretely homotopic to itself, it is well defined to let $\cal W$ be a weak linkage that, among all sensible, well-behaved, shallow and outer-terminal weak linkages that are discretely homotopic to $\cal P$, minimizes $\sum_{S\in\ensuremath{\mathsf{Seq}}({\cal W})}\mathsf{Volume}(S)$. Notice that shallowness is a stronger demand than having multiplicity at most $2n$, and that being pushed onto $R$ is equivalent to having an empty set of sequences. Thus, to conclude the proof, it suffices to argue that $\ensuremath{\mathsf{Seq}}({\cal W})=\emptyset$. Suppose, by way of contradiction, that $\ensuremath{\mathsf{Seq}}({\cal W})\neq\emptyset$. Then, by Lemmas \ref{lem:innermostSeqExists} and \ref{lem:shrinkCycExists}, there exist an innermost sequence $S\in\ensuremath{\mathsf{Seq}}({\cal W})$ and a shrinking cycle $C$ for $(S,v)$ where we pick $v$ as an endpoint of $S$ that is not $t^\star$ (because $\cal W$ is outer-terminal, not both endpoints of $S$ can be $t^\star$), and we pick a shrinking cycle enclosing as many faces as possible. By Lemma \ref{lem:shrinkCycMaintain}, the cycle move/pull operation is applicable to $(W,C)$ where $W\in{\cal W}$ is the walk having $S$ as a sequence. Furthermore, the resulting weak linkage ${\cal W}'$ is sensible, well-behaved, shallow and outer-terminal, having the same potential as $\cal W$, and $\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W}')}\mathsf{Volume}(\widehat{S})<\sum_{\widehat{S}\in \ensuremath{\mathsf{Seq}}({\cal W})}\mathsf{Volume}(\widehat{S})$. Since discrete homotopy is an equivalence relation, ${\cal W}'$ is discretely homotopic to $\cal P$. However, this contradicts the choice of $\cal W$. \end{proof} \subsection{Bounding the Total Number of Segments} Having pushed the solution onto $R$, we further need to make the resulting weak linkage simplified, which requires making it have low multiplicity and be U-turn-free and canonical. We first show that we can focus only on the first two properties, as being canonical can be easily derived using cycle move operations on cycles consisting of two parallel edges.
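To make such a cycle move concrete, here is a toy illustration (an informal sketch with arbitrarily chosen subscripts, assuming that the parallel copies of an edge are drawn consecutively according to their subscripts; it is not part of the formal argument). Suppose that some walk $W\in{\cal W}$ uses the copies $e_5$ and $e_7$ of an edge of $R$, while the copies $e_1,\ldots,e_4$ and $e_6$ are not used by $\cal W$. Then $e_6$ and $e_7$ form a cycle whose only edge used by $\cal W$ is $e_7$ and whose strict interior contains no edge of $\cal W$, so a cycle move applied to this cycle replaces $e_7$ by $e_6$ in $W$. Repeating such moves, each time shifting a used copy to an adjacent unused parallel copy of smaller positive subscript, eventually leaves exactly the copies $e_1$ and $e_2$ in use, which is the form required by canonicity.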
\begin{lemma}\label{lem:makingCanonical} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$ that is sensible and pushed onto $R$, whose multiplicity is at most $2n$. Then, there exists a weak linkage ${\cal W}'$ that is sensible, pushed onto $R$, canonical, discretely homotopic to $\cal W$, and whose multiplicity is upper bounded by the multiplicity of $\cal W$. \end{lemma} \begin{proof} Let us first argue that there is a weak linkage ${\cal W}'$ that is sensible, pushed onto $R$, discretely homotopic to $\cal W$, and whose multiplicity is upper bounded by the multiplicity of $\cal W$, such that for all edges $e_i \in E({\cal W}')$, $i \geq 1$. In other words, ${\cal W}'$ contains only edges of positive subscript. Consider all weak linkages in $H$ that are sensible, pushed onto $R$, discretely homotopic to $\cal W$, and whose multiplicity is upper bounded by the multiplicity of $\cal W$. Among these weak linkages, let ${\cal W}'$ be one such that the sum of the subscripts of the edge copies in $E({\cal W}')$ is maximized. Let us argue that for every $e_i \in E({\cal W}')$, $i \geq 1$. Suppose not, and there exists an edge $e_i$ that is used by ${\cal W}'$ such that $i\leq 0$ (in fact, $i=0$ is not possible as ${\cal W}'$ is pushed onto $R$). Since ${\cal W}'$ has multiplicity at most $2n$, there exists an edge $e_j$ (parallel to $e_i$) for some $j \geq 1$ that is not used by $E({\cal W}')$. Let $e_t$ be the edge (parallel to $e_j$ and $e_i$) with the largest subscript smaller than $j$ that is used by $E({\cal W}')$ (such an edge exists because $e_i$ is used and $i<j$). Moreover, let $C$ be the cycle (which might be the boundary of a single face) that consists of two edges: $e_j$ and $e_t$. By the choice of $e_t$, the strict interior of $C$ does not contain any edge of $E({\cal W}')$. Thus, the cycle move operation is applicable to $(W,C)$ where $W$ is the walk in ${\cal W}'$ that uses $e_t$. Let ${\cal W}^\star$ be the result of the application of this operation. Then, the only difference between ${\cal W}^\star$ and ${\cal W}'$ is the replacement of $e_t$ by $e_j$. Because ${\cal W}^\star$ is discretely homotopic to ${\cal W}'$, ${\cal W}'$ is discretely homotopic to $\cal W$ and discrete homotopy is transitive, we derive that ${\cal W}^\star$ is discretely homotopic to $\cal W$. Moreover, the endpoints of the walks in ${\cal W}'$ were not changed when the cycle move operation was applied. Thus, because ${\cal W}'$ is sensible, so is ${\cal W}^\star$. Moreover, it is clear that ${\cal W}^\star$ is pushed onto $R$ and has the same multiplicity as ${\cal W}'$. However, as $j>t$, the sum of the subscripts of the edge copies in $E({\cal W}^\star)$ is larger than that of $E({\cal W}')$, which contradicts the choice of ${\cal W}'$. Hence, there exist weak linkages ${\cal W}'$ that are sensible, pushed onto $R$, discretely homotopic to $\cal W$, whose multiplicity is upper bounded by the multiplicity of $\cal W$, and such that for all edges $e_i \in E({\cal W}')$, $i \geq 1$. Consider the collection of all such weak linkages, and let ${\cal W}^\star$ be one maximizing $w(E({\cal W}^\star)) = \sum_{e \in E({\cal W}^\star)} w(e)$, where $w:E(H) \rightarrow \mathbb{Z}$ is the weight function on the parallel copies of edges in $H$ defined as follows.
$$ w(e) = \begin{cases} -2n & e \text{ is not parallel to any edge in } E(R) \\ -2n & e = e_i \text{ is parallel to an edge in } E(R) \text{ and } i \leq 0 \\ 2n-i & e = e_i \text{ is parallel to an edge in } E(R) \text{ and } i \geq 1 \end{cases} $$ We claim that ${\cal W}^\star$ is canonical, i.e. for every edge $e_i \in E({\cal W}^\star)$ the subscript $i \geq 1$, and for every parallel edge $e_j$ where $1 \leq j < i$, $e_j \in E({\cal W}^\star)$. The first property is ensured by the choice of ${\cal W}^\star$. For the second property, we argue as before. Suppose not; then choose $i$ and $j$ such that $i-j$ is minimized. Then clearly $j = i-1$, since any parallel copy $e_t$ with $j < t < i$ is either in $E({\cal W}^\star)$, contradicting the choice of $i$, or not in $E({\cal W}^\star)$, contradicting the choice of $j$. Therefore, the edges $e_i$ and $e_j$ form a cycle $C$ such that the interior of $C$ contains no edge of any walk in ${\cal W}^\star$. Let $W \in {\cal W}^\star$ be the walk containing $e_i$, and observe that the cycle move operation is applicable to $(W,C)$. Let $\wh{\cal W}$ be the result of this operation. Then observe that $w(E(\wh{\cal W})) > w(E({\cal W}^\star))$, since $w(e_j) > w(e_i)$ and $E(\wh{\cal W}) \setminus \{e_j\} = E({\cal W}^\star) \setminus \{e_i\}$. Moreover, $\wh{\cal W}$ is discretely homotopic to ${\cal W}^\star$, which is in turn discretely homotopic to $\cal W$; thus, as before, we can argue that $\wh{\cal W}$ contradicts the choice of ${\cal W}^\star$. Hence, ${\cal W}^\star$ must be canonical. \end{proof} In case we are interested only in extremality rather than canonicity, we can use the following lemma that does not increase potential. The proof is very similar to the proof of Lemma \ref{lem:makingCanonical}, except that now we can ``move edges in either direction'', and hence avoid creating new crossings. \begin{lemma}\label{lem:makingExtremal} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage in $H$ that is sensible and pushed onto $R$, whose multiplicity is at most $2n$. Then, there exists a weak linkage ${\cal W}'$ that is sensible, pushed onto $R$, extremal, discretely homotopic to $\cal W$, and whose potential is upper bounded by the potential of $\cal W$. \end{lemma} \begin{proof} Consider all weak linkages in $H$ that are sensible, pushed onto $R$, discretely homotopic to $\cal W$, and whose potential and multiplicity are upper bounded by the potential and multiplicity, respectively, of $\cal W$. Among these weak linkages, let ${\cal W}'$ be one such that the sum of the absolute values of the subscripts of the edge copies in $E({\cal W}')$ is maximized. We claim that ${\cal W}'$ is extremal, which will prove the lemma. To this end, suppose by way of contradiction that ${\cal W}'$ is not extremal. Thus, there exist edges $e_i,e_j\in E({\cal W}')$ where $i\geq 1$, $j\leq -1$ and $(i-1)+|j+1|\leq 2n-1$. Because the multiplicity of ${\cal W}'$ is at most $2n$, this means that there exists an edge $e_p$ (parallel to $e_i$ and $e_j$) for some $p>i\geq 1$ that is not used by $E({\cal W}')$. Let $e_t$ be the edge (parallel to $e_j$ and $e_i$) of largest subscript smaller than $p$ that is used by $E({\cal W}')$. Moreover, let $C$ be the cycle (which might be the boundary of a single face) that consists of two edges: $e_p$ and $e_t$. By the choice of $e_t$, the strict interior of $C$ does not contain any edge of $E({\cal W}')$.
Thus, the cycle move operation is applicable to $(W,C)$ where $W$ is the walk in ${\cal W}'$ that uses $e_t$. Let ${\cal W}^\star$ be the result of the application of this operation. Then, the only difference between ${\cal W}^\star$ and ${\cal W}'$ is the replacement of $e_t$ by $e_p$. Because ${\cal W}^\star$ is discretely homotopic to ${\cal W}'$, ${\cal W}'$ is discretely homotopic to $\cal W$ and discrete homotopy is transitive, we derive that ${\cal W}^\star$ is discretely homotopic to $\cal W$. Moreover, the endpoints of the walks in ${\cal W}'$ were not changed when the cycle move operation was applied. Thus, because ${\cal W}'$ is sensible, so is ${\cal W}^\star$. Moreover, it is clear that ${\cal W}^\star$ is pushed onto $R$. Because ${\cal W}^\star$ and ${\cal W}'$ cross $R$ at exactly the same vertices and in the same directions (indeed, we have only replaced one edge of positive subscript by another parallel edge of positive subscript), they have the same potential. However, as $p>t\geq 1$, the sum of the absolute values of the subscripts of the edge copies in $E({\cal W}^\star)$ is larger than that of $E({\cal W}')$, which contradicts the choice of ${\cal W}'$. \end{proof} To achieve the properties of having low multiplicity and being U-turn-free, we perform two stages. In the first stage, which is the focus of this subsection, we make modifications that bound the total number of segments (rather than only the number of segment groups). The second stage, where we conclude the two properties, will be performed in the next subsection. The first stage in itself is partitioned into two phases as follows. \paragraph{Phase I: Eliminating Special U-Turns.} We eliminate some of the U-turns, but not all of them. Specifically, the elimination of some U-turns may result in drastic changes to the segment groups, and hence we only deal with them after we bound the total number of segments, in which case classification into segment groups becomes immaterial. The U-turns we eliminate now are defined as follows (see Fig.~\ref{fig:uturn}). \begin{figure} \caption{Special U-Turns.} \label{fig:uturn} \end{figure} \begin{definition}[{\bf Special U-Turn}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage that is pushed onto $R$. Let $U=\{e_i,e_j\}$ be a U-turn. Then, $U$ is {\em special} if at least one among the following conditions hold: {\em (i)} $i$ and $j$ have the same sign, i.e. they are on the same side of $R$; {\em (ii)} neither of the endpoints of $e_i$ and $e_j$ belongs to $V_{=1}(R)\cup V_{\geq 3}(R)$. \end{definition} We eliminate special U-turns one-by-one, where the U-turn chosen to eliminate at each step is an innermost one, defined as follows. \begin{definition}[{\bf Innermost U-Turn}]\label{def:innermostUTurn} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage that is pushed onto $R$. Let $U=\{e_i,e_j\}$ be a U-turn. Then, $U$ is {\em innermost} if there does not exist a parallel edge $e_\ell\in E({\cal W})$ such that $\min\{i,j\}\leq \ell\leq \max\{i,j\}$. We say that $U$ is {\em crossing} if the signs of $i$ and $j$ are different, i.e. $e_i$ and $e_j$ are on opposite sides of $R$. \end{definition} We argue that if there is a (special) U-turn, then there is also an innermost (special) one. \begin{lemma}\label{lem:existsInnermostUTurn} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree.
Let $\cal W$ be a weak linkage pushed onto $R$ with at least one U-turn $U=\{e_i,e_j\}$. Then, $\cal W$ has at least one innermost U-turn $U'=\{e_x,e_y\}$ whose edges lie in the interior (including the boundary) of the cycle $C$ formed by $e_i$ and $e_j$. \end{lemma} \begin{proof} Denote the endpoints of $e_i$ and $e_j$ by $u$ and $v$. Among all U-turns whose edges lie in the interior (including the boundary) of the cycle $C$ formed by $e_i$ and $e_j$ (because $U$ satisfies these conditions, there exists at least one such U-turn), let $U'=\{e_x,e_y\}$ be one whose edges $e_x$ and $e_y$ form a cycle $C'$ that contains the minimum number of edges of $H$ in its interior. Let $W'$ be the walk in ${\cal W}$ that traverses $e_x$ and $e_y$ consecutively. Without loss of generality, suppose that when we traverse $W'$ so that we visit $e_x$ and then $e_y$, we first visit $u$, then $v$, and then $u$ again. We claim that $U'$ is innermost. To this end, suppose by way of contradiction that $U'$ is not innermost. Thus, by Definition \ref{def:innermostUTurn}, this means that $C'$ contains an edge $e_\ell$ in its strict interior that belongs to some walk $\widehat{W}\in{\cal W}$ (possibly $\widehat{W}=W'$). Because $U'$ is a U-turn, $e_\ell$ is neither the first nor the last edge of $\widehat{W}$. Thus, when we traverse $\widehat{W}$ so that, when visiting $e_\ell$, we first visit $u$ and then $v$, the next edge we visit is some edge $e'$. Because $\cal W$ is a weak linkage, this edge must belong to the strict interior of $C'$ (because otherwise we obtain that $(v,e_x,e_y,e_\ell,e')$ is a crossing or an edge is used more than once). However, this implies that $e'$ is parallel to the edges $e_\ell, e_x, e_y$, and $\widehat{U}=\{e_\ell,e'\}$ is a U-turn whose edges lie in the interior (including the boundary) of the cycle $C$ and which forms a cycle $\widehat{C}$ that contains fewer edges of $H$ than $C'$ in its interior. This is a contradiction to the choice of $U'$. \end{proof} We now prove that an innermost U-turn corresponds to a cycle on which we can perform the cycle pull operation, and consider the result of its application. \begin{lemma}\label{lem:eliminateInnermostUTurn} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible, outer-terminal, extremal weak linkage that is pushed onto $R$, and let $U=\{e_i,e_j\}$ be an innermost U-turn. Let $W$ be the walk in $\cal W$ that uses $e_i$ and $e_j$, and $C$ be the cycle in $H$ that consists of $e_i$ and $e_j$. Then, the cycle pull operation is applicable to $(W,C)$. Furthermore, the resulting weak linkage ${\cal W}'$ is sensible, outer-terminal, extremal, pushed onto $R$, having fewer edges than $\cal W$, $|\ensuremath{\mathsf{Seg}}({\cal W}')|\leq |\ensuremath{\mathsf{Seg}}({\cal W})|$, and if $U$ is special, then also its potential is upper bounded by the potential of $\cal W$. \end{lemma} \begin{proof} Because $U$ is innermost, there does not exist an edge in the strict interior of $C$ that belongs to $E({\cal W})$. Therefore, the cycle pull operation is applicable to $(W,C)$. The only difference between ${\cal W}'$ and ${\cal W}$ is that ${\cal W}'$ does not use the edges $e_i$ and $e_j$, and hence the vertex, say, $v$, that $W$ visits between them. Therefore, because $\cal W$ is sensible, outer-terminal, extremal and pushed onto $R$, so is ${\cal W}'$.
Moreover, the walks in ${\cal W}'$ have the same endpoints as their corresponding walks in $\cal W$, and thus because $\cal W$ is sensible, so is ${\cal W}'$. Let $u$ be the other endpoint of the edges $e_i$ and $e_j$, and let $W'$ be the walk in ${\cal W}'$ that resulted from $W$. Observe that $W'$ has at most as many crossings with $R$ as $W$ has---indeed, if the elimination of $e_i$ and $e_j$ created a new crossing at $u$ (this is the only new crossing that may be created), then $W$ crosses $R$ between $e_i$ and $e_j$ and this crossing does not occur in $W'$. Thus, $|\ensuremath{\mathsf{Seg}}({\cal W}')|\leq |\ensuremath{\mathsf{Seg}}({\cal W})|$. \begin{figure} \caption{U-Turn with $e_i$ and $e_j$ on the same side} \label{fig:uturn1} \end{figure} Now, suppose that $U$ is special. Then, at least one among the following conditions holds: {\em (i)} $i$ and $j$ have the same sign; {\em (ii)} $u,v\notin V_{=1}(R)\cup V_{\geq 3}(R)$. We first consider the case where $i$ and $j$ have the same sign, say positive (without loss of generality; see Fig.~\ref{fig:uturn1}). Let $e'_x,\widehat{e}_y\in E(W)$ be such that $e'_x$ and $e_i$ are consecutive in $W$ (if such an edge $e'_x$ exists) and denote the segment that contains $e'_x$ by $S_x$, and $\widehat{e}_y$ and $e_j$ are consecutive in $W$ (if such an edge $\widehat{e}_y$ exists) and denote the segment that contains $\widehat{e}_y$ by $S_y$. Possibly some edges among $e'_x,\widehat{e}_y$ and $e_i$ are parallel. In case $e'_x$ does not exist (see Fig.~\ref{fig:uturn1}(a)), $u\in V_{=1}(R)$ and therefore $e_i$ and $e_j$ belong to a segment that is in a singleton segment group. When we remove $e_i$ and $e_j$, either this segment shrinks (and remains in a singleton group) or it is removed completely together with its segment group. If the segment shrinks, the potential clearly remains unchanged, and otherwise the reduction of segment groups makes the potential decrease by $1$ (the potential of the consecutive segment group remains unchanged as it is a singleton segment group because it has an endpoint in $V_{=1}(R)$). The case where $\widehat{e}_y$ does not exist is symmetric, thus we now assume that both $e'_x$ and $\widehat{e}_y$ exist. In case both $e'_x$ and $\widehat{e}_y$ are on the same side as $e_i$ and $e_j$ (see Fig.~\ref{fig:uturn1}(b)), then the removal of $e_i$ and $e_j$ only shrinks the segment $S_x=S_y$ where all of the four edges $e_i,e_j,e'_x$ and $\widehat{e}_y$ lie, and thus does not change the potential. Similarly, if $e'_x$ is on the same side as $e_i, e_j$ and $\wh{e}_y$ is on the opposite side (see Fig.~\ref{fig:uturn1}(c)), then we only shrink the segment where $e'_x, e_i$ and $e_j$ lie, and rather than crossing from $e_j$ to $\widehat{e}_y$, we cross from $e'_x$ to $\widehat{e}_y$ (which has the same label, as both crossings are from the positive side to the negative side). The case where $e'_x$ is on the opposite side of $e_i,e_j$ and $\wh{e}_y$ is on the same side is not possible, since then $W$ would cross itself. Now, consider the case where $e'_x$ and $\wh{e}_y$ are both on the opposite side of $e_i,e_j$ (see Fig.~\ref{fig:uturn2}). Then, $e_i$ and $e_j$ form a complete segment, which we call $S_{ij}$, and the segments $S_x,S_{ij}$ and $S_y$ are different. Notice that the two crossings with $R$, one consisting of $e'_x$ with $e_i$, and the other consisting of $e_j$ with $\widehat{e}_y$, cross in different directions.
If $S_x$, $S_{ij}$ and $S_y$ all belong to the same segment group (see Fig.~\ref{fig:uturn2}(a)), then the removal of $S_{ij}$ removes the contributions of its two crossings (mentioned above), which sum to $0$ as their directions are opposite. Thus, the potential remains unchanged. Now, suppose that exactly one among $S_x$ and $S_y$ belongs to the same group as $S_{ij}$. Without loss of generality, suppose that it is $S_x$ (the other case is symmetric; see Fig.~\ref{fig:uturn2}(b)). Then, when we remove $S_{ij}$, the segments $S_x$ and $S_y$ merge into one segment that has endpoints in different maximal degree-2 paths in $R$ (or on vertices of degree other than $2$) and hence forms its own group. This group replaces the singleton group of $\cal W$ that contained only $S_y$. Furthermore, the labeling of all crossings remains the same (as well as all other associations into segment groups), apart from the two crossings consisting of $e'_x$ with $e_i$, and of $e_j$ with $\widehat{e}_y$, which are both eliminated but have previously contributed together $0$ (as they cross in opposite directions). We remark that the size of the segment group that previously contained $S_x$ might become $1$, or the group might be removed completely, but this does not increase the potential. Thus, overall the potential does not increase. \begin{figure} \caption{U-Turn with $e_i,e_j$ on one side and $e'_x,\wh{e}_y$ on the opposite side.} \label{fig:uturn2} \end{figure} Lastly, suppose that $S_x,S_{ij}$ and $S_y$ belong to different segment groups. Then, $S_x$ and $S_y$ had endpoints in different maximal degree-2 paths in $R$ (or on vertices of degree other than $2$), hence each one among $S_x,S_{ij}$ and $S_y$ belonged to a singleton segment group of $\cal W$. The removal of $S_{ij}$ eliminates all of these three segment groups, which results in a decrease of $3$ in the potential. However, now $S_x$ and $S_y$ belong to the same segment group. If they form a singleton segment group (see Fig.~\ref{fig:uturn2}(c)), then overall the potential decreases by $2$. Else, they join an existing segment group, and we have several subcases as follows. In the first subcase, suppose that they join only the group that contains the segment $\widetilde{S}_x$ of ${\cal W}$ consecutive to $S_x$ (in the walk in $\cal W$ to which $S_x,S_{ij}$ and $S_y$ belong; see Fig.~\ref{fig:uturn2}(d)). Then, the crossing at the endpoint of $S_y$ is now contributing ($1$ or $-1$) to the sum of labels in the potential, and if $\widetilde{S}_x$ was in a singleton group in $\cal W$, then its crossings now contribute as well. Overall, this results in a contribution of at most $3$, so in total the potential does not increase. The subcase where they join only the group that contains the segment $\widetilde{S}_y$ of ${\cal W}$ consecutive to $S_y$ is symmetric. Now, consider the subcase where they join both of these groups and hence merge them (see Fig.~\ref{fig:uturn2}(e)). In this subcase, we have four new crossings that may contribute to the sum of labels, but we have also merged two groups, which decreases the potential by at least $1$, so overall the potential does not increase. Now, suppose that only case {\em (ii)} holds. That is, $u,v\notin V_{=1}(R)\cup V_{\geq 3}(R)$, and $i$ and $j$ have different signs (see Fig.~\ref{fig:uturn3}). Without loss of generality, suppose that $i\geq 1$ and $j\leq -1$. Because $u,v\notin V_{=1}(R)\cup V_{\geq 3}(R)$ and $\cal W$ is sensible, $e'_x$ and $\widehat{e}_y$ exist.
The case where $e'_x$ is on the opposite side of $e_i$ and $\wh{e}_y$ is on the same side as $e_i$ cannot occur since then $W$ crosses itself. Thus, we are left with three cases: {\em (a)} $e'_x$ is on the same side as $e_i$ and $\wh{e}_y$ is on the opposite side of $e_i$; {\em (b)} both $e'_x$ and $\wh{e}_y$ are on the opposite side of $e_i$; and {\em (c)} both $e'_x, \wh{e}_y$ are on the same side as $e_i$. The cases {\em (b)} and {\em (c)} are symmetric; therefore, we only consider cases {\em (a)} and {\em (b)}. In case {\em (a)}, $e'_x$ and $e_i$ belong to one segment, and $e_j$ and $\widehat{e}_y$ belong to a different segment (see Fig.~\ref{fig:uturn3}(a)). The removal of $e_i$ and $e_j$ only shrinks these two segments by one edge each, and does not change the labeling of the crossings at their endpoints---previously, we crossed from $e_i$ to $e_j$, and now we cross from $e'_x$ to $\widehat{e}_y$, which are both crossings from the side of $e_i$ to the opposite side. Thus, the potential does not increase. Lastly, consider case {\em (b)} (see Fig.~\ref{fig:uturn3}(b)). In this case, $e_i$ belongs to a segment $S_i$ containing only $e_i$, and $e_j$ belongs to $S_y$. Further, when crossing from $e'_x$ to $e_i$, we cross from the opposite side of $e_i$ to the same side, and when we cross from $e_i$ to $e_j$, we cross from the side of $e_i$ to the opposite side. Additionally, notice that the elimination of $e_i$ and $e_j$ results in the elimination of $S_i$, and the merge of $S_x$ and $S_y$ with $e_j$ removed. We consider several subcases as follows. In the subcase where $S_x,S_i$ and $S_y$ belong to the same group (see Fig.~\ref{fig:uturn3}(c)), the only possible effect with respect to the potential of this group is the cancellation of the two crossings (of $e'_x$ with $e_i$ and of $e_i$ with $e_j$), but these two crossings together contribute $0$ to the sum of labels because they cross in opposite directions. Possibly the size of the segment group has shrunk to $1$, but this does not increase the potential. Thus, in this subcase, the potential does not increase. \begin{figure} \caption{U-Turn with $e_i$ on one side and $e_j$ on the opposite side.} \label{fig:uturn3} \end{figure} Now, consider the subcase where $S_x$ and $S_i$ are in the same segment group, and $S_y$ is in a different segment group (see Fig.~\ref{fig:uturn3}(d)). Then, $S_y$ is in a singleton segment group because its endpoints belong to different maximal degree-2 paths of $R$ (or to vertices of degree other than $2$). In ${\cal W}'$, the segment group that resulted from the merge of $S_x$ and $S_y$ is also a singleton segment group. Furthermore, the segment group of $\cal W$ that contained $S_x$ and $S_i$ does not change in terms of its labeled sum since the crossings at the endpoints of $S_i$ crossed in opposite directions. Possibly the size of the segment group has shrunk to $1$, but this does not increase the potential. Thus, in this subcase, the potential does not increase. Next, we note that the analysis of the subcase where $S_i$ and $S_y$ are in the same segment group, and $S_x$ is in a different segment group, is symmetric. Lastly, suppose that $S_x,S_{i}$ and $S_y$ belong to different segment groups (see Fig.~\ref{fig:uturn3}(e)). The analysis of this case is the same as the analysis of the last subcase of case {\em (i)} (i.e., the subcase where $S_x,S_{ij}$ and $S_y$ belong to different segment groups, where now we have $S_i$ instead of $S_{ij}$).
\end{proof} We are now ready to assert that all special U-turns can be eliminated as follows. \begin{lemma}\label{lem:noUTurns} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a sensible, outer-terminal, extremal weak linkage $\cal W$ in $H$ that has no special U-turns, is pushed onto $R$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, and satisfies $\ensuremath{\mathsf{Potential}}({\cal W})\leq \alpha_{\rm potential}(k)$. \end{lemma} \begin{proof} By Lemma \ref{lem:pushSequencesFinal}, there exists a sensible outer-terminal weak linkage in $H$ that is pushed onto $R$, has multiplicity at most $2n$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and has potential at most $\alpha_{\rm potential}(k)$. By Lemma \ref{lem:makingExtremal} and because discrete homotopy is an equivalence relation, there also exists such a weak linkage ${\cal W}'$ that is extremal. Thus, among all sensible, outer-terminal, extremal weak linkages in $H$ that are pushed onto $R$, are discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and satisfy $\ensuremath{\mathsf{Potential}}({\cal W})\leq \alpha_{\rm potential}(k)$, we can choose a weak linkage $\cal W$ that minimizes the number of edges that it uses. To conclude the proof, it suffices to argue that $\cal W$ has no special U-turns. Suppose, by way of contradiction, that $\cal W$ has at least one special U-turn. Then, by Lemma \ref{lem:existsInnermostUTurn}, $\cal W$ has an innermost special U-turn $U=\{e_i,e_j\}$. Let $W$ be the walk in $\cal W$ that uses $e_i$ and $e_j$, and $C$ be the cycle in $H$ that consists of $e_i$ and $e_j$. Then, by Lemma \ref{lem:eliminateInnermostUTurn}, the cycle pull operation is applicable to $(W,C)$. Furthermore, by Lemma \ref{lem:eliminateInnermostUTurn}, the resulting weak linkage ${\cal W}'$ is sensible, outer-terminal, extremal, pushed onto $R$, has fewer edges than $\cal W$, and its potential is upper bounded by the potential of $\cal W$. Since discrete homotopy is an equivalence relation, ${\cal W}'$ is discretely homotopic to some solution of $(G,S,T,g,k)$. However, this is a contradiction to the choice of $\cal W$. \end{proof} \paragraph{Phase II: Eliminating Swollen Segments.} The goal of the second phase is to eliminate the existence of crossings with opposing ``signs'' for each segment and thereby, as the potential is bounded, bound the number of segments (rather than only the number of segment groups). We remark that one can show, even without this step, that the multiplicity is bounded; however, this complicates the analysis. Towards this, we eliminate ``swollen'' segments (see Fig.~\ref{fig:swollen-seg}). \begin{figure} \caption{A swollen segment and the cycles in its move-through tuple.} \label{fig:swollen-seg} \end{figure} \begin{definition}[{\bf Swollen Segment}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage that is pushed onto $R$. Consider a segment $S\in\ensuremath{\mathsf{Seg}}(W)$ for some $W\in{\cal W}$ such that $S$ does not contain either of the two extreme edges of $W$. Let $e$ and $e'$ be the extreme edges of $S$ (possibly $e=e'$), and let $\widehat{e}$ and $\widehat{e}'$ be the edges of $E(W)\setminus E(S)$ that are consecutive to $e$ and $e'$ on $W$, respectively.
Then, $S$ is {\em swollen} if its endpoints are internal vertices of the same maximal degree-2 path $P$ of $R$ and ${\sf label}_P^{W}(e,e')\neq {\sf label}_P^{W}(\widehat{e},\widehat{e}')$. \end{definition} We show that, due to the first phase, when we deal with outer-terminal weak linkages, the swollen segments have a ``clean appearance'', as stated in the following lemma. \begin{lemma}\label{lem:cleanSwollenSegments} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be an outer-terminal weak linkage that is pushed onto $R$ and has no special U-turns, and $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ be swollen. Then, $S$ is parallel to a subpath of a maximal degree-2 path of $R$. \end{lemma} \begin{proof} Let $e_i$ and $e'_j$ be the first and last edges of $S$, and denote the endpoints of $S$ by $u$ and $v$ where $u$ is an endpoint of $e_i$. Because $S$ is a swollen segment, both these edges are on the same side of $\ensuremath{\mathsf{path}}_{R}(u,v)$. Let $P$ be the unique subpath in $R$ between $u$ and $v$. Because $\cal W$ does not have any special U-turn, we have that one of the following cases occurs (see Fig.~\ref{fig:swollenSegPath}): {\em (i)} $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ and ends at $e'_j$; {\em (ii)} $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ but does not end at $e'_j$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$; {\em (iii)} the first edge that $S$ traverses after $e_i$ is not parallel to an edge of $P$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$ except possibly for the edges of $P$. In the first case, we are done. In the other two cases, we have that $E({\cal W})$ contains more than one copy of the edge incident to $t^\star$ in $R$, which contradicts the assumption that $\cal W$ is outer-terminal. \end{proof} The segment chosen to move at each step is an innermost one, formally defined as follows. \begin{definition}[{\bf Innermost Swollen Segment}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a weak linkage that is pushed onto $R$. Let $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ be swollen. Then, $S$ is {\em innermost} if there do not exist parallel edges $e_i\in E(S)$ and $e_j\in E({\cal W})\setminus E(S)$ such that $i$ and $j$ have the same sign and $|j|<|i|$. \end{definition} We now argue that if there is a swollen segment, then there is also an innermost one. \begin{lemma}\label{lem:existsInnermostSwollenSeg} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be an outer-terminal weak linkage that has no special U-turns and is pushed onto $R$, such that $\ensuremath{\mathsf{Seg}}({\cal W})$ contains at least one swollen segment. Then, $\ensuremath{\mathsf{Seg}}({\cal W})$ contains at least one innermost swollen segment. \end{lemma} \begin{proof} Let $S$ be a swollen segment of $\cal W$ such that the sum of the absolute values of the indices of the edge copies it uses is minimized. By Lemma \ref{lem:cleanSwollenSegments}, $S$ is parallel to a subpath $P$ of a maximal degree-2 path of $R$. Thus, because $S$ is a segment, all the edge copies it uses are on the same side. We claim that $S$ is innermost. Suppose, by way of contradiction, that this claim is false.
Thus, there exist parallel edges $e_i\in E(S)$ and $e_j\in E({\cal W})\setminus E(S)$ such that $i$ and $j$ have the same sign and $|j|<|i|$. Let $S'$ be the segment of ${\cal W}$ to which $e_j$ belongs. Because $\cal W$ has no special U-turns and because weak linkages contain neither crossings nor repeated edges, it follows that $S'$ is parallel to a subpath $Q$ of $P$ and consists only of edge copies whose indices have strictly smaller absolute value than those of the edge copies of $S$ they are parallel to. However, this implies that $S'$ is a swollen segment of $\cal W$ such that the sum of the absolute values of the indices of the edge copies it uses is smaller than the corresponding sum for $S$. This contradicts the choice of $S$. \end{proof} Given an innermost swollen segment, whose edge copies all lie on one side of $R$, we would like to move the segment to ``the other side'' of $R$. We know that these copies will be free when we handle an extremal weak linkage. We now define a tuple of cycles on which we will perform move operations (see Fig.~\ref{fig:swollen-seg}). The fact that this notion is well-defined (in the sense that the indices $\ell$ in the definition exist) will be argued in the lemma that follows it. \begin{definition}[{\bf Move-Through Tuple}] Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be an outer-terminal, extremal weak linkage that has no special U-turns and is pushed onto $R$, and let $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ be an innermost swollen segment. Let $e^1_{i_1},e^2_{i_2},\ldots,e^t_{i_t}$, where $t=|E(S)|$, be the edges of $S$ in the order in which they occur when $S$ is traversed from one endpoint to the other.\footnote{To avoid ambiguity in the context of this definition, suppose that we have a fixed choice (e.g., lexicographic) of which endpoint is traversed first.} Then, the {\em move-through tuple} of $S$ is $T=(C_1,\ldots,C_t)$ where for every $j\in\{1,\ldots,t\}$, $C_j$ is a cycle that consists of two parallel edges: $e^j_{i_j}$ and $e^j_\ell$ where $\ell$ is the index of sign opposite to $i_j$ that has the largest absolute value such that all indices $r$ of the same sign as $\ell$ and whose absolute value is upper bounded by $|\ell|$ satisfy that $e^j_r\notin E({\cal W})$. The {\em application of $T$} is done by applying the cycle move operation to $(W,C_i)$ for $i$ from $1$ to~$t$ (in this order) where $W$ is the walk that contains $S$ as a segment.\footnote{Note that $W$ changes in each application; thus, by $W$ we mean the current walk with the same endpoints as the original walk in $\cal W$ that had $S$ as a segment.} \end{definition} \begin{figure} \caption{Illustration of Lemma~\ref{lem:cleanSwollenSegments}.} \label{fig:swollenSegPath} \end{figure} Now, we prove that the application of a move-through tuple is valid. \begin{lemma}\label{lem:moveThroughTupleValid} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be an outer-terminal, extremal weak linkage that has no special U-turns and is pushed onto $R$, and let $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ be an innermost swollen segment. Then, the move-through tuple $T=(C_1,\ldots,C_t)$ of $S$ is well-defined, and the application of $T$ is valid (that is, the cycle move operation is applicable to $(W,C_i)$ when it is done). \end{lemma} \begin{proof} Let $e^1_{i_1},e^2_{i_2},\ldots,e^t_{i_t}$, where $t=|E(S)|$, be the edges of $S$ in the order in which they occur when $S$ is traversed from one endpoint to the other.
To assert that $T$ is well-defined, consider some $j\in\{1,\ldots,t\}$. We need to show that $e^j_b$, where $b\in\{-1,1\}$ has sign opposite to the sign of $i_j$, does not belong to $E({\cal W})$. Indeed, then there also exists an index $\ell_j$ of sign opposite to $i_j$ that has the largest absolute value such that all indices $r$ of the same sign as $\ell_j$ and whose absolute value is upper bounded by $|\ell_j|$ satisfy that $e^j_r\notin E({\cal W})$. To this end, suppose by contradiction that $e^j_b\in E({\cal W})$. Without loss of generality, suppose that $b=-1$ (the other case is symmetric). Then, we have $(i_j-1)+|b+1| = i_j-1\leq 2n-1$, which contradicts the assumption that $\cal W$ is extremal. Thus, $T$ is well-defined. Notice that for every $j\in\{1,\ldots,t\}$, all of the edges parallel to $e^j_{i_j}$ (and $e^j_{\ell_j}$) whose index is of the same sign as $i_j$ and whose absolute value is smaller than $|i_j|$ do not belong to $E({\cal W})$ (because $S$ is innermost). Additionally, for every $j\in\{1,\ldots,t\}$, all of the edges parallel to $e^j_{i_j}$ (and $e^j_{\ell_j}$) whose index is of sign opposite to $i_j$ and whose absolute value is smaller than or equal to $|\ell_j|$ do not belong to $E({\cal W})$ (by the choice of $\ell_j$). Thus, for each $j\in\{1,\ldots,t\}$ the interior of $C_j$ does not contain any edge of $\cal W$, and in particular the cycle move operation is applicable to it. When we apply the cycle move operation to some cycle $C_j$, it replaces $e^j_{i_j}$ by $e^j_{\ell_j}$. By Lemma \ref{lem:cleanSwollenSegments}, these replacements are done on edges not parallel to one another---that is, for every pair of distinct $j,j'\in\{1,\ldots,t\}$, the edges $e^j_{i_j}$ and $e^{j'}_{i_{j'}}$ are not parallel. Thus, the application of one cycle move operation in the application of $T$ does not affect the applicability of any other cycle move operation in the application of $T$. Therefore, the application of $T$ is valid. \end{proof} Now, we consider the properties of the weak linkage that results from the application of a move-through tuple. \begin{lemma}\label{lem:moveThroughTupleProperties} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be a sensible, outer-terminal, extremal weak linkage that has no special U-turns and is pushed onto $R$, and let $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ be an innermost swollen segment. Let $T$ be the move-through tuple of $S$, and let ${\cal W}'$ be the weak linkage that results from the application of $T$. Then, ${\cal W}'$ is a sensible, outer-terminal, extremal weak linkage that has no special U-turns and is pushed onto $R$, whose potential is upper bounded by the potential of $\cal W$ and which has fewer segments than $\cal W$. \end{lemma} \begin{proof} Let $u$ and $v$ be the endpoints of $S$. Because $S$ is swollen, $u$ and $v$ are internal vertices of the same maximal degree-2 path $P$ of $R$. Thus, because $\cal W$ is sensible, there exist a segment $S_1$ and a segment $S_2$ that the walk $W\in{\cal W}$ that has $S$ as a segment traverses immediately before visiting the endpoint $u$ of $S$, and immediately after visiting the endpoint $v$ of $S$, respectively. (Here, we suppose without loss of generality that $W$ is traversed from one endpoint to the other such that the endpoint $u$ of $S$ is visited before the endpoint $v$ of $S$.)
By Lemma \ref{lem:cleanSwollenSegments}, $S$ is parallel to the subpath $Q$ of $P$ with endpoints $u$ and $v$, and without loss of generality, we suppose that it uses the positive copies of the edges of $Q$. Then, the application of $T$ replaces each one of these positive copies by a negative copy. Thus, the segments $S_1,S,S_2$ are removed and replaced by one segment $S^\star$ that consists of $S_1$, the new negative copies of the edges of $Q$, and $S_2$. Hence, ${\cal W}'$ has two fewer segments than $\cal W$. Therefore, because $\cal W$ is sensible, outer-terminal, has no special U-turn and is pushed onto $R$, it is clear that ${\cal W}'$ also has these properties. Now, we show that ${\cal W}'$ also has the property that it is extremal. Clearly, because $\cal W$ is extremal, the multiplicity of ${\cal W}'$ is also upper bounded by $2n$, and for any two parallel edges $e_i,e_j\in E({\cal W}')$ that are not parallel to an edge of $Q$, where $i\geq 1$ and $j\leq -1$, we have $(i-1)+|j+1|\geq 2n$. Now, consider some two edges $e_i,e_j\in E({\cal W}')$ that are parallel to an edge $e_0$ of $Q$, where $i\geq 1$ and $j\leq -1$. Let $e_t$ be the edge of $S$ parallel to $e_i$ and $e_j$, and let $e_r$ be the edge with which $e_t$ is replaced in the application of $T$ (thus, $e_t\in E({\cal W})\setminus E({\cal W}')$ and $e_r\in E({\cal W}')\setminus E({\cal W})$). Without loss of generality, suppose that $t\geq 1$. Then, by the applicability of $T$, we have that $i\geq t+1$ and $j\leq r$. Thus, $(i-1)+|j+1|\geq t+|r+1|=(t-1)+|(r-1)+1|$. Furthermore, by the definition of a move-through tuple, either $e_{r-1}\in E({\cal W})$ or $r=-2n$. In the first case, because ${\cal W}$ is extremal, we obtain that $(t-1)+|(r-1)+1|\geq 2n$, and in the second case we obtain that $(t-1)+|(r-1)+1|\geq 2n$ as well (because $t\geq 1$). It remains to prove that the potential of ${\cal W}'$ is upper bounded by the potential of $\cal W$. For this purpose, we consider several cases as follows. First, suppose that $S_1,S$ and $S_2$ belong to the same segment group in $\cal W$. Then, the only effect on the potential that might increase it is the removal of the two crossings at the endpoints of $S$. However, by the definition of a swollen segment, these crossings have opposite labels, and hence their removal does not affect the potential. (Possibly the segment group that contained $S_1,S$ and $S_2$ has shrunk to a singleton group with respect to ${\cal W}'$, but this does not increase the potential.) Now, suppose that only one among $S_1$ and $S_2$ is in the same segment group as $S$ in $\cal W$, and without loss of generality, suppose that it is $S_1$. Then, in ${\cal W}'$ the segment $S^\star$ belongs to a singleton group (as its endpoints belong to different maximal degree-2 paths of $R$ or it has an endpoint in $V_{=1}(R)\cup V_{\geq 3}(R)$). Moreover, in ${\cal W}$ the segment $S_2$ belongs to a singleton segment group. Thus, one singleton segment group has been replaced by another, and the size of existing segment groups might have shrunk. Since the only change in terms of crossings is that the crossings at the endpoints of $S$ were eliminated, which, as in the previous case, does not affect the potential, we conclude that the potential does not increase. Lastly, we consider the case where $S_1,S$ and $S_2$ belong to different segment groups in $\ensuremath{\mathsf{SegGro}}({\cal W})$.
These three groups are singleton groups---$S_1$ and $S_2$ have endpoints that belong to different maximal degree-2 paths of $R$ (or an endpoint in $V_{=1}(R)\cup V_{\geq 3}(R)$), and $S$ lies in between them. In ${\cal W}'$, these three groups are eliminated, which results in a decrease of $3$ in the potential. If $S^\star$ forms a singleton segment group, then overall the potential decreases by $2$. Else, $S^\star$ joins an existing segment group, and we have several subcases as follows. In the first subcase, suppose that $S^\star$ joins only the group that contains the segment $\widetilde{S}_1$ of ${\cal W}$ consecutive to $S_1$ in $W$ (that is not $S$). Then, the crossing at the endpoint of $S_2$ is now contributing ($1$ or $-1$) to the sum of labels in the potential of the group, and if $\widetilde{S}_1$ was in a singleton group in $\cal W$, then its two crossings now contribute as well. Overall, this results in a contribution of at most $3$, so in total the potential does not increase. The subcase where $S^\star$ joins only the group that contains the segment $\widetilde{S}_2$ of ${\cal W}$ consecutive to $S_2$ is symmetric. Now, consider the subcase where $S^\star$ joins both of these groups and hence we merge them with respect to ${\cal W}'$. In this subcase, we have four new crossings that may contribute to the sum of labels, but we have also merged two groups, which decreases the potential by at least $1$, so overall the potential does not increase. \end{proof} Lastly, we assert that all swollen segments can be eliminated as follows. \begin{lemma}\label{lem:noSwollenSegments} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a sensible, outer-terminal, extremal weak linkage $\cal W$ in $H$ that is pushed onto $R$, has no special U-turns and no swollen segments, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, and satisfies $\ensuremath{\mathsf{Potential}}({\cal W})\leq \alpha_{\rm potential}(k)$. \end{lemma} \begin{proof} By Lemma \ref{lem:noUTurns}, there exists a sensible, outer-terminal, extremal weak linkage in $H$ that has no special U-turns, is pushed onto $R$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and whose potential is upper bounded by $\alpha_{\rm potential}(k)$. Among all such weak linkages, let $\cal W$ be one with a minimum number of segments. To conclude the proof, it suffices to argue that $\cal W$ has no swollen segments. Suppose, by way of contradiction, that $\cal W$ has at least one swollen segment. Then, by Lemma \ref{lem:existsInnermostSwollenSeg}, $\cal W$ has an innermost swollen segment $S$. By Lemma \ref{lem:moveThroughTupleValid}, the move-through tuple $T$ of $S$ is well-defined, and its application is valid. Furthermore, by Lemma \ref{lem:moveThroughTupleProperties}, the resulting weak linkage ${\cal W}'$ is sensible, outer-terminal, extremal, pushed onto $R$, has no special U-turns and fewer segments than $\cal W$, and its potential is upper bounded by the potential of $\cal W$. Since discrete homotopy is an equivalence relation, ${\cal W}'$ is discretely homotopic to some solution of $(G,S,T,g,k)$. However, this is a contradiction to the choice of $\cal W$. \end{proof} Lastly, we prove that having eliminated all swollen segments indeed implies that the total number of segments is small. \begin{lemma}\label{lem:fewSegments} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree.
Let $\cal W$ be a sensible, outer-terminal, extremal weak linkage that is pushed onto $R$ and has no special U-turns and no swollen segments. Then, $|\ensuremath{\mathsf{Seg}}({\cal W})|\leq \ensuremath{\mathsf{Potential}}({\cal W})$. \end{lemma} \begin{proof} To prove that $|\ensuremath{\mathsf{Seg}}({\cal W})|\leq \ensuremath{\mathsf{Potential}}({\cal W})$, it suffices to show that for every segment group $W\in\ensuremath{\mathsf{SegGro}}({\cal W})$, we have that $|\ensuremath{\mathsf{Seg}}(W)|\leq \ensuremath{\mathsf{Potential}}(W)$. For segment groups of size $1$, this inequality is immediate. Thus, we now consider a segment group $W\in\ensuremath{\mathsf{SegGro}}({\cal W})$ of size at least $2$. Then, \[\ensuremath{\mathsf{Potential}}(W) = 1+|\sum_{(e,e')\in E(W) \times E(W)}{\sf label}_P^{W}(e,e')|,\] where $P$ is the maximal degree-2 path of $R$ such that all of the endpoints of all of the segments in $\ensuremath{\mathsf{Seg}}(W)$ are its internal vertices. Because there do not exist swollen segments, we have that ${\sf label}_P^{W}$ assigns either only non-negative values ($0$ or $1$) or only non-positive values ($0$ or $-1$). Without loss of generality, suppose that it assigns only non-negative values. Now, notice that every pair of edges consecutively visited by $W$ whose edges belong to different segments of $W$ is assigned $1$ (because it creates a crossing with $P$). However, the number of segments of $W$ is upper bounded by one plus the number of such pairs of edges. Thus, $|\ensuremath{\mathsf{Seg}}(W)|-1\leq \sum_{(e,e')\in E(W) \times E(W)}{\sf label}_P^{W}(e,e')$. From this, we conclude that $|\ensuremath{\mathsf{Seg}}(W)|\leq \ensuremath{\mathsf{Potential}}(W)$. \end{proof} \subsection{Completion of the Simplification} The purpose of this section is to prove the following lemma. \begin{lemma}\label{lem:pushOutcome} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a simplified weak linkage $\cal W$ in $H$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$. \end{lemma} To this end, we first eliminate all remaining (non-special) U-turns based on Lemmas \ref{lem:eliminateInnermostUTurn}, \ref{lem:noSwollenSegments} and \ref{lem:fewSegments}, similarly to the proof of Lemma \ref{lem:noUTurns}. \begin{lemma}\label{lem:noUTurnsFinal} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}, and $R$ be a backbone Steiner tree. Then, there exists a sensible, outer-terminal, U-turn-free weak linkage $\cal W$ in $H$ that is pushed onto $R$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, and satisfies $|\ensuremath{\mathsf{Seg}}({\cal W})|\leq \alpha_{\rm potential}(k)$. \end{lemma} \begin{proof} By Lemma \ref{lem:noSwollenSegments}, there exists a sensible, outer-terminal, extremal weak linkage in $H$ that is pushed onto $R$, has no special U-turns and no swollen segments, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and whose potential is upper bounded by $\alpha_{\rm potential}(k)$. By Lemma \ref{lem:fewSegments}, its number of segments is also upper bounded by $\alpha_{\rm potential}(k)$. Among all such weak linkages that are sensible, outer-terminal, pushed onto $R$, discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and whose number of segments is upper bounded by $\alpha_{\rm potential}(k)$, let $\cal W$ be one with a minimum number of edges. To conclude the proof, it suffices to argue that $\cal W$ is U-turn-free. Suppose, by way of contradiction, that $\cal W$ has at least one U-turn.
Then, by Lemma \ref{lem:existsInnermostUTurn}, $\cal W$ has an innermost U-turn $U=\{e_i,e_j\}$. Let $W$ be the walk in $\cal W$ that uses $e_i$ and $e_j$, and $C$ be the cycle in $H$ that consists of $e_i$ and $e_j$. Then, by Lemma \ref{lem:eliminateInnermostUTurn}, the cycle pull operation is applicable to $(W,C)$. Furthermore, by Lemma \ref{lem:eliminateInnermostUTurn}, the resulting weak linkage ${\cal W}'$ is sensible, outer-terminal, pushed onto $R$, has fewer edges than $\cal W$, and its number of segments is upper bounded by the number of segments of $\cal W$. Since discrete homotopy is an equivalence relation, ${\cal W}'$ is discretely homotopic to some solution of $(G,S,T,g,k)$. However, this is a contradiction to the choice of $\cal W$. \end{proof} Now, we prove that having no U-turns implies that each segment can use only two parallel copies of every edge. \begin{lemma}\label{lem:eachSegContribTwo} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}, and $R$ be a Steiner tree. Let $\cal W$ be an outer-terminal, U-turn-free weak linkage that is pushed onto $R$. Then, each segment $S\in\ensuremath{\mathsf{Seg}}({\cal W})$ uses at most two copies of every edge in $E(R)$. \end{lemma} \begin{proof} Consider some segment $S\in\ensuremath{\mathsf{Seg}}({\cal W})$. Suppose, by way of contradiction, that there exists some edge $e_0\in E(R)$ such that $S$ contains at least three edges parallel to $e_0$ (but it cannot contain $e_0$ itself as $\cal W$ is pushed onto $R$). Then, without loss of generality, suppose that it contains two copies with positive index, $e_i$ and $e_j$, and let $S'$ be the subwalk of $S$ having these edge copies as its extreme edges. Then, because $S$ is U-turn-free, when we traverse $S'$ from $e_i$ to $e_j$, we must visit the positive and the negative copy of every other edge in $E(R)$ exactly once. However, this means that $E({\cal W})$ contains more than one copy of the edge incident to $t^\star$ in $R$, which contradicts the assumption that $\cal W$ is outer-terminal. \end{proof} Having established Lemmas \ref{lem:makingCanonical}, \ref{lem:noUTurnsFinal} and \ref{lem:eachSegContribTwo}, we are ready to prove Lemma \ref{lem:pushOutcome}. \begin{proof}[Proof of Lemma \ref{lem:pushOutcome}.] By Lemma \ref{lem:noUTurnsFinal}, there exists a sensible, outer-terminal, U-turn-free weak linkage ${\cal W}'$ in $H$ that is pushed onto $R$, is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, and whose number of segments is upper bounded by $\alpha_{\rm potential}(k)$. By Lemma \ref{lem:eachSegContribTwo}, the multiplicity of ${\cal W}'$ is upper bounded by $2\alpha_{\rm potential}(k)=\alpha_{\rm mul}(k)$. By Lemma \ref{lem:makingCanonical}, there exists a weak linkage ${\cal W}$ that is sensible, pushed onto $R$, canonical, discretely homotopic to ${\cal W}'$, and whose multiplicity is upper bounded by the multiplicity of ${\cal W}'$. Thus, $\cal W$ is simplified. Moreover, since discrete homotopy is an equivalence relation, $\cal W$ is discretely homotopic to some solution of $(G,S,T,g,k)$. \end{proof} \section{Reconstruction of Pushed Weak Linkages from Templates}\label{sec:reconstruction} In this section, based on the guarantee of Lemma \ref{lem:pushOutcome}, we only attempt to reconstruct simplified weak linkages. Towards this, we introduce the notion of a template (based on another notion called a pairing).
Roughly speaking, a template indicates how many parallel copies of each edge incident to a vertex in $V_{=1}(R)\cup V_{\geq 3}(R)$ are used by the walks in the simplified weak linkage $\cal W$ under consideration, and how many times, for each pair $(e,e')$ of non-parallel edges sharing a vertex, the walks in $\cal W$ traverse from a copy of $e$ to a copy of $e'$. Observe that a template does not indicate which edge copy is used by each walk, but only specifies certain numbers. Nevertheless, we will show that this is sufficient for faithful reconstruction of simplified weak linkages. The great advantage of templates, proved later, is that there are only a few of~them. \subsection{Generic Templates and Templates of Simplified Weak Linkages} We begin with the definition of the notion of a pairing, which will form the basis of a template. Let $V^\star(R) = V_{=1}(R) \cup V_{\geq 3}(R) \cup V^\star_2(R)$ where $ V^\star_2(R) = \{ v \in V_{=2}(R) \mid \exists u \in V_{=1}(R) \cup V_{\geq 3}(R) \text{ such that } \{u,v\} \in E(R) \}$. Observe that $|V^\star_2(R)| \leq 2(|V_{=1}(R)| + |V_{\geq 3}(R)| - 1) \leq 8k$, by Observation~\ref{obs:leaIntSteiner}. Therefore, $|V^\star(R)| \leq 12k$. Let $E^\star(R)$ denote the set of edges in $E(R)$ that are incident on a vertex of $V^\star(R)$, and observe that $|E^\star(R)| \leq 24k$ (since $R$ is a tree). \begin{definition}[{\bf Pairing}]\label{def:pairing} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}\ with a Steiner tree $R$. For a vertex $v\in V_{\geq 3}(R)$, a {\em pairing at $v$} is a set $\ensuremath{\mathsf{pairing}}_v$ of unordered pairs of distinct edges in $R$ incident to $v$. For a vertex $v \in V^\star_2(R)$, a \emph{pairing at $v$} is a collection of pairs of (possibly non-distinct) edges in $E_R(v)$. For a vertex $v\in V_{=1}(R)$, a pairing at $v$ is either the empty set or the singleton set consisting of the pair in which the (unique) edge of $R$ incident to $v$ occurs twice. A collection $\{\ensuremath{\mathsf{pairing}}_u\}|_{u\in V^\star(R)}$, where $\ensuremath{\mathsf{pairing}}_u$ is a pairing at $u$ for every vertex $u\in V^\star(R)$, is called a {\em pairing}. \end{definition} As we will see later, simplified weak linkages can only give rise to a specific type of pairings, which we call non-crossing pairings. \begin{definition}[{\bf Non-Crossing Pairing}]\label{def:noncrossingPairing} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Consider a vertex $v\in V^\star(R)$, and let $e^1,e^2,\ldots,e^r$ be the edges in $E(R)$ incident to $v$ in clockwise order, where the first edge $e^1$ is chosen arbitrarily. A pairing $\ensuremath{\mathsf{pairing}}_v$ at $v$ is {\em non-crossing} if there do not exist two pairs $(e^i,e^j)$ and $(e^x,e^y)$ in $\ensuremath{\mathsf{pairing}}_v$, where $i<j$ and $x<y$, such that $i<x<j<y$ or $x<i<y<j$. More generally, a pairing $\{\ensuremath{\mathsf{pairing}}_u\}|_{u\in V^\star(R)}$ is {\em non-crossing} if, for every $u\in V^\star(R)$, the pairing $\ensuremath{\mathsf{pairing}}_u$ is non-crossing. \end{definition} We now show that a non-crossing pairing can contain only $\mathcal{O}(k)$ pairs, which is better than the trivial bound of $\mathcal{O}(k^2)$. This bound will be required to attain a running time of $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$ rather than~$2^{\mathcal{O}(k^3)}n^{\mathcal{O}(1)}$. \begin{lemma}\label{lem:numNonCrossingLinK} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree.
Let $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ be a non-crossing pairing. Then, $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k):=48k$. \end{lemma} \begin{proof} Towards the bound on $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|$, we first obtain a bound on each individual set $\ensuremath{\mathsf{pairing}}_v$. To this end, consider some vertex $v\in V_{\geq 3}(R)$, and let $e^0,e^1,\ldots,e^{r-1}$ be the edges in $E(R)$ incident to $v$ in clockwise order. Consider the undirected graph $C$ on vertex set $\{u_{e^i} \mid i\in\{0,1,\ldots,r-1\}\}$ and edge set $\{\{u_{e^i},u_{e^{(i+1)\mod r}}\} \mid i\in\{0,1,\ldots,r-1\}\}\cup\{\{u_{e^i},u_{e^j}\} \mid (e^i,e^j)\in \ensuremath{\mathsf{pairing}}_v\}$. Now, notice that $C$ is an outerplanar graph (Fig.~\ref{fig:pairOuterplanar}). To see this, draw the vertices of $C$ on a circle in the plane, so that the curves on the circle that connect them correspond to the drawings of the edges in $\{\{u_{e^i},u_{e^{(i+1)\mod r}}\} \mid i\in\{0,1,\ldots,r-1\}\}$. Now, for each edge in $\{\{u_{e^i},u_{e^j}\} \mid (e^i,e^j)\in \ensuremath{\mathsf{pairing}}_v\}$, draw a straight line segment inside the circle that connects $u_{e^i}$ and $u_{e^j}$. The condition that asserts that $\ensuremath{\mathsf{pairing}}_v$ is non-crossing ensures that no two line segments among those drawn previously intersect (except at their endpoints). As an outerplanar graph on $q$ vertices can have at most $2q-3$ edges, we have that $|E(C)|<2|V(C)|=2r$. Because $|\ensuremath{\mathsf{pairing}}_v|\leq |E(C)|$, we have that $|\ensuremath{\mathsf{pairing}}_v|\leq 2r$. For a vertex $v \in V^\star_2(R)$, since it has only two edges incident on it, $|\ensuremath{\mathsf{pairing}}_v| \leq 3$. Finally, for $v \in V_{=1}(R)$, $|\ensuremath{\mathsf{pairing}}_v| \leq 1$. Thus, for every vertex $v\in V^\star(R)$, $|\ensuremath{\mathsf{pairing}}_v|$ is bounded by twice the degree of $v$ in $R$. Since $|V^\star(R)| \leq 12k$, the sum of the degrees in $R$ of the vertices in $V^\star(R)$ is upper bounded by $24k$. From this, we conclude that $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k)$. \end{proof} \begin{figure} \caption{Illustration of the outerplanar graph in Lemma~\ref{lem:numNonCrossingLinK}.} \label{fig:pairOuterplanar} \end{figure} Now, we associate a pairing with a pushed weak linkage. \begin{definition}[{\bf Pairing of a Weak Linkage}]\label{def:pairingOfStitching} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree, and let $\cal W$ be a weak linkage pushed onto $R$. For a vertex $v\in V^\star(R)$, the {\em pairing of $\cal W$ at $v$} is the set that contains every pair of edges $(e,e')$ in $R$ that are incident to $v$ and such that there exists at least one walk in $\cal W$ where $e_i$ and $e'_j$ occur consecutively, where $e_i$ and $e'_j$ are parallel copies of $e$ and $e'$, respectively. More generally, the {\em pairing of $\cal W$} is the collection $\{\ensuremath{\mathsf{pairing}}_u\}|_{u\in V^\star(R)}$, where $\ensuremath{\mathsf{pairing}}_u$ is the pairing of $\cal W$ at $u$ for every vertex $u\in V^\star(R)$. \end{definition} Apart from a pairing, to be able to reconstruct a simplified weak linkage, we need additional information in the form of an assignment of numbers to the pairs in the pairing. To this end, we have the definition of a template.
\begin{definition}[{\bf Template}]\label{def:template} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Let $\ensuremath{\mathsf{pairing}}_v$ be a pairing at some vertex $v\in V^\star(R)$. A {\em template for $\ensuremath{\mathsf{pairing}}_v$} is a function $\ensuremath{\mathsf{template}}_v: \ensuremath{\mathsf{pairing}}_v\rightarrow \mathbb{N}$. If the maximum integer assigned by $\ensuremath{\mathsf{template}}_v$ is upper bounded by $N$, for some $N\in\mathbb{N}$, then it is also called an \emph{$N$-template}. More generally, a {\em template (resp.~$N$-template) of a pairing $\{\ensuremath{\mathsf{pairing}}_u\}|_{u\in V^\star(R)}$} is a collection $\{\ensuremath{\mathsf{template}}_u\}|_{u\in V^\star(R)}$, where $\ensuremath{\mathsf{template}}_u$ is a template (resp.~$N$-template) for $\ensuremath{\mathsf{pairing}}_u$ for every vertex $u\in V^\star(R)$. \end{definition} We proceed to associate a template with a weak linkage. \begin{definition}[{\bf Template of a Weak Linkage}]\label{def:templateOfStitch} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree, and let $\cal W$ be a weak linkage pushed onto $R$. Let $\{\ensuremath{\mathsf{pairing}}_u\}|_{u\in V^\star(R)}$ be the pairing of $\cal W$. For a vertex $v\in V^\star(R)$, the {\em template of $\cal W$ at $v$} is the function $\ensuremath{\mathsf{template}}_v: \ensuremath{\mathsf{pairing}}_v\rightarrow \mathbb{N}$ such that for every $(e,e')\in \ensuremath{\mathsf{pairing}}_v$, we have \[\begin{array}{ll} \ensuremath{\mathsf{template}}_v((e,e')) = |\big\{\{\widehat{e},\widehat{e}'\} \mid & \widehat{e}\ \mathrm{is\ parallel\ to}\ e, \widehat{e}'\ \mathrm{is\ parallel\ to}\ e',\\ & \exists W\in{\cal W}\ \mathrm{s.t.}\ W\ \mathrm{traverses}\ \widehat{e}\ \mathrm{and}\ \widehat{e}'\ \mathrm{consecutively}\big\}|. \end{array}\] More generally, the {\em template of $\cal W$} is the collection $\{\ensuremath{\mathsf{template}}_u\}|_{u\in V^\star(R)}$, where $\ensuremath{\mathsf{template}}_u$ is the template of $\cal W$ at $u$ for every vertex $u\in V^\star(R)$. \end{definition} Now, we claim that the pairing of a pushed weak linkage is non-crossing. \begin{lemma}\label{lem:pairingOfFlowIsNoncrossing} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Then, the pairing of any weak linkage $\cal W$ pushed onto $R$ is non-crossing. \end{lemma} \begin{proof} Let $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ be the pairing of $\cal W$. Suppose, by way of contradiction, that $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ is crossing. For every $v\in V_{=1}(R) \cup V^\star_2(R)$, the pairing $\ensuremath{\mathsf{pairing}}_v$ is trivially non-crossing. Thus, there exists a vertex $v\in V_{\geq 3}(R)$ such that $\ensuremath{\mathsf{pairing}}_v$ is crossing. Let $e^1,e^2,\ldots,e^r$ be the edges in $E(R)$ incident to $v$ in clockwise order. Because $\ensuremath{\mathsf{pairing}}_v$ is crossing, there exist two pairs $(e^i,e^j)$ and $(e^x,e^y)$ in $\ensuremath{\mathsf{pairing}}_v$, where $i<j$ and $x<y$, such that $i<x<j<y$ or $x<i<y<j$.
By the definition of $\ensuremath{\mathsf{pairing}}_v$, this means that there exist walks $\widehat{W},\overline{W}\in {\cal W}$ (possibly $\widehat{W}=\overline{W}$) and edges $\widehat{e}^i,\widehat{e}^j,\overline{e}^x$ and $\overline{e}^y$ that are parallel to $e^i,e^j,e^x$ and $e^y$, respectively, such that $\widehat{W}$ traverses $\widehat{e}^i$ and $\widehat{e}^j$ consecutively, and $\overline{W}$ traverses $\overline{e}^x$ and $\overline{e}^y$ consecutively. However, because $i<x<j<y$ or $x<i<y<j$, and because parallel edges incident to each vertex appear consecutively in its cyclic order, we derive that $(v,\widehat{e}^i,\widehat{e}^j,\overline{e}^x,\overline{e}^y)$ is a crossing of $\widehat{W}$ and~$\overline{W}$, which contradicts the fact that $\cal W$ is a weak linkage. \end{proof} From Lemmas \ref{lem:numNonCrossingLinK} and \ref{lem:pairingOfFlowIsNoncrossing}, we obtain the following corollary. \begin{corollary}\label{cor:pairingOfFlowIsLinear} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}\ with a backbone Steiner tree $R$. Let $\cal W$ be a weak linkage pushed onto $R$ with a pairing $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$. Then, $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|$ $\leq \alpha_{\mathrm{npair}}(k)$. \end{corollary} Additionally, we claim that we can focus our attention on pushed weak linkages whose templates are $\alpha_{\mathrm{mul}}(k)$-templates. \begin{lemma}\label{lem:templateOfFlowIsBounded} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}\ with a backbone Steiner tree $R$. Then, there exists a simplified weak linkage that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$, whose template is an $\alpha_{\mathrm{mul}}(k)$-template. \end{lemma} \begin{proof} By Lemma~\ref{lem:pushOutcome}, there exists a simplified weak linkage $\cal W$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$. Because $\cal W$ is simplified, its multiplicity is upper bounded by $\alpha_{\rm mul}(k)$. Let $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ and $\{\ensuremath{\mathsf{template}}_v\}|_{v\in V^\star(R)}$ be the pairing and template of $\cal W$, respectively. Consider some vertex $v\in V^\star(R)$ and pair $(e,e')\in \ensuremath{\mathsf{pairing}}_v$. To complete the proof, we need to show that $\ensuremath{\mathsf{template}}_v((e,e'))\leq \alpha_{\mathrm{mul}}(k)$. By the definition of a template, \[\begin{array}{ll} \ensuremath{\mathsf{template}}_v((e,e')) = |\{\{\widehat{e},\widehat{e}'\} \mid & \widehat{e}\ \mathrm{is\ parallel\ to}\ e, \widehat{e}'\ \mathrm{is\ parallel\ to}\ e',\\ & \exists W\in{\cal W}\ \mathrm{s.t.}\ W\ \mathrm{traverses}\ \widehat{e}\ \mathrm{and}\ \widehat{e}'\ \mathrm{consecutively}\}|. \end{array}\] Thus, because the walks in $\cal W$ are pairwise edge-disjoint and each walk in $\cal W$ visits distinct edges (by the definition of a weak linkage), $\ensuremath{\mathsf{template}}_v((e,e'))$ is upper bounded by the number of edges parallel to $e$ that belong to $E({\cal W})$. Thus, by the definition of the multiplicity of a weak linkage, we conclude that $\ensuremath{\mathsf{template}}_v((e,e'))\leq \alpha_{\mathrm{mul}}(k)$. \end{proof} In light of Corollary \ref{cor:pairingOfFlowIsLinear} and Lemma \ref{lem:templateOfFlowIsBounded}, we define the set of all pairings and templates in which we will be interested as follows. \begin{definition}[{\bf The Set $\ensuremath{\mathsf{ALL}}$}]\label{ref:allTemplates} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}.
Let $R$ be a Steiner tree. The set $\ensuremath{\mathsf{ALL}}$ contains every collection $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v\in V^\star(R)}$ where $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ is a non-crossing pairing that satisfies $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k)$, and $\{\ensuremath{\mathsf{template}}_v\}|_{v\in V^\star(R)}$ is an $\alpha_{\mathrm{mul}}(k)$-template for $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$. \end{definition} From Corollary \ref{cor:pairingOfFlowIsLinear} and Lemma \ref{lem:templateOfFlowIsBounded}, we have the following result. \begin{corollary}\label{cor:existsInAll} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Then, there exists a simplified weak linkage $\cal W$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and satisfies the following property: There exists $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v\in V^\star(R)}\in\ensuremath{\mathsf{ALL}}$ such that $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ is the pairing of $\cal W$, and $\{\ensuremath{\mathsf{template}}_v\}|_{v\in V^\star(R)}$ is the template of $\cal W$. \end{corollary} Because we only deal with pairings having just $\mathcal{O}(k)$ pairs and upper bound the largest integer assigned by templates by $2^{\mathcal{O}(k)}$, the set $\ensuremath{\mathsf{ALL}}$ is ``small'', as asserted by the following~lemma. \begin{lemma}\label{lem:enumerateTemplates} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Then, $|\ensuremath{\mathsf{ALL}}|=2^{\mathcal{O}(k^2)}$. Moreover, $\ensuremath{\mathsf{ALL}}$ can be computed in time $2^{\mathcal{O}(k^2)}$. \end{lemma} \begin{proof} First, we upper bound the number of pairings $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ that satisfy $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k)$. By Observation \ref{obs:leaIntSteiner}, the number of edges in $E^\star(R)$ is at most $24k$, and hence the number of pairs of edges in $E^\star(R)$ is at most $(24k)^2$. Thus, the number of choices for $\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v$ is at most $\binom{(24k)^2}{\alpha_{\mathrm{npair}}(k)}$. Note that each pair of edges in this union belongs to $\ensuremath{\mathsf{pairing}}_v$ for at most one vertex $v\in V^\star(R)$. From this, we conclude that the number of pairings $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ that satisfy $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k)$ is at most $\binom{(24k)^2}{\alpha_{\mathrm{npair}}(k)}\cdot 2^{\alpha_{\mathrm{npair}}(k)}=2^{\mathcal{O}(k\log k)}$ (as $\alpha_{\mathrm{npair}}(k)=\mathcal{O}(k)$). Now, fix some pairing $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ that satisfies $|\bigcup_{v\in V^\star(R)}\ensuremath{\mathsf{pairing}}_v|\leq \alpha_{\mathrm{npair}}(k)$. We test whether this pairing is non-crossing, at each vertex $v \in V^\star(R)$, by testing all possible $4$-tuples of edges in $E_R(v)$. This takes $k^{\mathcal{O}(1)}$ time in total.
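For concreteness, the following minimal Python sketch illustrates one possible way to carry out the non-crossing test at a single vertex; the representation of the edges incident to $v$ by their positions in the clockwise order, as well as the function name, are illustrative assumptions of the sketch and not part of the formal argument.
\begin{verbatim}
from itertools import combinations

def is_noncrossing_at_vertex(pairing_v):
    # pairing_v: pairs of edge positions in the clockwise order around v.
    # Two pairs cross iff exactly one endpoint of the second pair lies
    # strictly between the endpoints of the first pair.
    for (a, b), (c, d) in combinations(pairing_v, 2):
        i, j = min(a, b), max(a, b)
        if (i < c < j) != (i < d < j):
            return False
    return True

# Example with five edges e^0, ..., e^4 around v:
assert is_noncrossing_at_vertex([(0, 2), (3, 4)])      # disjoint pairs
assert not is_noncrossing_at_vertex([(0, 2), (1, 3)])  # interleaving pairs
\end{verbatim}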
Then, the number of $\alpha_{\mathrm{mul}}(k)$-templates for $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V^\star(R)}$ is upper bounded by $(\alpha_{\mathrm{mul}}(k))^{\alpha_{\mathrm{npair}}(k)}=2^{\mathcal{O}(k^2)}$ (as $\alpha_{\mathrm{mul}}(k)=2^{\mathcal{O}(k)}$ and $\alpha_{\mathrm{npair}}(k)=\mathcal{O}(k)$). Thus, we have that $|\ensuremath{\mathsf{ALL}}|=2^{\mathcal{O}(k^2)}$. It should also be clear that these arguments, by simple enumeration, imply that $\ensuremath{\mathsf{ALL}}$ can be computed in time $2^{\mathcal{O}(k^2)}$. \end{proof} \paragraph{Extension of Pairings and Templates.} To describe the reconstruction of simplified weak linkages from their pairings and templates, we must extend them from $V^\star(R)$ to all of $V(R)$. Intuitively, this extension is based on the observation that $\cal W$ is U-turn-free and sensible. Therefore, if a walk in $\cal W$ visits a maximal degree-2 path in $R$, then it must traverse the entirety of this path. Hence, the pairings and templates at any internal vertex of a degree-2 path can be directly obtained from the endpoint vertices of the path. We begin by identifying which collections of pairings and templates can be extended. \begin{definition} \label{def:extpairtemplCheck} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let ${\cal A} = \{(\ensuremath{\mathsf{pairing}}_v, \ensuremath{\mathsf{template}}_v)\}_{v\in V^\star(R)}$. Then $\cal A$ is \emph{extensible to all of $V(R)$} if the following conditions are true for every maximal degree-2 path in $R$. \begin{itemize} \item Let $u,v \in V^\star_2(R)$ be such that they lie on the same maximal degree-2 path of $R$. Consider the subpath $\ensuremath{\mathsf{path}}_{R}(u,v)$ with endpoints $u$ and $v$, and let $e^u$ and $e^v$ be the edges in $E(\ensuremath{\mathsf{path}}_{R}(u,v))$ incident on $u$ and $v$, respectively. Suppose that $e^u$ and $e^v$ are distinct edges. Then $(e,e^u) \in \ensuremath{\mathsf{pairing}}_u$ if and only if $(e',e^v) \in \ensuremath{\mathsf{pairing}}_v$, where $E_R(u) = \{e,e^u\}$ and $E_R(v)=\{e',e^v\}$. \item Moreover, if the above condition holds and $(e,e^u) \in \ensuremath{\mathsf{pairing}}_u$, then $\ensuremath{\mathsf{template}}_u(e,e^u) = \ensuremath{\mathsf{template}}_v(e',e^v)$. \end{itemize} \end{definition} The following lemma shows that pairings and templates of simplified weak linkages are extensible to all of $V(R)$. \begin{lemma} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Let $\cal W$ be a simplified weak linkage with pairing and template ${\cal A} = \{(\ensuremath{\mathsf{pairing}}_v, \allowbreak \ensuremath{\mathsf{template}}_v)\}|_{v\in V^\star(R)}$. Then, $\cal A$ is extensible to all of $V(R)$. \end{lemma} \begin{proof} Let $u,v \in V^\star_2(R)$ be such that they lie on the same maximal degree-2 path of $R$. Consider the subpath $\ensuremath{\mathsf{path}}_{R}(u,v)$ with endpoints $u$ and $v$, and let $e^u$ and $e^v$ be the edges in $E(\ensuremath{\mathsf{path}}_{R}(u,v))$ incident on $u$ and $v$, respectively. Suppose that $e^u$ and $e^v$ are distinct edges. Then $\ensuremath{\mathsf{path}}_{R}(u,v)$ must have an internal vertex. Let $V(\ensuremath{\mathsf{path}}_{R}(u,v)) = \{ u = w_0, w_1, w_2, \ldots, w_p, w_{p+1} = v \}$ and let $E(\ensuremath{\mathsf{path}}_{R}(u,v)) = \{e^i=\{w_i,w_{i+1}\} \mid 0 \leq i \leq p\}$ where $e^0 = e^u$ and $e^p = e^v$.
Consider an internal vertex $w_i$ of $\ensuremath{\mathsf{path}}_{R}(u,v)$ and note that $E_R(w_i) = \{e^{i-1},e^i\}$. Observe that, as $\cal W$ is U-turn-free, $w_i \notin N_R(V_{=1}(R))$ and the endpoints of every walk in $\cal W$ lie in $V_{=1}(R)$, there is no walk in $\cal W$ that visits two parallel copies of an edge in $E_R(w_i)$ consecutively, as that would constitute a U-turn. Therefore, any walk in $\cal W$ that visits $e^{i-1}_{j_{i-1}}$ must also visit $e^i_{j_i}$, where $e^{i-1}_{j_{i-1}}$ and $e^i_{j_i}$ are parallel copies of $e^{i-1}$ and $e^i$, respectively. This holds for all vertices $w_1, w_2, \ldots, w_p$. Let $E^\star(\ensuremath{\mathsf{path}}_{R}(u,v))$ be the collection of all the parallel copies of every edge in $E(\ensuremath{\mathsf{path}}_{R}(u,v))$. Then, for a walk $W \in \cal{W}$, let $\wh{W}_1, \wh{W}_2, \ldots, \wh{W}_t$ be the maximal subwalks of $W$ restricted to $E^\star(\ensuremath{\mathsf{path}}_{R}(u,v))$. Then each $\wh{W}_i$ is a path from $u$ to $v$ along the parallel copies of $e^0,e^1, \ldots, e^p$. Hence, $(e, e^u) = (e,e^0) \in \ensuremath{\mathsf{pairing}}_u$ if and only if $(e',e^v) = (e',e^p) \in \ensuremath{\mathsf{pairing}}_v$, where $e$ and $e'$ are the edges in $E(R) \setminus E(\ensuremath{\mathsf{path}}_{R}(u,v))$ that are incident on $u$ and $v$, respectively. Further, observe that each time a walk $W \in \cal W$ visits a parallel copy of $e^u = e^0$ (immediately after visiting a parallel copy of $e$), it must consecutively traverse parallel copies of $e^1, e^2, \ldots, e^p = e^v$ (and then immediately visit a parallel copy of $e'$), and vice versa. Therefore, by definition, $\ensuremath{\mathsf{template}}_u(e,e^u) = \ensuremath{\mathsf{template}}_{w_1}(e^u,e^1) = \ensuremath{\mathsf{template}}_{w_2}(e^1, e^2) = \ldots = \ensuremath{\mathsf{template}}_v(e^{p-1},e^v) = \ensuremath{\mathsf{template}}_v(e',e^v)$. \end{proof} We now have the following definition. \begin{definition}[\bf Extension of Pairings and Templates.] \label{def:extendPairTemplate} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Let ${\cal A} = \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v\in V^\star(R)}$. Then, the \emph{extension of $\cal A$ to $V(R)$} is the collection $\wh{\cal A} = \{(\wh{\ensuremath{\mathsf{pairing}}}_v,\wh{\ensuremath{\mathsf{template}}}_v)\}|_{v\in V(R)}$ such that: \begin{itemize} \item If $\cal A$ is not extensible (Definition~\ref{def:extpairtemplCheck}), then $\wh{\cal A}$ is invalid. \item Otherwise, $\cal A$ is extensible and we have two cases. \begin{itemize} \item If $v \in V_{=1}(R) \cup V_{\geq 3}(R)$, then $\wh{\ensuremath{\mathsf{pairing}}}_v = \ensuremath{\mathsf{pairing}}_v$ and $\wh{\ensuremath{\mathsf{template}}}_v = \ensuremath{\mathsf{template}}_v$. \item Otherwise, $v \in V_{=2}(R)$; let $u,w \in V_{=1}(R) \cup V_{\geq 3}(R)$ be such that $v \in V(\ensuremath{\mathsf{path}}_{R}(u,w))$. Let $e_u$ and $e_w$ be the two edges in $\ensuremath{\mathsf{path}}_{R}(u,w)$ incident on $u$ and $w$, respectively. If $(e,e_u) \notin \ensuremath{\mathsf{pairing}}_u$ for every $e \in E_R(u)$, then $\wh{\ensuremath{\mathsf{pairing}}}_v = \emptyset$. Otherwise, there is some $e \in E_R(u)$ such that $(e,e_u) \in \ensuremath{\mathsf{pairing}}_u$. Then $\wh{\ensuremath{\mathsf{pairing}}}_v = \{(e',e'')\}$ where $e'$ and $e''$ are the two edges in $E(R)$ incident on $v$. Further, $\wh{\ensuremath{\mathsf{template}}}_v(e',e'') = \ensuremath{\mathsf{template}}_u(e,e_u)$.
\end{itemize} \end{itemize} \end{definition} Let $\wh{\ensuremath{\mathsf{ALL}}}$ denote the collection of extensions of all the pairings and templates in $\ensuremath{\mathsf{ALL}}$. Then we have the following corollary of Lemma~\ref{lem:enumerateTemplates}. \begin{lemma}\label{lem:enumerateExtTemplates} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Then, $|\wh{\ensuremath{\mathsf{ALL}}}|=2^{\mathcal{O}(k^2)}$. Moreover, $\wh{\ensuremath{\mathsf{ALL}}}$ can be computed in time $2^{\mathcal{O}(k^2)} n$. \end{lemma} \begin{proof} Given ${\cal A} = \{(\ensuremath{\mathsf{pairing}}_v, \ensuremath{\mathsf{template}}_v)\}|_{v \in V^\star(R)} \in \ensuremath{\mathsf{ALL}}$, we apply Definition~\ref{def:extendPairTemplate} to obtain the extension $\wh{\cal A}$. Note that, for every vertex $v\in V^\star(R)$, we have that $|\ensuremath{\mathsf{pairing}}_v|=\mathcal{O}(k)$ and the numbers assigned by $\ensuremath{\mathsf{template}}_v$ are bounded by $2^{\mathcal{O}(k)}$. Further, since $|V^\star(R)| \leq 12k$, we can test the conditions in Definition~\ref{def:extendPairTemplate} in $k^{\mathcal{O}(1)}$ time. Finally, we can construct $\wh{\cal A} = \{(\wh{\ensuremath{\mathsf{pairing}}}_v, \wh{\ensuremath{\mathsf{template}}}_v)\}|_{v \in V(R)}$ in total time $2^{\mathcal{O}(k)} n$, as $|V(R)| \leq n$. Since $|\ensuremath{\mathsf{ALL}}| = 2^{\mathcal{O}(k^2)}$ and it can be enumerated in time $2^{\mathcal{O}(k^2)}$, it follows that $|\wh{\ensuremath{\mathsf{ALL}}}| = 2^{\mathcal{O}(k^2)}$ and it can be enumerated in $2^{\mathcal{O}(k^2)} n$ time. \end{proof} In the rest of this section, we only require pairings and templates that are extended to all of $V(R)$. For convenience, we abuse notation and denote the extension of a collection of pairings and templates ${\cal A} \in \ensuremath{\mathsf{ALL}}$ to all of $V(R)$ by $\{\ensuremath{\mathsf{pairing}}_v\}|_{v \in V(R)}$ and $\{\ensuremath{\mathsf{template}}_v\}|_{v \in V(R)}$, respectively. The following corollary follows from the definition of $\wh{\ensuremath{\mathsf{ALL}}}$ and Corollary~\ref{cor:existsInAll}. \begin{corollary}\label{cor:existsInAll2} Let $(G,S,T,g,k)$ be a good \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Then, there exists a simplified weak linkage $\cal W$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$ and satisfies the following property: There exists $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v\in V(R)}\in\wh{\ensuremath{\mathsf{ALL}}}$ such that $\{\ensuremath{\mathsf{pairing}}_v\}|_{v\in V(R)}$ is the pairing of $\cal W$ and $\{\ensuremath{\mathsf{template}}_v\}|_{v\in V(R)}$ is the template of $\cal W$. \end{corollary} \paragraph{Stitching of Weak Linkages.} Let us now introduce the notion of a stitching, which gives a localized view of a weak linkage pushed onto $R$ at each vertex in $V(R)$. Intuitively, the stitching at a vertex $v \in V(R)$ is a function on the set of edges incident on $v$ that maps each edge to the next (or the previous) edge in a weak linkage. Note that only a subset of the edges incident on $v$ may participate in a weak linkage. Therefore, we introduce the following notation: an edge is mapped to $\bot$ to indicate that it is not part of a weak linkage.
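To make this local view concrete, the following minimal sketch is purely illustrative and not part of the formal development; the representation and all names in it are hypothetical. It stores a candidate stitching at a single vertex $v$ as a partial involution on the edges around $v$, identified by their positions in the cyclic enumeration around $v$, and checks the basic consistency conditions that are formalized below.
\begin{verbatim}
# Illustrative sketch only: a candidate stitching at a single vertex v.
# Edges of E_H(v) are identified by their positions 0, 1, ..., m-1 in the
# cyclic enumeration around v; 'stitch' maps each position to its partner
# position, or to None (playing the role of "bot") if the edge is unused.

def is_stitching_at_vertex(stitch, allowed, is_terminal):
    # Edges that are not parallel to edges of R must be mapped to bot.
    if any(stitch[e] is not None for e in stitch if e not in allowed):
        return False
    matched = {e: p for e, p in stitch.items() if p is not None}
    # The map must be an involution: e is mapped to p iff p is mapped to e.
    if any(matched.get(p) != e for e, p in matched.items()):
        return False
    # Exactly one fixed point at a terminal vertex, and none otherwise.
    fixed = [e for e, p in matched.items() if p == e]
    if len(fixed) != (1 if is_terminal else 0):
        return False
    # Matched pairs must be pairwise non-crossing in the cyclic order
    # (two pairs cross exactly when their positions interleave).
    pairs = [(e, p) for e, p in matched.items() if e < p]
    for i, (a1, b1) in enumerate(pairs):
        for (a2, b2) in pairs[i + 1:]:
            if (a1 < a2 < b1 < b2) or (a2 < a1 < b2 < b1):
                return False
    return True
\end{verbatim}
A collection of such functions, one per vertex, is then glued together by additionally requiring that every edge is used at both of its endpoints or at neither, as in the formal definition below.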
Also recall that, for a vertex $v \in V(R)$, ${\sf order}_v$ is an enumeration of the edges in $\wh{E}_R(v)$ in either clockwise or anticlockwise order, where $\wh{E}_R(v) = \{ e \in E_H(v) \mid e \text{ is parallel to an edge } e' \in E(R) \}$. \begin{definition}[\bf Stitching at a Vertex] Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. For a vertex $v \in V(R)$, a function $f_v: E_H(v) \rightarrow E_H(v) \cup \bot$ is a \emph{stitching at $v$} if it satisfies the following conditions. \begin{itemize} \item For any edge $e \in E_H(v) \setminus \wh{E}_R(v)$, $f_v(e) = \bot$. \item For a pair of (possibly non-distinct) edges $e, e' \in \wh{E}_R(v)$, $f_v(e) = e'$ if and only if $f_v(e') = e$. \item If $v \in S \cup T$, then there is exactly one edge $e$ such that $f_v(e) = e$. Otherwise, there is no such edge. \item If $e_1, e_2, e_3, e_4 \in \wh{E}_R(v)$ are such that $f_v(e_1) = e_2$ and $f_v(e_3) = e_4$, then $\{e_1,e_2\}$ and $\{e_3, e_4\}$ are disjoint and non-crossing in ${\sf order}_v$.\footnote{That is, in a clockwise (or anticlockwise) enumeration of $\wh{E}_R(v)$ starting from $e_1$, these edges occur as either $e_1, e_2, e_3, e_4$ or $e_1, e_3, e_4, e_2$, where without loss of generality we assume that $e_3$ occurs before $e_4$ in this ordering.} \end{itemize} Let $\{f_v\}|_{v \in V(R)}$ be a collection of functions such that $f_v$ is a stitching at $v$ for each $v \in V(R)$. Then, this collection is called a \emph{stitching} if for every edge $e=\{u,v\} \in E_H(R)$, $f_u(e) = \bot$ if and only if $f_v(e) = \bot$. \end{definition} Let us now describe the stitching of a weak linkage that is pushed onto $R$. \begin{definition}[\bf Stitching of a Weak Linkage Pushed onto $R$] \label{def:linkageStitching} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\cal W$ be a weak linkage pushed onto $R$. Then we define the \emph{stitching of $\cal W$} as the collection of functions $\{\ensuremath{\mathsf{stitch}}_v\}|_{v \in V(R)}$, where $\ensuremath{\mathsf{stitch}}_v: E_H(v) \rightarrow E_H(v) \cup \bot$ satisfies the following. \begin{itemize} \item If there is $W\! \in\! {\cal W}$ where $e\!=\!\{v,w\}$ is the first edge of $W$, then $v \!\in\! S \!\cup\! T$ and~$\ensuremath{\mathsf{stitch}}_v(e) \!=\! e$. \item If there is $W\! \in\!{\cal W}$ where $e\!=\!\{u,v\}$ is the last edge of $W$, then $v \!\in\! S \!\cup\! T$ and~$\ensuremath{\mathsf{stitch}}_v(e) \!=\! e$. \item If there is a walk $W \in {\cal W}$ such that $e, e' \in E_H(v)$ are consecutive edges with a common endpoint $v \in V(R)$, then $\ensuremath{\mathsf{stitch}}_v(e) = e'$ and $\ensuremath{\mathsf{stitch}}_v(e') = e$. \item If $e \in E_H(v)$ is not part of any walk in $\cal W$, then $\ensuremath{\mathsf{stitch}}_v(e) = \bot$. \end{itemize} \end{definition} It is easy to verify that $\{\ensuremath{\mathsf{stitch}}_v\}|_{v \in V(R)}$ is indeed a stitching. Let us make a few more observations on the properties of this stitching. \begin{observation}\label{obs:stitchingProp} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\cal W$ be a weak linkage pushed onto $R$ and let $\{\ensuremath{\mathsf{stitch}}_v\}|_{v \in V(R)}$ be the stitching of $\cal W$.
Let $\{\ensuremath{\mathsf{pairing}}_v\}|_{v \in V(R)}$ and $\{\ensuremath{\mathsf{template}}_v\}|_{v \in V(R)}$ be the pairing and template of $\cal W$, respectively. Then the following holds. \begin{itemize} \item Let $e_i,e'_j \in E_H(v)$. Then $\ensuremath{\mathsf{stitch}}_v(e_i) = e'_j$ if and only if $\ensuremath{\mathsf{stitch}}_v(e'_j) = e_i$. \item Let $e,e' \in E_R(v)$. Then $(e,e') \in \ensuremath{\mathsf{pairing}}_v$ if and only if there is a pair $e_i,e'_j$ of edges in $E_H(v)$, where $e_i$ is parallel to $e$ and $e'_j$ is parallel to $e'$, such that $\ensuremath{\mathsf{stitch}}_v(e_i) = e'_j$ and $\ensuremath{\mathsf{stitch}}_v(e'_j) = e_i$. Further, the number of such pairs of parallel edges is equal to $\ensuremath{\mathsf{template}}_v(e,e')$. \item If $e_i,e'_j$ and $e^\star_p,\wh{e}_q$ are pairs of edges in $E_H(v)$ such that $\ensuremath{\mathsf{stitch}}_v(e_i) = e'_j$ and $\ensuremath{\mathsf{stitch}}_v(e^\star_p) = \wh{e}_q$, then the pairs $e_i,e'_j$ and $e^\star_p,\wh{e}_q$ are non-crossing in ${\sf order}_v$. \item If the multiplicity of $\cal W$ is upper bounded by $\ell$, then for each edge $e=\{u,v\} \in E(R)$, $|\{e' \in E(H) \mid e' \text{ is parallel to } e \text{ and } \ensuremath{\mathsf{stitch}}_v(e') \neq \bot \}| \leq k \cdot \ell$. \end{itemize} \end{observation} \subsection{Translating a Template Into a Stitching} Given {\em (i)} an instance $I=(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}, {\em (ii)} a backbone Steiner tree $R$, and {\em (iii)} a collection $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$, our current objective is to either determine that $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}$ is invalid or construct a multiplicity function $\ell$ and a stitching $\{f_v\}|_{v \in V(R)}$ from which to reconstruct a weak linkage. The cases where we determine that $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}$ is invalid will be (some of the) cases where there exists no simplified weak linkage whose pairing and template are $\{\ensuremath{\mathsf{pairing}}_v\}|_{v \in V(R)}$ and $\{\ensuremath{\mathsf{template}}_v\}|_{v \in V(R)}$, respectively. Let us begin with the notion of the multiplicity function $\ell$ of a collection of pairings and templates, as follows. \begin{definition}[{\bf Multiplicity Function}]\label{def:locaWLofTemplate} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}, and let $R$ be a Steiner tree. Let ${\cal A}=\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$. For every vertex $v\in V^\star(R)$, let $\ell_v$ be the function that assigns $\sum_{e': (e,e') \in\ensuremath{\mathsf{pairing}}_v}\ensuremath{\mathsf{template}}_v(e,e')$ to every edge $e\in E(R)$ incident to $v$. If one of the following conditions is satisfied, then the {\em multiplicity function extracted from ${\cal A}$} is {\em invalid}. \begin{enumerate} \item There exists an edge $e=\{u,v\}$ such that $u,v\in V^\star(R)$ and $\ell_u(e)\neq \ell_v(e)$. \item There exists a terminal $v\in S\cup T$ such that $\ensuremath{\mathsf{pairing}}_v=\emptyset$.
\end{enumerate} Otherwise, the {\em multiplicity function extracted from ${\cal A}$} is {\em valid} and it is the function $\ell: E_{1,3+}(R)\rightarrow \mathbb{N}_0$ such that for each $e\in E_{1,3+}(R)$, $\ell(e)=\ell_v(e)$ where $v$ is an endpoint of $e$ in $V^\star(R)$.\footnote{The choice of the endpoint when both belong to $V^\star(R)$ is immaterial by the definition of invalidity.} \end{definition} Let $\cal W$ be a weak linkage pushed onto $R$. The \emph{multiplicity function of $\cal W$} is defined as the multiplicity function $\ell$ extracted from $\cal A$, the collection of pairings and templates of $\cal W$. It is clear that the multiplicity of $\cal W$ is $\max_{e \in E(R)} \ell(e)$. \begin{observation}\label{obs:weaklinkmult} Let $(G,S,T,g,k)$ be an instance of \textsf{Planar Disjoint Paths}\ with a simplified weak linkage $\cal W$, and let $\ell$ be the multiplicity function of $\cal W$. For any $e \in E(R)$, $\ell(e) \leq \alpha_{\mathrm{mul}}(k)$. \end{observation} Having extracted a multiplicity function, we turn to extract a stitching. Towards this, recall the embedding of $H$ with respect to $R$, and the resulting enumeration of edges around vertices in $V(R)$ (see Section~\ref{sec:enumParallel}). Let us now describe the stitching extraction at a terminal vertex. \begin{definition}[{\bf Stitching Extraction at Terminals}] \label{def:locaStitchExtractTerminal} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Consider a collection ${\cal A}= \break \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$. Let $\ell$ be the multiplicity function extracted from ${\cal A}$, and suppose that $\ell$ is valid. Let $v\in S\cup T$, and let $e^\star$ be the unique edge in $E(R)$ incident to $v$. If $\ell(e^\star)$ is an even number, then the {\em stitching extracted from ${\cal A}$ at $v$} is {\em invalid}. Otherwise, the {\em stitching extracted from ${\cal A}$ at $v$} is {\em valid} and it is the involution $f_v: E_H(v) \rightarrow E_H(v) \cup \bot$ defined as follows. \begin{equation*} f_v(e) = \begin{cases*} e^\star_{\ell(e^\star)+1 - i } & if $e = e^\star_i$ and $1 \leq i \leq \ell(e^\star)$ \\ \bot & otherwise. \end{cases*} \end{equation*} \end{definition} Next, we describe how to extract a stitching at a vertex $v\in V_{=2}(R) \cup V_{\geq 3}(R)$. \begin{definition}[{\bf Stitching Extraction at Non-Terminals}] \label{def:locaStitchExtractNonTerminal} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}. Let $R$ be a backbone Steiner tree. Consider a collection ${\cal A}= \break \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$. Let $\ell$ be the multiplicity function extracted from ${\cal A}$, and suppose that $\ell$ is valid. Let $v\in V_{=2}(R) \cup V_{\geq 3}(R)$, and let $e^1,e^2,\ldots,e^{r}$ denote the edges in $E(R)$ incident to $v$, enumerated as per ${\sf order}_v$ starting from $e^1$. Then define a function $f_v: E_H(v) \rightarrow E_H(v) \cup \bot$ as follows.
\begin{itemize} \item For each $(e,e') \in \ensuremath{\mathsf{pairing}}_v$ such that $\ensuremath{\mathsf{template}}_v(e,e') > 0$, where $e$ occurs before $e'$ in ${\sf order}_v$, let ${\sf inner}(e,e') = \{e^\star \in E_R(v) \mid e^\star \text{ occurs between } e \text{ and } e' \text{ in } {\sf order}_v\}$, and ${\sf outer}(e,e') = \{e^\star \in E_R(v) \mid \text{ either } e^\star \text{ occurs before } e \text{ or occurs after } e' \text{ in } {\sf order}_v\}$. \item Then, for each $i \in \{1,\ldots, \ensuremath{\mathsf{template}}_v(e,e')\}$, let $f_v(e_{i+x_{e,e'}}) = e'_{y_{e,e'}-i}$ and $f_v(e'_{y_{e,e'}-i}) = e_{i+x_{e,e'}}$ where $$x_{e,e'} = \sum_{e^\star \in {\sf outer}(e,e')} \ensuremath{\mathsf{template}}_v(e,e^\star),$$ and $$y_{e,e'} = 1 + \ensuremath{\mathsf{template}}_v(e,e') + \sum_{e^\star \in {\sf inner}(e,e')} \ensuremath{\mathsf{template}}_v(e,e^\star).$$ \item For all other edges $e \in E_H(v)$, define $f_v(e) = \bot$. \end{itemize} If the assignment $f_v$ is fixed point free, then $f_v$ is the {\em stitching extracted from ${\cal A}$ at $v$}, which is said to be {\em valid}. Otherwise, it is {\em invalid}. \end{definition} Lastly, based on Definitions \ref{def:locaStitchExtractTerminal} and \ref{def:locaStitchExtractNonTerminal}, we extract the stitching as follows. \begin{definition}[{\bf Stitching Extraction}]\label{def:locaStitchExtract} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Consider a collection ${\cal A}= \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$. For each $v \in V(R)$, let $f_v$ be the stitching extracted from ${\cal A}$ at $v$. Then the \emph{stitching extracted from $\cal A$ is invalid} if it satisfies one of the following conditions. \begin{itemize} \item There is a vertex $v\in V_{=1}(R)$ such that the stitching extracted from ${\cal A}$ at $v$ is invalid. \item There is an edge $e=\{u,v\} \in E(H)$ parallel to an edge in $E(R)$ such that $f_u(e) = \bot$ and $f_v(e) \neq \bot$. \end{itemize} Otherwise, the {\em stitching extracted from $\cal A$} is {\em valid} and defined as the collection $\{f_v\}|_{v \in V(R)}$ where $f_v$ is the stitching extracted from $\cal A$ at $v$ for every $v\in V(R)$. \end{definition} Less obviously, we also show that in case we are given a collection in $\wh{\ensuremath{\mathsf{ALL}}}$ that corresponds to a weak linkage, not only is the stitching extracted from that collection valid, but also, most crucially, it is the stitching we were originally given (under the assumption that the weak linkage we deal with is simplified). In other words, we are able to faithfully reconstruct a stitching from the template of a weak linkage. The implicit assumption in the following lemma that $\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}$ belongs to $\wh{\ensuremath{\mathsf{ALL}}}$ is supported by Corollary \ref{cor:existsInAll2}. \begin{lemma}\label{lem:locaStitchofTemplate1} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\cal W$ be a simplified weak linkage in $H$ and let $\ell$ be the multiplicity function of $\cal W$. Consider the collection ${\cal A}= \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$ of pairings and templates of $\cal W$. Let $\{f_v\}|_{v \in V(R)}$ be the stitching extracted from $\cal A$.
Then for every vertex $v \in V_{=1}(R)$, $\ensuremath{\mathsf{stitch}}_v(e) = f_v(e)$ for every edge $e \in E_H(v)$. \end{lemma} \begin{proof} Let $E^\star(v) = \{ e^\star_i \mid 1 \leq i \leq \ell(e^\star) \}$ where $e^\star_i$ denotes the $i$-th parallel copy of $e^\star$, in the enumeration in ${\sf order}_v$. Observe that, since $\cal W$ is simplified, $E^\star(v)$ is exactly the set of edges from $E_H(v)$ that appear in $\cal W$. Hence, for any edge $e \in E_H(v)$, if $e \notin E^\star(v)$ then $\ensuremath{\mathsf{stitch}}_v(e) = \bot$. Let us now consider the edges in $E^\star(v)$. Since $\cal W$ is a weak linkage, there is exactly one walk, say $W_1$, that has the vertex $v$ as an endpoint. Any other walk in $\cal W$ contains an even number of edges from $E_H(v)$, and since $\cal W$ is pushed onto $R$, these edges are all parallel copies of $e^\star$. Hence, the walks in $\cal W$ contain an odd number of parallel copies of $e^\star$ in total, i.e. $\ell(e^\star)$ is an odd number. Since $v$ is an endpoint of $W_1 \in {\cal W}$, there is exactly one edge $e^\star_{z} \in E^\star(v)$ such that $\ensuremath{\mathsf{stitch}}_v(e^\star_z) = e^\star_z$. We claim that $z = \frac{\ell(e^\star)+1}{2}$. Towards this, let us argue that for any edge $e^\star_i$, where $i < z$, if $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ then $j > z$. Suppose not, and without loss of generality assume that $i < j < z$. Let us choose $i$ such that $|j-i|$ is minimized, and note that $j \neq i$. Consider the collection of edges $e^\star_p$ such that $i < p < j$. If this collection is empty, i.e. $j = i+1$, then observe that the edges $e^\star_i$ and $e^\star_j$ form a U-turn, since $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ only if they were consecutive edges of some walk in $\cal W$, and there is no edge in the strict interior of the cycle formed by the parallel edges $e^\star_i$ and $e^\star_j$. Otherwise, this collection is non-empty; observe that if $\ensuremath{\mathsf{stitch}}_v(e^\star_p) = e^\star_q$ then $i < q < j$. Indeed, if this were not the case then the pairs $e^\star_i,e^\star_j$ and $e^\star_p,e^\star_q$ would be crossing at $v$, since they occur as $e^\star_i < e^\star_p < e^\star_j < e^\star_q$ (or symmetrically as $e^\star_q < e^\star_i < e^\star_p < e^\star_j$) in ${\sf order}_v$. This contradicts the fact that the weak linkage $\cal W$ is non-crossing. Hence, $i < q < j$, and so $|q-p| < |j-i|$. But this contradicts the choice of $i$. Hence, for every $i < z$, $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ where $j > z$. A symmetric argument holds for the other case, i.e. if $i > z$ then $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ where $j < z$. Therefore, we can conclude that $z = \frac{\ell(e^\star)+1}{2}$, and hence $\ensuremath{\mathsf{stitch}}_v(e^\star_z) = e^\star_{\ell(e^\star)+1 - z} = e^\star_z$. Let us now consider the other edges in $E^\star(v)$. Suppose that there exist integers $1 \leq i,p \leq \ell(e^\star)$ with $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ and $\ensuremath{\mathsf{stitch}}_v(e^\star_p) = e^\star_q$ such that $i < p < z$ and $z < j < q$. Then it is clear that the pairs $e^\star_i,e^\star_j$ and $e^\star_p,e^\star_q$ are crossing at $v$, which is a contradiction. Therefore, if $i<p<z$ then $z < q < j$, and this holds for every choice of $i$ and $p$. A symmetric argument holds in the other direction, i.e. if $i > p > z$ and $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ and $\ensuremath{\mathsf{stitch}}_v(e^\star_p) = e^\star_q$, then $j < q < z$.
Now we claim that for any $i \in \{1,2, \ldots, \ell(e^\star)\}$, if $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_j$ then $j = \ell(e^\star)+1 - i$. Suppose not, and consider the case $i < z$, and further let $j < \ell(e^\star)+1 - i$. Then observe that, for any edge $e^\star_p \in \{ e^\star_{i+1}, \ldots, e^\star_{z-1} \}$, $\ensuremath{\mathsf{stitch}}_v(e^\star_p) \in \{e^\star_{z+1}, \ldots, e^\star_{j-1}\}$. But $\left| \{ e^\star_{i+1}, \ldots, e^\star_{z-1} \} \right|$ is strictly larger than $\left| \{e^\star_{z+1}, \ldots, e^\star_{j-1}\} \right|$, which is a contradiction to the definition of $\ensuremath{\mathsf{stitch}}_v$. Hence, $j \geq \ell(e^\star)+1 - i$. A symmetric argument implies that $j \leq \ell(e^\star)+1 - i$. Therefore, for any $i < z$, $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_{\ell(e^\star)+1 -i}$. We can similarly argue that for $i > z$, $\ensuremath{\mathsf{stitch}}_v(e^\star_i) = e^\star_{\ell(e^\star)+1 -i}$. Since we have already shown that $\ensuremath{\mathsf{stitch}}_v(e^\star_z) = e^\star_z$, this concludes the proof of this lemma. \end{proof} \begin{lemma}\label{lem:locaStitchofTemplate2} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\cal W$ be a simplified weak linkage in $H$ and let $\ell$ be the multiplicity function of $\cal W$. Consider the collection ${\cal A}= \{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$ of pairings and templates of $\cal W$. Let $\{f_v\}|_{v \in V(R)}$ be the stitching extracted from $\cal A$. Then, for every vertex $v\in V_{=2}(R) \cup V_{\geq 3}(R)$, $\ensuremath{\mathsf{stitch}}_v(e) = f_v(e)$ for all edges $e \in E_H(v)$. \end{lemma} \begin{proof} Let $\ell$ be the multiplicity function of the simplified weak linkage $\cal W$. Then, as $\cal W$ is canonical, for each edge $e \in E_R(v)$ with a parallel copy $e_i$, $\ensuremath{\mathsf{stitch}}_v(e_i) \neq \bot$ if and only if $i \in \{1,2, \ldots, \ell(e)\}$. Since $\cal W$ is sensible and $v \not\in V_{=1}(R)$, $v$ cannot be the endpoint of any walk in $\cal W$. Hence, any walk contains an even number of edges from $E_H(v)$, and further any such edge is a parallel copy of an edge in $E_R(v) = \{e^1,e^2, \ldots, e^r\}$, where these edges are enumerated according to ${\sf order}_v$. Note that, the collections of parallel copies of these edges also occur in the same manner in ${\sf order}_v$. We present our arguments in three steps. \begin{claim} Consider a pair of edges $(e,e') \in \ensuremath{\mathsf{pairing}}_v$, such that $\ensuremath{\mathsf{template}}_v(e,e') > 0$. Then $\ensuremath{\mathsf{stitch}}_v$ maps each edge in $\{e_{(x_{e,e'}+1)}, \ldots, e_{(x_{e,e'}+\ensuremath{\mathsf{template}}_v(e,e'))}\}$ to some edge in $\{e'_1, e'_2, \ldots, e'_{\ell(e')} \}$, and vice versa. \end{claim} \begin{proof} Suppose not, and consider the case where $e$ occurs before $e'$ in ${\sf order}_v$. Consider a parallel copy of $e$, say $e_i \in \{e_{(x_{e,e'}+1)}, \ldots, e_{(x_{e,e'}+\ensuremath{\mathsf{template}}_v(e,e'))}\}$ such that $\ensuremath{\mathsf{stitch}}_v(e_i) = \wh{e}_j$, where $\wh{e} \in E_R(v)$ and $\wh{e}_j$ is the $j$-th parallel copy of $\wh{e}$. Let us choose $e'$ (with respect to $e$) so that $x_{e,e'}$ is minimized, and then choose $e_i$ such that $i$ is minimized. Here, note that $i > x_{e,e'}$. Let us argue that $\wh{e} = e'$.
Suppose not, and note that $(e,\wh{e}) \in \ensuremath{\mathsf{pairing}}_v$ and $\ensuremath{\mathsf{template}}_v(e,\wh{e}) > 0$. Then we have three cases depending on the position of these edges in ${\sf order}_v$: either $e < \wh{e} < e'$, or $\wh{e} < e < e'$, or $e < e' < \wh{e}$. Consider the first case, and note that every parallel copy of $\wh{e}$ occurs before all parallel copies of $e'$ and after all parallel copies of $e$ in ${\sf order}_v$. We claim that for any $e_p \in \{ e_{i+1}, \ldots, e_{\ell(e)}\}$, $\ensuremath{\mathsf{stitch}}_v(e_p) \notin \{e'_{1}, \ldots, e'_{\ell(e')}\}$. If this claim were false, then observe that, as $e < \wh{e} < e'$, we have $ e_i < e_p < \wh{e}_j < \ensuremath{\mathsf{stitch}}_v(e_p)$ in ${\sf order}_v$. Hence $e_i,\wh{e}_j$ and $e_p, \ensuremath{\mathsf{stitch}}_v(e_p)$ are crossing pairs at $v$ in the weak linkage $\cal W$, which is a contradiction. On the other hand, if $\ensuremath{\mathsf{stitch}}_v(e_p) \notin \{e'_{1}, \ldots, e'_{\ell(e')}\}$ for any $e_p \in \{ e_{i+1}, \ldots, e_{\ell(e)}\}$, then we claim that $\ensuremath{\mathsf{stitch}}_v$ maps strictly fewer than $\ensuremath{\mathsf{template}}_v(e,e')$ edges from $\{e_{1}, \ldots, e_{\ell(e)} \}$ to $\{e'_1, \ldots, e'_{\ell(e')}\}$. Indeed, we chose $e'$ such that $x_{e,e'}$ is minimized, and hence the edges in $\{e_1, \ldots, e_{x_{e,e'}}\}$ are not mapped to any edge in $\{e'_1, \ldots, e'_{\ell(e')}\}$. And since no edge in $\{ e_i, e_{i+1}, \ldots, e_{\ell(e)}\}$ maps to $\{e'_{1}, \ldots, e'_{\ell(e')}\}$, only the edges in $\{ e_{(x_{e,e'}+1)}, \ldots, e_{i-1}\}$ remain, which is strictly fewer than $\ensuremath{\mathsf{template}}_v(e,e')$. But this contradicts the definition of $\ensuremath{\mathsf{template}}_v(e,e')$. Hence, it cannot be the case that $e < \wh{e} < e'$ in ${\sf order}_v$. Next, consider the case when $e < e' < \wh{e}$. Note that $\wh{e} \in {\sf outer}(e,e')$, and by definition $x_{e,\wh{e}} < x_{e,e'}$. Since we chose $e'$ to minimize $x_{e,e'}$, and we did not choose $e' = \wh{e}$, $\ensuremath{\mathsf{stitch}}_v$ maps the edges in $\{e_{(x_{e,\wh{e}}+1)}, \ldots, e_{(x_{e,\wh{e}}+\ensuremath{\mathsf{template}}_v(e,\wh{e}))} \}$ to $\ensuremath{\mathsf{template}}_v(e,\wh{e})$ edges in $\{\wh{e}_1, \ldots, \wh{e}_{\ell(\wh{e})}\}$. Therefore, if $\ensuremath{\mathsf{stitch}}_v(e_i) = \wh{e}_j$, then there are $\ensuremath{\mathsf{template}}_v(e,\wh{e})+1$ parallel copies of $e$ that are mapped to parallel copies of $\wh{e}$, which is a contradiction. Hence it is not possible that $e < e' < \wh{e}$. The last case, $\wh{e} < e < e'$, is similar to the previous case, since $\wh{e} \in {\sf outer}(e,e')$ in this case as well. Hence, we conclude that if $\ensuremath{\mathsf{stitch}}_v(e_i) = \wh{e}_j$ then $\wh{e} = e'$. Therefore, when $e$ occurs before $e'$ in ${\sf order}_v$, $\ensuremath{\mathsf{stitch}}_v$ maps each edge in $\{e_{(x_{e,e'}+1)}, \ldots, e_{(x_{e,e'}+\ensuremath{\mathsf{template}}_v(e,e'))}\}$ to some edge in $\{e'_1, e'_2, \ldots, e'_{\ell(e')} \}$. By a symmetric argument, we obtain that every edge in $\{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, e'_{(y_{e,e'} - 1)}\}$ maps to an edge in $\{e_1, e_2, \ldots, e_{\ell(e)} \}$.\footnote{Note that, this is equivalent to the case when $e'$ occurs before $e$ in ${\sf order}_v$.
Here, we obtain a contradiction by choosing $e$ (with respect to $e'$) that maximizes $y_{e,e'}$, and then choosing the maximum $i$ such that $y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e') \leq i \leq y_{e,e'}$ and $\ensuremath{\mathsf{stitch}}_v(e'_i) \notin \{ e_1, \ldots, e_{\ell(e)}\}$.} \renewcommand\qedsymbol{$\diamond$}\end{proof} We now proceed to further restrict the mapping of edges to the ranges determined by $x_{e,e'}, y_{e,e'}$ and $\ensuremath{\mathsf{template}}_v(e,e')$. \begin{claim} Consider a pair $(e,e') \in \ensuremath{\mathsf{pairing}}_v$ such that $\ensuremath{\mathsf{template}}_v(e,e') > 0$. Then, $\ensuremath{\mathsf{stitch}}_v$ maps $\{e_{(x_{e,e'}+1)}, \allowbreak \ldots, e_{(x_{e,e'}+\ensuremath{\mathsf{template}}_v(e,e'))}\}$ to $\{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \allowbreak \ldots, e'_{(y_{e,e'}-1)}\}$ and vice-versa.\footnote{Note that, by definition of $\ensuremath{\mathsf{stitch}}_v$, this immediately implies the other direction.} \end{claim} \begin{proof} Suppose not, and without loss of generality assume that $e$ occurs before $e'$ in ${\sf order}_v$. Then consider the case when there is an edge $e_i \in \{e_{(x_{e,e'}+1)}, \ldots, e_{(x_{e,e'}+\ensuremath{\mathsf{template}}_v(e,e'))}\}$ such that $\ensuremath{\mathsf{stitch}}_v(e_i) = e'_j$, where either $j > y_{e,e'}-1 $ or $j < y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e')$. Then consider the collection $\{e'_j \} \cup \{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, e'_{(y_{e,e'}-1)}\}$, and observe that each edge in this collection is mapped to a distinct edge in $\{e_1, e_2, \ldots, e_{\ell(e)}\}$. But then there are $\ensuremath{\mathsf{template}}_v(e,e') + 1$ edges in $\{e'_1, \ldots e'_{\ell(e')}\}$ that map to an edge in $\{e_1, \ldots, e_{\ell(e)}\}$ under $\ensuremath{\mathsf{stitch}}_v$. This is a contradiction to the definition of $\ensuremath{\mathsf{template}}_v(e,e')$. \renewcommand\qedsymbol{$\diamond$}\end{proof} Finally, we show that $\ensuremath{\mathsf{stitch}}_v$ is equal to $f_v$. \begin{claim} Consider a pair $(e,e') \in \ensuremath{\mathsf{pairing}}_v$ such that $\ensuremath{\mathsf{template}}_v(e,e') > 0$. Then for each $i \in \{1,2,\ldots, \ensuremath{\mathsf{template}}_v(e,e')\}$, $\ensuremath{\mathsf{stitch}}_v(e_{x_{e,e'}+i}) = e'_{y_{e,e'}-i}$ and $\ensuremath{\mathsf{stitch}}_v(e'_{y_{e,e'}-i}) = e_{x_{e,e'}+i}$. \end{claim} \begin{proof} Suppose not, and consider the case when $\ensuremath{\mathsf{stitch}}_v(e_{x_{e,e'}+i}) = e'_j$ where $ j \neq y_{e,e'}-i$. Note that $e'_j \in \{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, \allowbreak e'_{(y_{e,e'} - 1)}\}$ by the previous arguments. Consider the case when $j > y_{e,e'} - i$. We claim that the edges in $\{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, e'_{(j-1)} \}$ must map to the edges in $\{e_{(x_{e,e'} + i+1)}, \ldots, e_{(x_{e,e'}+ \ensuremath{\mathsf{template}}_v(e,e'))}\}$. If not, then consider an edge $e'_p \in \{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, e'_{(j-1)} \}$ such that $\ensuremath{\mathsf{stitch}}_v(e'_p) = e_q$ where $q < x_{e,e'}+i$. Then consider the pairs $e_{x_{e,e'}+i}, e'_j$ and $e'_p, e_q$ in ${\sf order}_v$, and observe that $e_q < e_{x_{e,e'}+i} < e'_p < e'_j$ in ${\sf order}_v$. Then these pairs of edges are crossing at $v$, which is a contradiction to the fact that $\cal W$ is a weak linkage.
On the other hand, $\left| \{e'_{(y_{e,e'} - \ensuremath{\mathsf{template}}_v(e,e'))}, \ldots, e'_{(j-1)} \} \right|$ is strictly larger than $\left|\{e_{(x_{e,e'} + i+1)}, \ldots, e_{(x_{e,e'}+ \ensuremath{\mathsf{template}}_v(e,e'))}\} \right|$, which is again a contradiction, since all edges in $\{e'_1, \ldots, e'_{\ell(e')}\}$ are mapped to distinct edges by $\ensuremath{\mathsf{stitch}}_v$, and they are not mapped to $\bot$. By symmetric arguments, the case when $j < y_{e,e'}-i$ also leads to a contradiction. \renewcommand\qedsymbol{$\diamond$}\end{proof} Now, by considering all pairs in $\ensuremath{\mathsf{pairing}}_v$ and applying the above claims, we obtain that $\ensuremath{\mathsf{stitch}}_v = f_v$ for all $v \in V_{=2}(R) \cup V_{\geq 3}(R)$. This concludes the proof of this lemma. \end{proof} The following lemma is a corollary of Lemma~\ref{lem:locaStitchofTemplate1} and Lemma~\ref{lem:locaStitchofTemplate2}. \begin{lemma}\label{lem:locaStitchofTemplate} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Let ${\cal W}$ be a simplified weak linkage, and let ${\cal A} =\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)} \in \wh{\ensuremath{\mathsf{ALL}}}$ be the collection of pairings and templates of ${\cal W}$. Let $\{f_v\}|_{v \in V(R)}$ be the stitching extracted from ${\cal A}$. Then $f_v = \ensuremath{\mathsf{stitch}}_v$ for every vertex $v \in V(R)$. \end{lemma} Now, we consider the computational aspect of the definitions considered so far in this section. \begin{lemma}\label{lem:computeLocStitchTime} Let $(G,S,T,g,k)$ be a nice instance of \textsf{Planar Disjoint Paths}. Let $R$ be a Steiner tree. Let ${\cal A}=\{(\ensuremath{\mathsf{pairing}}_v,\ensuremath{\mathsf{template}}_v)\}|_{v \in V(R)}\in \wh{\ensuremath{\mathsf{ALL}}}$. Then, the multiplicity function extracted from ${\cal A}$ can be computed in time $k^{\mathcal{O}(1)} n$, and the stitching extracted from $\cal A$ can be computed in time $2^{\mathcal{O}(k)} n$. \end{lemma} \begin{proof} First, we consider the computation of the functions $\ell_v$ and the multiplicity function $\ell$ extracted from ${\cal A}$ according to Definition \ref{def:locaWLofTemplate}. Note that, for every vertex $v\in V(R)$, we have that $|\ensuremath{\mathsf{pairing}}_v|=\mathcal{O}(k)$ and the numbers assigned by $\ensuremath{\mathsf{template}}_v$ are bounded by $2^{\mathcal{O}(k)}$, that is, they can be represented with $\mathcal{O}(k)$ bits. Therefore, $\ell_v$ can be computed in $k^{\mathcal{O}(1)}$ time for each $v \in V(R)$, taking a total of $k^{\mathcal{O}(1)} n$ time. Now, note that for any vertex $v\in V(R)$, it holds that $\ell_v(e)=2^{\mathcal{O}(k)}$ for any edge $e \in E_R(v)$ (because $\ensuremath{\mathsf{template}}_v$ is a $2^{\mathcal{O}(k)}$-template). Let $\{f_v\}|_{v\in V(R)}$ be the stitching extracted from $\cal A$ by Definitions \ref{def:locaStitchExtractTerminal} and \ref{def:locaStitchExtractNonTerminal}. Observe that, when describing the stitching $f_v$ extracted at a vertex $v \in V(R)$, we only need to describe it for the parallel copies of edges in $E(R)$, and then only for the parallel copies $\{e_1, e_2, \ldots, e_{\ell(e)} \}$ of $e \in E(R)$, where $\ell$ is the multiplicity function extracted from $\cal A$. For all other edges and parallel copies, the stitching maps them to $\bot$.
Since $\ell(e) \leq \alpha_{\mathrm{mul}}(k)$, and the tree $R$ has at most $2k$ leaves, the stitching at each vertex can be described by a collection of $\mathcal{O}(k \cdot \alpha_{\mathrm{mul}}(k)) = 2^{\mathcal{O}(k)}$ pairs of edges in $E_H(v) \times E_H(v)$. Further, by the construction described in Definitions \ref{def:locaStitchExtractTerminal} and \ref{def:locaStitchExtractNonTerminal}, the stitching $f_v$ at each vertex $v\in V(R)$ can be constructed in $2^{\mathcal{O}(k)}$ time. Therefore, the collection $\{f_v\}_{v \in V(R)}$ can be constructed in $2^{\mathcal{O}(k)} n$ time. Finally, we need to test whether this collection is a valid stitching, as described in Definition~\ref{def:locaStitchExtract}, which can be done by picking each edge $e \in E(R)$ and testing the parallel copies $\{e_1, e_2, \ldots, e_{\ell(e)}\}$ one by one, which again takes $2^{\mathcal{O}(k)}n$ time. Hence the total time required to extract the stitching is $2^{\mathcal{O}(k)}n$. \end{proof} \subsection{Reconstruction of Weak Linkages from Templates} Now we describe the construction of a weak linkage from a valid stitching. \begin{definition}[Weak Linkage of a Stitching.] \label{def:stitchweaklinkage} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\{f_v\}|_{v \in V(R)}$ be a stitching and suppose that it is valid. Then the \emph{weak linkage $\cal W$ constructed from $\{f_v\}|_{v \in V(R)}$} is obtained as follows. \begin{itemize} \item For each $v \in S \cup T$, let $e_v \in E_H(v)$ be the unique edge such that $f_v(e_v) = e_v$. \item Then the walk $W_v$ is defined as the sequence of edges $e_{0}, e_{1}, e_{2}, \ldots, e_{p_v}$, where $e_{0} = e_v$, and for each $i \in \{0, \ldots, p_v-1\}$, the edge $e_{i} =\{v_i, v_{i+1}\}$ satisfies $(i)$~$f_{v_{i+1}}(e_i) = e_{i+1}$ where $v_0 = v$;\; and $(ii)$~$f_{v_{p_v+1}}(e_{p_v}) = e_{p_v}$. \item We iteratively construct a sequence of walks $W_{v_1}, W_{v_2}, \ldots,$ where the walk $W_{v_i}$ starts from a terminal $v_i \in S \cup T$ that is not an endpoint of any of the previous walks. Finally, we output $\cal W$ as the collection of these walks. \end{itemize} \end{definition} It is clear that the running time for the construction of a weak linkage from a stitching $\{f_v\}_{v \in V(R)}$ is upper bounded by the number of pairs of edges in $E_H(V(R))$ that are images of each other in the stitching. The following observation follows directly from Definition~\ref{def:stitchweaklinkage} and Definition~\ref{def:linkageStitching}. \begin{observation}\label{obs:weaklinkagestitching} Let $(G,S,T,g,k)$ be a good instance of \textsf{Planar Disjoint Paths}, and let $R$ be a backbone Steiner tree. Let $\cal W$ be a weak linkage that is pushed onto $R$, and let $\{\ensuremath{\mathsf{stitch}}_v\}|_{v \in V(R)}$ be the stitching of $\cal W$. Then the weak linkage constructed from this stitching is equal to $\cal W$. \end{observation} Let ${\cal W}_\ensuremath{\mathsf{ALL}}$ denote the collection of weak linkages extracted from $\wh{\ensuremath{\mathsf{ALL}}}$. The following lemma is the main result of this section. \begin{lemma}\label{lemma:enumerateSimplifiedWeakLinkages} Let $(G,S,T,g,k)$ be a good {\sf Yes}-instance of $\textsf{Planar Disjoint Paths}$, and let $R$ be a backbone Steiner tree.
Then, there exists a collection ${\cal W}_\ensuremath{\mathsf{ALL}}$ of $2^{\mathcal{O}(k^2)}$ simplified weak linkages such that there is a weak linkage ${\cal W} \in {\cal W}_\ensuremath{\mathsf{ALL}}$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$. Further, this collection can be enumerated in $2^{\mathcal{O}(k^2)} n$ time. \end{lemma} \begin{proof} By Lemma~\ref{lem:pushOutcome}, the given instance has a solution that is discretely homotopic to some simplified weak linkage ${\cal W}$, and by Corollary~\ref{cor:existsInAll2}, the collection of pairings and templates of $\cal W$, denoted by ${\cal A}$, lies in the collection $\wh{\ensuremath{\mathsf{ALL}}}$. Then, by Lemma~\ref{lem:locaStitchofTemplate}, the stitching $\{f_v\}|_{v \in V(R)}$ extracted from ${\cal A}$ is equal to $\{\ensuremath{\mathsf{stitch}}_v\}|_{v \in V(R)}$, the stitching of $\cal W$, and it can be computed in $2^{\mathcal{O}(k)} n$ time by Lemma~\ref{lem:computeLocStitchTime}. Finally, we can construct a weak linkage ${\cal W}'$ from the stitching $\{f_v\}|_{v \in V(R)}$, and by Observation~\ref{obs:weaklinkagestitching}, ${\cal W'} = {\cal W}$. Note that, as $\cal W$ is a simplified weak linkage, its multiplicity is upper bounded by $\alpha_{\mathrm{mul}}(k) = 2^{\mathcal{O}(k)}$. Hence, by Observations~\ref{obs:stitchingProp} and~\ref{obs:weaklinkmult}, the number of pairs $(e,e') \in E_H(v) \times E_H(v)$ such that $f_v(e) = e'$ is upper bounded by $k \cdot \alpha_{\mathrm{mul}}(k) = 2^{\mathcal{O}(k)}$. Hence, it is clear that we can reconstruct $\cal W$ from the stitching $\{f_v\}_{v \in V(R)}$ in $2^{\mathcal{O}(k)} n$ time. To enumerate the collection ${\cal W}_\ensuremath{\mathsf{ALL}}$, we iterate over $\wh{\ensuremath{\mathsf{ALL}}}$. For each ${\cal A} \in \wh{\ensuremath{\mathsf{ALL}}}$, we attempt to extract a stitching, and if it is invalid, we move on to the next iteration. Otherwise, we construct a weak linkage from this stitching and output it. Observe that for each ${\cal A} \in \wh{\ensuremath{\mathsf{ALL}}}$ we can compute the corresponding weak linkage $\cal W$, if it exists, in $2^{\mathcal{O}(k)} n$ time. Since $|\wh{\ensuremath{\mathsf{ALL}}}| = 2^{\mathcal{O}(k^2)}$, it follows that $|{\cal W}_\ensuremath{\mathsf{ALL}}| = 2^{\mathcal{O}(k^2)}$ and that ${\cal W}_\ensuremath{\mathsf{ALL}}$ can be enumerated in $2^{\mathcal{O}(k^2)} n$ time. \end{proof} \section{The Algorithm}\label{sec:algorithm} Having set up all required definitions and notions, we are ready to describe our algorithm. Afterwards, we will analyze its running time and prove its correctness. \subsection{Execution of the Algorithm} We refer to this algorithm as \textsf{PDPAlg}. It takes as input an instance $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ of \textsf{Planar Disjoint Paths}, and its output is the decision whether this instance is a \textsf{Yes}-instance. The specification of the algorithm is as follows. \noindent{\bf Step I: Preprocessing.} First, \textsf{PDPAlg}\ invokes Corollary \ref{cor:twReduction} to transform $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ into an equivalent good instance $(G,S,T,g,k)$ of \textsf{Planar Disjoint Paths}\ where $|V(G)|=\mathcal{O}(|V(\widetilde{G})|)$. \noindent{\bf Step II: Computing a Backbone Steiner Tree.} Second, \textsf{PDPAlg}\ invokes Lemma \ref{lem:goodSteinerTreeComputeTime} with respect to $(G,S,T,g,k)$ to compute a backbone Steiner tree, denoted by $R$.
Then, \textsf{PDPAlg}\ computes the embedding of $H$ with respect to $R$ (Section~\ref{sec:enumParallel}). \noindent{\bf Step III: Looping on ${\cal W}_\ensuremath{\mathsf{ALL}}$.} Now, \textsf{PDPAlg}\ invokes Lemma \ref{lemma:enumerateSimplifiedWeakLinkages} to enumerate ${\cal W}_\ensuremath{\mathsf{ALL}}$. For each weak linkage ${\cal W} \in {\cal W}_\ensuremath{\mathsf{ALL}}$, the algorithm applies the algorithm of Corollary~\ref{cor:discreteHomotopy} to $(G,S,T,g,k)$ and $\cal W$; if the algorithm finds a solution, then \textsf{PDPAlg}\ determines that $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is a {\sf Yes}-instance and terminates. \noindent{\bf Step IV: Reject.} \textsf{PDPAlg}\ determines that $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is a \textsf{No}-instance and terminates. \subsection{Running Time and Correctness} Let us first analyze the running time of \textsf{PDPAlg}. \begin{lemma}\label{lem:pdpAlgTime} \textsf{PDPAlg}\ runs in time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$. \end{lemma} \begin{proof} By Corollary \ref{cor:twReduction}, the computation of $(G,S,T,g,k)$ in Step I is performed in time $2^{\mathcal{O}(k)}n^2$. By Lemma \ref{lem:goodSteinerTreeComputeTime}, the computation of $R$ in Step II is performed in time $2^{\mathcal{O}(k)}n^{3/2}\log^3 n$. Let $H = H_G$ be the radial completion of $G$ enriched with $4|V(G)|+1$ parallel copies of each edge, and note that $|V(H)| = \mathcal{O}(|V(G)|)$. By Observation~\ref{obs:enumParallelTime}, we can compute the embedding of $H$ with respect to $R$ in time $\mathcal{O}(n^2)$. By Lemma \ref{lemma:enumerateSimplifiedWeakLinkages}, $|{\cal W}_\ensuremath{\mathsf{ALL}}|=2^{\mathcal{O}(k^2)}$, and it can be enumerated in $2^{\mathcal{O}(k^2)} n$ time. Finally, for each ${\cal W} \in {\cal W}_\ensuremath{\mathsf{ALL}}$, Corollary~\ref{cor:discreteHomotopy} takes $n^{\mathcal{O}(1)}$ time to test if there is a solution that is discretely homotopic to $\cal W$. Thus, \textsf{PDPAlg}\ runs in time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$. \end{proof} The reverse direction of the correctness of \textsf{PDPAlg}\ is trivially true. \begin{lemma}\label{lem:pdpAlgReverse} Let $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ be an instance of \textsf{Planar Disjoint Paths}. If $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is accepted by \textsf{PDPAlg}, then $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is a \textsf{Yes}-instance. \end{lemma} Now, we handle the forward direction of the correctness of \textsf{PDPAlg}. \begin{lemma}\label{lem:pdpAlgForward} Let $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ be an instance of \textsf{Planar Disjoint Paths}. If $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is a \textsf{Yes}-instance, then $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is accepted by \textsf{PDPAlg}. \end{lemma} \begin{proof} Suppose that $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$ is a \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}. Then, by Corollary \ref{cor:twReduction}, $(G,S,T,g,k)$ is a \textsf{Yes}-instance of \textsf{Planar Disjoint Paths}.
Then, by Lemma~\ref{lem:pushOutcome} and Lemma~\ref{lemma:enumerateSimplifiedWeakLinkages}, there is a collection ${\cal W}_\ensuremath{\mathsf{ALL}}$ of $2^{\mathcal{O}(k^2)}$ simplified weak linkages containing at least one simplified weak linkage ${\cal W}^\star$ that is discretely homotopic in $H$ to some solution of $(G,S,T,g,k)$. Here, $H$ is the radial completion of $G$ enriched with $4|V(G)|+1$ parallel copies of each edge. Then, in Step~III, Corollary~\ref{cor:discreteHomotopy} ensures that we obtain a solution to $(G,S,T,g,k)$ in the iteration in which we consider ${\cal W}^\star$. Hence \textsf{PDPAlg}\ accepts the instance $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$. \end{proof} Lastly, we remark that \textsf{PDPAlg}\ can be easily modified not only to reject or accept the instance $(\widetilde{G},\widetilde{S},\widetilde{T},\widetilde{g},k)$, but to return a solution in case of acceptance, within time $2^{\mathcal{O}(k^2)}n^{\mathcal{O}(1)}$. \appendix \section{Properties of Winding Number}\label{sec:app:wn} In this section we sketch a proof of Proposition~\ref{prop:wn-prop} using homotopy. Towards this, we introduce some notation extending, to the continuous setting, the terms introduced in Section~\ref{sec:winding}. Recall that we have a plane graph ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ and we are interested in the winding number of paths in this graph, where $I_\mathrm{in}$ and $I_\mathrm{out}$ are two cycles such that $I_\mathrm{out}$ is the outer-face, and there are no vertices or edges in the interior of $I_\mathrm{in}$. Let us denote the (closed) curves defined by these two cycles by $\rho_\mathrm{out}$ and $\rho_\mathrm{in}$, respectively. Then consider the collection of all the points in the plane that lie in the exterior of $\rho_\mathrm{in}$ and the interior of $\rho_\mathrm{out}$. The closure of this set of points defines a surface called a \emph{ring}, which we denote by ${\sf Ring}(\rho_\mathrm{in}, \rho_\mathrm{out})$ by abusing notation. Observe that the graph ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ is embedded in this ring, where the vertices of $I_\mathrm{in}$ and $I_\mathrm{out}$ lie on $\rho_\mathrm{in}$ and $\rho_\mathrm{out}$, respectively. A curve $\alpha$ in ${\sf Ring}(\rho_\mathrm{in}, \rho_\mathrm{out})$ \emph{traverses} it if it has one endpoint in $\rho_\mathrm{in}$ and the other in $\rho_\mathrm{out}$. We then orient this curve from its endpoint in $\rho_\mathrm{in}$ to its endpoint in $\rho_\mathrm{out}$. A curve $\beta$ \emph{visits} ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ if both its endpoints lie on either $\rho_\mathrm{in}$ or $\rho_\mathrm{out}$. In this case we orient this curve as follows. We first fix an arbitrary ordering of all points in the curve $\rho_\mathrm{in}$ and another one for all the points in the curve $\rho_\mathrm{out}$. We then orient $\beta$ from the smaller endpoint to the greater one. Two curves $\alpha,\alpha'$ that are either traversing or visiting ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ are {\em{homotopic}} if there exists a homotopy of the ring that fixes $\rho_\mathrm{in}$ and $\rho_\mathrm{out}$ and transforms $\alpha$ to $\alpha'$. Note that homotopic curves have the same endpoints. Two curves $\beta, \beta'$ are \emph{transversally intersecting} if $\beta \cap \beta'$ is a finite collection of points. Let us remark that the above orientation is preserved under homotopy, since any two homotopic curves have the same endpoints.
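Before formalizing winding numbers, the following minimal sketch illustrates the sign convention used below; it is purely illustrative and not part of the formal development, it assumes that the two oriented curves are given as polygonal lines in general position (so that every intersection is a proper crossing of segment interiors), and all names in it are hypothetical.
\begin{verbatim}
# Illustrative sketch only: signed crossings of an oriented polygonal curve
# beta along an oriented polygonal curve alpha, each given as a list of
# points (x, y).

def cross(o, a, b):
    # 2D cross product of the vectors (a - o) and (b - o).
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    # True if the segments p1p2 and q1q2 cross in their interiors.
    return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
            cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

def crossing_sign(p1, p2, q1, q2):
    # +1 if the segment q1->q2 of beta crosses the segment p1->p2 of alpha
    # from left to right, and -1 if it crosses from right to left.
    u = (p2[0] - p1[0], p2[1] - p1[1])
    w = (q2[0] - q1[0], q2[1] - q1[1])
    return -1 if u[0] * w[1] - u[1] * w[0] > 0 else +1

def signed_crossings(alpha, beta):
    # Sum of the signs over all crossings of beta along alpha.
    total = 0
    for p1, p2 in zip(alpha, alpha[1:]):
        for q1, q2 in zip(beta, beta[1:]):
            if segments_cross(p1, p2, q1, q2):
                total += crossing_sign(p1, p2, q1, q2)
    return total
\end{verbatim}
For transversally intersecting curves, this signed count corresponds to the quantity $\overline{\sf WindNum}(\alpha,\beta)$ defined next.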
Furthermore, when we speak of oriented curves in a ring, it is implicit that such curves are either visiting or traversing the ring. Now we are ready to define the winding number of oriented curves in a ring. \begin{definition}[Winding Number of Transversally Intersecting Curves] Two curves $\alpha,\beta$ in ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ {\em{intersect transversally}} if $\alpha\cap \beta$ is a finite set of points. For two curves $\alpha$ and $\beta$ in ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ that intersect transversally, we define the {\em{winding number}} $\overline{\sf WindNum}(\alpha,\beta)$ as the signed number of traversings of $\beta$ along $\alpha$. That is, for every intersection point of $\alpha$ and $\beta$ we record $+1$ if $\beta$ crosses $\alpha$ from left to right, $-1$ if it crosses from right to left (of course, with respect to the chosen direction of traversing $\beta$), and $0$ if it does not cross at that point. The winding number $\overline{\sf WindNum}(\alpha,\beta)$ is the sum of the recorded numbers. \end{definition} It can be easily observed that if $\alpha$ and $\alpha'$ are homotopic curves traversing ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ and both intersect $\beta$ transversally, then $\overline{\sf WindNum}(\alpha,\beta)=\overline{\sf WindNum}(\alpha',\beta)$. Here we rely on the fact that two homotopic curves have the same endpoints. Observe that every intersection point of the curves $\alpha'$ and $\beta'$ is a traversing point, i.e. a point assigned either $+1$ or $-1$. Therefore, we can extend the notion of the winding number to pairs of curves not necessarily intersecting transversally as follows. \begin{definition}[Winding Number] If $\alpha,\beta$ are two curves in ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$, then we define $\overline{\sf WindNum}(\alpha,\beta)$ to be the winding number $\overline{\sf WindNum}(\alpha',\beta')$ for any $\alpha',\beta'$ such that $\alpha$ and $\alpha'$ are homotopic, $\beta$ and $\beta'$ are homotopic, $\alpha',\beta'$ intersect transversally, and each common point of $\alpha'$ and $\beta'$ is a traversing point. \end{definition} Note that such $\alpha',\beta'$ always exist and the definition of $\overline{\sf WindNum}(\alpha,\beta)$ does not depend on the particular curves. Let us now proceed towards a proof of Proposition~\ref{prop:wn-prop}. \begin{lemma}\label{lem:additivity-clean} Suppose $\alpha,\beta,\gamma$ are curves traversing a ring ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ that pairwise intersect transversally. Further, letting $a,b,c$ and $a',b',c'$ be the endpoints of $\alpha,\beta,\gamma$ on $\rho_\mathrm{in}$ and $\rho_\mathrm{out}$ respectively, suppose that $a,b,c$ are distinct and appear in this clockwise order on $\rho_\mathrm{in}$, and that $a',b',c'$ are distinct and appear in this clockwise order on $\rho_\mathrm{out}$. Then $$\overline{\sf WindNum}(\alpha,\beta)+\overline{\sf WindNum}(\beta,\gamma)=\overline{\sf WindNum}(\alpha,\gamma).$$ \end{lemma} \begin{proof} First, we argue that we may assume that $\overline{\sf WindNum}(\alpha,\gamma)=0$. This can be done as follows. Let $k=\overline{\sf WindNum}(\alpha,\gamma)$. Glue a ring ${\sf Ring}(\rho_\mathrm{out},\rho_\mathrm{out}')$ to ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$ along $\rho_\mathrm{out}$, for some non-self-traversing closed curve $\rho_\mathrm{out}'$ that encloses $\rho_\mathrm{out}$, thus obtaining the ring ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out}')$.
Pick $a'',b'',c''$ in this clockwise order on $\rho_\mathrm{out}'$. Extend $\alpha$ to a curve $\alpha'$ traversing ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out}')$ using any curve within ${\sf Ring}(\rho_\mathrm{out},\rho_\mathrm{out}')$ connecting $a'$ with $a''$. Next, extend $\beta$ to $\beta'$ in the same way, but choose the extending segment so that it does not cross $\alpha'$. Finally, extend $\gamma$ to $\gamma'$ in the same way, but choose the extending segment so that it crosses $\alpha'$ (and thus also $\beta'$) exactly $-k$ times (where we count signed traversings). Thus we have $\overline{\sf WindNum}(\alpha',\gamma')=0$, and if we replace $\alpha,\beta,\gamma$ with $\alpha',\beta',\gamma'$, both sides of the postulated equality are decremented by $k$. Hence, it suffices to prove this equality for $\alpha',\beta',\gamma'$, for which we know that $\overline{\sf WindNum}(\alpha',\gamma')=0$. Having assumed that $\overline{\sf WindNum}(\alpha,\gamma)=0$, it remains to prove that $\overline{\sf WindNum}(\alpha,\beta)+\overline{\sf WindNum}(\beta,\gamma)=0$, or equivalently \begin{equation}\label{eq:after-parallel} \overline{\sf WindNum}(\alpha,\beta)+\overline{\sf WindNum}(\gamma^{-1},\beta)=0. \end{equation} Since $\overline{\sf WindNum}(\alpha,\gamma)=0$, we may further replace $\alpha$ and $\gamma$ with homotopic curves that do not cross at all. Note that this does not change the winding numbers in the postulated equality. Hence, from now on we assume that $\alpha$ and $\gamma$ are disjoint. Let us connect $c$ with $a$ using an arbitrary curve $\epsilon$ through the interior of the disk enclosed by $\rho_\mathrm{in}$, and let us connect $a'$ with $c'$ using an arbitrary curve $\epsilon'$ outside of the disk enclosed by $\rho_\mathrm{out}$. Thus, the concatenation of $\alpha,\epsilon',\gamma^{-1},\epsilon$ is a closed curve in the plane without self-traversings; call it $\delta$. Then $\delta$ separates the plane into two regions $R_1,R_2$. Since $a,b,c$ appear in the same order on $\rho_\mathrm{in}$ as $a',b',c'$ on $\rho_\mathrm{out}$, it follows that $b$ and $b'$ are in the same region, say $R_1$. Now consider travelling along $\beta$ from the endpoint $b$ to the endpoint $b'$. Every traversing of $\alpha$ or $\gamma$ along $\beta$ is actually a traversing of $\delta$ that contributes to the left-hand side of~\eqref{eq:after-parallel} with $+1$ if at this traversing $\beta$ passes from $R_1$ to $R_2$, and with $-1$ if it passes from $R_2$ to $R_1$. Since $\beta$ starts and ends in $R_1$, the total sum of those contributions has to be equal to $0$, which proves~\eqref{eq:after-parallel}. \end{proof} \begin{lemma}\label{lem:wn-prop} For any curves $\alpha,\beta,\gamma$ traversing a ring ${\sf Ring}(\rho_\mathrm{in},\rho_\mathrm{out})$, it holds that $$|(\overline{\sf WindNum}(\alpha,\beta)+\overline{\sf WindNum}(\beta,\gamma)) - \overline{\sf WindNum}(\alpha,\gamma)|\leq 1.$$ \end{lemma} \begin{proof} By slightly perturbing the curves using homotopies, we may assume that they pairwise intersect transversally.
Further, we modify the curves in the close neighborhoods of $\rho_\mathrm{in}$ and $\rho_\mathrm{out}$ so that we may assume that the endpoints of $\alpha,\beta,\gamma$ on $\rho_\mathrm{in}$ and $\rho_\mathrm{out}$ are pairwise different and appear in the same clockwise order on both cycles; for the latter property, we may add one traversing between two of the curves, thus modifying one of the numbers $\overline{\sf WindNum}(\alpha,\beta),\overline{\sf WindNum}(\beta,\gamma),\overline{\sf WindNum}(\alpha,\gamma)$ by one. It now remains to use Lemma~\ref{lem:additivity-clean}. \end{proof} \paragraph{Proof of Proposition~\ref{prop:wn-prop}.} Recall that the graph ${\sf Ring}(I_\mathrm{in}, I_\mathrm{out})$ is embedded in the ring. The first property follows directly from the definition of winding numbers. For the second property, we apply Lemma~\ref{lem:wn-prop} to the curves defined by the paths $\alpha, \beta$ and $\gamma$ in ${\sf Ring}(\rho_\mathrm{in}, \rho_\mathrm{out})$. \qed \end{document}
\begin{document} \title{Algebra with Conjugation} \begin{abstract} In this paper, I consider properties and mappings of a free algebra with unit. I also consider conjugation of a free algebra with unit. \end{abstract} \ShowEq{contents} \end{document}
Anticonvulsant effects of Antiaris toxicaria aqueous extract: investigation using animal models of temporal lobe epilepsy
Priscilla Kolibea Mante1, Donatus Wewura Adongo2 & Eric Woode1
BMC Research Notes volume 10, Article number: 167 (2017)
Antiaris toxicaria has previously shown anticonvulsant activity in acute animal models of epilepsy. The aqueous extract (AAE) was further investigated for activity in kindling with pentylenetetrazole and administration of pilocarpine and kainic acid, which mimic temporal lobe epilepsy in various animal species. ICR mice and Sprague–Dawley rats were pre-treated with AAE (200–800 mg kg−1) and convulsive episodes were induced using pentylenetetrazole, pilocarpine and kainic acid. The potential of AAE to prevent or delay onset and alter duration of seizures was measured. In addition, damage to hippocampal cells was assessed in the kainic acid-induced status epilepticus test. The 800 mg kg−1 dose of the extract suppressed the kindled seizure significantly (P < 0.05), as did diazepam. AAE also produced a significant effect (P < 0.01) on the latency to first myoclonic jerks and on the total duration of seizures. The latency to onset of wet dog shakes was increased significantly (P < 0.05) by AAE on kainic acid administration. Carbamazepine and nifedipine (30 mg kg−1) also delayed the onset. Histopathological examination of brain sections showed no protective effect of AAE or nifedipine on hippocampal cells. Carbamazepine offered better preservation of hippocampal cells in the CA1, CA2 and CA3 regions. Antiaris toxicaria may be effective in controlling temporal lobe seizures in rodents.
Epilepsy is a common neurological disorder which may be due to an imbalance between the excitatory and inhibitory arms of the central nervous system, produced by a decrease in GABAergic and/or an increase in glutamatergic transmission [1]. Kindling and status epilepticus are the two most commonly used animal models of Temporal Lobe Epilepsy (TLE). Both models provide a dependable induction of a persistent, epileptic-like condition, despite their unique characteristics. Kindling is a simple phenomenon in which repeated induction of focal seizure discharge produces a progressive, highly reliable increase in epileptic response to the inducing agent, usually electrical stimulation [2]. However, the use of chemical inducing agents, such as pentylenetetrazole, has been shown to be equally effective [3, 4]. Acute administration of a high dose of pilocarpine in rodents is widely used to study the pathophysiology of seizures. It was first described by Turski et al. in 1983 [5, 6]. Pilocarpine-induced seizures reveal behavioural and electroencephalographic features that are similar to those of human temporal lobe epilepsy. Kainic acid, like pilocarpine, can also be used to induce a similar TLE or status epilepticus state in a variety of species using either systemic, intrahippocampal or intra-amygdaloid administrations [7]. Temporal lobe epilepsy is the most common form of complex partial seizures, accounting for approximately 60% of all patients with epilepsy. Medial temporal lobe epilepsy, the commonest form of temporal lobe epilepsy, is also frequently resistant to medications and associated with hippocampal sclerosis. Management is challenging, and surgery often has to be resorted to [8].
The plant Antiaris toxicaria (family Moraceae) is a common plant in Ghanaian forests. It has been employed traditionally as an analgesic and anticonvulsant [9].
Previous studies have shown that Antiaris possesses anticonvulsant activity in various acute murine models [10]. In the present study, Antiaris toxicaria was evaluated to determine its properties in kindling models and post-status models of temporal lobe epilepsy. This investigation sought to determine whether the extract possessed potential as an antiepileptogenic agent as well as efficacy in the management of temporal lobe epilepsy. Stem bark of A. toxicaria was harvested from the KNUST campus, Kumasi, and identified by a staff member of the Pharmacognosy Department, where a voucher specimen (KNUST/HM1/011/S007) has been retained in the herbarium. Preparation of Antiaris toxicaria aqueous extract The dry stem bark was powdered using a commercial grinder. The coarse powder (431 g) was extracted by cold maceration with distilled water as solvent at room temperature for 5 days. The resultant filtrate was oven-dried to obtain a 23.40% w/w yield of A. toxicaria aqueous extract (AAE). Naïve male ICR mice (20–25 g) and Sprague–Dawley rats (120–145 g) were obtained from the Noguchi Memorial Institute for Medical Research, Accra, Ghana, and kept in the departmental Animal House. Animals were maintained under laboratory conditions (room temperature; 12-h light–12-h dark cycle) in stainless steel cages (34 × 47 × 18 cm3) with wood shavings as bedding and allowed access to water and food ad libitum. They were fed a normal commercial diet (GAFCO Ltd). Animals were tested in groups of eight. Groups were assigned randomly. Sample size was calculated by power analysis using the G-power software, version 3.0.5. Experiments were carried out during the day. All animals were handled in accordance with the Guide for the Care and Use of Laboratory Animals [11] and experiments were approved by the Faculty of Pharmacy and Pharmaceutical Sciences Ethics Committee, KNUST. Drugs and chemicals Diazepam (DZP), pentylenetetrazole (PTZ), pilocarpine (PILO) and kainic acid (KA) were purchased from Sigma-Aldrich Inc., St. Louis, MO, USA. Kindling induction PTZ kindling was initiated using a subconvulsive dose of PTZ (40 mg kg−1 body weight) injected into the soft skin fold of the neck on every 2nd day (i.e. Day 1, Day 3, Day 5…). The PTZ injections were stopped when the control animals showed adequate kindling, i.e. a Racine score of 5. After each PTZ injection, the convulsive behaviour of the rodent was observed for 30 min in an observation chamber. The resultant seizures were scored as follows: Stage 0 (no response); Stage 1 (hyperactivity, restlessness and vibrissae twitching); Stage 2 (head nodding, head clonus and myoclonic jerks); Stage 3 (unilateral or bilateral limb clonus); Stage 4 (forelimb clonic seizures); Stage 5 (generalized clonic seizures with loss of postural control). AAE was tested at doses of 200, 400 and 800 mg kg−1 body weight orally and diazepam at 0.1, 0.3 and 1 mg kg−1, i.p. PTZ was injected 30 min after administration of test drugs. Control animals received 3 ml kg−1 of distilled water. Seven groups of eight animals each were used. Group 1 = distilled water-treated control group; groups 2–4 = AAE-treated groups and groups 5–7 = diazepam-treated groups. Pilocarpine-induced status epilepticus Seizures were induced by an injection of pilocarpine (PILO; 300 mg kg−1, i.p.) into drug- or vehicle-treated male rats. Rats were pre-treated with AAE (100–1000 mg kg−1, p.o.)
or diazepam (0.3–3.0 mg kg−1, i.p.) for 30 or 15 min, respectively, before PILO injection. To reduce peripheral autonomic effects produced by PILO, the animals were pre-treated with n-butyl-bromide hyoscine (1 mg kg−1, i.p.) 30 min before PILO administration. Animals were placed in observation cages and observed via video recordings. Latency to and duration of seizures were scored. Rat kainate model Animals were pre-treated with the plant extract 30 min as above before administration of kainic acid (10 mg kg−1, i.p.). Other animals were treated with carbamazepine (30 mg kg−1, p.o) and nifedipine (30 mg kg−1, p.o) 30 min before induction of convulsions. Animals were observed for wet dog shakes over a 1 h period [12]. Brains were harvested for histopathological examination after an hour. Tissues were fixed in 10% buffered formalin (pH 7.2). Dehydration was done with a series of ethanolic solutions, embedded in paraffin wax and processed for histological analysis. Coronal sections (2 µm thick) were cut and stained with haematoxylin-eosin for examination. The stained tissues were observed through an Olympus microscope (BX-51) and photographed by a chare-couple device (CCD) camera. Data were presented as mean ± S.E.M and significant differences between means determined by one-way analysis of variance (ANOVA) followed by Newman–Keuls' post hoc test. Statistical analyses were carried out with Graph Pad Prism® Version 5.0 (GraphPad Software, San Diego, CA, USA) and SigmaPlot® Version 11.0 (Systat Software, Inc.). Data from 5 to 8 animals in each group were included in the analyses. P < 0.05 was considered significant in all cases. None were excluded. Effects in kindling In PTZ + vehicle-treated group, repeated administration of subconvulsive dose of PTZ (40 mg kg−1) on every alternate day for 20 days resulted in increasing convulsive activity leading to generalized clonic seizures (Racine score of 5). Administration of AAE in the dose of 200 and 400 mg kg−1 did not modify the course of kindling induced by PTZ significantly. However, a higher dose of 800 mg kg−1 suppressed the kindled seizure significantly (P < 0.05; Fig. 1a, b) as the group could not achieve a mean score of 5. The standard anticonvulsant diazepam significantly (P < 0.01; Fig. 1c, d) modified the course of kindling at all three dose levels compared to the control. ED50 obtained for the extract was 276.70 mg kg−1 compared to 0.05 mg kg−1 for diazepam. The extract was however more efficacious than diazepam achieving an Emax of 88.83% compared to 60.36% for diazepam (Fig. 2). Effects of AAE (200, 400 and 800 mgkg−1, p. o.; a and b) and diazepam (0.1, 0.3 and 1 mgkg−1, i.p.; c and d) on the stages of convulsion attained in PTZ-induced kindling. Data are presented as group mean ± SEM (n = 8). *P < 0.05, ** P < 0.01, ***P < 0.001 compared with vehicle treated group (One-way analysis of variance followed by Newman–Keuls post hoc test) Dose-response curves of AAE and diazepam on the % decrease in stages of convulsions in PTZ-induced kindling. Each point represents mean ± S.E.M (n = 8) Pilocarpine induced behavioural changes including hypoactivity, tremor and myoclonic movements of the limbs progressing to recurrent myoclonic convulsions with rearing, falling, and status epilepticus. AAE produced significant effect (P < 0.01, Fig. 3a) on the latency to first myoclonic jerks as compared to control at the highest dose only. It had a similar effect on the total duration of seizures (Fig. 3c). 
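As an aside, the statistical workflow stated in the Methods above (one-way ANOVA followed by a post hoc comparison) can be sketched in a few lines of Python. The sketch below is only illustrative: Tukey's HSD is used as a readily available stand-in for the Newman–Keuls test, which scipy and statsmodels do not provide, and the group names and score values are hypothetical.

from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Racine scores for three treatment groups (n = 8 each).
vehicle = [5, 5, 4, 5, 5, 4, 5, 5]
aae_800 = [3, 2, 3, 4, 2, 3, 3, 2]
dzp_1 = [2, 1, 2, 2, 1, 2, 3, 2]

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(vehicle, aae_800, dzp_1)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons (Tukey's HSD as a stand-in for Newman-Keuls).
scores = vehicle + aae_800 + dzp_1
groups = ["vehicle"] * 8 + ["AAE 800"] * 8 + ["DZP 1"] * 8
print(pairwise_tukeyhsd(scores, groups).summary())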
Diazepam was used as the reference drug and it also significantly reduced the total duration of seizures (P < 0.01, Fig. 3d) and latency (P < 0.001, Fig. 3b) at 1 and 3 mg kg−1. Diazepam was more potent than the extract in increasing the % latency with an ED50 of 0.66 mg kg−1 as against 424.50 mg kg−1 for the extract (Fig. 4a). Diazepam was also more efficacious achieving an Emax of 108.90% compared to 100% for the extract. Likewise, for the % duration AAE produced ED50 = 80.06 mg kg−1 and Emax = 100% while the standard diazepam achieved ED50 = 1.67 mg kg−1 and Emax = 100% (Fig. 4b). Effect of AAE (100–1000 mg kg−1, p.o.) and diazepam (0.3–3 mg kg−1, i.p.) on the latency to (a and b) and total duration of seizures (c and d) induced by PILO. Each column represents the mean ± SEM (n = 8). **P < 0.01, ***P < 0.001 compared to vehicle-treated group (One-way ANOVA followed by Newman–Keuls post hoc test) Dose-response curves of AAE and diazepam on the % increase in latency (a) and % decrease in durations (b) of status epilepticus induced with pilocarpine. Each point represents mean ± S.E.M (n = 8) Effects in rat kainate model Kainic acid (10 mg kg−1, i. p) produced wet dog shakes in all animals. AAE (400 mg kg−1) produced a significant (P < 0.05) increase in time taken to the onset of wet dog shakes (Fig. 5a). Carbamazepine (30 mg kg−1) and Nifedipine (30 mg kg−1) also delayed the onset. Histopathological examination of the coronal section of the brain showed no protective effect on hippocampal cells by AAE and nifedipine. Carbamazepine offered better preservation of hippocampal cells in the CA1, CA2 and CA3 regions (Fig. 6). The brain to body ratio decreased significantly (P < 0.001; Fig. 5b) with all three treatments. Effects of AAE (400 mgkg−1, p.o.), carbamazepine (30 mgkg−1, p.o.) and nifedipine (30 mgkg−1, p.o.) on the latency to wet dog shakes (a) and % brain to body ratio (b) in rat kainate model. Data are presented as group mean ± SEM (n = 8). *P < 0.05, ***P < 0.001 compared to vehicle-treated group (One-way analysis of variance followed by Newman–Keuls' post hoc Test) Photomicrographs of coronal sections of the brain of rats AAE (400 mg kg−1), carbamazepine (30 mg kg−1) and nifedipine (30 mg kg−1) on kainate– induced hippocampal damage (H & E, ×100) Kindling is a chronic model of epilepsy and epileptogenesis. Repeated administration of a subconvulsive dose of PTZ (a blocker of the GABAA receptor) results in the progressive intensification of convulsant activity, culminating in a generalized seizure [3, 4]. The highest dose of AAE (800 mg kg−1) significantly delayed progression of convulsion similarly to diazepam. Many substances interacting with GABA receptors have been shown to produce potent anticonvulsant effects on seizures in previously kindled animals [2, 13]. It has been shown that AAE produces anticonvulsant effects by interacting with the GABAA receptor. The fact that it acts via GABAergic mechanisms may be a possible explanation for anticonvulsant effects being exhibited in the kindling model. There is some evidence that free radicals are actively involved in physiological processes during oxidative stress induced by administration of convulsants [14]. Of all the free oxygen radicals that occur in vivo, the hydroxyl-free radicals (OH−) are considered to be most hazardous [15,16,17]. Different mechanisms may lead to the increase of the free radicals in PTZ-induced convulsions. 
It may be assumed that further reason exist for the increased formation of OH− in kindled animals during PTZ seizure, such as reduced activity of superoxide dismutase (SOD), a major defence system for counteracting the toxic effects of reactive oxygen species such as O2−. However, antioxidant activity of AAE has not been firmly established. AAE exhibited anticonvulsant effects against pilocarpine-induced seizures. Pilocarpine is a cholinergic agonist, widely used experimentally to induce limbic seizures in structures containing a high concentration of muscarinic receptors such as the cerebrum [18,19,20]. Status Epilepticus produces significant decreases in M 1, M 2, and GABAergic receptor densities [21] and hence neurotransmission. Freitas et al. have also reported in 2004 on increased levels of superoxide dismutase and catalase and reductions in acetylcholinesterase enzymatic activities in the rat frontal cortex and hippocampus. During pilocarpine-induced seizures and SE in adult rats, lipid peroxidation processes are increased [21, 22] suggesting free radical involvement in the pilocarpine-induced brain damage. Certain antioxidants, such as ascorbic acid, have therefore been shown to possess anticonvulsant activity against pilocarpine-induced SE [22, 23]. Muscarinic receptor stimulation is alleged to be responsible for the onset of pilocarpine-induced seizures, while glutamate acting on NMDA receptors sustains seizure activity [18]. Analysis of the brain morphology after pilocarpine administration demonstrates that the CA1 hippocampal neurones and the hilus of dentate gyrus are predominantly susceptible to neuronal cell loss [5]. Neuronal cell death during SE occurs largely by excitotoxic injury caused by the activation of glutamatergic pathways [6, 24]. Thus, the ability of AAE to attenuate seizures induced by pilocarpine could be attributed to cholinergic antagonism at the M1 or M2 receptors, increase in GABA, and/or its receptor densities, decrease in glutamate levels or through antioxidant pathways. Activation of potassium ion conductance can also contribute as it results in inhibition of the release of glutamate [25, 26]. AAE may therefore have potential in the management of status epilepticus. Kainic acid is a neuro excitotoxic analogue of glutamate used in studies of epilepsy to model experimentally induced limbic seizures [27, 28]. Kainate-treated rats may respond differently. Some may produce wet dog shakes (equivalent to a class III seizure on the Racine scale) or more severe seizures [29]. Previous studies have shown pattern of neurodegeneration in the hippocampus with high concentration of high affinity KA binding sites (CA3 pyramidal cells of the hippocampus) [7, 30]. The dentate gyrus from kainate-treated rats has shown the presence of mossy fibre sprouting in the inner molecular layer [7, 31]. Examination of the hippocampus after seizures revealed hippocampal damage, especially in the CA3 and CA2 regions as shown in the photomicrographs. The extract showed no significant protection against such damage even though it significantly delayed the latency to wet dog shakes. This implies that the extract possesses general anticonvulsant properties but offers no protection against morphological changes. The kainate-treated rat model is used to study temporal lobe epilepsy. However, similarity of seizure occurrence to human temporal lobe epilepsy has not been studied comprehensively. 
But there are several characteristics of the seizures that resemble temporal lobe epilepsy in humans. For instance, some of the animals produce a few observed motor seizures (even with several months of observation) after a latent period, while other animals have seizures at a frequency as high as 1–2 hz which is reminiscent of epilepsy in the human population [8]. The rats often demonstrate confusion (e.g. hyperactive exploration of their cage) after a seizure, resembling the post-ictal confusion in most humans with temporal lobe epilepsy [32, 33]. Rats treated with the extract exhibited fewer seizures as compared to the control implying a possibility that it might be effective in the treatment of temporal lobe epilepsy. The CA1 and CA3 regions of the hippocampus are also known to possess one of the highest densities of the dihydropyridine receptors in the rat brain [34, 35]. The results of many experimental studies have shown that calcium channel blockers are effective against several different types of seizures [35,36,37]. Hence, nifedipine proved its effect in this model. As much as these experiments model human temporal lobe epilepsy, caution has to be exercised in translating the results directly to man without the needed clinical trials. The results however further lend scientific credence to the traditional use of A. toxicaria as an antiepileptic. Antiaris toxicaria possesses anticonvulsant properties in kindling and status epilepticus murine models and may be antiepileptogenic and a candidate for the management of temporal lobe epilepsy (Additional file 1). Meldrum BS. Antiepileptic drugs potentiating GABA. Electroencephalogr Clin Neurophysiol Suppl. 1999;50:450–7. Morimoto K, Fahnestock M, Racine RJ. Kindling and status epilepticus models of epilepsy: rewiring the brain. Prog Neurobiol. 2004;73(1):1–60. Corda MG, Giorgi O, Orlandi M, Longoni B, Biggio G. Chronic administration of negative modulators produces chemical kindling and GABAA receptor down-regulation. Adv Biochem Psychopharmacol. 1990;46:153–66. Dhir A. Pentylenetetrazol (PTZ) kindling model of epilepsy. Curr Protoc Neurosci. 2012. doi:10.1002/0471142301.ns0937s58. Turski WA, Cavalheiro EA, Schwarz M, Czuczwar SJ, Kleinrok Z, Turski L. Limbic seizures produced by pilocarpine in rats: behavioural, electroencephalographic and neuropathological study. Behav Brain Res. 1983;9(3):315–35. Lopes MW, Lopes SC, Costa AP, Gonçalves FM, Rieger DK, Peres TV, et al. Region-specific alterations of AMPA receptor phosphorylation and signaling pathways in the pilocarpine model of epilepsy. Neurochem Int. 2015;87:22–33. Levesque M, Avoli M. The kainic acid model of temporal lobe epilepsy. Neurosci Biobehav Rev. 2013;37(10 Pt 2):2887–99. French J, Williamson P, Thadani V, Darcey T, Mattson R, Spencer S, et al. Characteristics of medial temporal lobe epilepsy: i. results of history and physical examination. Ann Neurol. 1993;34(6):774–80. Mshana RN, Abbiw DK, Addae-Mensah I, Adjanouhoun E, Ahyi MRA, Ekpere JA, et al. Traditional medicine and pharmacopoeia; Contribution to the revision of ethnobotanical and floristic studies in Ghana. Organization of African Unity/Scientific, Technical & Research Commission; 2000. Mante PK, Adongo DW, Woode E, Kukuia KK, Ameyaw EO. Anticonvulsant effect of Antiaris toxicaria (Pers.) Lesch. (Moraceae) aqueous extract in rodents. ISRN Pharmacol. 2013;2013:9. National Research Council. Guide for the care and use of laboratory animals. Washington D.C.: The National Academies Press; 1996. 
Cilio M, Bolanos A, Liu Z, Schmid R, Yang Y, Stafstrom C, et al. Anticonvulsant action and long-term effects of gabapentin in the immature brain. Neuropharmacology. 2001;40(1):139–47. Bittencourt S, Dubiela FP, Queiroz C, Covolan L, Andrade D, Lozano A, et al. Microinjection of GABAergic agents into the anterior nucleus of the thalamus modulates pilocarpine-induced seizures and status epilepticus. Seizure. 2010;19(4):242–6. Coyle JT, Puttfarcken P. Oxidative stress, glutamate, and neurodegenerative disorders. Science. 1993;262(5134):689–95. Halliwell B. Reactive oxygen species and the central nervous system. J Neurochem. 1992;59(5):1609–23. Halliwell B. Oxidative stress and neurodegeneration: where are we now? J Neurochem. 2006;97(6):1634–58. Nowak JZ. Oxidative stress, polyunsaturated fatty acidsderived oxidation products and bisretinoids as potential inducers of CNS diseases: focus on age-related macular degeneration. Pharmacol Rep. 2013;65(2):288–304. Turski L, Ikonomidou C, Turski WA, Bortolotto ZA, Cavalheiro EA. Review: cholinergic mechanisms and epileptogenesis. The seizures induced by pilocarpine: a novel experimental model of intractable epilepsy. Synapse. 1989;3(2):154–71. Clifford DB, Olney JW, Maniotis A, Collins RC, Zorumski CF. The functional anatomy and pathology of lithium-pilocarpine and high-dose pilocarpine seizures. Neuroscience. 1987;23(3):953–68. Gao F, Liu Y, Li X, Wang Y, Wei D, Jiang W. Fingolimod (FTY720) inhibits neuroinflammation and attenuates spontaneous convulsions in lithium-pilocarpine induced status epilepticus in rat model. Pharmacol Biochem Behav. 2012;103(2):187–96. Freitas RM, Sousa FC, Vasconcelos SM, Viana GS, Fonteles MM. Pilocarpine-induced status epilepticus in rats: lipid peroxidation level, nitrite formation, GABAergic and glutamatergic receptor alterations in the hippocampus, striatum and frontal cortex. Pharmacol Biochem Behav. 2004;78(2):327–32. Xavier SM, Barbosa CO, Barros DO, Silva RF, Oliveira AA, Freitas RM. Vitamin C antioxidant effects in hippocampus of adult Wistar rats after seizures and status epilepticus induced by pilocarpine. Neurosci Lett. 2007;420(1):76–9. Tejada S, Sureda A, Roca C, Gamundi A, Esteban S. Antioxidant response and oxidative damage in brain cortex after high dose of pilocarpine. Brain Res Bull. 2007;71(4):372–5. Cavalheiro EA, Naffah-Mazzacoratti MG, Mello LE, Leite JP. The pilocarpine model of seizures. Models of seizures and epilepsy. New York: Elsevier; 2006. p. 433-448. Morales-Villagrán A, Tapia R. Preferential stimulation of glutamate release by 4-aminopyridine in rat striatum in vivo. Neurochem Int. 1996;28(1):35–40. Maljevic S, Lerche H. Potassium channels: a review of broadening therapeutic possibilities for neurological diseases. J Neurol. 2013;260(9):2201–11. Ben-Ari Y, Cossart R. Kainate, a double agent that generates seizures: two decades of progress. Trends Neurosci. 2000;23(11):580–7. Biziere K, Slevin J, Zaczek R, Collins J. Kainic acid neurotoxicity and receptor in CNS pharmacology neuropeptides. In: Proceedings of the 8th international congress of pharmacology, Tokyo, 1981. Elsevier; 2013. Hellier JL, Dudek FE. Chemoconvulsant model of chronic spontaneous seizures. Curr Protoc Neurosci. Chapter 9: Unit 9.19. Fisher RS. Animal models of the epilepsies. Brain Res Rev. 1989;14(3):245–78. Buckmaster PS, Dudek FE. Neuron loss, granule cell axon reorganization, and functional changes in the dentate gyrus of epileptic kainate-treated rats. J Comp Neurol. 1997;385(3):385–404. Fenwick P. 
Psychiatric disorders and epilepsy. In: Epilepsy, Hopkins A, Shorvon S, Cascino G, editors. London: Chapman and Hall Medical; 1995. p. 453–502. Liu A, Bryant A, Jefferson A, Friedman D, Minhas P, Barnard S, et al. Exploring the efficacy of a 5-day course of transcranial direct current stimulation (TDCS) on depression and memory function in patients with well-controlled temporal lobe epilepsy. Epilepsy Behav. 2016;55:11–20. Meyer JH, Gruol DL. Dehydroepiandrosterone sulfate alters synaptic potentials in area CA1 of the hippocampal slice. Brain Res. 1994;633(1):253–61. Koskimäki J, Matsui N, Umemori J, Rantamäki T, Castrén E. Nimodipine activates TrkB neurotrophin receptors and induces neuroplastic and neuroprotective signaling events in the mouse hippocampus and prefrontal cortex. Cell Mol Neurobiol. 2015;35(2):189–96. van Luijtelaar G, Wiaderna D, Elants C, Scheenen W. Opposite effects of T-and L-type Ca 2 + channels blockers in generalized absence epilepsy. Eur J Pharmacol. 2000;406(3):381–9. Kriz J, Župan G, Simonić A. Differential effects of dihydropyridine calcium channel blockers in kainic acid-induced experimental seizures in rats. Epilepsy Res. 2003;52(3):215–25. PKM: Was involved in the conception and design, acquisition of all data as well as the analysis and interpretation of data. She was also involved in drafting and revising the manuscript. DWA: Was involved in the acquisition of data, analysis and interpretation of data in addition to drafting and revising the manuscript. EW: Was involved in the conception and design, analysis and interpretation of data as well as drafting and revising the manuscript. All the authors read and approved the final manuscript. All data generated or analysed during this study are included in this published article as supplementary information files. All animals were handled in accordance the Guide for the Care and Use of Laboratory Animals [11] and experiments were approved by the Faculty of Pharmacy and Pharmaceutical Sciences Ethics Committee, KNUST. Study was funded solely by the authors of this publication. Department of Pharmacology, Faculty of Pharmacy and Pharmaceutical Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana Priscilla Kolibea Mante & Eric Woode University of Health and Allied Sciences, Ho, Ghana Donatus Wewura Adongo Priscilla Kolibea Mante Eric Woode Correspondence to Priscilla Kolibea Mante. Raw data. Mante, P.K., Adongo, D.W. & Woode, E. Anticonvulsant effects of antiaris toxicaria aqueous extract: investigation using animal models of temporal lobe epilepsy. BMC Res Notes 10, 167 (2017). https://doi.org/10.1186/s13104-017-2488-x DOI: https://doi.org/10.1186/s13104-017-2488-x Kainic acid Pentylenetetrazole Pilocarpine
We are an active university Mathematics Department with a strong teaching and research reputation. We offer students the chance to study at undergraduate or postgraduate level on degree programmes leading to: BSc in Mathematics, BSc/BA joint courses in Mathematics or Applied Statistics and a wide range of other subjects. We have an active research group focusing on Computational Applied Mathematics, with research students studying for the degrees of MPhil and PhD, postdoctoral workers and associated collaborators from across the world. A Novel Averaging Principle Provides Insights in the Impact of Intratumoral Heterogeneity on Tumor Progression Hatzikirou, Haralampos; orcid: 0000-0002-1270-7885; email: [email protected]; Kavallaris, Nikos I.; Leocata, Marta; orcid: 0000-0002-5261-3699; email: [email protected] (MDPI, 2021-10-09) Typically stochastic differential equations (SDEs) involve an additive or multiplicative noise term. Here, we are interested in stochastic differential equations for which the white noise is nonlinearly integrated into the corresponding evolution term, typically termed as random ordinary differential equations (RODEs). The classical averaging methods fail to treat such RODEs. Therefore, we introduce a novel averaging method appropriate to be applied to a specific class of RODEs. To exemplify the importance of our method, we apply it to an important biomedical problem, in particular, we implement the method to the assessment of intratumoral heterogeneity impact on tumor dynamics. Precisely, we model gliomas according to a well-known Go or Grow (GoG) model, and tumor heterogeneity is modeled as a stochastic process. It has been shown that the corresponding deterministic GoG model exhibits an emerging Allee effect (bistability). In contrast, we analytically and computationally show that the introduction of white noise, as a model of intratumoral heterogeneity, leads to monostable tumor growth. This monostability behavior is also derived even when spatial cell diffusion is taken into account.
Oscillatory and stability of a mixed type difference equation with variable coefficients Yan, Yubin; Pinelas, Sandra; Ramdani, Nedjem; Yenicerioglu, Ali Fuat; RUDN University; University of Saad Dahleb Blida; Kocaeli University; University of Chester (Inderscience, 2021-08-12) The goal of this paper is to study the oscillatory and stability of the mixed type difference equation with variable coefficients \[ \Delta x(n)=\sum_{i=1}^{\ell}p_{i}(n)x(\tau_{i}(n))+\sum_{j=1}^{m}q_{j}(n)x(\sigma_{i}(n)),\quad n\ge n_{0}, \] where $\tau_{i}(n)$ is the delay term and $\sigma_{j}(n)$ is the advance term and they are positive real sequences for $i=1,\cdots,l$ and $j=1,\cdots,m$, respectively, and $p_{i}(n)$ and $q_{j}(n)$ are real functions. This paper generalise some known results and the examples illustrate the results. Spatial discretization for stochastic semilinear subdiffusion driven by integrated multiplicative space-time white noise Yan, Yubin; Hoult, James; Wang, Junmei; University of Chester; LuLiang University (MDPI, 2021-08-12) Spatial discretization of the stochastic semilinear subdiffusion driven by integrated multiplicative space-time white noise is considered. The spatial discretization scheme discussed in Gy\"ongy \cite{gyo_space} and Anton et al. \cite{antcohque} for stochastic quasi-linear parabolic partial differential equations driven by multiplicative space-time noise is extended to the stochastic subdiffusion. The nonlinear terms $f$ and $\sigma$ satisfy the global Lipschitz conditions and the linear growth conditions. The space derivative and the integrated multiplicative space-time white noise are discretized by using finite difference methods. Based on the approximations of the Green functions which are expressed with the Mittag-Leffler functions, the optimal spatial convergence rates of the proposed numerical method are proved uniformly in space under the suitable smoothness assumptions of the initial values. Error estimates of a continuous Galerkin time stepping method for subdiffusion problem Yan, Yubin; Yan, Yuyuan; Liang, Zongqi; Egwu, Bernard; Jimei University; University of Chester (Springer, 2021-07-29) A continuous Galerkin time stepping method is introduced and analyzed for subdiffusion problem in an abstract setting. The approximate solution will be sought as a continuous piecewise linear function in time $t$ and the test space is based on the discontinuous piecewise constant functions. We prove that the proposed time stepping method has the convergence order $O(\tau^{1+ \alpha}), \, \alpha \in (0, 1)$ for general sectorial elliptic operators for nonsmooth data by using the Laplace transform method, where $\tau$ is the time step size. This convergence order is higher than the convergence orders of the popular convolution quadrature methods (e.g., Lubich's convolution methods) and L-type methods (e.g., L1 method), which have only $O(\tau)$ convergence for the nonsmooth data. Numerical examples are given to verify the robustness of the time discretization schemes with respect to data regularity. A Comprehensive Review of the Composition, Nutritional Value, and Functional Properties of Camel Milk Fat Bakry, Ibrahim A; Yang, Lan; Farag, Mohamed A.; orcid: 0000-0001-5139-1863; email: [email protected]; Korma, Sameh A; Khalifa, Ibrahim; orcid: 0000-0002-7648-2961; email: [email protected]; Cacciotti, Ilaria; orcid: 0000-0002-3478-6510; Ziedan, Noha I.; Jin, Jun; Jin, Qingzhe; Wei, Wei; et al. 
(MDPI, 2021-09-13) Recently, camel milk (CM) has been considered as a health-promoting icon due to its medicinal and nutritional benefits. CM fat globule membrane has numerous health-promoting properties, such as anti-adhesion and anti-bacterial properties, which are suitable for people who are allergic to cow's milk. CM contains milk fat globules with a small size, which accounts for their rapid digestion. Moreover, it also comprises lower amounts of cholesterol and saturated fatty acids concurrent with higher levels of essential fatty acids than cow milk, with an improved lipid profile manifested by reducing cholesterol levels in the blood. In addition, it is rich in phospholipids, especially plasmalogens and sphingomyelin, suggesting that CM fat may meet the daily nutritional requirements of adults and infants. Thus, CM and its dairy products have become more attractive for consumers. In view of this, we performed a comprehensive review of CM fat's composition and nutritional properties. The overall goal is to increase knowledge related to CM fat characteristics and modify its unfavorable perception. Future studies are expected to be directed toward a better understanding of CM fat, which appears to be promising in the design and formulation of new products with significant health-promoting benefits. Layer Dynamics for the one dimensional $\eps$-dependent Cahn-Hilliard / Allen-Cahn Equation Antonopoulou, Dimitra; Karali, Georgia; Tzirakis, Konstantinos; University of Chester; University of Crete; IACM/FORTH (Springer, 2021-08-27) We study the dynamics of the one-dimensional ε-dependent Cahn-Hilliard / Allen-Cahn equation within a neighborhood of an equilibrium of N transition layers, that in general does not conserve mass. Two different settings are considered which differ in that, for the second, we impose a mass-conservation constraint in place of one of the zero-mass flux boundary conditions at x = 1. Motivated by the study of Carr and Pego on the layered metastable patterns of Allen-Cahn in [10], and by this of Bates and Xun in [5] for the Cahn-Hilliard equation, we implement an N-dimensional, and a mass-conservative N−1-dimensional manifold respectively; therein, a metastable state with N transition layers is approximated. We then determine, for both cases, the essential dynamics of the layers (ode systems with the equations of motion), expressed in terms of local coordinates relative to the manifold used. In particular, we estimate the spectrum of the linearized Cahn-Hilliard / Allen-Cahn operator, and specify wide families of ε-dependent weights δ(ε), µ(ε), acting at each part of the operator, for which the dynamics are stable and rest exponentially small in ε. Our analysis enlightens the role of mass conservation in the classification of the general mixed problem into two main categories where the solution has a profile close to Allen-Cahn, or, when the mass is conserved, close to the Cahn-Hilliard solution. New Extremal Binary Self-dual Codes from block circulant matrices and block quadratic residue circulant matrices Gildea, Joe; Kaya, Abidin; Taylor, Rhian; Tylyshchak, Alexander; Yildiz, Bahattin; University of Chester; Sampoerna University; Uzhgorod National University; Northern Arizona University (Elsevier, 2021-08-20) In this paper, we construct self-dual codes from a construction that involves both block circulant matrices and block quadratic residue circulant matrices. We provide conditions when this construction can yield self-dual codes. 
We construct self-dual codes of various lengths over F2 and F2 + uF2. Using extensions, neighbours and sequences of neighbours, we construct many new self-dual codes. In particular, we construct one new self-dual code of length 66 and 51 new self-dual codes of length 68. New Self-dual Codes from 2 x 2 block circulant matrices, Group Rings and Neighbours of Neighbours Gildea, Joe; Kaya, Abidin; Roberts, Adam; Taylor, Rhian; Tylyshchak, Alexander; University of Chester; Harmony Public Schools; Uzhgorod National University (American Institute of Mathematical Sciences, 2021-09-01) In this paper, we construct new self-dual codes from a construction that involves a unique combination; $2 \times 2$ block circulant matrices, group rings and a reverse circulant matrix. There are certain conditions, specified in this paper, where this new construction yields self-dual codes. The theory is supported by the construction of self-dual codes over the rings $\FF_2$, $\FF_2+u\FF_2$ and $\FF_4+u\FF_4$. Using extensions and neighbours of codes, we construct $32$ new self-dual codes of length $68$. We construct 48 new best known singly-even self-dual codes of length 96. Galerkin finite element approximation of a stochastic semilinear fractional subdiffusion with fractionally integrated additive noise Yan, Yubin; Kang, Wenyan; Egwu, Bernard; Pani, Amiya; University of Chester, Lvliang University, P. R. China, Indian Institute of Technology Bombay A Galerkin finite element method is applied to approximate the solution of a semilinear stochastic space and time fractional subdiffusion problem with the Caputo fractional derivative of the order $ \alpha \in (0, 1)$, driven by fractionally integrated additive noise. After discussing the existence, uniqueness and regularity results, we approximate the noise with the piecewise constant function in time in order to obtain a regularized stochastic fractional subdiffusion problem. The regularized problem is then approximated by using the finite element method in spatial direction. The mean squared errors are proved based on the sharp estimates of the various Mittag-Leffler functions involved in the integrals. Numerical experiments are conducted to show that the numerical results are consistent with the theoretical findings. New binary self-dual codes of lengths 56, 58, 64, 80 and 92 from a modification of the four circulant construction. Gildea, Joe; Korban, Adrian; Roberts, Adam; University of Chester (Elsevier, 2021-05-31) In this work, we give a new technique for constructing self-dual codes over commutative Frobenius rings using $\lambda$-circulant matrices. The new construction was derived as a modification of the well-known four circulant construction of self-dual codes. Applying this technique together with the building-up construction, we construct singly-even binary self-dual codes of lengths 56, 58, 64, 80 and 92 that were not known in the literature before. Singly-even self-dual codes of length 80 with $\beta \in \{2,4,5,6,8\}$ in their weight enumerators are constructed for the first time in the literature. Composite Matrices from Group Rings, Composite G-Codes and Constructions of Self-Dual Codes Dougherty, Steven; Gildea, Joe; Korban, Adrian; Kaya, Abidin; University of Scranton; University of Chester; Harmony School of Technology (Springer, 2021-05-19) In this work, we define composite matrices which are derived from group rings. We extend the idea of G-codes to composite G-codes. 
We show that these codes are ideals in a group ring, where the ring is a finite commutative Frobenius ring and G is an arbitrary finite group. We prove that the dual of a composite G-code is also a composite G-code. We also define quasi-composite G-codes. Additionally, we study generator matrices, which consist of the identity matrices and the composite matrices. Together with the generator matrices, the well known extension method, the neighbour method and its generalization, we find extremal binary self-dual codes of length 68 with new weight enumerators for the rare parameters $\gamma$ = 7; 8 and 9: In particular, we find 49 new such codes. Moreover, we show that the codes we find are inaccessible from other constructions. High order algorithms for numerical solution of fractional differential equations Asl, Mohammad Shahbazi; Javidi, Mohammad; Yan, Yubin; University of Chester; University of Tabriz In this paper, two novel high order numerical algorithms are proposed for solving fractional differential equations where the fractional derivative is considered in the Caputo sense. The total domain is discretized into a set of small subdomains and then the unknown functions are approximated using the piecewise Lagrange interpolation polynomial of degree three and degree four. The detailed error analysis is presented, and it is analytically proven that the proposed algorithms are of orders 4 and 5. The stability of the algorithms is rigorously established and the stability region is also achieved. Numerical examples are provided to check the theoretical results and illustrate the efficiency and applicability of the novel algorithms. G-Codes, self-dual G-Codes and reversible G-Codes over the Ring Bj,k Dougherty, Steven; Gildea, Joe; Korban, Adrian; Sahinkaya, Serap; Tarsus University; University of Chester (Springer, 2021-05-03) In this work, we study a new family of rings, Bj,k, whose base field is the finite field Fpr . We study the structure of this family of rings and show that each member of the family is a commutative Frobenius ring. We define a Gray map for the new family of rings, study G-codes, self-dual G-codes, and reversible G-codes over this family. In particular, we show that the projection of a G-code over Bj,k to a code over Bl,m is also a G-code and the image under the Gray map of a self-dual G-code is also a self-dual G-code when the characteristic of the base field is 2. Moreover, we show that the image of a reversible G-code under the Gray map is also a reversible G2j+k-code. The Gray images of these codes are shown to have a rich automorphism group which arises from the algebraic structure of the rings and the groups. Finally, we show that quasi-G codes, which are the images of G-codes under the Gray map, are also Gs-codes for some s. The multi-dimensional Stochastic Stefan Financial Model for a portfolio of assets Antonopoulou, Dimitra; Bitsaki, Marina; Karali, Georgia; University of Chester; University of Crete The financial model proposed in this work involves the liquidation process of a portfolio of n assets through sell or (and) buy orders placed, in a logarithmic scale, at a (vectorial) price with volatility. We present the rigorous mathematical formulation of this model in a financial setting resulting to an n-dimensional outer parabolic Stefan problem with noise. The moving boundary encloses the areas of zero trading, the so-called solid phase. We will focus on a case of financial interest when one or more markets are considered. 
In particular, our aim is to estimate for a short time period the areas of zero trading, and their diameter which approximates the minimum of the n spreads of the portfolio assets for orders from the n limit order books of each asset respectively. In dimensions n = 3, and for zero volatility, this problem stands as a mean field model for Ostwald ripening, and has been proposed and analyzed by Niethammer in [25], and in [7] in a more general setting. There in, when the initial moving boundary consists of well separated spheres, a first order approximation system of odes had been rigorously derived for the dynamics of the interfaces and the asymptotic pro le of the solution. In our financial case, we propose a spherical moving boundaries approach where the zero trading area consists of a union of spherical domains centered at portfolios various prices, while each sphere may correspond to a different market; the relevant radii represent the half of the minimum spread. We apply It^o calculus and provide second order formal asymptotics for the stochastic version dynamics, written as a system of stochastic differential equations for the radii evolution in time. A second order approximation seems to disconnect the financial model from the large diffusion assumption for the trading density. Moreover, we solve the approximating systems numerically. Numerical approximation of the Stochastic Cahn-Hilliard Equation near the Sharp Interface Limit Antonopoulou, Dimitra; Banas, Lubomir; Nurnberg, Robert; Prohl, Andreas; University of Chester; University of Bielefeld; Imperial College London; University of Tuebingen Abstract. We consider the stochastic Cahn-Hilliard equation with additive noise term that scales with the interfacial width parameter ε. We verify strong error estimates for a gradient flow structure-inheriting time-implicit discretization, where ε only enters polynomially; the proof is based on higher-moment estimates for iterates, and a (discrete) spectral estimate for its deterministic counterpart. For γ sufficiently large, convergence in probability of iterates towards the deterministic Hele-Shaw/Mullins-Sekerka problem in the sharp-interface limit ε → 0 is shown. These convergence results are partly generalized to a fully discrete finite element based discretization. We complement the theoretical results by computational studies to provide practical evidence concerning the effect of noise (depending on its 'strength' γ) on the geometric evolution in the sharp-interface limit. For this purpose we compare the simulations with those from a fully discrete finite element numerical scheme for the (stochastic) Mullins-Sekerka problem. The computational results indicate that the limit for γ ≥ 1 is the deterministic problem, and for γ = 0 we obtain agreement with a (new) stochastic version of the Mullins-Sekerka problem. Entropy-driven cell decision-making predicts "fluid-to-solid" transition in multicellular systems Kavallaris, Nikos; Barua, Arnab; Syga, Simon; Mascheroni, Pietro; Meyer-Hermann, Michael; Deutsch, Andreas; Hatzikirou, Haralampos; University of Chester; Helmholtz Centre for Infection Research; Technische Univesität Dresden; Technische Universität Braunschweig; Khalifa University Cellular decision making allows cells to assume functionally different phenotypes in response to microenvironmental cues, with or without genetic change. It is an open question, how individual cell decisions influence the dynamics at the tissue level. 
Here, we study spatio-temporal pattern formation in a population of cells exhibiting phenotypic plasticity, which is a paradigm of cell decision making. We focus on the migration/resting and the migration/proliferation plasticity which underly the epithelial-mesenchymal transition (EMT) and the go or grow dichotomy. We assume that cells change their phenotype in order to minimize their microenvironmental entropy following the LEUP (Least microEnvironmental Uncertainty Principle) hypothesis. In turn, we study the impact of the LEUP-driven migration/resting and migration/proliferation plasticity on the corresponding multicellular spatiotemporal dynamics with a stochastic cell-based mathematical model for the spatio-temporal dynamics of the cell phenotypes. In the case of the go or rest plasticity, a corresponding mean-field approximation allows to identify a bistable switching mechanism between a diffusive (fluid) and an epithelial (solid) tissue phase which depends on the sensitivity of the phenotypes to the environment. For the go or grow plasticity, we show the possibility of Turing pattern formation for the "solid" tissue phase and its relation with the parameters of the LEUP-driven cell decisions. Extending an Established Isomorphism between Group Rings and a Subring of the n × n Matrices Dougherty, Steven; Gildea, Joe; Korban, Adrian; University of Scranton; University of Chester In this work, we extend an established isomorphism between group rings and a subring of the n × n matrices. This extension allows us to construct more complex matrices over the ring R. We present many interesting examples of complex matrices constructed directly from our extension. We also show that some of the matrices used in the literature before can be obtained by a direct application of our extended isomorphism. Two high-order time discretization schemes for subdiffusion problems with nonsmooth data Yan, Yubin; Wang, Yanyong; Yang, Yan; University of Chester; Lvliang University Two new high-order time discretization schemes for solving subdiffusion problems with nonsmooth data are developed based on the corrections of the existing time discretization schemes in literature. Without the corrections, the schemes have only a first order of accuracy for both smooth and nonsmooth data. After correcting some starting steps and some weights of the schemes, the optimal convergence orders $O(k^{3- \alpha})$ and $O(k^{4- \alpha})$ with $0< \alpha <1$ can be restored for any fixed time $t$ for both smooth and nonsmooth data, respectively. The error estimates for these two new high-order schemes are proved by using Laplace transform method for both homogeneous and inhomogeneous problem. Numerical examples are given to show that the numerical results are consistent with the theoretical results. Dynamics of shadow system of a singular Gierer-Meinhardt system on an evolving domain Kavallaris, Nikos I.; Bareira, Raquel; Madzvamuse, Anotida; University of Chester; Polytechnic Institute of Setubal; University of Lisbon; Sussex University The main purpose of the current paper is to contribute towards the comprehension of the dynamics of the shadow system of a singular Gierer-Meinhardt model on an isotropically evolving domain. In the case where the inhibitor's response to the activator's growth is rather weak, then the shadow system of the Gierer-Meinhardt model is reduced to a single though non-local equation whose dynamics is thoroughly investigated throughout the manuscript. 
The main focus is on the derivation of blow-up results for this non-local equation, which can be interpreted as instability patterns of the shadow system. In particular, a diffusion-driven instability (DDI), or Turing instability, in the neighbourhood of a constant stationary solution, which then is destabilised via diffusion-driven blow-up, is observed. The latter indicates the formation of some unstable patterns, whilst some stability results of global-in-time solutions towards non-constant steady states guarantee the occurrence of some stable patterns. Most of the theoretical results are verified numerically, whilst the numerical approach is also used to exhibit the dynamics of the shadow system when analytical methods fail. DOMestic Energy Systems and Technologies InCubator (DOMESTIC) and indoor air quality of the built environment Li, Jinghua; Khalid, Yousaf; Phillips, Gavin J.; University of Chester Oral presentation at RMetS Students and Early Career Scientists Conference 2020 on research project DOMESTIC (DOMestic Energy Systems and Technologies InCubator), which aims to build a facility for the demonstration of domestic technologies and design methodologies (i.e. air quality, energy efficiency).
EURASIP Journal on Bioinformatics and Systems Biology Inference of protein-protein interaction networks from multiple heterogeneous data Lei Huang, Li Liao & Cathy H. Wu EURASIP Journal on Bioinformatics and Systems Biology volume 2016, Article number: 8 (2016) Protein-protein interaction (PPI) prediction is a central task in achieving a better understanding of cellular and intracellular processes. Because high-throughput experimental methods are both expensive and time-consuming, and are also known to suffer from the problems of incompleteness and noise, many computational methods have been developed, with varied degrees of success. However, the inference of PPI networks from multiple heterogeneous data sources remains a great challenge. In this work, we developed a novel method based on approximate Bayesian computation and modified differential evolution sampling (ABC-DEP) and the regularized Laplacian (RL) kernel. The method enables inference of PPI networks from topological properties and multiple heterogeneous features, including gene expression and Pfam domain profiles, in the form of weighted kernels. The optimal weights are obtained by ABC-DEP, and the kernel fusion built from the optimal weights serves as input to RL to infer missing or new edges in the PPI network. Detailed comparisons with control methods have been made, and the results show that the accuracy of PPI prediction measured by AUC is increased by up to 23 %, as compared to a baseline without using optimal weights. The method can provide insights into the relations between PPIs and various feature kernels and demonstrates a strong capability of predicting faraway interactions that cannot be well detected by the traditional RL method. Uncovering protein-protein interactions (PPIs) is crucial to having a better understanding of intracellular signaling pathways, modeling protein complex structures and elucidating various biochemical processes. Although several high-throughput experimental methods, such as the yeast two-hybrid system and mass spectrometry, have been used to determine a large number of protein interactions, these methods are known to be prone to high false-positive rates, in addition to their high cost. Therefore, efficient and accurate computational methods for PPI prediction are urgently needed. Generally, current computational methods for PPI prediction can be classified into two categories: A) pair-wise biological similarity based methods and B) network level-based methods. For category A, computational approaches have been developed to predict whether any given pair of proteins interact with each other, based on various properties such as sequence homology, gene co-expression and phylogenetic profiles [1–5]. Moreover, some previous work also demonstrated that three-dimensional structural information, when available, can be used to predict PPIs with accuracy superior to predictions based on non-structural evidence [6, 7]. However, with no first principles yet to tell deterministically whether two given proteins interact, the pair-wise biological similarity based on various features and attributes can run out of predictive power, as often the signals may be too weak or noisy. Therefore, much recent research has focused on integrating heterogeneous pair-wise features, e.g., genomic features and semantic similarities, in search of better prediction accuracy [8–11].
It would be biologically meaningful to disentangle the relations among various pair-wise biological similarities and PPIs, but such efforts are still at an early stage, given the incomplete and noisy pair-wise similarity kernels. To circumvent the limitations of using pair-wise biological similarity, efforts have also been made to investigate PPI prediction in the context of networks, which may provide extra information to resolve ambiguities incurred at the pair-wise level. A network can be constructed from reliable pair-wise PPIs, with nodes representing proteins and edges representing interactions. Topological features, such as the number of neighbors, can be collected for nodes and then used to measure the similarity for any given node pair to make PPI predictions for the corresponding proteins [12–15]. Inspired by the PageRank algorithm [16], variants of random walk-based methods have been proposed to go beyond these node-centric topological features and get the whole network involved; the probability of interaction between two given proteins is measured in terms of how likely a random walk in the network starting at one node is to reach the other node [17–19]. These methods are suitable for PPI prediction in cases where the task is to find all interacting partners for a particular protein, by using it as the start node for random walks. The computational cost increases from O(N) to O(N^2) for all-against-all PPI prediction. To overcome the limitation of single start-node random walks, many kernels on networks for link prediction and semi-supervised classification have been systematically studied [20], which can measure the random-walk distance for all node pairs at once. Compared with the random walk methods, kernel methods are more efficient and applicable to various network types. However, neither the random walk variants nor the random walk-based kernels can differentiate faraway interacting candidates well. Besides, instead of computing proximity measures between nodes from the network structure directly, Kuchaiev et al. and Cannistraci et al. proposed geometric de-noising methods that embed the PPI network into a low-dimensional geometric space, in which protein pairs that are closer to each other represent good candidate interactions [1, 21]. Furthermore, when the network is represented as an adjacency matrix, the prediction problem can be transformed into a spectral analysis and matrix completion problem. For example, Symeonidis et al. [22] performed link prediction for biological and social networks based on multi-way spectral clustering. Wang et al. [23] and Krishna et al. [24] predicted PPIs through matrix factorization-based methods. By and large, the prediction task is reduced to a convex optimization problem, and the performance depends on the objective function, which should be carefully designed to ensure fast convergence and to avoid getting stuck in local optima. The two kinds of methods, pair-wise biological similarity-based methods and network level-based methods, can be mutually beneficial. For example, weights can be assigned to edges in the network using pair-wise biological similarity scores. In Backstrom et al. [19], a supervised learning task is proposed to learn a function that assigns weighted strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. The matrix factorization-based methods proposed by Wang et al. [23] and Krishna et al.
[24] also included multi-modal biological sources to enhance the prediction performance. In these methods, however, only the pair-wise features for the existing edges in the network are utilized, even though from a PPI prediction perspective, what is particularly useful is to incorporate pair-wise features for node pairs that are not currently linked by a direct edge but will be if a new edge (PPI) is predicted. Therefore, it would be of great interest if we could infer the PPI network directly from multi-modal biological feature kernels that involve all node pairs. In Yamanishi et al. [25], a method is developed to infer protein networks from multiple types of genomic data based on a variant of kernel canonical correlation analysis (CCA). In that work, all genomic kernels are simply added together, with no weights to regulate these heterogeneous and potentially noisy data sources for their contribution towards PPI prediction. Also, the partial network needed for supervised learning based on kernel CCA seems to need to be sufficiently large (e.g., leave-one-out cross-validation is used) to attain good performance. In this paper, we propose a new method based on the ABC-DEP sampling method and the regularized Laplacian (RL) kernel to infer PPI networks from multiple heterogeneous data. The method uses both topological features and various genomic kernels, which are weighted to form a kernel fusion. The weights are optimized using ABC-DEP sampling [26]. Unlike data fusion with genomic kernels for binary classification [27], the combined kernel in our case will instead be used to create a regularized Laplacian kernel [20, 28] for PPI prediction. We demonstrate how the method circumvents the issue of unbalanced data faced by many machine-learning methods in bioinformatics. One main advantage of our method is that only a small partial network is needed for training in order to make the inference at the whole network level. Moreover, the results show that our method works particularly well in detecting interactions between nodes that are far apart in the network, which has been a difficult task for other methods. Tested on yeast PPI data and compared to two control methods, the traditional regularized Laplacian kernel method and a regularized Laplacian kernel based on equally weighted kernels, our method shows a significant improvement of over 20 % in performance measured by ROC score. Methods and data Problem definition Formally, a PPI network can be represented as a graph G=(V,E) with V nodes (proteins) and E edges (interactions). G is defined by the adjacency matrix A of dimension V×V: $$ A_{i,j} = \begin{cases} 1, & \text{if } (i,j)\in E \\ 0, & \text{if } (i,j)\notin E \end{cases} $$ where i and j are two nodes in the node set V, and (i,j) represents an edge between i and j, (i,j)∈E. The graph is called connected if there is a path of edges connecting any two nodes in the graph. For supervised learning, we divide the network into three parts: the connected training network G tn =(V,E tn ), the validation set G vn =(V vn ,E vn ), and the testing set G tt =(V tt ,E tt ). G tn consists of a minimum spanning tree augmented with a small set of randomly selected edges. Because all edges are equally weighted, each time a minimum spanning tree is newly built it may differ from a previous one. G vn and G tt are two non-overlapping subsets of edges randomly chosen from the edges that are not in G tn .
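To make this split concrete, the following minimal Python sketch builds G tn from a minimum spanning tree plus a small number of random extra edges and reserves the remaining edges for validation and testing. It assumes the golden standard network is available as an undirected networkx graph; the function name, edge counts and random seed are illustrative choices rather than values taken from the paper.

import random
import networkx as nx

def split_network(G, n_extra_edges=500, n_valid=1000, n_test=2000, seed=0):
    """Split a connected PPI graph into a training network plus held-out edge sets."""
    random.seed(seed)
    # A minimum spanning tree keeps the training network connected; with equal
    # edge weights, different runs may return different trees.
    mst_edges = set(frozenset(e) for e in nx.minimum_spanning_edges(G, data=False))
    rest = [e for e in G.edges() if frozenset(e) not in mst_edges]
    random.shuffle(rest)
    # Augment the spanning tree with a small set of randomly selected edges (G_tn).
    train_edges = [tuple(e) for e in mst_edges] + rest[:n_extra_edges]
    held_out = rest[n_extra_edges:]
    valid_edges = held_out[:n_valid]                  # G_vn: used to tune kernel weights
    test_edges = held_out[n_valid:n_valid + n_test]   # G_tt: used only for evaluation
    G_tn = nx.Graph()
    G_tn.add_nodes_from(G.nodes())
    G_tn.add_edges_from(train_edges)
    return G_tn, valid_edges, test_edges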
A kernel is a symmetric positive definite matrix K, whose elements are defined as a real-valued function K(x, y) satisfying K(x, y)=K(y, x) for any two proteins x and y in the data set. Intuitively, the kernel for a given dataset can be regarded as a measure of similarity between protein pairs with respect to the biological properties, from which kernel function takes its value. Treated as an adjacency matrix, a kernel can also be thought of as a complete network in which all the proteins are connected by weighted edges. Kernel fusion is a way to integrate multiple kernels from different data sources by a linear combination. For our task, this combination is made of the connected training network and various feature kernels K i ,i=1,2,3…n by optimized weights W i ,i=0,1,2,3…n, which formally is defined by Eq. (2) $$ K_{fusion} = W_{0}G_{tn} + \sum\limits_{i=1}^{n} W_{i}K_{i} $$ Note that the training network is incomplete, i.e., with many edges taken away and reserved as testing examples. Therefore, our inferring task is to predict or recover the interactions in the testing set G tt based on the kernel fusion. How to infer PPI network? Once the kernel fusion is obtained, it will be used to make PPI inference, in the spirit of random walk. However, instead of directly doing random walk, we apply regularized Laplacian (RL) kernel to the kernel fusion, which allows for PPI inference at the whole network level. The regularized Laplacian kernel [28, 29] is also called the normalized random walk with restart kernel in Mantrach et al. [30] because of the underlying relations to the random walk with restart model [17, 31]. Formally, it is defined as Eq. (3) $$ \textit{RL} = \sum\limits_{k=0}^{\infty} \alpha^{k}{(-L)}^{k} = {(I+\alpha\ast L)}^{-1} $$ where L=D−A is the Laplacian matrix made of the adjacency matrix A and the degree matrix D; and 0<α<ρ(L)−1 where ρ(L) is the spectral radius of L. Here, we use kernel fusion in place of the adjacent matrix, so that various feature kernels in Eq. (2) are incorporated in influencing the random walk with restart on the weighted networks [19]. With the regularized Laplacian matrix, no random walk is actually needed to measure how "close" two nodes are and then use that closeness to infer if the two corresponding proteins interact. Rather, RL K is the inferred matrix, and is interpreted as a probability matrix P in which P i,j indicates the probability of an interaction for protein i and j. Algorithm 1 shows the general steps to infer PPI network from a optimal kernel fusion. Figure 1 contains a toy example to show the process of inference, where both the kernel fusion and the regularized Laplacian are shown as heatmap. The lighter a cell is, the more likely the corresponding proteins. However, to ensure good inference, it is important to learn optimal weights for G tn and various K i to build kernel fusion K fusion . Otherwise, given the multiple heterogeneous kernels from different data sources, the kernel fusion without optimized weights is likely to generate erroneous inference on PPI. An example to show the inference process. The example comprises of a small module in the DIP yeast PPI network, which consists of protein P25358 (ELO2, elongation of fatty acids protein 2) and its 1∼3 hops away neighbors. The kernel fusion and the regularized Laplacian are shown as heatmap. 
The lighter a cell is, the more likely the corresponding proteins interact ABC-DEP sampling method for learning weights In this work, we revise the ABC-DEP sampling method [26] to optimize the weights for kernels in Eq. (2). ABC-DEP sampling method, based on approximate Bayesian computation with differential evolution and propagation, shows strong capability of accurately estimating parameters for multiple models at one time. The parameter optimization task here is relatively easier than that in [26] as there is only one RL-based prediction model. Specifically, given the connected training network G tn and N feature kernels in Eq. (2), the length of the particle in ABC-DEP would be N+1, where particle can also be seen as a sample including the N+1 weight values. As mentioned before, the PPI network is divided into three parts: the connected training network G tn , validation set G vn and testing set G tt . To obtain the optimal particle(s), a population of particles with size N p is intialized, and ABC-DEP sampling is run iteratively until a particle is found in the evolving population that maximizes the AUC of inferring training network G tn , validation set G vn . The validation set G vn is used to avoid over-fitting as the algorithm converges. Algorithm 2 shows the detailed sampling process. Algorithm 2 is the main structure in which a population of particles with equal importance is initialized and each particle consists of kernel weights randomly generated from a uniform prior. Given the particle population, Algorithm 3 samples through the parameter space for good particles and assigns them weights according to the predicting quality of their corresponding kernel fusion K fusion . Note that, different from the ABC-DEP sampling method in [26] where the logarithm of the Boltzmann distribution is adopted, here, we accept or reject a new candidate particle based on Boltzmann distribution with simulated annealing method [32]. Through the evolution process, bad particles will be filtered out and good particles will be kept for the next generation. We repeat this process until the algorithm converges. The optimal particle is used to build kernel fusion K fusion for PPI prediction. Data and kernels We use yeast PPI networks downloaded from DIP database (Release 20150101) [33] to test our algorithm. Notably, some interactions without Uniprotkb ID have been filtered out in order to do name mapping and make use of genomic similarity kernels [27]. As a result, the PPI network contains 5093 proteins and 22,423 interactions, from which the largest connected component is used to serve as golden standard network. It consists of 5030 proteins and 22,394 interactions. Only tens of proteins and interactions are not included in the largest connected component, which makes the golden standard data almost as complete as the original network. As mentioned before, the golden standard PPI network is divided into three parts that are connected training network G tn , validation set G vn and testing set G tt , where training network G tn is included in the kernel fusion, validation set G vn is used to find optimal weights for feature kernels and testing set G tt is used to evaluate the inference capability of our method. Six feature kernels are obtained from http://noble.gs.washington.edu/proj/sdp-svm/ for this study and the following list is about the detailed information of these kernels. G tn : G tn is the connected training network that provides connectivity information. 
It can also be thought of as a base network on which to do the inference.

K Jaccard [34]: This kernel measures the similarity of a protein pair i,j in terms of \(\frac {|neighbors(i) \cap neighbors(j)|}{|neighbors(i) \cup neighbors(j)|}\).

K SN : It measures the total number of neighbors of proteins i and j, K SN =neighbors(i)+neighbors(j).

K B [27]: It is a sequence-based kernel matrix that is generated using BLAST [35].

K E [27]: This is a gene co-expression kernel matrix constructed entirely from microarray gene expression measurements.

K Pfam [27]: This is a generalization of the previous pairwise comparison-based matrices in which the pairwise comparison scores are replaced by expectation values derived from hidden Markov models (HMMs) in the Pfam database [36].

These kernels are positive semi-definite; please refer to [27] for a detailed analysis (or proof). Moreover, Eq. (2) is guaranteed to be positive semi-definite, because basic algebraic operations such as addition, multiplication, and exponentiation preserve positive semi-definiteness [37]. Finally, all these kernels are normalized to the scale of (0,1) in order to avoid bias.

Inferring PPI network

To show how well our method can infer a PPI network from the kernel fusion, we make the task challenging by dividing the golden standard yeast PPI network into the following three parts: the connected training network G tn has 5030 nodes and 5394 edges, the validation set G vn has 1000 edges, and the testing set G tt has 16,000 edges. This means that we need to infer and recover a large number of testing edges based on the kernel fusion and a small validation set. Firstly, we check the convergence of the search for the optimal weights used to combine the feature kernels, which is shown in Fig. 2. It clearly shows that the AUC of predicting the training network G tn reaches 1 quickly, while the AUC of predicting the validation set G vn is still in an upward trend. So G tn alone cannot guarantee the optimality of the weights when the algorithm converges, which is why the validation set G vn is used. After several iterations, the ABC-DEP algorithm converges, once both AUCs have become steady.

Fig. 2 The converging process of ABC-DEP sampling used to obtain the optimal weights

With the optimal weights obtained from ABC-DEP sampling, we build the kernel fusion K fusion by Eq. (2). PPI network inference is then made with the RL kernel of Eq. (3). The performance of the inference is evaluated by how well the testing set G tt is recovered. Specifically, all node pairs are ranked in decreasing order by their edge weights in the RL matrix; edges in the testing set G tt are labeled as positive, and node pairs with no edges in G are labeled as negative. A ROC curve is plotted for true positives vs. false positives by running down the ranked list of node pairs. Figure 3 shows the ROC curves and AUCs for three PPI network inferences: RL OPT-K , \( RL_{G_{\textit {tn}}} \), and RL EW-K , where RL OPT-K indicates that the RL-based PPI inference is from the kernel fusion built with the optimal weights, \( RL_{G_{\textit {tn}}} \) indicates that the RL-based PPI inference is made solely from the training network G tn , and RL EW-K indicates that the RL-based PPI inference is from the kernel fusion built with equal weights, e.g., W i =1,i=0,1…n. Additionally, G set ∼n indicates that there are n edges in the set G set , e.g., G tn ∼5394 means the connected training network G tn contains 5394 edges. As shown in Fig. 3, the PPI inference RL OPT-K based on our method significantly outperforms the other two control methods, with a 20 % increase over \( RL_{G_{\textit {tn}}} \) and a 23.6 % increase over RL EW-K in terms of AUC. It is noted that the AUC of the PPI inference RL EW-K , based on the kernel fusion built with equal weights, is even worse than that of \( RL_{G_{\textit {tn}}} \), which is based on a very small training network. This suggests that a lot of noise is introduced if we just naively combine different feature kernels to do PPI prediction. Our method provides an effective way to make good use of various features for improving PPI prediction performance.

Fig. 3 ROC curves of predicting G tt ∼16,000 by \(RL_{G_{\textit {tn}}\sim 5394}\), RL OPT-K , RL EW-K , and RL WOLP-K-i

In Fig. 3, we also compare with another method, WOLP, which uses linear programming to optimize the weights W i for the various kernel features [38]. It can be seen that WOLP, with an AUC of about 0.83, also performs significantly better than the baseline, indicating that the method is effective in weighting various features to improve PPI inference. Note that although reference [38] has "random walk" in its title, the method WOLP does not do sampling; instead, the weights for kernel features are optimized by linear programming, constrained with the transition matrix from the training network for any would-be random walk over the PPI network when kernel features are incorporated. As such, WOLP is more computationally efficient, but with a trade-off of slightly worse performance compared to ABC-DEP, which has the best AUC, 0.86, in this study.

Effects of the training data

Usually, given golden standard data, we need to retrain the prediction model for different divisions of training and testing sets. However, if optimal weights have been found for building the kernel fusion, our PPI network inference method enables us to train the model once and do prediction or inference for different testing sets. To demonstrate that, we keep the two PPI inferences RL OPT-K and RL EW-K obtained before (in the last section) unchanged and evaluate their prediction ability for different testing sets. We also examine how performance is affected by the sizes of the various sets. Specifically, while the size of the training network G tn for \( RL_{G_{\textit {tn}}} \) increases, the training data behind RL OPT-K and RL EW-K are kept unchanged. Therefore, we design several experiments by dividing the golden standard network into \( G_{\textit {tn}}^{i} \) and \( G_{\textit {tt}}^{i} \), i=1,…,n, and building the PPI inference \( RL_{G_{\textit {tn}}^{i}} \) to predict \( G_{\textit {tt}}^{i} \) each time. For comparison, we also use RL OPT-K and RL EW-K to predict \( G_{\textit {tt}}^{i} \). Figure 4 shows the ROC curves of predicting G tt ∼15,000 by \( RL_{G_{\textit {tn}}\sim 7394} \), RL OPT-K and RL EW-K . Figures 5, 6 and 7 show similar results but for different G tn and G tt sets. As shown in Figs. 4, 5, 6, and 7, RL OPT-K trained on only 5394 golden standard edges still performs better than the control methods that employ significantly more golden standard edges.
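To make Eqs. (2) and (3) and the ROC evaluation above concrete, here is a minimal numpy/scikit-learn sketch (again ours, not the authors' implementation). It assumes the kernels are dense symmetric arrays over the same protein ordering; the function names and the choice of alpha as half of its admissible upper bound 1/ρ(L) are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def kernel_fusion(A_tn, kernels, weights):
    """Eq. (2): K_fusion = W_0 * G_tn + sum_i W_i * K_i."""
    K = weights[0] * A_tn
    for w_i, K_i in zip(weights[1:], kernels):
        K = K + w_i * K_i
    return K

def regularized_laplacian(K, alpha=None):
    """Eq. (3): RL = (I + alpha * L)^(-1), with L = D - K."""
    L = np.diag(K.sum(axis=1)) - K
    if alpha is None:
        # any 0 < alpha < 1/rho(L) is admissible; take half the bound
        alpha = 0.5 / np.max(np.abs(np.linalg.eigvalsh(L)))
    return np.linalg.inv(np.eye(K.shape[0]) + alpha * L)

def auc_on_test_set(RL, test_edges, non_edges):
    """Rank node pairs by their RL score: testing edges are positives,
    node pairs with no edge in the golden-standard network are negatives."""
    scores = [RL[i, j] for i, j in test_edges] + [RL[i, j] for i, j in non_edges]
    labels = [1] * len(test_edges) + [0] * len(non_edges)
    return roc_auc_score(labels, scores)
```

Evaluating the fusion built with the optimized weights versus uniform weights in this sketch corresponds to the comparison between RL OPT-K and RL EW-K described above; roc_auc_score does not require the pairs to be explicitly sorted.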
ROC curves of predicting G tt ∼15,000 by \( RL_{G_{\textit {tn}}\sim 7394} \), RL OPT-K , and RL EW-K ROC curves of predicting G tt ∼12,000 by \( RL_{G_{\textit {tn}}\sim 10394} \), RL OPT-K , and RL EW-K Detection of interacting pairs far apart in the network It is known that the basic idea of using random walk or random walk based kernels [17–20] for PPI prediction is that good interacting candidates usually are not faraway from the start node, e.g., only 2,3 edges away in the network. Consequently, for some existing network-level link prediction methods, testing nodes have been chosen to be within a certain distance range, which largely contributes to their good performance reported. In reality, however, a method that is capable and good at detecting interacting pairs far apart in the network can be even more useful, such as in uncovering cross talk between pathways that are not nearby in the PPI network. To investigate how our proposed method performs at detecting faraway interactions, we still use \( RL_{G_{\textit {tn}}\sim 6394} \), RL OPT-K , and RL EW-K for inferring PPIs, but we select node pairs (i,j) that satisfy dist(i,j)>3 given G tn ∼6394 from G tt as new testing set and name it \( G_{\textit {tt}}^{(dist(i,j)>3)} \). Figure 8 shows that RL OPT-K has not only a significant margin over the control methods in detecting long-distance PPIs but also maintains a high ROC score of 0.8438 comparable to that of all PPIs. In contrast, \( RL_{G_{\textit {tn}}\sim 6394} \) performs poorly and worse than RL EW-K , which means the traditional RL kernel based on adjacent training network alone cannot detect faraway interactions well. ROC curves of predicting \( G_{\textit {tt}}^{(dist(i,j)>3)} \) by \( RL_{G_{\textit {tn}}\sim 6394} \), RL OPT-K , and RL EW-K Analysis of weights and efficiency As the method incorporates multiple heterogeneous data, it can be insightful to inspect the final optimal weights. In our case, the optimal weights are 0.8608, 0.1769, 0.9334, 0, 0.0311, 0.9837, respectively for feature kernels G tn , K Jaccard , K SN , K B , K E , and K Pfam . These weights indicate that K SN and K Pfam are the predominant contributors to PPI prediction. This observation is consistent with the intuition that proteins interact via interfaces made of conserved domains [39], and PPI interactions can be classified based on their domain families and domains from the same family tend to interact [40–42]. Although the true strength of our method lies in integrating multiple heterogeneous data for PPI network inference, the optimal weights can serve as a guidance to select most relevant features when time and resources are limited. Lastly, despite of the common concern of time efficiency with methods based on evolutionary computing, the issue is mitigated in our case. In our experiments, only a small number of particles, 150 to be exact, is needed for the initial population for ABC-DEP sampling. Also, as shown in the Fig. 2, our ABC-DEP algorithm is quickly converged, within 10 iterations. Moreover, since the PPI inference from RL OPT-K is shown to be less sensitive to the size of training data, only 5394 gold standard edges, less than 25 % of the total number, are used. And, we do not need to retrain the model for different testing data, which is another time-saving property of our method. In this work, we developed a novel supervised method that enables inference of PPI networks from topological and genomic feature kernels in an optimized integrative way. 
Tested on DIP yeast PPI network, the results show that our method exhibits competitive advantages over control methods in several ways. First, the proposed method achieved superior performance in PPI prediction, as measured by ROC score, over 20 % higher than the baseline, and this margin is maintained even when the control methods use a significantly larger training set. Second, we also demonstrated that by integrating topological and genomic features into regularized Laplacian kernel, the method avoids the short-range problem encountered by random-walk based methods—namely the inference becomes less reliable for nodes that are far from the start node of the random walk, and show obvious improvements on predicting faraway interactions. Lastly, our method can also provide insights into the relations between PPIs and various similarity features of protein pairs, thereby helping us make good use of these features. As more features with respect to proteins are collected from various -omics studies, they can be used to characterize protein pairs in terms of feature kernels from different perspectives. Thus, we believe that our method provides a useful framework in fusing various feature kernels from heterogeneous data to improve PPI prediction. O Kuchaiev, M Rašajski, DJ Higham, N Pržulj, Geometric de-noising of protein-protein interaction networks. PLoS Comput. Biol.5(8), 1000454 (2009). Y Murakami, K Mizuguchi, Homology-based prediction of interactions between proteins using averaged one-dependence estimators. BMC Bioinforma.15(1), 213 (2014). L Salwinski, D Eisenberg, Computational methods of analysis of protein-protein interactions. Curr. Opin. Struct. Biol.13(3), 377–382 (2003). R Craig, L Liao, Phylogenetic tree information aids supervised learning for predicting protein-protein interaction based on distance matrices. BMC Bioinforma.8(1), 6 (2007). A Gonzalez, L Liao, Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines. BMC Bioinforma.11(1), 537 (2010). QC Zhang, D Petrey, L Deng, L Qiang, Y Shi, CA Thu, B Bisikirska, C Lefebvre, D Accili, T Hunter, T Maniatis, A Califano, B Honig, Structure-based prediction of protein-protein interactions on a genome-wide scale. Nature. 490(7421), 556–560 (2012). R Singh, D Park, J Xu, R Hosur, B Berger, Struct2net: a web service to predict protein-protein interactions using a structure-based approach. Nucleic Acids Res.38(suppl 2), 508–515 (2010). Y Deng, L Gao, B Wang, ppipre: predicting protein-protein interactions by combining heterogeneous features. BMC Syst. Biol.7(Suppl 2), 8 (2013). J Sun, Y Sun, G Ding, Q Liu, C Wang, Y He, T Shi, Y Li, Z Zhao, Inpreppi: an integrated evaluation method based on genomic context for predicting protein-protein interactions in prokaryotic genomes. BMC Bioinforma.8(1), 414 (2007). Y-R Cho, M Mina, Y Lu, N Kwon, P Guzzi, M-finder: uncovering functionally associated proteins from interactome data integrated with go annotations. Proteome Sci.11(Suppl 1), 3 (2013). S-H Jung, W-H Jang, D-S Han, A computational model for predicting protein interactions based on multidomain collaboration. IEEE/ACM Trans. Comput. Biol. Bioinforma.9(4), 1081–1090 (2012). H-H Chen, L Gou, XL Zhang, CL Giles, in Proceedings of the 27th Annual ACM Symposium on Applied Computing. Discovering missing links in networks using vertex similarity measures. SAC '12 (ACMNew York, 2012), pp. 138–143. L Lü, T Zhou, Link prediction in complex networks: a survey. Physica A. 
390(6), 11501170 (2011). C Lei, J Ruan, A novel link prediction algorithm for reconstructing protein-protein interaction networks by topological similarity. Bioinformatics. 29(3), 355–364 (2013). N Pržulj, Protein-protein interactions: making sense of networks via graph-theoretic modeling. BioEssays. 33(2), 115–123 (2011). L Page, S Brin, R Motwani, T Winograd, The PageRank Citation Ranking: Bringing Order to the Web (Stanford InfoLab, Stanford, CA, USA, 1999). Previous number = SIDL-WP-1999-0120, http://ilpubs.stanford.edu:8090/422/. H Tong, C Faloutsos, J-Y Pan, Random walk with restart: fast solutions and applications. Knowl. Inf. Syst.14(3), 327–346 (2008). doi:10.1007/s10115-007-0094-2. R-H Li, JX Yu, J Liu, in Proceedings of the 20th ACM International Conference on Information and Knowledge Management. Link Prediction: The Power of Maximal Entropy Random Walk (ACMNew York, NY, USA, 2011), pp. 1147–1156. http://doi.acm.org/10.1145/2063576.2063741. L Backstrom, J Leskovec, in Proceedings of the Fourth ACM International Conference on Web Search and Data Mining. Supervised random walks: Predicting and recommending links in social networks. WSDM '11 (ACMNew York, 2011), pp. 635–644. F Fouss, K Francoisse, L Yen, A Pirotte, M Saerens, An experimental investigation of kernels on graphs for collaborative recommendation and semisupervised classification. Neural Netw.31(0), 53–72 (2012). CV Cannistraci, G Alanis-Lobato, T Ravasi, Minimum curvilinearity to enhance topological prediction of protein interactions by network embedding. Bioinformatics. 29(13), 199–209 (2013). P Symeonidis, N Iakovidou, N Mantas, Y Manolopoulos, From biological to social networks: link prediction based on multi-way spectral clustering. Data Knowl. Eng.87(0), 226–242 (2013). H Wang, H Huang, C Ding, F Nie, Predicting protein–protein interactions from multimodal biological data sources via nonnegative matrix tri-factorization. J. Comput. Biol.20(4), 344–358 (2013). doi:10.1089/cmb.2012.0273. AK Menon, C Elkan, in Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II. Link prediction via matrix factorization. ECML PKDD'11 (SpringerBerlin, 2011), pp. 437–452. Y Yamanishi, J-P Vert, M Kanehisa, Protein network inference from multiple genomic data: a supervised approach. Bioinformatics. 20(suppl 1), 363–370 (2004). L Huang, L Liao, CH Wu, Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm. IEEE/ACM Trans. Comput. Biol. Bioinforma.12(3), 622–631 (2015). GRG Lanckriet, T De Bie, N Cristianini, MI Jordan, WS Noble, A statistical framework for genomic data fusion. Bioinformatics. 20(16), 2626–2635 (2004). T Ito, M Shimbo, T Kudo, Y Matsumoto, in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. Application of kernels to link analysis. KDD '05 (ACMNew York, 2005), pp. 586–592. AJ Smola, R Kondor, 2777, ed. by B Schölkopf, MK Warmuth. Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003. Proceedings (Springer Berlin HeidelbergBerlin, Heidelberg, 2003), pp. 144–158, doi:10.1007/978-3-540-45167-9_12. A Mantrach, N van Zeebroeck, P Francq, M Shimbo, H Bersini, M Saerens, Semi-supervised classification and betweenness computation on large, sparse, directed graphs. Pattern Recogn.44(6), 1212–1224 (2011). 
J-Y Pan, H-J Yang, C Faloutsos, P Duygulu, in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Automatic multimedia cross-modal correlation discovery. KDD '04 (ACMNew York, 2004), pp. 653–658. S Kirkpatrick, CD Gelatt, MP Vecchi, Optimization by simulated annealing. Science. 220(4598), 671–680 (1983). L Salwinski, CS Miller, AJ Smith, FK Pettit, JU Bowie, D Eisenberg, The database of interacting proteins: 2004 update. Nucleic Acids Res.32(90001), 449–451 (2004). P Jaccard, Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bulletin del la Société Vaudoise des Sciences Naturelles. 37:, 547–579 (1901). SF Altschul, W Gish, W Miller, EW Myers, DJ Lipman, Basic local alignment search tool. J. Mol. Biol.215(3), 403–410 (1990). ELL Sonnhammer, SR Eddy, R Durbin, Pfam: A comprehensive database of protein domain families based on seed alignments. Proteins Struct. Funct. Bioinforma.28(3), 405–420 (1997). C Berg, JPR Christensen, P Ressel, Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions, 1st edn., vol. 100 (Springer-Verlag New York, New York, 1984). L Huang, L Liao, CH Wu, in Bioinformatics and Biomedicine (BIBM), 2015 IEEE International Conference On. Protein-protein interaction network inference from multiple kernels with optimization based on random walk by linear programming, (2015), pp. 201–207. doi:10.1109/BIBM.2015.7359681. M Deng, S Mehta, F Sun, T Chen, Inferring domain-domain interactions from protein-protein interactions. Genome Res.12(10), 1540–1548 (2002). Z Itzhaki, E Akiva, Y Altuvia, H Margalit, Evolutionary conservation of domain-domain interactions. Genome Biol.7(12), 125 (2006). J Park, M Lappe, SA Teichmann, Mapping protein family interactions: intramolecular and intermolecular protein family interaction repertoires in the {PDB} and yeast1. J. Mol. Biol.307(3), 929–938 (2001). D Betel, R Isserlin, CWV Hogue, Analysis of domain correlations in yeast protein complexes. Bioinformatics. 20(suppl 1), 55–62 (2004). Funding: Delaware INBRE program, with grant from the National Institute of General Medical Sciences-NIGMS (P20 GM103446) from the National Institutes of Health. Department of Computer and Information Sciences, University of Delaware, 18 Amstel Avenue, Newark, 19716, DE, USA Lei Huang, Li Liao & Cathy H. Wu Center for Bioinformatics and Computational Biology, University of Delaware, 15 Innovation Way, Newark, 19711, DE, USA Cathy H. Wu Lei Huang Li Liao Correspondence to Li Liao. LH designed the algorithm and experiments, and performed all the calculations and analyses. LL and CHW aided in interpretation of the data and preparation of the manuscript. LH wrote the manuscript; LL and CHW revised it. LL and CHW conceived of this study. All authors have read and approved this manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Huang, L., Liao, L. & Wu, C.H. Inference of protein-protein interaction networks from multiple heterogeneous data. J Bioinform Sys Biology 2016, 8 (2016). 
https://doi.org/10.1186/s13637-016-0040-2

Keywords: Protein interaction network; Network inference; Interaction prediction; Differential evolution
CommonCrawl
\begin{document} \begin{abstract} The nonsoluble length $\lambda(G)$ of a finite group $G$ is defined as the minimum number of nonsoluble factors in a normal series of $G$ each of whose quotients either is soluble or is a direct product of nonabelian simple groups. The generalized Fitting height of a finite group $G$ is the least number $h=h^*(G)$ such that $F^*_h(G)=G$, where $F^*_1(G)=F^*(G)$ is the generalized Fitting subgroup, and $F^*_{i+1}(G)$ is the inverse image of $F^*(G/F^*_{i}(G))$. In the present paper we prove that if $\lambda (J)\leq k$ for every 2-generator subgroup $J$ of $G$, then $\lambda(G)\leq k$. It is conjectured that if $h^*(J)\leq k$ for every 2-generator subgroup $J$, then $h^*(G)\leq k$. We prove that if $h^*(\langle x,x^g\rangle)\leq k$ for all $x,g\in G$ such that $\langle x,x^g\rangle$ is soluble, then $h^*(G)$ is $k$-bounded. \end{abstract} \maketitle \section{Introduction} Certain properties of a finite group $G$ can be detected by looking at its 2-generator subgroups. For example, it is well-known that $G$ is nilpotent if and only if every 2-generator subgroup of $G$ is nilpotent. A deep theorem of Thompson says that $G$ is soluble if and only if every 2-generator subgroup of $G$ is soluble \cite{thompson} (see also Flavell \cite{flavell}). A number of recent results reflecting the phenomenon that properties of a finite group are determined by its boundedly generated subgroups can be found in \cite{eu10,lmshu,delu}. In the present paper we deal with groups of given nonsoluble length. Every finite group $G$ has a normal series each of whose quotients either is soluble or is a direct product of nonabelian simple groups. In \cite{junta2} the nonsoluble length of $G$, denoted by $\lambda (G)$, was defined as the minimal number of nonsoluble factors in a series of this kind: if $$ 1=G_0\leq G_1\leq \dots \leq G_{2k+1}=G $$ is a shortest normal series in which for $i$ even the quotient $G_{i+1}/G_{i}$ is soluble (possibly trivial), and for $i$ odd the quotient $G_{i+1}/G_{i}$ is a (non-empty) direct product of nonabelian simple groups, then the nonsoluble length $\lambda (G)$ is equal to $k$. \begin{theorem}\label{a} Suppose that $\lambda (J)\leq k$ for every 2-generator subgroup $J$ of a finite group $G$. Then $\lambda (G)\leq k$. \end{theorem} Recall that the generalized Fitting subgroup $F^*(G)$ of a finite group $G$ is the product of the Fitting subgroup $F(G)$ and all subnormal quasisimple subgroups; here a group is quasisimple if it is perfect and its quotient by the centre is a nonabelian simple group. Then the \textit{generalized Fitting series} of $G$ is defined starting from $F^*_0(G)=1$, and then by induction, $F^*_{i+1}(G)$ being the inverse image of $F^*(G/F^*_{i}(G))$. The least number $h$ such that $F^*_h(G)=G$ is defined as the generalized Fitting height $h^*(G)$ of $G$. Clearly, if $G$ is soluble, then $h^*(G)=h(G)$ is the ordinary Fitting height of $G$. Bounding the generalized Fitting height of a finite group $G$ greatly facilitates using the classification of finite simple groups (and is itself often obtained using the classification). One of such examples is the reduction of the Restricted Burnside Problem to soluble and nilpotent groups in the Hall--Higman paper \cite{ha-hi}, where the generalized Fitting height was in effect bounded for groups of given exponent (using the classification as a conjecture at the time). 
A similar example is Wilson's reduction of the problem of local finiteness of periodic profinite groups to pro-$p$ groups in \cite{wil83}. In view of our Theorem \ref{a} the following problem is natural. \begin{problem}\label{b} Suppose that $h^*(J)\leq k$ for every 2-generator subgroup $J$ of a finite group $G$. Does it follow that $h^*(G)\leq k$? \end{problem} We were not able to answer the above question. On the other hand, the next result seems to be of independent interest. \begin{theorem}\label{c} Let $G$ be a finite group in which $h^*(\langle x,x^g\rangle)\leq k$ for all $x,g\in G$ such that $\langle x,x^g\rangle$ is soluble. Then $h^*(G)$ is $k$-bounded. \end{theorem} We use the expression ``$k$-bounded" to mean ``bounded from above by a number depending on $k$ only". Our method of proof of Theorem \ref{c} shows that $h^*(G)\leq(k+1)2^{k}-1$. However we do not think that the bound $(k+1)2^{k}-1$ is anywhere near to being sharp. We conjecture that actually, under the hypothesis of Theorem \ref{c}, we necessarily have $h^*(G)\leq k$. Thus, the next question is natural. \begin{problem}\label{d} Does any finite group $G$ contain a soluble subgroup $J$ such that $h^*(G)=h(J)$? \end{problem} A positive answer to Problem \ref{d} would imply a positive answer to Problem \ref{b}. The proofs in this paper use the classification of finite simple groups in its application to Schreier's Conjecture that the outer automorphism groups of finite simple groups are soluble. \section{Proof of Theorem \ref{a}} In what follows we denote by $R(K)$ the soluble radical and by $\soc(K)$ the socle of a group $K$. Recall that $\soc(K)$ is the product of all minimal normal subgroups of $K$. If $G$ is a nonsoluble finite group such that $R(G)=1$, then of course $\lambda(G/\soc(G))=\lambda(G)-1$. \begin{lemma}\label{lem:olG} Let $G$ be a group with $\lambda(G)=1$. Let $N=S_1\times\cdots\times S_t$ be a normal subgroup in $G$ which is a direct product of isomorphic nonabelian simple groups. Then the group of permutations induced by $G$ on the set $\{S_1,\ldots,S_t\}$ is soluble. \end{lemma} \begin{proof} Let $\ol G$ be the group of permutations induced by $G$ on the set $\{S_1, \ldots,S_t\}$. Since the soluble radical $R(G)$ of $G$ centralises $N$, and $\lambda(G)=\lambda(G/R)$, we can assume that $R(G)=1$. Then, as $\lambda(G)=1$, it follows that $G/\soc(G)$ is soluble. Since $\soc(G)$ normalises all $S_i$, it follows that $\ol G$ is an homomorphic image of $G/\soc(G)$. Hence $\ol G$ is soluble. \end{proof} \begin{proposition}\label{la=1} Let $N,M$ be normal subgroups of $G$ such that $\lambda(G/N)\leq\lambda(G/M)\leq1$. Then $\lambda(G/N\cap M)\leq1$. \end{proposition} \begin{proof} Suppose that $G$ is a counterexample of minimal order. Then $M\cap N=1$. Since $G$ is a counterexample of minimal order, it follows that $\lambda(G/L)\leq1$ for any nontrivial normal subgroup of $G$. In particular, the soluble radical $R(G)$ of $G$ must be trivial because $\lambda(G)=\lambda(G/R(G))$. Without loss of generality we can assume that $M$ and $N$ are minimal normal subgroups in $G$. Let $G'$ be the derived subgroup of $G$. Since $\lambda(G')=\lambda(G)$, because of minimality of $|G|$ we have $G'=G$ and $\lambda(G/N)=\lambda(G/M)=1$. Let $N=S_1\times\cdots\times S_t$, where $S_i$ are isomorphic nonabelian simple groups. Since $M$ centralises $N$ and $\lambda(G/M)=1$, Lemma \ref{lem:olG} tells us that the permutation group $\ol G$ induced by $G$ on $\{S_1, \ldots , S_t\}$ is soluble. 
Taking into account that $G=G'$, we deduce that $\ol G=1$. Therefore $t=1$ and $N$ is simple. From the Schreier Conjecture combined with the fact that $G=G'$ we now deduce that $G/C_G(N)$ acts on $N$ by inner automorphisms. Hence, $G=N\times C_G(N)$. Since $\lambda(C_G(N))=\lambda(N)=1$, the result follows. \end{proof} Given a finite group $G$, we define $T(G)$ as the intersection of all normal subgroups $N$ of $G$ such that $\lambda(G/N)\leq1$. It is easy to deduce from Proposition \ref{la=1} that $\lambda(G/T(G))\leq1$ and $\lambda(G/T(G))=1$ if and only if $G$ is nonsoluble. Set $T_1(G)=G$ and, by induction, $T_{i+1}(G)= T(T_{i}(G))$. In view of Proposition \ref{la=1}, it is clear that if $T_{i-1}(G) \neq 1$, then $ T_i(G)$ is the minimal normal subgroup $N$ of $G$ such that $\lambda(G/N)=i-1$. Moreover, $\lambda(T_{i}(G)/T_{i+1}(G))=1$ and $T_{i}(G)$ is perfect for every $i\geq1$ such that $T_i(G) \neq 1$. \begin{lemma}\label{zero} Let $G$ be a finite group with nonsoluble length $\lambda$. Then for every positive integer $n\le\lambda$ there exists a subgroup $H$ in $G$ such that $\lambda(H)=n$. \end{lemma} \begin{proof} For example, the subgroup $T_{\lambda-n+1}(G)$ has the required property. \end{proof} \begin{lemma}\label{lem:Centralizer} Let $N$ be a normal subgroup of $G$. If $N$ is a direct product of nonabelian simple groups and $\lambda(G/N)=\lambda(G)$, then $C_G(N)\neq1$. \end{lemma} \begin{proof} Let $\lambda=\lambda(G)$ and $T=T_\lambda(G)$. Thus, $\lambda(G/T)=\lambda-1$. Since $\lambda(G/N)=\lambda$, it is clear that $T$ can not be a subgroup of $N$. If the soluble radical $R(T)$ is nontrivial, we have $R(T)\leq C_G(N)\neq1$. Otherwise, $T$, being perfect, is a direct product of nonabelian simple groups. In particular $T$ is a product of the minimal normal subgroups of $G$ contained in $T.$ If $T_0\le T$ is a minimal normal subgroup of $G$, then either $T_0$ centralizes $N$ or $T_0=[T_0,N]\le N$. Since $T$ is not contained in $N$, we deduce that $C_G(N)\neq1$. \end{proof} In what follows we let $d(G)$ denote the minimal size of a generating set of a group $G$. We will require the following well-known theorem. \begin{theorem}\label{L-M} Let the finite group $G$ have a unique minimal normal subgroup $N$ and assume that $G/N$ is noncyclic. Then $d(G)=d(G/N)$. \end{theorem} The above theorem was proved in \cite{AG} in the case where $N$ is abelian and in \cite{LM} in the general case. We are now ready to prove Theorem \ref{a}. \begin{proof}[Proof of Theorem \ref{a}] Recall that $\lambda (J)\leq k$ for every 2-generator subgroup $J$ of $G$. Our aim is to show that $\lambda(G)\leq k$. Let $G$ be a counterexample of minimal possible order. Then $\lambda(G)=k+1$. In view of Thompson's theorem \cite{thompson}, $k\geq1$. We deduce from Lemma \ref{zero} that $\lambda(H)\le k$ for every proper subgroup $H<G$. Further, whenever $N$ is a nontrivial normal subgroup, we have $\lambda(G/N)\le k$. Let $T=T_{k+1}(G)$. It follows that $T$ is contained in each nontrivial normal subgroup of $G$. Therefore $T$ is a unique minimal normal subgroup in $G$. Since $T=T'$, we conclude that $T$ is a direct product of isomorphic nonabelian simple groups. By minimality of $G$, the quotient $G/T$ has a 2-generator subgroup $H/T$ such that $\lambda(H/T)=k$. If $H=G$, then, by Theorem \ref{L-M}, the group $G$ is 2-generator and we have a contradiction. Assume that $H\neq G$. Since $H$ is a proper subgroup, we have $\lambda(H)\le k$. 
Since the image of $H$ in $G/T$ has nonsoluble length $k$, we conclude that $\lambda(H)=k$. Hence, $\lambda(H/T)=\lambda(H)$. Lemma \ref{lem:Centralizer} now tells us that $C_H(T)\neq1$. It follows that $C_G(T)\neq1$. Since $T$ is a direct product of nonabelian simple groups and since $T$ is a unique minimal normal subgroup in $G$, the centralizer $C_G(T)$ must be trivial. This is a contradiction. The proof is now complete. \end{proof} \section{Proof of Theorem \ref{c}} Given a finite group $G$, we denote by $K(G)$ the intersection of all normal subgroups $N$ such that $F^*(G/N)=G/N$. By \cite[Lemma X.13.3(c)]{hubla} $F^*(G/K(G))=G/K(G)$. Define the series $$K_1(G)=G, \text{ and } K_{i+1}(G)=K(K_i(G))\ for\ i=1,2,\dots.$$ If $h=h^*(G)$, we have the usual inclusions $K_i (G)\leq F^*_{h-i+1}(G)$ for $i=1,2,\dots,h+1$. Moreover, it is clear that $h^*(G)=i+h^*(G/F^*_i(G))$ and $h^*(G)=i+h^*(K_i(G))-1$. \begin{lemma}\label{bbou} Let $\lambda$ and $h$ be nonnegative integers and $G$ a finite group such that $\lambda(G)=\lambda$ and $h^*(G)=h$. Then $G$ contains a soluble subgroup $B$ such that $h(B)\geq \frac{h+1-2^\lambda}{2^\lambda}$. \end{lemma} \begin{proof} If $\lambda=0$, the result is obvious, so we can assume that $\lambda\geq1$ and use induction on $\lambda$. Let $R=R(G)$ and $\bar{G}=G/R$. Choose a Sylow 2-subgroup $T$ in $\soc(\bar{G})$. By the Frattini lemma $\bar{G}=\soc(\bar{G})N_{\bar{G}}(T)$. Let $H$ be the inverse image of $N_{\bar{G}}(T)$ in $G$. We will show that $\lambda(H)=\lambda-1$ and $2h^*(H)\geq h-1$. By the Feit-Thompson Theorem \cite{fei-tho} $\soc(\bar{G})\cap N_{\bar{G}}(T)$ is soluble. Let $S/R=\soc(\bar{G})$. Then $S\cap H$ is soluble. We know that $H/(S\cap H)$ is isomorphic with $G/S$ and that $\lambda(G/S)=\lambda-1$. Hence, $\lambda(H)=\lambda-1$. We will now prove that $2h^*(H)\geq h-1$. Let $j$ be the minimal number such that $K_j=K_j(G)$ is soluble. We notice that $h(K_j)=h-j+1$. Hence $h^*(H)\geq h-j+1$. Since $K_{j-1}(G)$ is not contained in $R$, it follows that $K_{j-1}(\bar{G})$ is a nontrivial subgroup contained in $\soc(\bar{G})$ while $K_{j-2}(\bar{G})$ is not contained in $\soc(\bar{G})$. Since $\bar{G}=\soc(\bar{G})N_{\bar{G}}(T)$, it follows that $K_{j-2}(H)\neq1$. Hence $h^*(H)\geq j-2$. Combining this with the fact that $h^*(H)\geq h-j+1$, we conclude that $h^*(H)\geq\frac{h-1}{2}$. By induction, $H$ contains a soluble subgroup $B$ such that $$h(B)\geq \frac{h^*(H)+1-2^{\lambda-1}}{2^{\lambda-1}}.$$ Since $h^*(H)\geq\frac{h-1}{2}$, we have $$h(B)\geq\frac{h^*(H)+1-2^{\lambda-1}}{2^{\lambda-1}}\geq\frac{h+1-2^\lambda}{2^\lambda}.$$ The proof is now complete. \end{proof} We will require the following results obtained in \cite{junta2} and \cite{eu10}, respectively. \begin{theorem}\label{ju} The nonsoluble length $\lambda(G)$ does not exceed the maximum Fitting height of soluble subgroups of a finite group $G$. \end{theorem} \begin{theorem}\label{uuu} Every soluble group $G$ has a subgroup $J$ generated by a pair of conjugate elements such that $h(G)=h(J)$. \end{theorem} The proof of Theorem \ref{c} is now straightforward. \begin{proof}[Proof of Theorem \ref{c}] Recall that $k$ is a positive integer and $G$ a finite group in which $h^*(\langle x,x^g\rangle)\leq k$ for all $x,g\in G$ such that $\langle x,x^g\rangle$ is soluble. We wish to show that $h^*(G)$ is $k$-bounded. Let $J\leq G$ be a soluble subgroup of maximal Fitting height. In view of Theorem \ref{uuu} we can choose $J$ in such a way that $J$ is generated by a pair of conjugate elements. 
Let $t=h(J)$ and $h=h^*(G)$. By Theorem \ref{ju}, we have $\lambda(G)\leq t$. Now Lemma \ref{bbou} shows that $$\frac{h+1-2^t}{2^t}\leq t.$$ From this we deduce that $h\leq(t+1)2^{t}-1$. So in particular $h$ is bounded by a function of $t$. Since $t\leq k$, the theorem follows. \end{proof} \end{document}
arXiv
\begin{document} \author{Dorit Aharonov\footnote{School of Computer Science, The Hebrew University of Jerusalem, Israel. $\{$doria,benor,elade$\}[email protected]} \and Michael Ben-Or$^*$ \and Elad Eban$^*$} \title{Interactive Proofs For Quantum Computations} \maketitle \thispagestyle{empty} \begin{abstract} The widely held belief that {\sf BQP}\ strictly contains {\sf BPP}\ raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked \cite{vazirani07}: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To provide answers to these questions, we define Quantum Prover Interactive Proofs (\textsf{QPIP}). Whereas in standard Interactive Proofs \cite{goldwasser1985kci} the prover is computationally unbounded, here our prover is in {\sf BQP}, representing a quantum computer. The verifier models our current computational capabilities: it is a {\sf BPP}\ machine, with access to few qubits. Our main theorem can be roughly stated as: "Any language in {\sf BQP}\ has a \textsf{QPIP}, and moreover, a fault tolerant one''. We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (\textsf{QAS}) based on random Clifford elements. This \textsf{QPIP}\ however, is not fault tolerant. Our second protocol uses polynomial codes \textsf{QAS}\ due to Ben-Or, Cr{\'e}peau, Gottesman, Hassidim, and Smith \cite{benor2006smq}, combined with quantum fault tolerance and secure multiparty quantum computation techniques. A slight modification of our constructions makes the protocol ``blind'': the quantum computation and input remain unknown to the prover. After we have derived the results, we have learnt that Broadbent, Fitzsimons, and Kashefi \cite{broadbent2008ubq} have independently derived "universal blind quantum computation'' using completely different methods (measurement based quantum computation). Their construction implicitly implies similar implications. \end{abstract} \setcounter{page}{1} \section{Introduction} \subsection{Motivation}\label{sec:funda} As far as we know today, the quantum mechanical description of many-particle systems requires exponential resources to simulate. This has the following fundamental implication: the results of an experiment conducted on a many-particle physical system described by quantum mechanics, cannot be predicted (in general) by classical computational devices, in any reasonable amount of time. This important realization (or belief), which stands at the heart of the interest in quantum computation, led Vazirani to ask \cite{vazirani07}: Is quantum mechanics a falsifiable physical theory? Assuming that small quantum systems obey quantum mechanics to an extremely high accuracy, it is still possible that the physical description of large systems deviates significantly from quantum mechanics. 
Since there is no efficient way to make the predictions of the experimental outcomes for most large quantum systems, there is no way to test or falsify this possibility experimentally, using the usual scientific paradigm, as described by Popper. This question has practical implications. Experimentalists who attempt to realize quantum computers would like to know how to test that their systems indeed perform the way they should. But most tests cannot be compared to any predictions! The tests whose predictions can in fact be computed, do not actually test the more interesting aspects of quantum mechanics, namely those which cannot be simulated efficiently classically. The problem arises in cryptographic situations as well. Consider for example, a company called \mbox{\emph{Q-Wave}}\ which is trying to convince a certain potential customer that the system it had managed to build is in fact a quantum computer of $100$ qubits. How can the customer, who cannot make predictions of the outcomes of the computations made by the machine, test that the machine is indeed a quantum computer which does what it is claimed to do? Given the amounts of grant money and prestige involved, the possibility of dishonesty of experimentalists and experimentalists' bias inside the academia should not be ignored either \cite{roodman2003bap, BlindWiki}. There is another related question that stems from cryptography. It is natural to expect that the first generations of quantum computers will be extremely expensive, and thus quantum computations would be delegated to untrusted companies. Is there any way for the costumer to trust the outcome, without the need to trust the company which performed the computation, even though the costumer cannot verify the outcome of the computation (since he cannot simulate it)? And even if the company is honest, can the costumer detect innocent errors in such a computation? Vazirani points out \cite{vazirani07} that in fact, an answer to these questions is already given in the form of Shor's algorithm. Indeed, quantum mechanics does not seem to be falsifiable using the {\it usual} scientific paradigm, assuming that {\sf BQP}\ is strictly lager than {\sf BPP}. However, Shor's algorithm does provide a way for falsification, by means of an experiment which lies outside of the scientific paradigm: though its result cannot be {\it predicted} and then compared to the experimental outcome, it can be {\it verified} once the outcome of the experiment is known (by simply taking the product of the factors and checking that this gives the input integer). This, however, does not fully address the issues raised above. Let us take for example the case of the company trying to convince a costumer that the system it is trying to sell is indeed a quantum computer of $100$ qubits. Such a system is already too big to simulate classically; However, any factoring algorithm that is run on a system of a $100$ qubits can be easily performed by today's classical technology. For delegated quantum computations, how can Shor's algorithm help in convincing a costumer of correctness of, say, the computation of the {\sf BQP}\ complete problem of approximating the Jones polynomial \cite{aharonov2006pqa,jonesHardness}? As for experimental results, it is difficult to rigorously state what is exactly falsified or verified by the possibility to apply Shor's algorithm. 
Finally, from a fundamental point of view, there is a fundamental difference between being convinced of the ability to factor, and testing universal quantum evolution. We thus pose the following main question: Can one be convinced of the correctness of the computation of {\it any} polynomial quantum circuit? Does a similar statement to the one above, regarding Shor's algorithm, apply for universal quantum computation? Alternatively, can one be convinced of the ``correctness'' of the quantum mechanical description of any quantum experiment that can be conducted in the laboratory, even though one cannot compute any predictions for the outcomes of this experiment? In this paper we address the above fundamental question in a rigorous way. We do this by taking a computational point of view on the interaction between the supposed quantum computer, and the entity which attempts to verify that it indeed computes what it should. \subsection{Quantum Prover Interactive Proofs (\textsf{QPIP})} Interactive proof systems, defined by Goldwasser, Micali and Rackoff \cite{goldwasser1985kci}, play a crucial role in the theory of computer science. Roughly, a language $\mathcal{L}$ is said to have an interactive proof if there exists a computationally unbounded prover (denoted $\mathcal{P}$) and a {\sf BPP}\ verifier ($\mathcal{V}$) such that for any $x\in\mathcal{L}$, $\mathcal{P}$ convinces $\mathcal{V}$ of the fact that $x\in\mathcal{L}$ with probability $\ge \frac 2 3$ (completeness). Otherwise, when $x\notin\mathcal{L}$ any prover fails to convince $\mathcal{V}$ with probability higher than $\frac 1 3$ (soundness). Shor's factoring algorithm \cite{shor1997pta} can be viewed as an interactive proof of a very different kind: one between a classical {\sf BPP}\ verifier, and a quantum \textit{polynomial time} ({\sf BQP}) prover, in which the prover convinces the verifier of the factors of a given number (this can be easily converted to the usual {\sf IP}\ formalism of membership in a language). This is a quantum interactive proof of a very different kind than quantum interactive proofs previously studied in the literature \cite{watrous2003phc}, in which the prover is an \emph{unbounded} quantum computer, and the \emph{verifier} is a {\sf BQP}\ machine. Clearly, such an interactive proof between a {\sf BQP}\ prover and a {\sf BPP}\ verifier exists for any problem inside {\sf NP} $\cap$ {\sf BQP}. However, it is widely believed that {\sf BQP}\ is not contained in {\sf NP}\ ( and in fact not even in the polynomial hierarchy). The main idea of the paper is to generalize the above interactive point of view of Shor's's algorithm, and show that with this generalization, a verifier can be convinced of the result of {\it any} polynomial quantum circuit, using interaction with the prover - the quantum computer. To this end we define a new model of quantum interactive proofs which we call quantum prover interactive proofs (\textsf{QPIP}). The simplest definition would be an interactive proof in which the prover is a {\sf BQP}\ machine and the verifier a {\sf BPP}\ classical machine. In some sense, this model captures the possible interaction between the quantum world (for instance, quantum systems in the lab) and the classical world. However, this model does not suffice for our purposes. We therefore modify it a little, and allow the verifier additional access to a constant number of qubits. 
The verifier can be viewed as modeling our current computational abilities, and so in some sense, the verifier in the following system represents ``us''. \begin{deff}\label{def:QPIP} Quantum Prover Interactive Proof (\textsf{QPIP}) is an interactive proof system with the following properties: \begin{itemize} \item The prover is computationally restricted to {\sf BQP}. \item The verifier is a hybrid quantum-classical machine. Its classical part is a {\sf BPP}\ machine. The quantum part is a register of $c$ qubits (for some constant $c$), on which the prover can perform arbitrary quantum operations. At any given time, the verifier is not allowed to possess more than $c$ qubits. The interaction between the quantum and classical parts is the usual one: the classical part controls which operations are to be performed on the quantum register, and outcomes of measurements of the quantum register can be used as input to the classical machine. \item There are two communication channels: one quantum and one classical. \end{itemize} The completeness and soundness conditions are identical to the {\sf IP}\ conditions. \end{deff} Abusing notation, we denote the class of languages for which such a proof exists also by $\textsf{QPIP}$. \subsection{Main Results} \begin{deff} The promise problem \mbox{\textsf{Q-CIRCUIT}}\ consists of a quantum circuit made of a sequence of gates, $U=U_T{\ldots}U_1$, acting on $n$ input bits. The task is to distinguish between two cases: \begin{eqnarray*} \mbox{\textsf{Q-CIRCUIT}}_{\textmd{YES}}&: \|(\left(\ket 0 \bra 0 \otimes \mathcal{I}_{n-1}\right)U\ket{\bar{ 0}} \|^2\ge \frac 2 3\\ \mbox{\textsf{Q-CIRCUIT}}_{\textmd{NO}}\;\,&: \|(\left(\ket 0 \bra 0 \otimes \mathcal{I}_{n-1}\right)U\ket{\bar{ 0}} \|^2\le \frac 1 3 \end{eqnarray*} \end{deff} \mbox{\textsf{Q-CIRCUIT}}\ is a {\sf BQP}\ complete problem, and moreover, this remains true for other soundness and completeness parameters $0<s,c<1$, if $c-s>\frac 1 {Poly(n)}$. Our main result is: \begin{thm}\label{thm:qcircuit} The language \mbox{\textsf{Q-CIRCUIT}}\ has a \textsf{QPIP}. \end{thm} Since \mbox{\textsf{Q-CIRCUIT}}\ is {\sf BQP}\ complete, and \textsf{QPIP}\ is trivially inside {\sf BQP}, we have: \begin{thm} \label{thm:main} ${\sf BQP}\ = \textsf{QPIP}$. \end{thm} Thus, a {\sf BQP}\ the prover can convince the verifier of any language he can compute. We remark that our definition of \textsf{QPIP}\ is asymmetric - the verifier is ``convinced'' only if the quantum circuit outputs $1$. This asymmetry seems irrelevant in our context of verifying correctness of quantum computations. Indeed, it is possible to define a symmetric version of \textsf{QPIP}, (we denote it by $\textsf{QPIP}^{sym}$) in which the verifier is convinced of {\it correctness} of the prover's outcome (in both $0$ and $1$ cases) rather than of membership of the input in the language, namely in the $1$ case only. That ${\sf BQP}=\textsf{QPIP}^{sym}$ follows quite easily from the fact that ${\sf BQP}$ is closed under complement (see \pen{app:sym}). Moreover, the above results apply in a realistic setting, namely with noise: \begin{thm}\label{thm:ft} \Th{thm:qcircuit} holds also when the quantum communication and computation devices are subjected to the usual local noise model assumed in quantum fault tolerance settings. 
\end{thm} In the works \cite{childs2001saq,blind} a related question was raised: in our cryptographic setting, if we distrust the company performing the delegated quantum computation, we might want to keep both the input and the function which is being computed secret. Can this be done while maintaining the confidence in the outcome? A simple modification of our protocols gives \begin{thm}\label{thm:blind} \Th{thm:ft} holds also in a blind setting, namely, the prover does not get any information regarding the function being computed and its input. \end{thm} We note that an analogous result for {\sf NP}-hard problems was shown already in the late $80$'s to be impossible unless the polynomial hierarchy collapses \cite{abadi1987hio}. \subsection{Proofs Overview (and More Results About Quantum Authentication Schemes)} Our main tool is quantum authentication schemes (\textsf{QAS}) \cite{barnum2002aqm}. Roughly, a \textsf{QAS}\ allows two parties to communicate in the following way: Alice sends an encoded quantum state to Bob. The scheme is secure if upon decoding, Bob gets the same state as Alice had sent unless it was altered, whereas if the state had been altered, then Bob's chances of declaring valid a wrong state are small. The basic idea is that similar security can be achieved, even if the state needs to be rotated by unitary gates, as long as the verifier can control how the unitary gates affect the authenticated states. Implementing this simple idea in the context of fault tolerance encounters several complications, which we explain later. We start with a simple \textsf{QAS}\ and \textsf{QPIP}, which we do not know how to make fault tolerant, but which demonstrates some of the ideas and might be of interest on its own due to its simplicity. \noindent\textbf{Clifford \textsf{QAS}\ based \textsf{QPIP}}. We present a new, simple and efficient \textsf{QAS}, based on random Clifford group operations (it is reminiscent of Clifford based quantum $t$-designs \cite{ambainis2007qtd,ambainis2008tre}). To encode a state of $m$ qubits, tensor the state with $d$ qubits in the state $\ket{0}$, and apply a random Clifford operator on the $m+d$ qubits. The security proof of this \textsf{QAS}\ uses a combination of known ideas. We first prove that any attack of Eve is mapped by the random Clifford operator to random Pauli operators. We then show that those are detected with high probability. This \textsf{QAS}\ might be interesting on its own right due to its simplicity. To construct a \textsf{QPIP}\ using this \textsf{QAS}, we simply use the prover as an untrusted storage device: the verifier asks the prover for the authenticated qubits on which he would like to apply the next gate, decodes them, applies the gate, encodes them back and sends them to the prover. The proof of security is quite straight forward given the security of the \textsf{QAS}. \ignore{ The idea of the Clifford authentication scheme is the following: If one wants to encode an $m$ qubit state, one can use a random subspace of dimension $2^m$ in a space of dimension $2^{m+d}$. To realize this, we need to efficiently select a random subspace, which is impossible. Instead, we rotate a given subspace by a random Clifford operation, which turns out to provide enough randomization. We prove that for any encoded state and any intervention, if we average over the random Clifford rotation, the intervention takes the authenticated state far enough outside the random subspace, which will be noticeable upon decoding. 
} Due to the lack of structure of the authenticated states, we do not know how to make the prover apply gates on the states without revealing the key. This seems to be necessary for fault tolerance. The resulting \textsf{QPIP}\ protocol also requires many rounds of quantum communication. \noindent\textbf{Polynomial codes based \textsf{QAS}\ and its \textsf{QPIP}\ } Our second type of \textsf{QPIP}\ uses a \textsf{QAS}\ due to Ben-Or, Cr\'epeau, Gottesman, Hassidim and Smith \cite{benor2006smq}. This \textsf{QAS}\ is based on signed quantum polynomial codes, which are quantum polynomial codes \cite{aharonov1997ftq} of degree at most $d$ multiplied by some random sign ($1$ or $-1$) at every coordinate (this is called the sign key) and a random Pauli at every coordinate (the pauli key). We present here a security proof which was missing from the original paper \cite{benor2006smq}. The proof requires some care, due to a subtle point, which was not addressed in \cite{benor2006smq}. We first prove that no Pauli attack can fool more than a small fraction of the sign keys, and thus the sign key suffices in order to protect the code from any Pauli attack. Next, we need to show that the scheme is secure against general attacks. This, surprisingly, does not follow by linearity from the security against pauli attacks (as is the case in quantum error correcting codes): if we omit the Pauli key we get an authentication scheme which is secure against Pauli attacks but not against general attacks. We proceed by showing, (with some similarity to the Clifford based \textsf{QAS}), that the random Pauli key effectively translates Eve's attack to a mixture (not necessarily uniform like in the Clifford case) of Pauli operators acting on a state encoded by a random signed polynomial code. \ignore{ Intuitively, an authentication scheme is required to be able to detect \emph{any} intervention with high probability. Let us compare it to an error detection code. An error detection code, detects a \emph{certain set} of errors \emph{with certainty}, while other errors always escape detection. The main idea behind the authentication scheme of \cite{benor2006smq} is to use a random error detection code such that \emph{any error} is detected by all but a small fraction of codes in the set.} \ignore{ \begin{thm}\label{thm:securityof polynomial} The Polynomial authentication scheme is secure against general attacks with security $2^{-d}$\label{thm:PolynomialAuth}, where $d$ is the degree of the polynomial error correction code used in the scheme. \end{thm} We note that the security parameter $d$ is different than the $d$ parameter in the Clifford based \textsf{QAS}. The security of the signed polynomial codes \textsf{QAS}\ is in fact slightly worse than that of the Clifford based \textsf{QAS}\ (see \Sec{sec:authentication} for more precise statements).} \ignore{ There is a similarity between the Clifford and polynomial \textsf{QAS} s, which makes the ideas underlying the proofs quite reminiscent of each other. The similarity lies in the role of the random Clifford and random Pauli operations. Intuitively, the random operations' role is to protect some underling structure which holds the authenticated information. Both type of operations (Clifford, Pauli) serve to randomize the attack of Eve on the underlying structure. The random Clifford operation disentangles and symmetrizes any attack, such that Eve's attack has the effect on the protected structure as a randomly chosen Pauli. 
On the other hand, the random Pauli does not symmetrize Eve's attack but rather disentangles it, such that Eve's attack is reduced to a mixture (not necessarily uniform) of Pauli operators on the protected structure. The underling structure of both \textsf{QAS}\ makes it possible to detect such interventions with high probability. } \ignore{ \subsection{The \textsf{QPIP}\ Protocols} Given those \textsf{QAS} s, we can prove \Th{thm:qcircuit} in two ways, as follows. We start with the proof using the Clifford authentication scheme, in which case the construction is extremely simple. In this case, the prover in the \textsf{QPIP}\ serves merely as an untrusted storage device. He keeps the qubits in the correct superposition of a state encoded according to the \textsf{QAS}. To apply the operations of the quantum circuit, $\mathcal{P}$ sends the appropriate encoded qubits back to $\mathcal{V}$ who, knowing the encoding, can decode the qubits, apply the operations, encode them again, and send them back. This gives a scheme which requires the register to hold $3$ qubits. } \ignore{ The size of the quantum register which the verifier needs to possess in this protocol is that of two registers, which in our case can be of two qubits each, and so we get a register of $4$ qubits. In fact, $3$ qubits suffice for the verifier's register, since the length of the encoding of one qubit is $2$, and it suffices to send one register at a time and wait until the verifier decodes it before sending the second register. } \ignore{ The second proof of \Th{thm:main}, based on the polynomial codes \textsf{QAS}, also provides fault tolerance. The \textsf{QPIP}\ protocol we use is very different than that used with the Clifford group \textsf{QAS}. } Due to its algebraic structure, the signed polynomial code \textsf{QAS}\ allows applying gates without knowing the key. This was used in \cite{benor2006smq} for secure multiparty quantum computation; here we use it to allow the prover to perform gates without knowing the authentication key. The \textsf{QPIP}\ protocol goes as follows. The prover receives all authenticated qubits in the beginning. Those include the inputs to the circuit, as well as authenticated magic states required to perform Toffoli gates, as described in \cite{benor2006smq,nielsen2000qcq}. With those at hand, the prover can perform universal computation using only Clifford group operations and measurements (universality was proved for qubits in \cite{bravyi2005uqc}, and the extension to higher dimensions was used in \cite{benor2006smq}). The prover sends the verifier results of measurements and the verifier sends information given those results, which enables the prover to continue the computation. The communication is thus classical except for the first round. \ignore{ We mention that the resulting \textsf{QPIP}\ which uses the polynomial code \textsf{QAS}\ has worse parameters than the Clifford based one: The size of the register is $9$ qutrits (three dimensional systems) rather than $3$ qubits. The reason is that we need three registers of three qutrits each, to apply the Toffoli gate. In fact, we suspect that it suffices that the verifier's register contains only $3$ qutrits, based on ideas generalizing the result of Bravyi and Kitaev \cite{bravyi2005uqc} to qudits, namely, to higher dimensions. This will be discussed in the full version of this paper. } \ignore{ This might be a step towards a scheme involving only classical communication, which is a major open problem remaining in this paper. 
} \ignore{ \subsection{Fault Tolerant \textsf{QPIP}} As discussed before, for the \textsf{QPIP}\ to be applicable in physically realistic settings, it must work also in the presence of noise which affects the communication channels and the computations performed by the prover and the verifier. We manage to prove that this indeed holds for the scheme based on polynomial codes: \begin{thm}\label{thm:ft} \Th{thm:qcircuit} holds also when the quantum communication and computation devices are subjected to the usual local noise model assumed in quantum fault tolerance settings. \end{thm} } \noindent\textbf{Fault Tolerance} Using the polynomial codes \textsf{QAS}\ enables applying known fault tolerance techniques based on polynomial quantum codes \cite{aharonov1997ftq, benor2006smq} to achieve robustness to local noise. However, a problem arises when attempting to apply those directly: in such a scheme, the verifier needs to send the prover polynomially many authenticated qubits every time step, so that the prover can perform error corrections on all qubits simultaneously. However, the verifier's quantum register contains only a constant number of qubits, and so the rate at which he can send authenticated qubits (a constant number at every time step) seems to cause a bottleneck in this approach. We bypass this problem as follows. At the first stage of the protocol, authenticated qubits are sent from the verifier to the prover, one by one. As soon as the prover receives an authenticated qubit, he protects it using his own concatenated error correcting codes so that the effective error in the encoded authenticated qubit is constant. This constant accuracy can be maintained for a long time by the prover, by performing error correction with respect to {\it his} error correcting code. Thus, polynomially many such authenticated states can be passed to the prover in sequence. A constant effective error is not good enough, but it can be reduced to an arbitrary inverse polynomial by purification. Indeed, the prover cannot perform the purification on his own, since the purification compares authenticated qubits and the prover does not know the authentication code; however, the verifier can help the prover do this using classical communication. This way the prover can reduce the effective error on his encoded authenticated qubits to inverse polynomial, and perform the usual fault tolerant construction of the given circuit, with the help of the verifier in performing the gates. \noindent\textbf{Blind Quantum Computation} To achieve \Th{thm:blind}, we modify our construction so that the circuit that the prover performs is a {\it universal quantum circuit}, i.e., a fixed sequence of gates which gets as input a description of a quantum circuit, plus an input string to that circuit, and applies the input quantum circuit to the input string. Since the universal quantum circuit is fixed, it reveals nothing about the input quantum circuit or the input string to it. \subsection{Interpretations of the Results} The corollaries below clarify the connection between the results and the motivating questions, and show that one can use the \textsf{QPIP}\ protocols designed here to address the various issues raised in \Sec{sec:funda}. We start with a basic question. Conditioned on the verifier not aborting, does he know that the final state of the machine is very close to the correct state that was supposed to be the outcome of the computation? This unfortunately is not the case.
It may be that the prover can make sure that the verifier aborts with very high probability, but that when the verifier does not abort, the computation is wrong. However, a weaker form of the above result holds: if we know that the probability of not aborting is high, then we can deduce something about correctness. \begin{corol} \label{corol:confidence} For a \textsf{QPIP}\ protocol with security parameter $\delta$, if the verifier does not abort with probability $\ge \gamma$, then the trace distance between the final density matrix and that of the correct state is at most $\frac {2\delta} \gamma$. \end{corol} The proof is simple and is given in \pen{app:interpretation}. Further interpreting \Th{thm:main}, we show that under a somewhat stronger assumption than {\sf BQP}\ $\ne$ {\sf BPP}, but still a widely believed assumption, it is possible to lower bound the computational power of a successful prover and show that it is not within {\sf BPP}. Assuming that there is a language $L \in$ {\sf BQP}\ and a polynomial time samplable distribution $D$ on which any {\sf BPP}\ machine errs with non-negligible probability (e.g., the standard cryptographic assumptions about the hardness of Factoring or Discrete Log), we have \begin{corol} For such a language $L$, if the verifier interacts with a given prover for the language $L$, and does not abort with high probability, then the prover's computational power cannot be simulated by a {\sf BPP}\ machine. \end{corol} This corollary follows immediately from Corollary \ref{corol:confidence}. One might wonder whether it is possible to somehow get convinced not only of the fact that the computation that was performed by the prover is indeed the desired one, but also that the prover must have had access to some quantum computer. We prove: \begin{corol}\label{corol:hybridProver} There exists a language $\mathcal{L} \in {\sf BQP}$ such that even if the prover in our \textsf{QPIP}\ is replaced by one with unbounded classical computational power, but only a constant number of qubits, the prover will not be able to convince the verifier to accept: $\mathcal{V}$ in this case aborts the computation with high probability. \end{corol} This means that our protocols suggest yet another example in which quantum mechanics cannot be simulated by classical systems, regardless of how computationally powerful they are. This property appears already in bounded storage models \cite{watrous1999sbq}, and of course (in a different setting) in the \textit{EPR} experiment. Finally, we remark that in the study of the classical notion of {\sf IP}, a natural question is to ask how powerful the prover must be to prove certain classes of languages. It is known that a {\sf PSPACE}\ prover is capable of proving any language in {\sf PSPACE}, and similarly, it is known that {\sf NP}\ or \textsf{\#P} restricted provers can prove any language which they can compute. This is not known for \textsf{coNP}, \textsf{SZK} or \textsf{PH} \cite{arora:ccm}. It is natural to ask what is the power of a {\sf BQP}\ prover; our results imply that such a prover can prove the entire class {\sf BQP}\ (albeit to a verifier who is not entirely classical). Thus, we provide a characterization of the power of a {\sf BQP}\ prover. \ignore{ Knill \cite{knill2008rbq} has studied independently a related question of providing methods to test the fact that \emph{small} quantum systems (of up to ten qubits, say) indeed constitute a quantum register as they are supposed to.
He suggests tests to do this, based on random Clifford operations and quantum algorithms. However, his methods are of a more physics flavor, and they do not address the fundamental questions presented here.} \ignore{ As far as we know, our scheme is the first fault tolerant quantum interactive proof result, though related notions were discussed by Terhal etc \dnote{complete and check}.} \subsection{Related Work and Open Questions} The two questions regarding the cryptographic angle were asked by Childs in \cite{childs2001saq}, and by Arrighi and Salvail in \cite{blind}, who proposed schemes to deal with such scenarios. However, \cite{childs2001saq} does not deal with a cheating prover, and \cite{blind} deals with a restricted set of functions that are classically verifiable. After deriving the results of this work, we have learned that Broadbent, Fitzsimons, and Kashefi \cite{broadbent2008ubq} have proven related results. Using measurement based quantum computation, they construct a protocol for universal blind quantum computation. In their case, it suffices that the verifier's register consists of a single qubit. Their results have implications similar to ours, though these are implicit in \cite{broadbent2008ubq}. An important and intriguing open question is whether it is possible to remove the necessity for even a small quantum register, and achieve similar results in the more natural \textsf{QPIP}\ model in which the verifier is entirely classical. This would have interesting fundamental implications regarding the ability of a classical system to learn and test a quantum system. Another interesting (perhaps related?) open question is to study the \textsf{QPIP}\ model we have presented here with more than one prover. Possibly, multiprover \textsf{QPIP}\ might be strong enough even when restricted to classical communication. This work also raises some questions in the philosophy of science. In particular, it suggests the possibility of formalizing, based on computational complexity notions, the interaction between physicists and Nature, and perhaps the evolution of physical theories. Following discussions with us at preliminary stages of this work, Jonathan Yaari is currently studying ``interactive proofs with Nature'' from the philosophy of science aspect \cite{JonathanThesis}. \noindent\textbf{Paper Organization} We start with some notation and background in \Sec{Back}. In \Sec{sec:authentication} we present both of our \textsf{QAS} s and prove their security. In \Sec{sec:IPQ} we present our \textsf{QPIP}\ protocols together with some aspects of their security proofs. Other proofs are delayed to the appendices due to lack of space: Fault tolerance is explained in \pen{app:ft}. Blind delegated quantum computation is proved in \pen{app:blind}. The corollaries related to the interpretations of the results are proven in \pen{app:interpretation}. \section{Background}\label{Back} \subsection{Pauli and Clifford Group}\label{sec:PauliNClifford} Let $\mathbbm{P}_n$ denote the $n$-qubit Pauli group. Its elements are of the form $P=P_1\otimes P_2{\otimes{\ldots}\otimes} P_n$ where $P_i \in \{\mathcal{I},X,Y,Z\}$. \begin{deff} Generalized Pauli operators over $F_q$: $X \ket{a} = \ket{\left(a+1\right)\mod q}, ~ Z \ket{a} = \omega_q^a\ket{a}, ~Y = XZ,$ where $\omega_q = e^{2\pi i /q}$ is a primitive $q$-th root of unity. \end{deff} We note that $ZX=\omega_q XZ$. We use the same notation, $\mathbbm{P}_n$, for the standard and generalized Pauli groups, as it will be clear from context which one is being used.
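The relation $ZX=\omega_q XZ$ can be checked directly from the definition. The following minimal numerical sketch (in Python; it is only an illustration and assumes nothing beyond the definition above) builds the generalized $X$ and $Z$ as $q\times q$ matrices and verifies the relation for $q=3$:
\begin{verbatim}
import numpy as np

def generalized_paulis(q):
    """Generalized Pauli matrices X and Z over F_q, as q x q unitaries."""
    omega = np.exp(2j * np.pi / q)       # primitive q-th root of unity
    X = np.zeros((q, q), dtype=complex)  # X|a> = |a+1 mod q>
    Z = np.zeros((q, q), dtype=complex)  # Z|a> = omega^a |a>
    for a in range(q):
        X[(a + 1) % q, a] = 1
        Z[a, a] = omega ** a
    return X, Z, omega

X, Z, omega = generalized_paulis(3)
assert np.allclose(Z @ X, omega * (X @ Z))   # ZX = omega_q XZ
\end{verbatim}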
\begin{deff} For vectors $x,z$ in $F_q^m$, we denote by $P_{x,z}$ the Pauli operator $Z^{z_1}X^{x_1}{\otimes{\ldots}\otimes} Z^{z_m}X^{x_m}$. \end{deff} We denote the set of all unitary matrices over a vector space $A$ as $ \mathbbm{U}(A)$. The Pauli group $\mathbbm{P}_n$ is a basis for the matrices acting on $n$ qubits. In particular, we can write any matrix $U \in \mathbbm{U}(A\otimes B)$, for $A$ the space of $n$ qubits, as $\sum_{P\in \mathbbm{P}_n}P\otimes U_P$ with $U_P$ some matrix on $B$. Let $\mathfrak{C}_n$ denote the $n$-qubit Clifford group. Recall that it is a finite subgroup of $\mathbbm{U}(2^n)$ generated by the Hadamard matrix $H$, by $K=\left(\begin{array}{ll} 1&0\\0&i\end{array}\right)$, and by the controlled-NOT. The Clifford group is characterized by the property that it maps the Pauli group $\mathbbm{P}_n$ to itself, up to a phase $\alpha\in\{\pm 1,\pm i\}$. That is, for every $C\in\mathfrak{C}_n$ and $P\in \mathbbm{P}_n$ there exists $\alpha\in\{\pm 1,\pm i\}$ such that $\alpha CPC^\dagger \in \mathbbm{P}_n$. \begin{fact}\label{fa:randomclifford} A random element from the Clifford group on $n$ qubits can be sampled efficiently by choosing a string $k$ of length $poly(n)$ uniformly at random. The map from $k$ to the group element, represented as a product of Clifford group generators, can be computed in classical polynomial time. \end{fact} \subsection{Signed Polynomial Codes}\label{sec:SignedPolynomial} For background on polynomial quantum codes see \pen{app:poly}. \begin{deff}\label{def:SignedPolynomial} {\bf (\cite{benor2006smq})} The signed polynomial code with respect to a string $k\in\{\pm 1 \}^m$ (denoted $\mathcal{C}_k$) is defined by: \begin{equation} \ket{S^k_a} \EqDef \frac{1}{\sqrt{q^d}}\sum_{f:deg(f)\le d , f(0)=a}\ket{k_1\cdot f(\alpha_1),{\ldots}, k_m\cdot f(\alpha_m)} \end{equation} \end{deff} We use $m=2d+1$. In this case, the code can detect up to $d$ errors. Also, $\mathcal{C}_k$ is self-dual \cite{benor2006smq}, namely, the code subspace is equal to the dual code subspace. \section{Quantum Authentication}\label{sec:authentication} \subsection{Definitions} \begin{deff} \label{def:qas} (adapted from Barnum et al. \cite{barnum2002aqm}). A quantum authentication scheme (\textsf{QAS}) is a pair of polynomial time quantum algorithms $\mathcal{A}$ and $\mathcal{B}$ together with a set of classical keys $\mathcal{K}$ such that: \begin{itemize} \item $\mathcal{A}$ takes as input an $m$-qubit message system $M$ and a key $k\in\mathcal{K}$ and outputs a transmitted system $T$ of $m+d$ qubits. \item $\mathcal{B}$ takes as input the (possibly altered) transmitted system $T'$ and a classical key $k \in \mathcal{K}$ and outputs two systems: an $m$-qubit message state $M$, and a single qubit $V$ which indicates whether the state is considered valid or erroneous. The basis states of $V$ are called $\ket{VAL},\ket{ABR}$. For a fixed $k$ we denote the corresponding super-operators by $A_k$ and $B_k$. \end{itemize} \end{deff} \ignore{ Note that $\mathcal{B}$ may well have measured the qubit $V$, but we would rather keep the quantum description so that we can use density matrices. There are two conditions which should be met by a quantum authentication protocol. On the one hand, in the absence of intervention, the received state should be the same as the initial state and $\mathcal{B}$ should not abort. On the other hand, we want that when the adversary does intervene, $B$'s output systems have high fidelity to the statement ``either $\mathcal{B}$ rejects or his received state is the same as that sent by $\mathcal{A}$''.
This is formalized below for pure states; one can deduce the appropriate statement about fidelity of mixed states, or for states that are entangled to the rest of the world (see \cite{barnum2002aqm} Appendix B). } Given a pure state $\ket{\psi}$, consider the following test on the joint system $M, V$: output a $1$ if the first $m$ qubits are in state $\ket{\psi}$ or if the last qubit is in state $\ket{ABR}$; otherwise, output $0$. The corresponding projections are: \begin{eqnarray} P_1^{\ket{\psi}} & = &\ket{\psi}\bra{\psi} \otimes I_V+ \left(I_M- \ket{\psi}\bra{\psi}\right) \otimes \ket{ABR}\bra{ABR} \\ P_0^{\ket{\psi}} & = &(I_M - \ket{\psi}\bra{\psi}) \otimes \ket{VAL}\bra{VAL} \end{eqnarray} The scheme is secure if for all possible input states $\ket{\psi}$ and for all possible interventions by the adversary, the expected fidelity of $\mathcal{B}$'s output to the space defined by $P^{\ket{\psi}}_1$ is high: \begin{deff} A \textsf{QAS}\ is secure with error $\epsilon$ if for every state $\ket\psi$ the following holds: \begin{itemize} \item Completeness: For all keys $k\in \mathcal{K}$: $B_k(A_k(\ket{\psi}\bra{\psi})) = \ket{\psi}\bra{\psi}\otimes \ket{VAL}\bra{VAL}$ \item Soundness: For any super-operator $\mathcal{O}$ (representing a possible intervention by the adversary), if $\rho_B$ is defined by $\rho_B = \frac{1}{|\mathcal{K}|}\sum_k B_k\big(\mathcal{O}(A_k(\ket{\psi}\bra{\psi}))\big)$, then:\; $\mbox{Tr}{(P_1^{\ket{\psi}}\rho_B)} \ge 1- \epsilon$. \end{itemize} \end{deff} \subsection{Clifford Authentication Scheme}\label{sec:CliffordAuth} \begin{protocol}\textbf{Clifford based \textsf{QAS}}: Given is a state $\ket\psi$ on $m$ qubits and a security parameter $d\in\mathbbm{N}$. We denote $n=m+d$. The set of keys $\mathcal{K}$ consists of succinct descriptions of Clifford operations on $n$ qubits (following Fact \ref{fa:randomclifford}). We denote by $C=C_k$ the operator specified by a key $k\in \mathcal{K}$. \begin{itemize} \item \textbf{Encoding - $A_k$}: Alice applies $C_k$ to the state $\ket\psi \otimes \ket{0}^{\otimes d}$. \item \textbf{Decoding - $B_k$}: Bob applies $C_k^\dagger$ to the received state. Bob measures the auxiliary registers and declares the state valid if they are all $0$; otherwise Bob aborts. \end{itemize} \end{protocol} \begin{thm}{\label{thm:CliffordAuth}} The Clifford scheme applied to $n=m+d$ qubits is a \textsf{QAS}\ with security $2^{-d}$, where $d$ is the number of qubits added to a message of $m$ qubits. \end{thm} \begin{proof} \textit{Sketch. (The full proof is given in \pen{app:cliffordsecurity}.)} We show that when Eve applies a nontrivial Pauli operator, then, averaging over the random Clifford operators, the effective transformation on the original state is the application of a random Pauli. Hence, any Pauli attack is detected with high probability. We then show that \emph{any} attack of Eve is reduced to a very specific form: \begin{equation}\label{eq:Mdef}\mathcal{M}_s: \rho \rightarrow s\rho+ (1-s)\frac 1 {4^n-1}\sum_{P\ne \mathcal{I}} P\rho P^\dagger \end{equation} (for some $0\le s\le 1$). It is not hard to see, using linearity, that this type of attack is detected with high probability.
\end{proof} \ignore{ Essentially, we show that the average of a conjugation of any operator by Clifford operators is equal to the mixing operator $\mathcal{M}_s$ for some $s$; furthermore, we show that this type of attack is detected with high probability.} Given $r$ blocks of $m$ qubits each, we can apply the \textsf{QAS}\ separately to each one of the $r$ blocks. $\mathcal{B}$ declares the state valid if all of the $r$ registers are valid according to the original Clifford \textsf{QAS}. We call this the {\it concatenated Clifford protocol}. The completeness of the concatenated protocol is trivial, reasoning as in the original \textsf{QAS}. For soundness we have the following theorem, whose proof is given in \pen{app:concatclifford}. \begin{thm} \label{thm:CliffordConcat} The concatenated Clifford protocol has the same security as the individual Clifford \textsf{QAS}\ with security parameter $d$, that is, $2^{-d}$. This holds regardless of the number of blocks ($r$) that are authenticated. \end{thm} \subsection{Polynomial Authentication Scheme} \label{sec:PolynomialAuth} \begin{protocol} \textbf{Polynomial Authentication protocol}: Alice wishes to send the state $\ket\psi$ of dimension $q$. She chooses a security parameter $d$, and a code length $m=2d+1$. \begin{itemize} \item \textbf{Encoding:} Alice randomly selects a pair of keys: a sign key $k\in\{\pm 1\}^m$ and a Pauli key $(x,z)$ with $x,z \in {F_q}^m$. She encodes $\ket\psi$ using the signed quantum polynomial code $\mathcal{C}_k$ of polynomial degree $d$ (see \Def{def:SignedPolynomial}). She then applies the Pauli $P_{(x,z)}$ (i.e., for $j \in \{1,{\ldots},m\}$ she applies $Z^{z_j}X^{x_j}$ to the $j$'th qudit). \item \textbf{Decoding:} Bob applies the inverse of $P_{(x,z)}$, and performs the error detection procedure of the code $\mathcal{C}_k$. He aborts if any error is found and declares the message valid otherwise. \end{itemize} \end{protocol} The completeness of this protocol is trivial. We proceed to prove the security of the protocol. \ignore{if Eve does not interfere with the state then Bob recovers exactly the original state and never aborts.} \begin{thm}{\label{thm:PolynomialAuth}} The polynomial authentication scheme is secure against general attacks with security~$2^{-d}$. \end{thm} \begin{proof} A sketch was given in the introduction; the full proof is given in \pen{app:securitypolynomial}. \end{proof} We notice that in this scheme a $q$-dimensional system is encoded into a system of dimension $q^m=q^{2d+1}$. The same security is achieved in the Clifford \textsf{QAS}\ by encoding a $q$-dimensional system into $q\cdot 2^d$ dimensions. The polynomial scheme is somewhat worse in parameters, but still has exponentially good security. To encode several registers, one can independently authenticate each register as in the Clifford case (\Th{thm:CliffordConcat}), but in fact we can use the same sign key $k$ for all registers, while still maintaining security. This fact will be extremely useful in \Sec{sec:IPQ}. The following theorem is proved in \pen{app:concatpolynomial}. \begin{thm} \label{thm:concatpolynomial} The concatenated polynomial based \textsf{QAS}\ (with the same sign key for all registers and a degree $d$ polynomial) has the same security as the individual \textsf{QAS}, that is, $2^{-d}$.
\end{thm} \section{Interactive Proof For Quantumness}\label{sec:IPQ} \subsection{Clifford Authentication Based Protocol}\label{sec:cliffordIP} \begin{protocol} \textbf{Clifford based Interactive Proof for \mbox{\textsf{Q-CIRCUIT}}}: \label{prot:CliffordIP}Fix a security parameter $\epsilon$. Given is a quantum circuit consisting of two-qubit gates, $U=U_T{\ldots}U_1$, with error probability reduced to $\le \delta$. The verifier authenticates the input qubits of the circuit one by one using the (concatenated) Clifford \textsf{QAS}\ with security parameter $d = \lceil{\log{\frac 1 \epsilon}}\rceil$, that is, every qubit is authenticated using $d+1$ qubits, and sends them to $\mathcal{P}$. For each $i=1$ to $T$, the verifier asks the prover for the authenticated qubits on which he would like to apply the gate $U_i$, decodes them, aborts if any error is found, applies the gate, authenticates the resulting qubits using a new pair of authentication keys, and sends the encoded qubits back to $\mathcal{P}$. Finally, the verifier asks $\mathcal{P}$ to send the output authenticated qubit, decodes it and aborts if any error is found; otherwise, he measures the decoded qubit and accepts or rejects accordingly. \label{prot:finalMeasurement} In any case in which $\mathcal{V}$ does not receive the correct number of qubits, he aborts. \end{protocol} \begin{statement}{\Th{thm:qcircuit}} For any $\epsilon,\delta >0$, \Prot{prot:CliffordIP} is a \textsf{QPIP}\ protocol with completeness $1-\delta$ and soundness $\delta+\epsilon$ for \mbox{\textsf{Q-CIRCUIT}}. \end{statement} \begin{proof} If the prover is honest, the verifier will declare the state valid with certainty. Since the error in the circuit is $\le \delta$, $(1-\delta)$ completeness follows. For soundness, we observe that for the verifier to accept when $x$ is not in the language, he must not have aborted and must also answer YES. Let us denote by $P_{bad}$ the projection on this subspace (\emph{Valid} on the first qubit, \emph{Accept} on the second). To bound the probability of this event, we observe that the correct state at any given step is a state which is authenticated by the concatenated Clifford \textsf{QAS}. We can thus use the decomposition of Eve's attacks into Paulis, namely \Eq{attackProfile}. Observing that a Pauli attack in our scheme is either declared valid or leads to an abort implies that the final density matrix can be written as \begin{equation}\rho_{final}=(\alpha_0\rho_0 +\alpha_{c}\rho_c)\otimes\ket{VAL}\bra{VAL} +\alpha_{1}\bar\rho_1\otimes\ket{ABR}\bra{ABR}, \end{equation} where $\rho_c$ is the correct state. To bound $\mbox{Tr}(P_{bad}\rho_{final})$ we observe that the first term is bounded by the security parameter of the \textsf{QAS}, namely $\epsilon$, the second term is bounded by the error caused by the quantum circuit, namely $\delta$, and the third term vanishes. \end{proof} The classical communication is linear in the number of gates. For $\epsilon =\frac 1 2$, we get $d=1$, and so the verifier uses a register of $4$ qubits. In fact $3$ qubits are enough, since each of the authenticated qubits can be decoded (or encoded and sent) on its own before a new authenticated qubit is handled. \subsection{Polynomial Authentication Based Protocol} We start by describing how the prover performs a set of universal gates on authenticated qubits, using classical communication with the verifier, and special states called Toffoli states.
This set of operations, namely Clifford group operations augmented with the Toffoli gate, forms a universal set of gates \cite{benor2006smq}. \noindent\textbf{Application of Quantum Gates} \label{des:secapp} We denote encoded gates (logical operators) with a tilde. For the full description of how to apply each of these logical gates see \pen{app:polynomialgates}. Briefly, for Pauli operators, the verifier merely updates his Pauli key. For the controlled-SUM and the Fourier transform, the prover applies the gates transversally as if the code were the standard polynomial code, and the verifier updates his sign and Pauli keys. For a measurement, the prover measures the register and sends the result to the verifier, who returns its interpretation, which he computes using his keys. The Toffoli gate is applied using the above, on the relevant authenticated qudits plus an authenticated Toffoli state \cite{benor2006smq}. \begin{protocol}\textbf{Polynomial based Interactive Proof for \mbox{\textsf{Q-CIRCUIT}}} \label{prot:PolynomialIP} Fix a security parameter $\epsilon$. Given is a quantum circuit on $n$ qubits consisting of gates from the above universal set, $U=U_T{\ldots}U_1$. We assume the circuit has error probability $\le \delta$. The verifier sets $d = \lceil{\log{\frac 1 \epsilon}}\rceil$ and uses $3$ registers of $m=2d+1$ qudits each, where each qudit is of dimensionality $q>m$. The verifier uses the concatenated polynomial \textsf{QAS}\ with security parameter $d$ to authenticate the $n$ input qudits and the necessary number of Toffoli states. This is done sequentially using $3m$ qudits at a time. Then, the prover and verifier perform the gates of the circuit as described above. Finally, if the final measurement does not yield an authenticated answer, the verifier \textbf{aborts}; otherwise, he accepts or rejects according to the measurement outcome. \end{protocol} \begin{thm}\Prot{prot:PolynomialIP} is a \textsf{QPIP}\ protocol with completeness $1-\delta$ and soundness $\delta +\epsilon$ for \mbox{\textsf{Q-CIRCUIT}}. \label{thm:PolynomialIP} \end{thm} This theorem implies a second proof for \Th{thm:qcircuit}. The size of the verifier's register is naively $3m$, but using the same idea as in the Clifford case, $m+2$ qudits suffice. With $\epsilon=1/2$, this gives a register of $5$ qutrits. \begin{proof} \textit{(Sketch. The full proof can be found in \pen{apen:polyIPproof}.)} The completeness is trivial, similarly to the Clifford case. To prove the soundness of the protocol we first prove the following lemma. \begin{lem}\label{lem:uniformkeys} At any stage of the protocol the verifier's keys, $k$ and $\{(x,z)_i\}_1^n$, are distributed uniformly and independently. \end{lem} This implies that the correct state is an encoded state according to the concatenated \textsf{QAS}. The rest of the argument follows closely that of the proof of \Th{thm:qcircuit}. \end{proof} \appendix \section{Polynomial Quantum Error Correction Codes}\label{app:poly}\begin{deff} Polynomial error correction code \cite{aharonov1997ftq}. Given $m,d,q$ and $\{\alpha_i\}_1^m$, where the $\alpha_i$ are distinct nonzero values from $F_q$, the encoding of $a\in F_q$ is $\ket{S_a}$: \begin{equation} \ket{S_a} \EqDef \frac{1}{\sqrt{q^d}}\sum_{f:deg(f)\le d , f(0)=a}\ket{ f(\alpha_1),{\ldots} , f(\alpha_m)} \end{equation} \end{deff} We use here $m=2d+1$, in which case the code subspace is its own dual. It is easy to see that this code can detect up to $d$ errors \cite{aharonov1997ftq}.
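At the classical level, the detection step for this code amounts to a degree test: a word $(w_1,\ldots,w_m)$ is consistent with the code only if its coordinates interpolate to a polynomial of degree at most $d$. The following minimal sketch (Python, over a prime field; it is a purely classical illustration with arbitrary parameters, not the quantum detection procedure) encodes a field element with a random polynomial of degree at most $d$ and checks that a corrupted coordinate is caught:
\begin{verbatim}
import random

q, d = 7, 2                      # prime field size and degree bound
m = 2 * d + 1                    # code length
alphas = list(range(1, m + 1))   # distinct nonzero evaluation points

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree <= len(points)-1 through
    the given (xi, yi) pairs, over F_q (inverses via Fermat's little theorem)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

def detect(word):
    """Accept iff word agrees with some polynomial of degree <= d."""
    base = list(zip(alphas[:d + 1], word[:d + 1]))
    return all(lagrange_eval(base, x) == w
               for x, w in zip(alphas[d + 1:], word[d + 1:]))

a = 5                                                 # value to encode, f(0)=a
f = [a] + [random.randrange(q) for _ in range(d)]     # random coefficients
word = [sum(c * pow(x, i, q) for i, c in enumerate(f)) % q for x in alphas]
assert detect(word)                                   # a codeword passes
word[0] = (word[0] + 1) % q
assert not detect(word)                               # a single error is caught
\end{verbatim}
Up to $d$ corrupted coordinates are caught in the same way, since two distinct polynomials of degree at most $d$ can agree on at most $d$ of the $m=2d+1$ evaluation points.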
It will be useful to explicitly state the logical gates of \textit{SUM}, Fourier ($F$) and Pauli operators ($X,Z$). We will see that it is possible to apply the logical operations of the Pauli operators or the controlled-sum by a simple transitive operation. We can easily verify that applying $X^{\otimes m}$ is the logical $\wt X$ operation: \begin{equation}\begin{split} \wt{X} \ket{S_a} =& X^{\otimes m} \frac{1}{\sqrt{q^d}}\sum_{f:def(f)\le d , f(0)=a}\ket{ f(\alpha_1),{\ldots} , f(\alpha_m)} \\ =& \frac{1}{\sqrt{q^d}}\sum_{f:def(f)\le d , f(0)=a}\ket{ f(\alpha_1)+1,{\ldots} , f(\alpha_m)+1} \end{split} \end{equation} setting $f'(\alpha) = f(\alpha)+1$ \begin{equation}\begin{split} {\ldots} =& \frac{1}{\sqrt{q^d}}\sum_{f':deg(f')\le d , f'(0)=a+1}\ket{ f'(\alpha_1),{\ldots} , f'(\alpha_m)}\\ =&\ket{S_{(a+1)}} \end{split}\end{equation} Similarly for logical \textit{SUM} , we consider the transitive application of controlled-sum, that is a \textit{SUM} operations applied between the $j$'th register of $\ket{S_a}$ and $\ket{S_b}$. \begin{equation}\begin{split} \wt{\textit{SUM}} \ket{S_a} \ket{S_b} =\ & (\textit{SUM})^{\otimes m}\frac{1}{{q^d}}\sum_{ f(0)=a}\ket{ f(\alpha_1),{\ldots} , f(\alpha_m)} \sum_{ h(0)=b}\ket{ h(\alpha_1),{\ldots} ,h(\alpha_m)}\\ =& \frac{1}{{q^d}}\sum_{f(0)=a,h(0)=b}\ket{ f(\alpha_1),{\ldots} , f(\alpha_m)}\ket{ h(\alpha_1)+f(\alpha_1),{\ldots} , h(\alpha_m)+f(\alpha_m)} \end{split}\end{equation} We set $g(\alpha)= f(\alpha)+h(\alpha)$ \begin{equation}\begin{split} {\ldots} =& \frac{1}{{q^d}}\sum_{f(0)=a,g(0)=a+b}\ket{ f(\alpha_1),{\ldots} , f(\alpha_m)} \ket{ g(\alpha_1),{\ldots} , g(\alpha_m)} \\ =& \ket{S_a}\ket{S_{a+b}} \end{split}\end{equation} Showing what is the logical Fourier transform on the polynomial code requires more work. We first recall the definition of the Fourier transform in $F_q$: \begin{eqnarray} F \ket{ a} &\EqDef& \frac 1 {\sqrt q} \sum_b \omega_q^{ab}\ket b \end{eqnarray} We consider an $r$-variant of the Fourier transform which we denote $F_r$ \begin{eqnarray} F_r \ket{a} &\EqDef& \frac 1 {\sqrt q} \sum_b \omega_q^{rab}\ket b \end{eqnarray} In addition we need the following claim: \begin{lem}\label{inter} For any $m$ distinct numbers $\{\alpha_i\}_1^m$ there exists $\{c_i\}_1^m$ such that \begin{eqnarray} \sum_{i=1}^m c_if(\alpha_i) = f(0) \end{eqnarray} For any polynomial of degree $\le m-1$. \end{lem} \begin{proof} A polynomial $p$ of degree $\le m-1$ is completely determined by it's values in the points $\alpha_i$. We write $p$ as in the form of the Lagrange interpolation polynomial: $f(x) = \sum_i \prod_{j\ne i} \frac{x-\alpha_j}{\alpha_i -\alpha_j}f(\alpha_j) $. Therefore, we set $c_i = \prod_{j\ne i}\frac {-\alpha_j}{\alpha_i-\alpha_j}$ and notice that it is independent of $p$, and the claim follows. \end{proof} We are now ready to define the logical Fourier transform. \begin{claim}\label{claim:fourier} The logical Fourier operator $\wt F$ obeys the following identity: \begin{eqnarray} \wt{F} \ket{S_a} \EqDef F_{c_1}\otimes F_{c_2}{\otimes{\ldots}\otimes} F_{c_m}\ket{S_a} =& q^{-m/2}\sum_b \omega_q^{ab} \ket{\wt{S_b}} \end{eqnarray} Where $\wt{S_b}$ is the encoding of $b$ in a polynomial code of degree $m-d$ on $m$ registers. 
\end{claim} \begin{proofof}{\Cl{claim:fourier}} We denote $\ket{\bar{f}} = \ket{f(\alpha_1),{\ldots} ,f(\alpha_m)}$ \begin{eqnarray} F_{c_1}\otimes F_{c_2}{\ldots} \otimes F_{c_m}\ket{S_a} &= & q^{-d/2} F_{c_1}\otimes F_{c_2}{{\otimes{\ldots}\otimes}} F_{c_m}\sum_{f:def(f)\le d,f(0)=a}\ket{\bar p} \\ & = & q^{-d/2} q^{-m/2}\sum_{f:def(f)\le d,f(0)=a} \sum_{b_1,{\ldots} ,b_m} \omega_q^{\sum_i c_if(\alpha_i)b_i}\ket{b_1,{\ldots} ,b_m} \end{eqnarray} We think of the $b_i$'s as defining a polynomial $g$ of degree $\le m-1$ that is $g(\alpha_i)=b_i$ and split the sum according to $g(0)$: \begin{eqnarray}\label{ignore} {\ldots} & = & q^{-(m+d)/2} \sum_{f:def(f)\le d,f(0)=a} \sum_b \sum_{g:deg(g)\le m-1,g(0)=b } \omega_q^{\sum_i c_if(\alpha_i)g(\alpha_i)}\ket{\bar g}\label{pre} \end{eqnarray} We temporally restrict our view to polynomials $g$ with degree at most $m-d-1$ and therefore the polynomial $fg$ has degree at most $m-1$. We use \Le{inter} on $fg$: \begin{eqnarray} \sum_{i=1}^m c_i (fg)(\alpha_i) &= fg(0) &=ab \end{eqnarray} Going back to \Eq{pre}: \begin{eqnarray} q^{-(m+d)/2} \sum_{p,g} \sum_{b\in F_q} \omega_q^{\sum_i c_i (fg)(\alpha_i)}\ket{\bar g} &=& q^{-(m+d)/2} \sum_{b\in F_q}\sum_{f,g} \omega_q^{ab}\ket{\bar g} \end{eqnarray} Where the summation is over all $f,g$ such that $f(0)=a$ and $g(0)=b$ while the degrees of $f$ and $g$ are at most $d$ and $m-d-1$ respectively. The sum does not depend on $f$ and there are exactly $q^d$ polynomials $f$ in the sum, therefore, we can write the expression as : \begin{equation}\begin{split} {\ldots} =&\quad q^{-(m+d)/2} \sum_{b\in F_q}q^d\sum_{g} \omega_q^{ab}\ket{\bar g} \\ \ =&\quad \frac{ 1}{ \sqrt{q}} \sum_{b\in F_q} \omega_q^{ab}\frac {1} { \sqrt{q^{m-d-1}}} \sum_{g:deg(g)\le m-d-1, g(0)=b}\ket{\bar g} \\ \ =&\quad \frac 1 {\sqrt{q}} \sum_{b\in F_q} \omega_q^{ab}\ket{\wt{S_b}} \end{split}\end{equation} Since the above expression has norm 1, if follows that the coefficients that we temporally ignored at \Eq{ignore} all vanish. \end{proofof} \begin{corol} If $m=2d+1$ then it follows from \Cl{claim:fourier} that the code is self dual. \end{corol} \begin{claim} The logical Pauli $Z$ operator $\wt{Z}$ is $Z^{c_1} {\otimes{\ldots}\otimes} Z^{c_m}$. \end{claim} The proof of this claim is omitted since it is extremely similar to the proof of \Cl{claim:fourier}. \section{Clifford Authentication Scheme} \subsection{Security Proof of Clifford \textsf{QAS}}\label{app:cliffordsecurity} \begin{proofof}{\Th{thm:CliffordAuth}} We denote the space of the message sent from Alice to Bob as $M$. Without loss of generality, we can assume that Eve adds to the message a system $E$ (of arbitrary dimension) and performs a unitary transformation $U\in \mathbbm{U}(M\otimes E)$ on the joint system. We note that there is a unique representation of $U=\sum_{P\in \mathbbm{P}_n}P\otimes U_P$ since the Pauli matrices form a basis for the $2^n\times 2^n$ matrix vector space. We first characterize the effect that Eve's attack has on the unencoded message: $\ket\psi\otimes{\ket 0}^{\otimes d}$. \begin{claim}\label{claim:EveAttack} Let $\rho=\ket\psi\otimes{\ket 0}^{\otimes d}$ be the state of Alice before the application of the Clifford operator. For any attack \mbox{$U=\sum_PP\otimes U_P$} by Eve, Bob's state after decoding is $\mathcal{M}_s(\rho)$, where $s=\mbox{Tr}{(U_\mathcal{I}^{\dagger}U_\mathcal{I})}$. \end{claim} We proceed with the proof of the theorem. 
From the above claim we know what Bob's state after Eve's intervention is and we would like to bound its projection on $P_1^{\ket\psi}$: \begin{eqnarray}\label{tr} \mbox{Tr}{\Big(P_1^{\ket\psi} \big(s\rho + \frac{1-s} {4^n -1}\sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}}Q\rho Q^\dagger\big)\Big)} &= &s\mbox{Tr}(P_1^{\ket\psi}\rho) + \frac{1-s}{4^n-1} \sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}}\mbox{Tr}{(P_1^{\ket\psi} Q\rho Q^\dagger)}\label{eq:ClifSec} \end{eqnarray} By definition of $P_1^{\ket\psi}$ we see that $\mbox{Tr}(P_1^{\ket\psi}\rho)=1$. On the other hand: $\mbox{Tr}{(P_1^{\ket\psi}Q\rho Q^\dagger)}=1$ when $Q$ does not flips any auxiliary qubit, and vanishes otherwise. The Pauli operators that do not flip auxiliary qubits can be written as $Q'\otimes Q''$ where \mbox{$Q'\in \mathbbm{P}_m$} and \mbox{$Q'' \in \{\mathcal{I},Z\}^{\otimes d}$}. It follows that the number of such operator is exactly $4^m2^d$. Omitting the identity $\mathcal{I}_n$ we are left with $4^m2^d-1$ operators which are undetected by our scheme. We return to \Eq{eq:ClifSec}: \begin{eqnarray} {\ldots}& \ge &s +\left(1-s\right)(1-\frac{4^m2^d-1}{4^n-1})\\ & \ge &s +\left(1-s\right)(1-\frac{4^m2^d}{4^{m+d}})\\ & = &1 -\frac{1-s} {2^d} \label{goodProjection} \end{eqnarray} The security follows from the fact that $s\ge 0$, and hence the projection is bounded by $1 -\frac{1} {2^d}$. \end{proofof} We remark that the above proof in fact implies a stronger theorem: interventions that are very close to $\mathcal{I}$ are even more likely to keep the state in the space defined by $P_1^{\ket\psi}$. What remains to prove is \Cl{claim:EveAttack} which is stated above. To this end we need three simple lemmata: \begin{lem} \label{mix} Fix a non-identity Pauli operator. Applying a random Clifford operator (by conjugation) maps it to a Pauli operator chosen uniformly over all non-identity Pauli operators. More formally, for every $P,Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}$ it holds that : $\left|\left\{C\in\mathfrak{C}_n | C^\dagger PC =Q\right\}\right| = \frac {\left|\mathfrak{C}_n\right|} {\left|\mathbbm{P}_n \right| -1}= \frac {\left|\mathfrak{C}_n\right|} {4^n -1}$. \end{lem} \begin{lem} \label{clifTw} Let $P\ne P'$ be Pauli operators. For any $\rho$ it holds that: \mbox{$\sum_{C\in\mathfrak{C}_n}C^\dagger P C\rho C^\dagger P'C=0$}. \end{lem} \begin{lem} \label{Decompose} Let $U=\sum_{P\in\mathbbm{P}_n} P\otimes U_P$ be a unitary operator. For any density matrix $\rho$: \begin{equation}\sum_{P\in\mathbbm{P}_n} \mbox{Tr}(U_P\rho U_P^{\dagger}) =1 \end{equation} \end{lem} Assuming these lemmata we are ready to prove the claim: \begin{proofof}{\Cl{claim:EveAttack}} Let $ U = \sum_{P\in \mathbbm{P}_n} P\otimes U_P$ be the operator applied by Eve. We denote $\rho=\ket\psi \bra\psi\otimes\ket {\bar 0}\bra{\bar 0}$ the state of Alice prior to encoding. Let us now write the state of Bob's system after decoding and before measuring the $d$ auxiliary qubits. 
For clarity of reading we omit the normalization factor $|\mathfrak{C}_n|$ and denote the Clifford operation applied by Alice (Bob) $C$ ($C^\dagger$): \begin{eqnarray} \rho_{Bob}&=& \mbox{Tr}_E{\Big(\sum_{C\in\mathfrak{C}_n}(C\otimes \mathcal{I}_E)^\dagger U\left( (C\otimes \mathcal{I}_E)\rho (C\otimes \mathcal{I}_E)^\dagger \otimes \rho_E\right)U^\dagger(C\otimes \mathcal{I}_E)\Big)} \\ &=& \mbox{Tr}_E{\Big(\sum_{P,P'\in\mathbbm{P}_n}\sum_{C\in\mathfrak{C}_n }(C\otimes \mathcal{I}_E)^\dagger P\otimes U_P\left( (C\otimes \mathcal{I}_E)\rho (C\otimes \mathcal{I}_E)^\dagger \otimes \rho_{E}\right) P'\otimes U_{P'}^\dagger(C\otimes \mathcal{I}_E)\Big)} \end{eqnarray} Regrouping elements operating on $M$ and on $E$ we have: \begin{eqnarray}\begin{aligned} {\ldots} =&\ \mbox{Tr}_E{\Big(\sum_{P,P'\in\mathbbm{P}_n}\sum_{C\in\mathfrak{C}_n}\left(C^\dagger P C\rho C^\dagger P'C\right)\otimes U_P\rho_{E} U_{P'}^\dagger \Big)}\\ =& \sum_{P,P'\in\mathbbm{P}_n} \sum_{C\in\mathfrak{C}_n}\left(C^\dagger P C\rho C^\dagger P'C\right) \cdot \mbox{Tr}{\big( U_P\rho_{E} U_{P'}^\dagger \big)} \end{aligned}\end{eqnarray} We use \Le{clifTw} and are left only with $P=P'$ \begin{eqnarray}\label{prev} {\ldots} = \sum_{P\in\mathbbm{P}_n}\sum_{C\in\mathfrak{C}_n}\left(C^\dagger P C\rho C^\dagger PC\right)\cdot \mbox{Tr}{\big( U_P\rho_{E} U_{P'}^\dagger \big)} \end{eqnarray} We first consider the case were $P=\mathcal{I}$, then: \begin{equation} \sum_{C\in\mathfrak{C}_n}C^\dagger P C\rho C^\dagger PC =|\mathfrak{C}_n|\rho \end{equation} On the other hand when, $P\ne\mathcal{I}$ by \Le{mix}: \begin{equation} \sum_{C\in\mathfrak{C}_n} C^\dagger P C\rho C^\dagger PC =\sum_{Q\in\mathbbm{P}\setminus\{\mathcal{I}\}}Q\rho Q^\dagger \frac {|\mathfrak{C}_n|}{|\mathbbm{P}_n|-1} \end{equation} Plugging the above two equations in \Eq{prev}: \begin{eqnarray}\begin{aligned} {\ldots} =&\; |\mathfrak{C}_n|\rho \mbox{Tr}{\big( U_\mathcal{I}\rho_{E}U_{\mathcal{I}}^\dagger\big)}&+& \sum_{P\in\mathbbm{P}_n\setminus\{\mathcal{I}\}} \sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}}\big(Q\rho Q^\dagger \big)\frac {|\mathfrak{C}_n|}{|\mathbbm{P}_n|-1} \mbox{Tr}{\big( U_P\rho_{E} U_{P'}^\dagger\big)}\\ =&\; |\mathfrak{C}_n|\rho \mbox{Tr}{\big( U_\mathcal{I}\rho_{E}U_{\mathcal{I}}^\dagger \big)}&+& \frac {|\mathfrak{C}_n|\sum_{P\ne\mathcal{I}} \mbox{Tr}{\big(U_P\rho_{E}U_{P}^\dagger \big)}}{|\mathbbm{P}_n|-1} \sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\} }\big(Q\rho Q^\dagger \big) \end{aligned}\end{eqnarray} We use \Le{Decompose} and so Bob's state after renormalization can be written as: \begin{equation}\label{attackProfile} s\rho + \frac{(1-s)}{4^n-1}\sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}}\left(Q\rho Q^\dagger \right) \end{equation} For $s=\mbox{Tr}({ U_{\mathcal{I}}\rho U_{\mathcal{I}}^\dagger})$, which concludes the proof. \end{proofof} Finally, we prove the lemmata stated above: \begin{proofof}{\Le{mix}} We first claim that or every $Q,P\in\mathbbm{P}_n\setminus \mathcal{I}$ there exists $D\in\mathfrak{C}_n$ such that $D^\dagger P D=Q$. We will prove this claim by induction. Specifically, we show that starting form any non identity Pauli operator one can, using conjunction by Clifford group operator reach the Pauli operator $X\otimes \mathcal{I}^{\otimes n-1}$. 
We first notice that the swap operation is in $\mathfrak{C}_2$ since it holds that: \begin{eqnarray} SWAP_{k,k+1} &= &CNOT_{k\rightarrow (k+1)}CNOT_{(k+1)\rightarrow k}CNOT_{k\rightarrow(k+1)} \end{eqnarray} Furthermore, we recall that $K^\dagger(XZ)K\propto X$ and $H^\dagger ZH=X$. Therefore, any Pauli $P=P_1{\otimes{\ldots}\otimes} P_n$ can be transformed using $SWAP,H$ and $K$ to the form: $X^{\otimes k}\otimes \mathcal{I}^{\otimes n-k}$ (up to a phase). To conclude we use: \begin{eqnarray} CNOT_{1\rightarrow2}^\dagger (X_1\otimes X_2)CNOT_{1\rightarrow2}& =& X\otimes \mathcal{I} \end{eqnarray} which reduces the number of $X$ operations at hand. Applying this sufficiently many times results in reaching the desired form. Since this holds for any non-identity Pauli operators: $P,Q$ we know there are $C,D\in \mathfrak{C}_n$ such that: \begin{eqnarray} X\otimes\mathcal{I}^{\otimes n-1}&=&C^\dagger PC=D^\dagger QD \\ &\Rightarrow& DC^\dagger PCD^\dagger= Q \end{eqnarray} therefore $CD^\dagger$ is the operator we looked for. We return to the proof of the Lemma, let us first fix some $Q\ne\mathcal{I}$, it will suffice to prove that for any $P,P'$ the set $A_{P,P'}\EqDef\left\{C\in\mathfrak{C}_n| C^\dagger PC=P'\right\}$ is of a fixed size. We set $D\in\mathfrak{C}_n$ such that $D^\dagger PD=Q$ then it holds that: $CD\in A_{Q,P'} \iff C\in A_{P,P'}$ therefore $|A_{P,P'}| = |A_{Q,P'}|$, and $|A_{Q',P'}| = |A_{Q,P}|$ follows trivially. We use the fact that the sets $\{A_{P,Q}\,:\forall P \}$ is a partition of $\mathfrak{C}_n$, and that all $A_{P,Q}$ have the same size: \begin{eqnarray} \left|\mathfrak{C}_n\right| =\sum_{P'\in\mathbbm{P}_n\setminus\mathcal{I}}\left|A_{P',Q}\right| =& (4^n -1)\left|A_{P,Q}\right| \end{eqnarray} Which concludes the proof. \end{proofof} \begin{proofof}{\Le{clifTw}} Since $P\ne P'$ we know there exists an index $i$ such that $P_i\ne P_i'$ that is: \begin{eqnarray} P_i = X^aZ^b &\;& P_i' = X^{a'}Z^{b'} \end{eqnarray} where $(a,b)\ne (a',b')$. let us define $Q_i=X_i^{1-b-b'}Z_i^{1-a-a'}\otimes \mathcal{I}$. We notice that $(Q_i\otimes\mathcal{I}) C\in \mathfrak{C}_n$ and furthermore any operator in $\mathfrak{C}_n$ can be written in this form. We write $Q_iC$ instead of $(Q_i\otimes\mathcal{I}) C$ for simplicity. \begin{eqnarray} \sum_{C\in\mathfrak{C}_n} C^\dagger P C\rho C^\dagger P'C & = & \sum_{Q_i C\in\mathfrak{C}_n} \left(Q_i C\right) ^\dagger P \left(Q_i C\right) \rho \left(Q_i C\right) ^\dagger P'\left(Q_i C\right)\\ & = & \sum_{Q_i C\in\mathfrak{C}_n} C^\dagger Q_i^\dagger P Q_i C \rho C^\dagger Q_i^\dagger P'Q_i C \end{eqnarray} It is easy to check that $Q$ commutes with either $P'$ or $P$ and anti-commutes with the other. Therefore: \begin{eqnarray} {\ldots} & = & (-1) \sum_{Q_i C\in\mathfrak{C}_n} C^\dagger Q_i^\dagger Q_i PC \rho C^\dagger Q_i^\dagger Q_i P'C \\ & = & (-1) \sum_{Q_i C\in\mathfrak{C}_n} C^\dagger PC \rho C^\dagger P'C\\ & = & (-1) \sum_{ C\in\mathfrak{C}_n} C^\dagger PC \rho C^\dagger P'C \end{eqnarray} This concludes the proof since the sum must vanish. \end{proofof} \begin{proofof}{\Le{Decompose}} We analyze the action of $U$ on the density matrix $\mathcal{I}\otimes\tau$. We first notice that since $U$ is a trace preserving operator, that is: $\mbox{Tr}(U(\mathcal{I}\otimes\tau) U^\dagger\big)=\mbox{Tr}(\mathcal{I}\otimes\tau\big)=1$. 
On the other hand it holds that: \begin{eqnarray} \mbox{Tr}\big(U(\mathcal{I} \otimes \tau) U^{\dagger}\big) &=& \sum_{P,P'\in \mathbbm{P}_n}\mbox{Tr}\big((P\otimes U_P)(\mathcal{I}\otimes\tau) (P'\otimes U_{P'})^{\dagger}\big)\\ &=& \sum_{P,P'\in \mathbbm{P}_n}\mbox{Tr}\big(P\mathcal{I} P'^{\dagger} \otimes U_P\tau U_{P'}^{\dagger}\big)\\ &=& \sum_{P,P'\in \mathbbm{P}_n}\mbox{Tr}\big(PP'^{\dagger}\big) \mbox{Tr}\big( U_P\tau U_{P'}^{\dagger}\big) \end{eqnarray} If $P\ne P'$ then $\mbox{Tr}\big(PP'^{\dagger}\big)=0$, and therefore: \begin{eqnarray}\begin{aligned} \ldots &= \sum_{P\in \mathbbm{P}_n}\mbox{Tr}\big(\mathcal{I}\big) \mbox{Tr}\big( U_P\tau U_{P}^{\dagger}\big)\\ &= \sum_{P\in \mathbbm{P}_n} \mbox{Tr}\big( U_P\tau U_{P}^{\dagger}\big) \end{aligned}\end{eqnarray} It follows that $1=\sum_{P\in \mathbbm{P}_n} \mbox{Tr}( U_P\tau U_{P}^{\dagger})$, which concludes the proof. \end{proofof} \subsection{Concatenated Clifford \textsf{QAS}} \label{app:concatclifford} \begin{proofof}{\Th{thm:CliffordConcat}} From \Cl{claim:EveAttack}, we know that any attack by Eve on an authenticated register is equivalent to an effect of the mixing operator $\mathcal{M}_s$, on the unencoded message space. We notice that any attack on the concatenated protocol is in fact equivalent to separate attacks on the different registers. This fact follows from the fact any individual attack can be broken down to attacks of the form $\mathcal{M}_s$, specifically for $r=2$: \begin{equation} \begin{split} \rho_{Bob}= &\frac 1 {|\mathfrak{C}_n|^2}\sum_{C_1,C_2 \in \mathfrak{C}_n}(C_1\otimes C_2)^\dagger E\left( (C_1\otimes C_2)(\rho_1\otimes\rho_2)(C_1\otimes C_2)^\dagger\right)(C_1\otimes C_2) \\ =& \frac 1 {|\mathfrak{C}_n|^2}\sum_{P,Q\in \mathbbm{P}_n} \sum_{C_1,C_2 \in \mathfrak{C}_n}\alpha_{P,Q}(P\otimes Q) (C_1\otimes C_2)(\rho_1\otimes\rho_2)(C_1\otimes C_2)^\dagger)(P\otimes Q)^\dagger \\ =&\sum_{P,Q\in \mathbbm{P}_n}\alpha_{P,Q} \big(\frac 1 {|\mathfrak{C}_n|}\sum_{C_1\in \mathfrak{C}_n}(C_1^\dagger PC_1\rho_1C_1^\dagger P^\dagger C_1)\big)\otimes \big(\frac 1 {|\mathfrak{C}_n|}\sum_{C_2\in \mathfrak{C}_n}(C_2^\dagger QC_2\rho_2C_2^\dagger Q^\dagger C_2)\big) \end{split} \end{equation} We denote $\mathcal{M}_s(\rho)=\wt\rho$, and use \Le{mix}: it holds that: \begin{align} \label{eq:concat} \ldots=\alpha_{\mathcal{I},\mathcal{I}}(\rho_1\otimes \rho_2) + (\sum_{P,Q\ne \mathcal{I} \in \mathbbm{P}_n}\alpha_{P,Q})( \wt\rho_1\otimes \wt\rho_2) + (\sum_{P\ne \mathcal{I} \in \mathbbm{P}_n}\alpha_{P,\mathcal{I}})( \wt\rho_1\otimes \rho_2) + (\sum_{Q\ne \mathcal{I} \in \mathbbm{P}_n}\alpha_{\mathcal{I},Q})( \rho_1 \otimes \wt\rho_2) \end{align} Bob does not abort, if both individual Clifford \textsf{QAS}\ are valid. From the security of the individual \textsf{QAS}\ we know that $\mbox{Tr}{((P_0^{\rho_i})B(\wt\rho_i))}<2^{-d}$ where $B$ is Bob cheat detecting procedure. We also notice that $P_0^{\rho_1\otimes\rho_2}=P_0^{\rho_1}\otimes P_0^{\rho_2}$. 
We first rewrite \Eq{eq:concat} more clearly: \begin{eqnarray} {\ldots} =s(\rho_1\otimes\rho_2) +h(\wt\rho_1\otimes\rho_2)+r(\rho_1\otimes\wt\rho_2)+t(\wt\rho_1\otimes\wt\rho_2) \end{eqnarray} and using the above observations we have: \begin{equation} \begin{split} \mbox{Tr}\left(P_0^{\rho_1\otimes\rho_2} B\left(s(\rho_1\otimes\rho_2) +h(\wt\rho_1\otimes\rho_2)+r(\rho_1\otimes\wt\rho_2)+t(\wt\rho_1\otimes\wt\rho_2)\right)\right)\ \le\ & s\cdot0 +h\cdot2^{-d}+r\cdot2^{-d}+t\cdot2^{-d}\cdot2^{-d}\\ \le&(1-s)2^{-d} \end{split} \end{equation} where the last inequality holds since $s+h+r+t=1$. \noindent The claim for $r>2$ registers follows the exact same lines and is therefore omitted. \end{proofof} \section{Polynomial Authentication Scheme} \subsection{Security Proof of Polynomial \textsf{QAS}} \label{app:securitypolynomial} \subsubsection{Security Against Pauli Attacks} \begin{lem} \label{lem:PauliPolyAuthSec} The polynomial \textsf{QAS}\ is secure against (generalized) Pauli attacks, that is, against adversaries who apply a Pauli operator. In this case the projection of Bob's state on the subspace defined by $P_1^{\ket{\psi}}$ is at least $1-2^{-d}$. \end{lem} \begin{proof} Let us consider the effect of a Pauli operator $Q$ on the signed polynomial code $\mathcal{C}_k$. We first show that with probability at least $1-2^{-d}$ over the sign key $k$, the effect of $Q$ is detected by the error detection procedure. Let $Q_x\ne \mathcal{I}$ be a Pauli operator $Q_x=X^{x_1}{\otimes{\ldots}\otimes} X^{x_m}$ where $x\in F_q^m$. The effect of $Q_x$ on the code is the addition of $x_i$ to the $i$'th coordinate. This addition passes the error detection step only if it coincides with the values of a \textbf{signed} polynomial of degree at most $d$. We consider two cases depending on the weight of $x$: \begin{itemize} \item If $|x|\le d$: let us denote by $g$ the polynomial that satisfies $\forall_{i}\ k_ig(\alpha_i) = x_i$; since $Q_x\ne \mathcal{I}$ we know that $g\ne 0$. Then $g$ has at least $m-d$ zeros. Since $g$ is nonzero, it must have degree at least $m-d=d+1$. Such an attack will therefore be detected with certainty by the error detection procedure. \item Otherwise, assume without loss of generality that $x_i \ne 0$ for $i\le|x|$. There is exactly one polynomial $f$ of degree at most $d$ such that $\forall_{i\ge d+1}\ k_if(\alpha_i)=x_i$. For the attack of Eve to be undetected, $x$ must agree with $f$ on the remaining coordinates as well: \begin{eqnarray} \Pr_k(\forall_{i\le d}\ x_i =k_if(\alpha_i))=\prod_{i=1}^d\Pr_k(k_i = x_i^{-1}f(\alpha_i)) \end{eqnarray} The equality holds since the $k_i$ are independent, $k_i=k_i^{-1}$, and $x_i\ne0$ for $i\le d$. Since for any fixed value $c$ we have $k_i=c$ with probability at most one half, we conclude that the probability that Eve's attack is undetected is at most $2^{-d}$. \end{itemize} Now that we have proved the claim for operators of the form $Q_x$, we handle the general case. Pauli $Z$ operators are mapped in the dual code to $X$ operators. Since the signed polynomial code is self-dual, $Q_z$ attacks will be caught with probability $1-2^{-d}$ as well. To conclude the proof, we notice that the detection of $Q_x$ attacks does not depend on the existence of $Q_z$ attacks; therefore, a non-identity operator $Q_{x,z}=P_zP_x$ will be detected with the correct probability, since either $x$ or $z$ must be nontrivial. What remains is to notice that the Pauli randomization $P_{x,z}$ simply shifts any attack $Q$ on the authenticated message to a different Pauli. That is, the effect on the signed polynomial code is $P_{x,z}^\dagger QP_{x,z}$.
We conclude that any non-identity Pauli operator acting on the polynomial \textsf{QAS}\ is detected with probability at least $1-2^{-d}$, as claimed. \end{proof} \subsubsection{Security Against General Attacks} We start with a generalization of \Le{clifTw} for generalized Pauli operators. \begin{lem} \label{pauliTw} Let $P \ne P'$ be generalized Pauli operators. Then: $\sum_{Q\in \mathbbm{P}_m} Q^\dagger P Q \rho Q^\dagger P'^\dagger Q = 0$ \end{lem} The proof follows the same lines as \Le{clifTw}: \begin{proofof}{\Le{pauliTw}} Let $P\ne P'$ be generalized Pauli operators $P=X^aZ^b$ and $P'=X^{a'}Z^{b'}$. \begin{eqnarray} \sum_{Q\in \mathbbm{P}_m} Q^\dagger P Q \rho Q^\dagger P'^\dagger Q &=& \sum_{d,c=0}^{q-1} (X^cZ^d)^\dagger X^aZ^b (X^cZ^d) \rho (X^cZ^d)^\dagger (X^{a'}Z^{b'})^\dagger (X^cZ^d) \end{eqnarray} We use the fact that $Z^dX^c=\omega_q^{dc} X^cZ^d$ and some algebra: \begin{eqnarray} {\ldots} &=& \sum_{d,c=0}^{q-1} \omega_q^{d(a-a')+c(b-b')} X^aZ^b \rho Z^{-b'}X^{-a'}\\ &=& X^aZ^b \rho Z^{-b'}X^{-a'} \sum_{c=0}^{q-1} \omega_q^{c(b-b')} \sum_{d=0}^{q-1} \omega_q^{d(a-a')} \end{eqnarray} To conclude the proof we recall that $a\ne a'$ or $b\ne b'$, hence one of the above sums vanishes. \end{proofof} In addition we need one more simple lemma: \begin{lem} \label{pauliID} For any generalized Pauli operator $P$ \begin{eqnarray} \frac 1 {\left|\mathbbm{P}_m\right|}\sum_{Q\in \mathbbm{P}_m} Q^\dagger P Q \rho Q^\dagger P^\dagger Q = P\rho P^\dagger \end{eqnarray} \end{lem} \begin{proofof}{\Le{pauliID}} From the observation about generalized Pauli operators in \Sec{Back} we know that for any two generalized Pauli operators $P,Q$ we have $PQ=\alpha QP$, where $\alpha$ is a phase depending on $P$ and $Q$. \begin{eqnarray}\begin{aligned} \frac 1 {\left|\mathbbm{P}_m\right|}\sum_{Q\in \mathbbm{P}_m} Q^\dagger P Q \rho Q^\dagger P^\dagger Q &= \frac 1 {\left|\mathbbm{P}_m\right|}\sum_{Q\in \mathbbm{P}_m} Q^\dagger(\alpha Q P) \rho (\alpha^* P^\dagger Q^\dagger) Q \\&= \frac 1 {\left|\mathbbm{P}_m\right|}\sum_{Q\in \mathbbm{P}_m} \alpha P\rho \alpha^* P^\dagger \\&= P\rho P^\dagger \end{aligned}\end{eqnarray} \end{proofof} \begin{proofof} {\Th{thm:PolynomialAuth}} The proof will follow the same lines as \Th{thm:CliffordAuth}. For clarity, we omit the normalization factor $|\mathbbm{P}_m|$. We start by decomposing any attack \hbox{$V\in \mathbbm{U}(M\otimes E)$} made by Eve as \mbox{$V=\sum_{P\in \mathbbm{P}_m}P\otimes U_P$}.
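As a sanity check (not part of the proof), the following numerical sketch verifies, for a single message qudit and a toy environment of our choosing, that any joint unitary indeed admits such a decomposition, with $U_P=\frac 1 q \mbox{Tr}_M\big((P^\dagger\otimes \mathcal{I}_E)V\big)$ in this small case; all names, dimensions and the random seed are ours.
\begin{verbatim}
import numpy as np

q, d_env = 3, 2                                   # toy dimensions (ours)
omega = np.exp(2j * np.pi / q)
X = np.roll(np.eye(q), 1, axis=0)                 # |j> -> |j+1 mod q>
Z = np.diag([omega ** j for j in range(q)])
paulis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
          for a in range(q) for b in range(q)]

rng = np.random.default_rng(0)                    # a random "attack" unitary
V, _ = np.linalg.qr(rng.normal(size=(q * d_env, q * d_env))
                    + 1j * rng.normal(size=(q * d_env, q * d_env)))

def trace_out_message(W):
    # partial trace over the message factor of a (q*d_env)-dim operator
    W = W.reshape(q, d_env, q, d_env)
    return np.trace(W, axis1=0, axis2=2)

U = [trace_out_message(np.kron(P.conj().T, np.eye(d_env)) @ V) / q
     for P in paulis]
recon = sum(np.kron(P, UP) for P, UP in zip(paulis, U))
assert np.allclose(recon, V)                      # V = sum_P  P (x) U_P
\end{verbatim}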
Bob's state prior to applying the error detection procedure is: \begin{eqnarray} \rho_{Bob} &=& \mbox{Tr}_E{\Big(\sum_{Q\in\mathbbm{P}_m}(Q\otimes \mathcal{I}_E)^\dagger V\left( (Q\otimes \mathcal{I}_E)(\rho \otimes\rho_E)(Q\otimes \mathcal{I}_E)^\dagger \right)V^\dagger(Q\otimes \mathcal{I}_E)\Big)} \\ &=& \mbox{Tr}_E{\Big(\sum_{P,P'\in\mathbbm{P}_m}\sum_{Q\in\mathbbm{P}_m}(Q\otimes \mathcal{I}_E)^\dagger (P\otimes U_P)\left( (Q\otimes \mathcal{I}_E)(\rho\otimes\rho_E) (Q\otimes \mathcal{I}_E)^\dagger \right) (P'\otimes U_{P'})^\dagger(Q\otimes \mathcal{I}_E)\Big)} \end{eqnarray} Regrouping elements operating on $M$ and on $E$ we have: \begin{eqnarray}\begin{aligned} {\ldots} =& \mbox{Tr}_E{\Big(\sum_{P,P'\in\mathbbm{P}_m}\sum_{Q\in\mathbbm{P}_m}\left(Q^\dagger P Q\rho Q^\dagger P'^\dagger Q\right)\otimes U_P\rho_{E} U_{P'}^\dagger \Big)}\\ =& \sum_{P,P'\in\mathbbm{P}_m} \sum_{Q\in\mathbbm{P}_m}\mbox{Tr}{\left(U_P\rho_{E}U_{P'}^\dagger \right)} \cdot \left(Q^\dagger P Q\rho Q^\dagger P'^\dagger Q\right) \end{aligned}\end{eqnarray} We use \Le{pauliTw} and are left only with $P=P'$: \begin{eqnarray}\label{prev2} {\ldots} = \sum_{P\in\mathbbm{P}_m}\sum_{Q\in\mathbbm{P}_m} \mbox{Tr}{\left( U_P\rho_{E}U_P^\dagger \right)} \cdot\left(Q^\dagger P Q\rho Q^\dagger P^\dagger Q\right) \end{eqnarray} Now we use \Le{pauliID}: \begin{eqnarray} {\ldots} = \sum_{P\in\mathbbm{P}_m} \mbox{Tr}{\left( U_P^\dagger U_P\rho_{E} \right)}\cdot|\mathbbm{P}_m|\, P \rho P^\dagger \end{eqnarray} We set $\alpha_P = \mbox{Tr}{\big( U_P^\dagger U_P\rho_{E} \big)} $ and we rewrite Bob's state after normalization: \begin{eqnarray} \alpha_{\mathcal{I}}\cdot \rho + \sum_{P\in\mathbbm{P}_m\setminus\{ \mathcal{I}\}} \alpha_P\cdot P \rho P^\dagger \end{eqnarray} Recall that we are interested in the projection of Bob's state on the subspace spanned by the operator $P_1^{\ket\psi}$. \begin{eqnarray} \mbox{Tr}\Big( P_1^{\ket\psi} \big(\alpha_{\mathcal{I}}\cdot \rho + \sum_{P\in\mathbbm{P}_m\setminus \{\mathcal{I}\}} \alpha_P\cdot P\rho P^\dagger \big) \Big) &=& \alpha_\mathcal{I} +\sum_{P\in\mathbbm{P}_m\setminus \{\mathcal{I}\}}\alpha_P\mbox{Tr}\Big(P_1^{\ket\psi} P\rho P^\dagger\Big) \end{eqnarray} We use the bound from \Le{lem:PauliPolyAuthSec}: \begin{eqnarray}\label{eq:FinalCalc} {\ldots} &\ge& \alpha_\mathcal{I} +\sum_{P\in\mathbbm{P}_m\setminus \{\mathcal{I}\}}\alpha_P \left(1-2^{-d}\right) \\ &=& (1-\frac {1-\alpha_\mathcal{I}} {2^d}) \end{eqnarray} This concludes the proof. Similarly to the random Clifford authentication scheme, the closer Eve's intervention is to the identity (that is, the less Eve does), the closer the projection on the good subspace is to $1$. \end{proofof} \subsection{Concatenated Polynomial \textsf{QAS}\ }\label{app:concatpolynomial} When authenticating multiple registers, it may seem at first glance that Eve has the advantage of being able to tamper with the state by applying some transformation on the entire space. In the concatenated Clifford authentication protocol, the intervention of Eve is broken down into individual attacks on each register by the fact that random Clifford operators are applied to each register independently. The main idea for the concatenated polynomial authentication is to use an independent Pauli key $(x,z)$ for each register, while keeping the sign key $k$ the same for all registers. This suffices to ``break up'' Eve's attack into a sequence of attacks on each register separately.
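To make the key structure concrete, the following sketch (our own illustration; the function name and interface are not taken from the protocol description below) generates the keys used by the concatenated scheme: a single sign key $k\in\{\pm 1\}^m$ shared by all $r$ registers, and an independent Pauli key $(x_j,z_j)\in F_q^m\times F_q^m$ for each register.
\begin{verbatim}
import random

def keygen_concatenated(r, d, q):
    # One shared sign key, r independent Pauli keys (illustrative sketch).
    m = 2 * d + 1
    k = [random.choice((+1, -1)) for _ in range(m)]          # sign key, shared
    pauli_keys = [([random.randrange(q) for _ in range(m)],  # x_j
                   [random.randrange(q) for _ in range(m)])  # z_j
                  for _ in range(r)]
    return k, pauli_keys
\end{verbatim}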
\begin{protocol}\textbf{Concatenated polynomial Authentication protocol}:\\ Alice wishes to send a state $\ket\psi \in (\mathcal{C}^q)^{\otimes r}$, that is, $r$ $q$-dimensional systems. For a security parameter $d$, set $m=2d+1$. Alice randomly selects a single sign key $k\in \{\pm 1\}^m$; furthermore, she selects $r$ independent Pauli keys $(x_j,z_j)$. To encode $\ket\psi$ Alice encodes each $q$-dimensional system using the signed polynomial code specified by $k$. Additionally, Alice shifts the $j$'th encoded message by $P_{(x_j,z_j)}$. Bob decodes each message separately; if all messages are correctly authenticated, Bob declares the concatenated message as valid, otherwise Bob aborts. \label{PolynomialConcatSec} \end{protocol} We now prove \Th{thm:concatpolynomial}. \begin{proofof}{\Th{thm:concatpolynomial}} We notice that all the reasoning in \Th{thm:PolynomialAuth} up to \Eq{eq:FinalCalc} holds in this case as well. So we have that the projection on the good subspace $P_1^{\ket\psi}$ is equal to: \begin{eqnarray} \label{reFinalCalc} \alpha_\mathcal{I} +\sum_{P\in\mathbbm{P}_{r\cdot m}\setminus \{\mathcal{I}\}}\alpha_P\mbox{Tr}\left(P_1^{\ket\psi} P\rho P^\dagger\right) \end{eqnarray} We start by writing $\mbox{Tr}(P_1^{\ket\psi} P\rho P^\dagger) = 1-\mbox{Tr}(P_0^{\ket\psi} P\rho P^\dagger)$. We recall that $P$ here is a Pauli operator from the group $\mathbbm{P}_{m\cdot r}$, so we write $P=P_{\left(1\right)}{\otimes{\ldots}\otimes} P_{(r)}$. \begin{lem} The probability for Bob to be fooled by the application of $P\ne\mathcal{I}$ is at most $2^{-d}$. \end{lem} \begin{proof} For $P\rho P^\dagger$ to be in $P_0^{\ket\psi}$ it must be the case that for all $j$ such that $P_{(j)}\ne \mathcal{I}$, Eve escapes detection (Bob does not abort although the register is ``corrupted''). We note that Bob declares the remaining registers (where $P_{(j)}= \mathcal{I}$) as valid with certainty. We assume without loss of generality that $P_{(1)}\ne \mathcal{I}$ and write the probability that Bob is fooled: \begin{eqnarray} \Pr{(\textit{Bob is fooled by } P)} &=&\Pr{(\forall_{j: P_{(j)}\ne \mathcal{I}}\textit{Bob is fooled by } P_{(j)})}\\ &\le& \Pr{ (\textit{Bob is fooled by } P_{(1)}) }\\ &\le& 2^{-d} \end{eqnarray} where the last inequality holds by \Le{lem:PauliPolyAuthSec}. \end{proof} Plugging this result into \Eq{reFinalCalc} we have: \begin{eqnarray} {\ldots}&=& \alpha_\mathcal{I} +\sum_{P\in\mathbbm{P}_{r\cdot m}\setminus \{\mathcal{I}\}}\alpha_P\left(1-\mbox{Tr}(P_0^{\ket\psi} P\rho P^\dagger)\right)\\ &\ge& \alpha_\mathcal{I} +\sum_{P\in\mathbbm{P}_{r\cdot m}\setminus \{\mathcal{I}\}}\alpha_P\left(1-2^{-d}\right)\\ &=& \left(1-\frac {1-\alpha_\mathcal{I}} {2^d}\right) \end{eqnarray} This concludes the proof. \end{proofof} \section{Polynomial Authentication Based \textsf{QPIP}} \subsection{Secure Application of Quantum Gates} \label{app:polynomialgates} We have seen in \Sec{sec:SignedPolynomial} how to perform operations on states encoded by a polynomial code. In this section we present a way for the prover to apply certain operations on a signed, shifted polynomial error correcting code. This can be done without compromising the security of the authentication scheme. The main idea is that transversal operations performed on the signed polynomial code have almost the desired effect on the state at hand. The verifier will only need to update \textbf{his} keys $(x,z)$ for the prover's action to have the desired effect on the state.
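As a concrete (if informal) summary of the verifier-side bookkeeping described in the items that follow, the following sketch records the key updates as plain modular arithmetic over $F_q$. The function names and the list-of-integers representation of the keys are ours, not part of the protocol's formal description; $r_i$ denotes the interpolation coefficients used by the logical $Z$.
\begin{verbatim}
# Verifier-side key bookkeeping (illustrative sketch).
# Keys are length-m lists over F_q; k is the +-1 sign key.

def logical_x_update(x, z, k, q):
    # logical X: prover does nothing, verifier sets x <- x - k (mod q)
    return [(xi - ki) % q for xi, ki in zip(x, k)], z

def logical_z_update(x, z, k, r, q):
    # logical Z: verifier sets z <- z - t, with t_i = r_i * k_i (mod q)
    return x, [(zi - ri * ki) % q for zi, ri, ki in zip(z, r, k)]

def sum_update(key_a, key_b, q):
    # transversal SUM from A to B:
    # (xA, zA), (xB, zB) -> (xA, zA - zB), (xB + xA, zB)
    (xa, za), (xb, zb) = key_a, key_b
    za_new = [(u - v) % q for u, v in zip(za, zb)]
    xb_new = [(u + v) % q for u, v in zip(xb, xa)]
    return (xa, za_new), (xb_new, zb)

def fourier_update(x, z, q):
    # transversal Fourier: (x, z) -> (-z, x) (mod q)
    return [(-zi) % q for zi in z], list(x)
\end{verbatim}
Each of these updates is a bijection on the key space, which is essentially why the uniform and independent key distribution of \Le{lem:uniformkeys} is preserved.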
We will first show the simple and elegant fact that if the verifier wants a (generalized) Pauli applied to the state, he does not need to ask the prover to do anything. The only thing the verifier must do is change his Pauli keys. Then, we show how to perform other operations such as \textit{SUM}, Fourier and Measurement. \begin{itemize} \item \textbf{Pauli $X$}: The logical $\wt{X}$ operator consists of an application of $X^{k_1}{\otimes{\ldots}\otimes} X^{k_m}$ where \textbf{k} is the sign key. We claim that the change $(x,z)\rightarrow (x - \mathbf k,z)$ will in fact change the interpretation the verifier assigns to the state in the desired way. \begin{equation}\begin{aligned} P_{x,z}\ket{S^k_a} &= P_{x-k,z} P_{x-k,z}^\dagger P_{x,z}\ket{S^k_a}\\ &= P_{x-k,z} X^{-(x-k)}Z^{-z} Z^zX^x\ket{S^k_a}\\ &=P_{x-k,z}(X^{k_1}{\otimes{\ldots}\otimes} X^{k_m})\ket{S^k_a} \\ &= P_{x-k,z}\wt{X} \ket{S^k_a} \end{aligned} \end{equation} \item \textbf{Pauli $Z$}: Similarly to the $X$ operator, all that is needed is a change of the Pauli key. We recall that $\wt{Z}=Z^{r_1k_1}{\otimes{\ldots}\otimes} Z^{r_mk_m}$. We define the vector $\mathbf{t}$ to be $t_i=r_ik_i$. By the same argument as above, the change of keys must be $(x,z) \rightarrow (x,z -\mathbf{t})$. \item \textbf{Controlled-Sum}: In order to remotely apply the \textit{SUM} operation, the prover performs a transversal Controlled-Sum (\textit{SUM}) from register $A$ to register $B$ on the authenticated states, as if the code were not shifted by the Pauli masking. However, a change in the Pauli keys is needed for the operation to have the desired effect. It is easy to check that: \begin{eqnarray} {\textit{SUM}} (Z^{z_A}X^{x_A}\otimes Z^{z_B}X^{x_B}) &=& (Z^{z_A-z_B}X^{x_A}\otimes Z^{z_B}X^{x_B+x_A}) {\textit{SUM}} \end{eqnarray} This implies that the same holds for the logical operation $\wt{\textit{SUM}}$ and the Pauli shift $P_{(x,z)}$, that is: \begin{eqnarray} \wt{\textit{SUM}}\left(P_{(x_A,z_A) } \otimes P_{(x_B,z_B)}\right) &=&\left(P_{(x_A,z_A-z_B) }\otimes P_{(x_B+x_A,z_B)}\right) \wt{\textit{SUM}} \end{eqnarray} Hence, the verifier must change the pair of keys $(x_A,z_A), (x_B,z_B)$ to $(x_A,z_A-z_B)$ and $(x_B+x_A,z_B)$ for the \textit{SUM} to have the desired effect on the state. \item \textbf{Fourier}: The prover performs the Fourier transform transversally on the authenticated state. We recall that the Fourier operation swaps the roles of the $X$ and $Z$ Pauli operators: $FX^xF^\dagger=Z^x$ and $FZ^zF^\dagger=X^{-z}$. This is true for each register separately and hence: \begin{eqnarray}\begin{aligned} \wt{\textit{F}}\cdot Z^{z_1}X^{x_1}\otimes{\ldots}\otimes Z^{z_m}X^{x_m} =& X^{-z_1}Z^{x_1}\otimes{\ldots}\otimes X^{-z_m}Z^{x_m}\cdot \wt{\textit{F}}\\ \simeq& Z^{x_1}X^{-z_1}\otimes{\ldots}\otimes Z^{x_m}X^{-z_m}\cdot\wt{\textit{F}} \end{aligned} \end{eqnarray} where the last equality is up to a global phase. Therefore, the verifier must change the key $(x,z)$ to $(-z,x)$. \item \textbf{Measurement in the standard basis}: The prover measures the encoded state in the standard basis and sends the result to the verifier. Using the $x$ part of the Pauli key and the knowledge of $k$, the verifier interpolates the polynomial according to the values in the received set of points. If the polynomial is indeed a polynomial of low degree (which is always the case if the prover is honest), the verifier sends the encoded value to the prover. Otherwise, the prover is caught cheating and the verifier aborts.
\item \textbf{Toffoli}: The (generalized) Toffoli gate is applied using Clifford group operations on the Toffoli state $\frac 1 q \sum_{a,b} \ket{a,b,ab}$ (\cite{benor2006smq,nielsen2000qcq}). Applying the Toffoli gate in this way does not imply a change of keys directly; the key changes are determined by the actual Clifford operations that were performed. \end{itemize} \subsection{Proof of \Le{lem:uniformkeys}}\label{apen:polyIPproof} \begin{statement}{\Le{lem:uniformkeys}} At any stage of the protocol the verifier's keys, $k$ and $\{(x,z)_i\}_1^n$, are distributed uniformly and independently. \end{statement} \begin{proofof}{\Le{lem:uniformkeys}} Before any gate is applied the claim holds. All that needs to be done is to check that all key changes preserve this property. The sign key $k$ does not change during the protocol, so for it the claim is trivial. At every step at most two pairs of Pauli keys change; let us review the possible changes (see \pen{app:polynomialgates}) and verify that the claim holds: \begin{itemize} \item The changes induced by the Pauli operators and the Fourier transform are shifts, swaps or negations of the keys; all of them trivially preserve the uniform and independent distribution. \item The \textit{SUM} operation involves two sets of keys $(x_A,z_A),(x_B,z_B)$, which change to $(x_A,z_A-z_B)$ and $(x_B+x_A,z_B)$. The sum $x_B+x_A$ is taken $\bmod\ q$, hence it is distributed uniformly; in addition, it is not hard to see that it is independent of $x_A$. The same holds for $z_A-z_B$ and $z_B$. The other parts of both keys are trivially distributed correctly. \item When the prover measures an authenticated register in the standard basis, the outcome of the measurement is distributed uniformly at random in $F_q^m$. Specifically, the outcome does not depend on the sign key or on the information that is authenticated. Therefore, even when the prover is given the interpretation of his measurement outcome, he does not gain any information about the sign key $k$ or the Pauli keys of other registers. \end{itemize} \end{proofof} \section{Fault Tolerant \textsf{QPIP}}\label{app:ft} For the interactive proofs described above to be relevant in any realistic scenario, dealing with noise is necessary. We present a scheme, based on the polynomial \textsf{QPIP}, that enables us to conduct interactive proofs in the presence of noise. \\~\\ \begin{statement}{\Th{thm:ft}} {\it \Th{thm:qcircuit} holds also when the quantum communication and computation devices are subjected to the usual local noise model}. \end{statement} \begin{proof}\textit{(Sketch)} Our proof is based on a collection of standard fault-tolerant quantum computation techniques. Care must be taken regarding the fact that the verifier is the only one who can authenticate qubits, and that he cannot authenticate many qubits in parallel. The proof can be divided into three stages. In the first stage, the prover receives authenticated qudits from the verifier, one by one. Each qudit is authenticated using $m$ qudits. The prover ignores the authentication structure and encodes each of the $m$ qudits separately using a concatenated error correction code of polylogarithmic total length, as required for the fault tolerance scheme in \cite{aharonov1997ftq}.
From the work of \cite{aharonov1997ftq,knill1998rqc} (and others) we know that this encoding can be done in a fault tolerant way, such that if the error probability is less than some threshold $\eta$, then the encoded qudit is faulty (namely, has an effective error) with probability at most $\eta'$, where $\eta'$ is a constant that depends on $\eta$ and other parameters of the encoding scheme, but not on $n$. We denote this concatenated encoding procedure by $\bar S$. Since each authenticated qudit sent to the prover is encoded using a constant number ($m$) of qudits, it follows that with constant probability $\eta''$, all these qudits are effectively correctly authenticated. In other words, the encoding of $\ket{S^k_a}$, $(\bar S {\otimes{\ldots}\otimes}\bar S) \ket{S^k_a}$, has no effective faults with probability $\eta''$. Once a qudit has been encoded by the prover, he can keep applying error correction on that qudit, and can thus keep its effective error below some constant for a polynomially long time. Polynomially many authenticated qudits are sent this way to the prover. In the second stage a purification procedure is performed on the authenticated messages, which are now protected from noise by the prover's concatenated error correction code. Since the purification is of the \emph{authenticated} qudits, it is done according to instructions from the verifier. As explained in \pen{app:polynomialgates}, the verifier can also interpret measurement outcomes for the prover, which are needed for the purification procedure. We need to purify both the input qubits, which are without loss of generality $\ket 0$, and the Toffoli states. Any standard purification procedure (for example, that of \cite{benor2006smq}) would work for the $\ket 0$ states. In order to purify the Toffoli states we use the purification described in \cite{benor2006smq}. The purification procedure uses polylogarithmically many qubits in order to provide a total error of at most $\frac \Delta {poly(nT)}$, where $T$ is the number of gates in the circuit $U$ that will be computed by the prover. This means (using the union bound) that with probability at least $1-\Delta$ all purified states are effectively correct. Finally, having, with high probability, correct input states, the polynomial \textsf{QPIP}\ (\Prot{prot:PolynomialIP}) is executed. The prover applies logical operations (\textit{SUM}, $F$ and measurements) on his registers, which contain authenticated qubits. In particular, a logical measurement of the output bit of the computation is executed by the prover at the end of the computation. The result is then sent to the verifier, who subsequently interprets it according to his secret key. The soundness of this fault tolerant \textsf{QPIP}\ is the same as that of the standard \textsf{QPIP}. In fact, in this scheme, the verifier ignores the prover's overhead of encoding the input in an error correcting code and performing encoded operations. The verifier can be thought of as performing \Prot{prot:PolynomialIP} for a purification circuit followed by the circuit he is interested in computing. Therefore, the security proof of \Th{thm:PolynomialIP} in fact shows that applying the purification and computation circuits has the same soundness parameter as the standard \textsf{QPIP}. Regarding completeness, the fact that the prover's computation is noisy changes the error probability only very slightly.
There is a probability of at most $\Delta$ that one of the authenticated input states is effectively incorrect; once they are all correct, the fault tolerance proof implies that they remain correct throughout the entire computation with all but inverse polynomial probability. Therefore, if the standard \textsf{QPIP}\ protocol has completeness $1-\delta-\epsilon$, the completeness of this scheme is bounded from below by $1-\delta-\epsilon-2\Delta$. \end{proof} \section{Blind \textsf{QPIP}}\label{app:blind} \begin{deff} \cite{blind,broadbent2008ubq,childs2001saq} Secure blind quantum computation is a process where a server computes a function for a client and the following properties hold: \begin{itemize} \item \textbf{Blindness}: The prover gets no information beyond an upper bound on the size of the circuit. Formally, in a blind computation scheme for a set of functions $\mathfrak{F}_n$, the prover's reduced density matrix is identical for every $f\in\mathfrak{F}_n$. \item \textbf{Security}: Completeness and soundness hold in the same way as in \textsf{QAS}\ (\Def{def:qas}). \end{itemize} \end{deff} \begin{statement}{\Th{thm:blind}} There is a blind \textsf{QPIP}\ for \mbox{\textsf{Q-CIRCUIT}}. \end{statement} We use the \textsf{QPIP}\ protocols for \mbox{\textsf{Q-CIRCUIT}}\ in order to provide a blind protocol for any language in {\sf BQP}. We use the simple observation that the input is completely hidden from the prover. This holds since, in both \textsf{QAS}\ presented, the density matrix that describes the prover's state does not depend on the input to the circuit. Specifically, due to the randomized selection of the authentication, the prover's state is the completely mixed state. We also use in the proof of this theorem the notion of a universal circuit. Roughly, a universal circuit acts on input bits and control bits. The control bits can be thought of as a description of a circuit that should be applied to the input bits. Constructions of such universal circuits are left as an easy exercise to the reader. Given the above observations, a blind computation protocol is not hard to devise. The verifier will, regardless of the input, compute, with the prover's help, the result of the universal circuit acting on input and control bits. We first formally define a universal circuit: \begin{deff} The universal circuit $\mathfrak{U}_{n,k}$ acts in the following way: \begin{eqnarray} \mathfrak{U}_{n,k}\, \ket\phi\otimes\ket{c(U)} \longrightarrow (U\ket\phi)\otimes\ket{c(U)} \end{eqnarray} where $c(U)$ is the canonical (classical) description of the circuit $U$. \end{deff} \begin{proofof}{\Th{thm:blind}} We prove that both the Clifford based \textsf{QPIP}\ and the Polynomial \textsf{QPIP}\ can be used to create a blind computation protocol. We claim that the state of the prover throughout the protocols is described by the completely mixed state. This is true in the Polynomial scheme since the Pauli randomization does exactly that. Averaging over all possible Pauli keys, it is easy to check that the state of the prover is described by $\mathcal{I}$. Furthermore, the prover gains no information regarding the Pauli key during the protocol; therefore, the description of the state does not change during the protocol, as claimed. Since the above holds for any initial state, it follows that the prover has no information about the initial, intermediate, or final state of the system.
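As a sanity check of the claim that Pauli randomization alone already yields the completely mixed state, the following numerical sketch (our own illustration, with a toy qudit dimension and names of our choosing) verifies that averaging an arbitrary single-qudit state over all $q^2$ Pauli shifts gives $\mathcal{I}/q$.
\begin{verbatim}
import numpy as np

q = 3                                            # toy qudit dimension (ours)
omega = np.exp(2j * np.pi / q)
X = np.roll(np.eye(q), 1, axis=0)                # |j> -> |j+1 mod q>
Z = np.diag([omega ** j for j in range(q)])

rng = np.random.default_rng(1)
A = rng.normal(size=(q, q)) + 1j * rng.normal(size=(q, q))
rho = A @ A.conj().T
rho /= np.trace(rho)                             # arbitrary density matrix

avg = np.zeros((q, q), dtype=complex)
for a in range(q):
    for b in range(q):
        P = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
        avg += P @ rho @ P.conj().T
avg /= q ** 2

assert np.allclose(avg, np.eye(q) / q)           # completely mixed state
\end{verbatim}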
To see that the same argument holds for the Clifford \textsf{QAS}, it suffices to notice that applying a random Clifford operator ``includes'' the application of a random Pauli: \begin{eqnarray} \frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n} C\rho C^\dagger &=& \frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n} (CQ)\rho (CQ)^\dagger \end{eqnarray} Equality holds for any $Q\in \mathfrak{C}_n$ since it is nothing but a re-indexing of the summation. \begin{eqnarray} \ldots &=& \sum_{Q\in \mathbbm{P}_n}\frac 1 {|\mathbbm{P}_n|} \frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n} C(Q\rho Q^\dagger)C^\dagger\\ &=& \frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n}\frac 1 {|\mathbbm{P}_n|}\sum_{Q\in \mathbbm{P}_n} C(Q\rho Q^\dagger)C^\dagger\\ &=&\frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n}C\Big(\frac 1 {|\mathbbm{P}_n|}\sum_{Q\in \mathbbm{P}_n} \left(Q\rho Q^\dagger\right)\Big)C^\dagger\\ &=&\frac 1 {|\mathfrak{C}_n|}\sum_{C\in \mathfrak{C}_n}C\left(\mathcal{I}\right)C^\dagger\\ &=& \mathcal{I} \end{eqnarray} \end{proofof} \ignore{ \section{\textsf{QPIP}\ With Different Provers} \label{app:interpretation} We have seen that a prover restricted to {\sf BQP}\ has enough power to convince a verifier of membership in any language in {\sf BQP}. We would like to find out how strict are the requirements on the prover's computational power for our results to hold. We consider two extreme cases: \begin{itemize} \item A quantum-classical hybrid prover. This prover which we denote $\mathcal{P}_\mathbf{hybrid}$ has, similarly to our verifier, constant amount of quantum memory on which he can act. In addition we assume no other (classical) computational bound on the prover. \item A computationally unbounded quantum prover which we denote $\mathcal{P}_\mathbf{unbound}$. This prover is identical to other systems studied in the literature (for instance: \cite{watrous2003phc}). \end{itemize} We would like to know what are the properties of our \textsf{QPIP}\ protocols, when interacting with such provers. Namely, do the soundness and completeness proofs hold? We prove an interesting feature of our \textsf{QPIP}\ protocols: A prover of type $\mathcal{P}_\mathbf{hybrid}$ is unable to convince $\mathcal{V}$ of even a true statement. That is, $\mathcal{V}$ will abort (with high probability) an interaction with such a $\mathcal{P}_\mathbf{hybrid}$. \\~\\ \begin{statement}{\Cl{claim:hybridProver}} There exists languages in {\sf BQP}\ such that for sufficiently large instances $\mathcal{V}$ will abort with high probability. \end{statement} \begin{proof}\textit{(Sketch. The complete proof will appear in the full version.)} A $\mathcal{P}_\mathbf{hybrid}$ prover cannot store $n>c$ qubits. Assume that $\mathcal{P}_\mathbf{hybrid}$ receives, the authentication of a random string. Then similarly, to the encryption procedure of \cite{bennett1984qcp}, the prover is asked to measure a random qubit either in the standard or Fourier basis. Since $\mathcal{P}_\mathbf{hybrid}$ could not have kept the needed qubit in a coherent state, it is possible to show that he will fail to provide an acceptable answer with probability of at least $ \frac 1 2 (1-o(1))(1-\frac 1 {2^d})$. The proof is similar to the security proof of the \cite{bennett1984qcp}. Intuitively, For $\mathcal{P}_\mathbf{hybrid}$ to be caught he must not have saved the bid and have guessed wrongly whether the standard or Fourier basis will be requested. This happens with probability $(1-\frac c n)\frac 1 2$.
Now the string that $\mathcal{P}_\mathbf{hybrid}$ holds is independent of that correct set of strings, so $\mathcal{P}_\mathbf{hybrid}$ must guess a bit and fake an authentication for it. This succeeds with probability of at most $2^{-d}$. \end{proof} As for the computationally unbounded quantum prover, it is quite obvious that $\mathcal{P}_\mathbf{unbound}$ can be used in order to convince $\mathcal{V}$ of membership in a {\sf BQP}\ language using our protocols. However, it may come as a surprise to find out that the extra power of $\mathcal{P}_\mathbf{unbound}$ does not enable him to cheat $\mathcal{V}$ more than the regular prover $\mathcal{P}$. \begin{thm}\label{thm:unboundProver} The completeness and soundness of the \textsf{QPIP}\ protocols \ref{prot:CliffordIP} and \ref{prot:PolynomialIP} remains the same when interacting with a $\mathcal{P}_\mathbf{unbound}$ prover. \end{thm} \begin{proof} To see this, we notice that in the security proofs for the \textsf{QPIP}\ protocols (and for the \textsf{QAS}) we had no assumptions on the computational power of the prover (or Eve). That is, the security proofs holds for \textbf{any} intervention, regardless of the possible complexity it might encapsulate in it. \end{proof} Thus, the security of a delegated (possibly blind) quantum computation is enhanced by \Th{thm:unboundProver}, since it abandons the assumption that the prover is computationally bound to {\sf BQP}. That is, our security proofs are computationally unconditional.} \section{Interpretation of Results}\label{app:interpretation} \begin{proofof}{Corollary \ref{corol:confidence}} Let us first deal with the Clifford based \textsf{QPIP}. We assume that the soundness of the scheme is $\delta$ and that the prover applies a strategy for which the verifier does not abort with probability $\gamma$. The final state of the protocol before the verifier's cheat detection can be written as (see \Eq{attackProfile}): \begin{equation} s\rho_c + \frac{(1-s)}{4^n-1}\sum_{Q\in\mathbbm{P}_n\setminus\{\mathcal{I}\}}\left(Q\rho_c Q^\dagger \right) \end{equation} where $\rho_c$ is the correct final state of the protocol. After the verifier applies the cheat detection procedure $\mathcal{B}$ (which checks that the control registers are indeed in the $\ket 0$ state), the state becomes: \begin{equation} s\rho_{c}\otimes\ket{VAL}\bra{VAL} +\alpha_{rej}\rho_{rej}\otimes\ket{ABR}\bra{ABR} +\alpha_{bad}\rho_{bad}\otimes\ket{VAL}\bra{VAL} \end{equation} Assume the verifier declares the computation valid; then his state is: \begin{equation}\begin{aligned} \frac {s\rho_{c} +\alpha_{bad}\rho_{bad}} {1-\alpha_{rej}} \otimes\ket{VAL}\bra{VAL} \end{aligned}\end{equation} The trace distance to the correct state $\rho_c$ is then bounded by: \begin{equation} 1 - \frac {s}{1-\alpha_{rej}}+ \frac {\alpha_{bad}}{1-\alpha_{rej}} = \frac {2\alpha_{bad}}{1-\alpha_{rej}}\le \frac{2\delta}{\gamma} \end{equation} where the inequality follows from the security of the \textsf{QPIP}\ protocol, namely $\alpha_{bad}\le\delta$, and the fact that the non-aborting probability $\gamma$ is equal to $\alpha_{bad}+s$. The proof that the Polynomial based \textsf{QPIP}\ has the same property follows the exact same lines. \end{proofof} \section{Symmetric Definition of \textsf{QPIP}}\label{app:sym} The definitions and results presented so far seem to be asymmetric. They refer to a setting where the prover wishes to convince the verifier \emph{solely} of YES instances (of problems in {\sf BQP}).
This asymmetry does not seem natural, either with regard to the complexity class {\sf BQP}\ or with regard to the cryptographic and commercial aspects. Indeed, this intuition is correct: we can provide a symmetric definition of quantum prover interactive proofs and show that the two definitions are equivalent. Essentially, this follows from the trivial observation that the class {\sf BQP}\ is closed under complement, that is, $\mathcal{L}\in{\sf BQP} \iff {\mathcal{L}^c}\in{\sf BQP}$. To see this, let us first consider the symmetric definition for \textsf{QPIP}. \begin{deff} A language $\mathcal{L}$ is in the class symmetric quantum prover interactive proof $(\textsf{QPIP}^{sym})$ if there exists an interactive protocol with the following properties: \begin{itemize} \item The prover's and verifier's computational power is exactly the same as in the definition of \textsf{QPIP}\ (\Def{def:QPIP}), namely a {\sf BQP}\ machine for the prover and a quantum-classical hybrid machine for the verifier. \item Communication is identical to the \textsf{QPIP}\ definition. \item The verifier has three possible outcomes: \textbf{YES}, \textbf{NO}, and \textbf{ABORT}: \begin{itemize} \item\textbf{YES}: The verifier is convinced that $x\in \mathcal{L}$. \item\textbf{NO}: The verifier is convinced that $x\notin \mathcal{L}$. \item\textbf{ABORT}: The verifier caught the prover cheating. \end{itemize} \item Completeness: There exists a prover $\mathcal{P}$ such that $\forall x\in\{0,1\}^*$ the verifier is correct with high probability: \[ \Pr{\left(\left[\mathcal{V},\mathcal{P} \right](x,r) = \mathbbm{1}_\mathcal{L}(x)\right)} \ge \frac23 \] where $\mathbbm{1}_\mathcal{L}$ is the indicator function of $\mathcal{L}$. \item Soundness: For any prover $\mathcal{P}'$ and for {\bf any} $x\in\{0,1\}^*$, the verifier is mistaken with bounded probability, that is: \[ \Pr{\left(\left[\mathcal{V},\mathcal{P}' \right](x,r) = 1- \mathbbm{1}_\mathcal{L}(x)\right)} \le \frac13\] \end{itemize} \end{deff} \begin{thm} For any language $\mathcal{L}$: if both $\mathcal{L}$ and $\mathcal{L}^c$ are in $\textsf{QPIP}$, then $\mathcal{L},\mathcal{L}^c \in \textsf{QPIP}^{sym}$. \end{thm} \begin{proof} Let $\mathcal{V}_{\mathcal{L}},\mathcal{P}_{\mathcal{L}}$ denote the \textsf{QPIP}\ verifier and prover for the language $\mathcal{L}$. By assumption, such a pair exists for both $\mathcal{L}$ and $\mathcal{L}^c$. We define the pair $\wt{\mathcal{P}}$ and $\wt{\mathcal{V}}$ to be the $\textsf{QPIP}^{sym}$ prover and verifier in the following way: in the first round the prover $\wt{\mathcal{P}}$ sends to $\wt{\mathcal{V}}$ ``yes'' if $x\in \mathcal{L}$ and ``no'' otherwise. Now, both $\wt{\mathcal{P}}$ and $\wt{\mathcal{V}}$ behave according to $\mathcal{V}_{\mathcal{L}},\mathcal{P}_{\mathcal{L}}$ if ``yes'' was sent or according to $\mathcal{V}_{\mathcal{L}^c},\mathcal{P}_{\mathcal{L}^c}$ otherwise. Soundness and completeness follow immediately from the definitions. \end{proof} Since {\sf BQP}\ is closed under complement, we get: \begin{corol} ${\sf BQP}=\textsf{QPIP}^{sym}$ \end{corol} \end{document}
List of first-order theories In first-order logic, a first-order theory is given by a set of axioms in some language. This entry lists some of the more common examples used in model theory and some of their properties. Preliminaries For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure. There are two common ways to specify theories: 1. List or describe a set of sentences in the language Lσ, called the axioms of the theory. 2. Give a set of σ-structures, and define a theory to be the set of sentences in Lσ holding in all these models. For example, the "theory of finite fields" consists of all sentences in the language of fields that are true in all finite fields. An Lσ theory may: • be consistent: no proof of contradiction exists; • be satisfiable: there exists a σ-structure for which the sentences of the theory are all true (by the completeness theorem, satisfiability is equivalent to consistency); • be complete: for any statement, either it or its negation is provable; • have quantifier elimination; • eliminate imaginaries; • be finitely axiomatizable; • be decidable: There is an algorithm to decide which statements are provable; • be recursively axiomatizable; • be model complete or sub-model complete; • be κ-categorical: All models of cardinality κ are isomorphic; • be stable or unstable; • be ω-stable (same as totally transcendental for countable theories); • be superstable • have an atomic model; • have a prime model; • have a saturated model. Pure identity theories Main article: Theory of pure equality The signature of the pure identity theory is empty, with no functions, constants, or relations. Pure identity theory has no (non-logical) axioms. It is decidable. One of the few interesting properties that can be stated in the language of pure identity theory is that of being infinite. This is given by an infinite set of axioms stating there are at least 2 elements, there are at least 3 elements, and so on: • ∃x1 ∃x2 ¬x1 = x2,    ∃x1 ∃x2 ∃x3 ¬x1 = x2 ∧ ¬x1 = x3 ∧ ¬x2 = x3,... These axioms define the theory of an infinite set. The opposite property of being finite cannot be stated in first-order logic for any theory that has arbitrarily large finite models: in fact any such theory has infinite models by the compactness theorem. In general if a property can be stated by a finite number of sentences of first-order logic then the opposite property can also be stated in first-order logic, but if a property needs an infinite number of sentences then its opposite property cannot be stated in first-order logic. Any statement of pure identity theory is equivalent to either σ(N) or to ¬σ(N) for some finite subset N of the non-negative integers, where σ(N) is the statement that the number of elements is in N. It is even possible to describe all possible theories in this language as follows. Any theory is either the theory of all sets of cardinality in N for some finite subset N of the non-negative integers, or the theory of all sets whose cardinality is not in N, for some finite or infinite subset N of the non-negative integers. (There are no theories whose models are exactly sets of cardinality N if N is an infinite subset of the integers.) 
The complete theories are the theories of sets of cardinality n for some finite n, and the theory of infinite sets. One special case of this is the inconsistent theory defined by the axiom ∃x ¬x = x. It is a perfectly good theory with many good properties: it is complete, decidable, finitely axiomatizable, and so on. The only problem is that it has no models at all. By Gödel's completeness theorem, it is the only theory (for any given language) with no models.[1] It is not the same as the theory of the empty set (in versions of first-order logic that allow a model to be empty): the theory of the empty set has exactly one model, which has no elements. Unary relations A set of unary relations Pi for i in some set I is called independent if for every two disjoint finite subsets A and B of I there is some element x such that Pi(x) is true for i in A and false for i in B. Independence can be expressed by a set of first-order statements. The theory of a countable number of independent unary relations is complete, but has no atomic models. It is also an example of a theory that is superstable but not totally transcendental. Equivalence relations The signature of equivalence relations has one binary infix relation symbol ~, no constants, and no functions. Equivalence relations satisfy the axioms: • Reflexive ∀x x~x; • Symmetric ∀x ∀y x~y → y~x; • Transitive: ∀x ∀y ∀z (x~y ∧ y~z) → x~z. Some first order properties of equivalence relations are: • ~ has an infinite number of equivalence classes; • ~ has exactly n equivalence classes (for any fixed positive integer n); • All equivalence classes are infinite; • All equivalence classes have size exactly n (for any fixed positive integer n). The theory of an equivalence relation with exactly 2 infinite equivalence classes is an easy example of a theory which is ω-categorical but not categorical for any larger cardinal. The equivalence relation ~ should not be confused with the identity symbol '=': if x=y then x~y, but the converse is not necessarily true. Theories of equivalence relations are not all that difficult or interesting, but often give easy examples or counterexamples for various statements. The following constructions are sometimes used to produce examples of theories with certain spectra; in fact by applying them to a small number of explicit theories T one gets examples of complete countable theories with all possible uncountable spectra. If T is a theory in some language, we define a new theory 2T by adding a new binary relation to the language, and adding axioms stating that it is an equivalence relation, such that there are an infinite number of equivalence classes all of which are models of T. It is possible to iterate this construction transfinitely: given an ordinal α, define a new theory by adding an equivalence relation Eβ for each β<α, together with axioms stating that whenever β<γ then each Eγ equivalence class is the union of infinitely many Eβ equivalence classes, and each E0 equivalence class is a model of T. Informally, one can visualize models of this theory as infinitely branching trees of height α with models of T attached to all leaves. Orders The signature of orders has no constants or functions, and one binary relation symbols ≤. (It is of course possible to use ≥, < or > instead as the basic relation, with the obvious minor changes to the axioms.) 
We define x ≥ y, x < y, x > y as abbreviations for y ≤ x, x ≤ y ∧¬y ≤ x, y < x, Some first-order properties of orders: • Transitive: ∀x ∀y ∀z x ≤ y∧y ≤ z → x ≤ z • Reflexive: ∀x x ≤ x • Antisymmetric: ∀x ∀y x ≤ y ∧ y ≤ x → x = y • Partial: Transitive ∧ Reflexive ∧ Antisymmetric; • Linear (or total): Partial ∧ ∀x ∀y x ≤ y ∨ y ≤ x • Dense: ∀x ∀z x < z → ∃y x < y ∧ y < z ("Between any 2 distinct elements there is another element") • There is a smallest element: ∃x ∀y x ≤ y • There is a largest element: ∃x ∀y y ≤ x • Every element has an immediate successor: ∀x ∃y ∀z x < z ↔ y ≤ z The theory DLO of dense linear orders without endpoints (i.e. no smallest or largest element) is complete, ω-categorical, but not categorical for any uncountable cardinal. There are three other very similar theories: the theory of dense linear orders with a: • Smallest but no largest element; • Largest but no smallest element; • Largest and smallest element. Being well ordered ("any non-empty subset has a minimal element") is not a first-order property; the usual definition involves quantifying over all subsets. Lattices Lattices can be considered either as special sorts of partially ordered sets, with a signature consisting of one binary relation symbol ≤, or as algebraic structures with a signature consisting of two binary operations ∧ and ∨. The two approaches can be related by defining a ≤ b to mean a∧b = a. For two binary operations the axioms for a lattice are: Commutative laws: $\forall a\forall b\;a\vee b=b\vee a$$\forall a\forall b\;a\wedge b=b\wedge a$ Associative laws: $\forall a\forall b\forall c\;a\vee (b\vee c)=(a\vee b)\vee c$$\forall a\forall b\forall c\;a\wedge (b\wedge c)=(a\wedge b)\wedge c$ Absorption laws: $\forall a\forall b\;a\vee (a\wedge b)=a$$\forall a\forall b\;a\wedge (a\vee b)=a$ For one relation ≤ the axioms are: • Axioms stating ≤ is a partial order, as above. • $\forall a\forall b\exists c\;c\leq a\wedge c\leq b\wedge \forall d\;d\leq a\wedge d\leq b\rightarrow d\leq c$ (existence of c = a∧b) • $\forall a\forall b\exists c\;a\leq c\wedge b\leq c\wedge \forall d\;a\leq d\wedge b\leq d\rightarrow c\leq d$ (existence of c = a∨b) First order properties include: • $\forall x\forall y\forall z\;x\vee (y\wedge z)=(x\vee y)\wedge (x\vee z)$ (distributive lattices) • $\forall x\forall y\forall z\;x\vee (y\wedge (x\vee z))=(x\vee y)\wedge (x\vee z)$ (modular lattices) Heyting algebras can be defined as lattices with certain extra first-order properties. Completeness is not a first order property of lattices. Graphs Main article: Logic of graphs The signature of graphs has no constants or functions, and one binary relation symbol R, where R(x,y) is read as "there is an edge from x to y". The axioms for the theory of graphs are • Symmetric: ∀x ∀y R(x,y)→ R(y,x) • Anti-reflexive: ∀x ¬R(x,x) ("no loops") The theory of random graphs has the following extra axioms for each positive integer n: • For any two disjoint finite sets of size n, there is a point joined to all points of the first set and to no points of the second set. (For each fixed n, it is easy to write this statement in the language of graphs.) The theory of random graphs is ω categorical, complete, and decidable, and its countable model is called the Rado graph. A statement in the language of graphs is true in this theory if and only if the probability that an n-vertex random graph models the statement tends to 1 in the limit as n goes to infinity. 
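For example, the case n = 1 says that for any two distinct vertices x and y there is a further vertex joined to x but not to y, which can be written as $\forall x\forall y\;\lnot x=y\rightarrow \exists z\;\lnot z=x\land \lnot z=y\land R(z,x)\land \lnot R(z,y).$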
Boolean algebras There are several different signatures and conventions used for Boolean algebras: 1. The signature has two constants, 0 and 1, and two binary functions ∧ and ∨ ("and" and "or"), and one unary function ¬ ("not"). This can be confusing as the functions use the same symbols as the propositional functions of first-order logic. 2. In set theory, a common convention is that the language has two constants, 0 and 1, and two binary functions · and +, and one unary function −. The three functions have the same interpretation as the functions in the first convention. Unfortunately, this convention clashes badly with the next convention: 3. In algebra, the usual convention is that the language has two constants, 0 and 1, and two binary functions · and +. The function · has the same meaning as ∧, but a+b means a∨b∧¬(a∧b). The reason for this is that the axioms for a Boolean algebra are then just the axioms for a ring with 1 plus ∀x x2 = x. Unfortunately this clashes with the standard convention in set theory given above. The axioms are: • The axioms for a distributive lattice (see above) • ∀a a∧¬a = 0, ∀a a∨¬a = 1 (properties of negation) • Some authors add the extra axiom ¬0 = 1, to exclude the trivial algebra with one element. Tarski proved that the theory of Boolean algebras is decidable. We write x ≤ y as an abbreviation for x∧y = x, and atom(x) as an abbreviation for ¬x = 0 ∧ ∀y y ≤ x → y = 0 ∨ y = x, read as "x is an atom", in other words a non-zero element with nothing between it and 0. Here are some first-order properties of Boolean algebras: • Atomic: ∀x x = 0 ∨ ∃y y ≤ x ∧ atom(y) • Atomless: ∀x ¬atom(x) The theory of atomless Boolean algebras is ω-categorical and complete. For any Boolean algebra B, there are several invariants defined as follows. • the ideal I(B) consists of elements that are the sum of an atomic and an atomless element (an element with no atoms below it). • The quotient algebras Bi of B are defined inductively by B0=B, Bk+1 = Bk/I(Bk). • The invariant m(B) is the smallest integer such that Bm+1 is trivial, or ∞ if no such integer exists. • If m(B) is finite, the invariant n(B) is the number of atoms of Bm(B) if this number is finite, or ∞ if this number is infinite. • The invariant l(B) is 0 if Bm(B) is atomic or if m(B) is ∞, and 1 otherwise. Then two Boolean algebras are elementarily equivalent if and only if their invariants l, m, and n are the same. In other words, the values of these invariants classify the possible completions of the theory of Boolean algebras. So the possible complete theories are: • The trivial algebra (if this is allowed; sometimes 0≠1 is included as an axiom.) • The theory with m = ∞ • The theories with m a natural number, n a natural number or ∞, and l = 0 or 1 (with l = 0 if n = 0). Groups The signature of group theory has one constant 1 (the identity), one function of arity 1 (the inverse) whose value on t is denoted by t−1, and one function of arity 2, which is usually omitted from terms. For any integer n, tn is an abbreviation for the obvious term for the nth power of t. Groups are defined by the axioms • Identity: ∀x 1x = x ∧ x1 = x • Inverse: ∀x x−1x = 1 ∧ xx−1 = 1 • Associativity: ∀x∀y∀z (xy)z = x(yz) Some properties of groups that can be defined in the first-order language of groups are: • Abelian: ∀x ∀y xy = yx. • Torsion free: ∀x x2 = 1→x = 1, ∀x x3 = 1 → x = 1, ∀x x4 = 1 → x = 1, ... • Divisible: ∀x ∃y y2 = x, ∀x ∃y y3 = x, ∀x ∃y y4 = x, ... 
• Infinite (as in identity theory) • Exponent n (for any fixed positive integer n): ∀x xn = 1 • Nilpotent of class n (for any fixed positive integer n) • Solvable of class n (for any fixed positive integer n) The theory of abelian groups is decidable.[2] The theory of infinite divisible torsion-free abelian groups is complete, as is the theory of infinite abelian groups of exponent p (for p prime). The theory of finite groups is the set of first-order statements in the language of groups that are true in all finite groups (there are plenty of infinite models of this theory). It is not completely trivial to find any such statement that is not true for all groups: one example is "given two elements of order 2, either they are conjugate or there is a non-trivial element commuting with both of them". The properties of being finite, or free, or simple, or torsion are not first-order. More precisely, the first-order theory of all groups with one of these properties has models that do not have this property. Rings and fields The signature of (unital) rings has two constants 0 and 1, two binary functions + and ×, and, optionally, one unary negation function −. Rings Axioms: Addition makes the ring into an abelian group, multiplication is associative and has an identity 1, and multiplication is left and right distributive. Commutative rings The axioms for rings plus ∀x ∀y xy = yx. Fields The axioms for commutative rings plus ∀x (¬ x = 0 → ∃y xy = 1) and ¬ 1 = 0. Many of the examples given here have only universal, or algebraic axioms. The class of structures satisfying such a theory has the property of being closed under substructure. For example, a subset of a group closed under the group actions of multiplication and inverse is again a group. Since the signature of fields does not usually include multiplicative and additive inverse, the axioms for inverses are not universal, and therefore a substructure of a field closed under addition and multiplication is not always a field. This can be remedied by adding unary inverse functions to the language. For any positive integer n the property that all equations of degree n have a root can be expressed by a single first-order sentence: • ∀ a1 ∀ a2... ∀ an ∃x (...((x+a1)x +a2)x+...)x+an = 0 Perfect fields The axioms for fields, plus axioms for each prime number p stating that if p 1 = 0 (i.e. the field has characteristic p), then every field element has a pth root. Algebraically closed fields of characteristic p The axioms for fields, plus for every positive n the axiom that all polynomials of degree n have a root, plus axioms fixing the characteristic. The classical examples of complete theories. Categorical in all uncountable cardinals. The theory ACFp has a universal domain property, in the sense that every structure N satisfying the universal axioms of ACFp is a substructure of a sufficiently large algebraically closed field $M\models ACF_{0}$, and additionally any two such embeddings N → M induce an automorphism of M. Finite fields The theory of finite fields is the set of all first-order statements that are true in all finite fields. Significant examples of such statements can, for example, be given by applying the Chevalley–Warning theorem, over the prime fields. The name is a little misleading as the theory has plenty of infinite models. Ax proved that the theory is decidable. Formally real fields The axioms for fields plus, for every positive integer n, the axiom: • ∀ a1 ∀ a2... ∀ an a1a1+a2a2+ ...+anan=0 → a1=0∧a2=0∧ ... ∧an=0. 
That is, 0 is not a non-trivial sum of squares. Real closed fields The axioms for formally real fields plus the axioms: • ∀x ∃y (x=yy ∨ x+yy= 0); • for every odd positive integer n, the axiom stating that every polynomial of degree n has a root. The theory of real closed fields is effective and complete and therefore decidable (the Tarski–Seidenberg theorem). The addition of further function symbols (e.g., the exponential function, the sine function) may change decidability. p-adic fields Ax & Kochen (1965) showed that the theory of p-adic fields is decidable and gave a set of axioms for it.[3] Geometry Axioms for various systems of geometry usually use a typed language, with the different types corresponding to different geometric objects such as points, lines, circles, planes, and so on. The signature will often consist of binary incidence relations between objects of different types; for example, the relation that a point lies on a line. The signature may have more complicated relations; for example ordered geometry might have a ternary "betweenness" relation for 3 points, which says whether one lies between two others, or a "congruence" relation between 2 pairs of points. Some examples of axiomatized systems of geometry include ordered geometry, absolute geometry, affine geometry, Euclidean geometry, projective geometry, and hyperbolic geometry. For each of these geometries there are many different and inequivalent systems of axioms for various dimensions. Some of these axiom systems include "completeness" axioms that are not first order. As a typical example, the axioms for projective geometry use 2 types, points and lines, and a binary incidence relation between points and lines. If point and line variables are indicated by small and capital letter, and a incident to A is written as aA, then one set of axioms is • $\forall a\forall b\;\lnot a=b\rightarrow \exists C\;aC\land bC$ (There is a line through any 2 distinct points a,b ...) • $\forall a\forall b\forall C\forall D\;\lnot a=b\land aC\land bC\land aD\land bD\rightarrow C=D$ (... which is unique) • $\forall a\forall b\forall c\forall d\forall e\forall G\forall H\;aH\land bH\land eH\land cG\land dG\land eG\rightarrow \exists f\exists I\exists J\;aI\land cI\land fI\land bJ\land dJ\land fJ$ (Veblen's axiom: if ab and cd lie on intersecting lines, then so do ac and bd.) • $\forall A\exists b\exists c\exists d\;bA\land cA\land dA\land \lnot b=c\land \lnot b=d\land \lnot c=d$ (Every line has at least 3 points) Euclid did not state all the axioms for Euclidean geometry explicitly, and the first complete list was given by Hilbert in Hilbert's axioms. This is not a first order axiomatization as one of Hilbert's axioms is a second order completeness axiom. Tarski's axioms are a first order axiomatization of Euclidean geometry. Tarski showed this axiom system is complete and decidable by relating it to the complete and decidable theory of real closed fields. Differential algebra • The theory DF of differential fields. The signature is that of fields (0, 1, +, −, ×) together with a unary function ∂, the derivation. The axioms are those for fields together with $\forall u\forall v\,\partial (uv)=u\,\partial v+v\,\partial u$ $\forall u\forall v\,\partial (u+v)=\partial u+\partial v\ .$ For this theory one can add the condition that the characteristic is p, a prime or zero, to get the theory DFp of differential fields of characteristic p (and similarly with the other theories below). 
If K is a differential field then the field of constants $k=\{u\in K:\partial (u)=0\}.$ The theory of differentially perfect fields is the theory of differential fields together with the condition that the field of constants is perfect; in other words, for each prime p it has the axiom: $\forall u\,\partial (u)=0\land p1=0\rightarrow \exists v\,v^{p}=u$ (There is little point in demanding that the whole field should be a perfect field, because in non-zero characteristic this implies the differential is 0.) For technical reasons to do with quantifier elimination, it is sometimes more convenient to force the constant field to be perfect by adding a new symbol r to the signature with the axioms $\forall u\,\partial (u)=0\land p1=0\rightarrow r(u)^{p}=u$ $\forall u\,\lnot \partial (u)=0\rightarrow r(u)=0.$ • The theory of differentially closed fields (DCF) is the theory of differentially perfect fields with axioms saying that if f and g are differential polynomials and the separant of f is nonzero and g≠0 and f has order greater than that of g, then there is some x in the field with f(x)=0 and g(x)≠0. Addition The theory of the natural numbers with a successor function has signature consisting of a constant 0 and a unary function S ("successor": S(x) is interpreted as x+1), and has axioms: 1. ∀x ¬ Sx = 0 2. ∀x∀y Sx = Sy → x = y 3. Let P(x) be a first-order formula with a single free variable x. Then the following formula is an axiom: (P(0) ∧ ∀x(P(x)→P(Sx))) → ∀y P(y). The last axiom (induction) can be replaced by the axioms • For each integer n>0, the axiom ∀x SSS...Sx ≠ x (with n copies of S) • ∀x ¬ x = 0 → ∃y Sy = x The theory of the natural numbers with a successor function is complete and decidable, and is κ-categorical for uncountable κ but not for countable κ. Presburger arithmetic is the theory of the natural numbers under addition, with signature consisting of a constant 0, a unary function S, and a binary function +. It is complete and decidable. The axioms are 1. ∀x ¬ Sx = 0 2. ∀x∀y Sx = Sy → x = y 3. ∀x x + 0 = x 4. ∀x∀y x + Sy = S(x + y) 5. Let P(x) be a first-order formula with a single free variable x. Then the following formula is an axiom: (P(0) ∧ ∀x(P(x)→P(Sx))) → ∀y P(y). Arithmetic Many of the first order theories described above can be extended to complete recursively enumerable consistent theories. This is no longer true for most of the following theories; they can usually encode both multiplication and addition of natural numbers, and this gives them enough power to encode themselves, which implies that Gödel's incompleteness theorem applies and the theories can no longer be both complete and recursively enumerable (unless they are inconsistent). The signature of a theory of arithmetic has: • The constant 0; • The unary function, the successor function, here denoted by prefix S, or by prefix σ or postfix ′ elsewhere; • Two binary functions, denoted by infix + and ×, called "addition" and "multiplication." Some authors take the signature to contain a constant 1 instead of the function S, then define S in the obvious way as St = 1 + t. Robinson arithmetic (also called Q). Axioms (1) and (2) govern the distinguished element 0. (3) assures that S is an injection. Axioms (4) and (5) are the standard recursive definition of addition; (6) and (7) do the same for multiplication. Robinson arithmetic can be thought of as Peano arithmetic without induction. Q is a weak theory for which Gödel's incompleteness theorem holds. Axioms: 1. ∀x ¬ Sx = 0 2. ∀x ¬ x = 0 → ∃y Sy = x 3. 
∀x∀y Sx = Sy → x = y 4. ∀x x + 0 = x 5. ∀x∀y x + Sy = S(x + y) 6. ∀x x × 0 = 0 7. ∀x∀y x × Sy = (x × y) + x. IΣn is first order Peano arithmetic with induction restricted to Σn formulas (for n = 0, 1, 2, ...). The theory IΣ0 is often denoted by IΔ0. This is a series of more and more powerful fragments of Peano arithmetic. The case n = 1 has about the same strength as primitive recursive arithmetic (PRA). Exponential function arithmetic (EFA) is IΣ0 with an axiom stating that xy exists for all x and y (with the usual properties). First order Peano arithmetic, PA. The "standard" theory of arithmetic. The axioms are the axioms of Robinson arithmetic above, together with the axiom scheme of induction: • $\phi (0)\wedge (\forall x\phi (x)\rightarrow \phi (Sx))\rightarrow (\forall x\phi (x))$ for any formula φ in the language of PA. φ may contain free variables other than x. Kurt Gödel's 1931 paper proved that PA is incomplete, and has no consistent recursively enumerable completions. Complete arithmetic (also known as true arithmetic) is the theory of the standard model of arithmetic, the natural numbers N. It is complete but does not have a recursively enumerable set of axioms. For the real numbers, the situation is slightly different: The case that includes just addition and multiplication cannot encode the integers, and hence Gödel's incompleteness theorem does not apply. Complications arise when adding further function symbols (e.g., exponentiation). Second order arithmetic Main article: Second-order arithmetic Second-order arithmetic can refer to a first order theory (in spite of the name) with two types of variables, thought of as varying over integers and subsets of the integers. (There is also a theory of arithmetic in second order logic that is called second order arithmetic. It has only one model, unlike the corresponding theory in first order logic, which is incomplete.) The signature will typically be the signature 0, S, +, × of arithmetic, together with a membership relation ∈ between integers and subsets (though there are numerous minor variations). The axioms are those of Robinson arithmetic, together with axiom schemes of induction and comprehension. There are many different subtheories of second order arithmetic that differ in which formulas are allowed in the induction and comprehension schemes. In order of increasing strength, five of the most common systems are • ${\mathsf {RCA}}_{0}$, Recursive Comprehension • ${\mathsf {WKL}}_{0}$, Weak Kőnig's lemma • ${\mathsf {ACA}}_{0}$, Arithmetical comprehension • ${\mathsf {ATR}}_{0}$, Arithmetical Transfinite Recursion • $\Pi _{1}^{1}{\mbox{-}}{\mathsf {CA}}_{0}$, $\Pi _{1}^{1}$ comprehension These are defined in detail in the articles on second order arithmetic and reverse mathematics. Set theories The usual signature of set theory has one binary relation ∈, no constants, and no functions. Some of the theories below are "class theories" which have two sorts of object, sets and classes. There are three common ways of handling this in first-order logic: 1. Use first-order logic with two types. 2. Use ordinary first-order logic, but add a new unary predicate "Set", where "Set(t)" means informally "t is a set". 3. 
Use ordinary first-order logic, and instead of adding a new predicate to the language, treat "Set(t)" as an abbreviation for "∃y t∈y". Some first order set theories include: • Weak theories lacking powersets: • S' (Tarski, Mostowski, and Robinson, 1953); (finitely axiomatizable) • Kripke–Platek set theory; KP; • Pocket set theory • General set theory, GST • Constructive set theory, CZF • Mac Lane set theory and Elementary topos theory • Zermelo set theory; Z • Zermelo–Fraenkel set theory; ZF, ZFC; • Von Neumann–Bernays–Gödel set theory; NBG; (finitely axiomatizable) • Ackermann set theory; • Scott–Potter set theory • New Foundations; NF (finitely axiomatizable) • Positive set theory • Morse–Kelley set theory; MK; • Tarski–Grothendieck set theory; TG; Some extra first order axioms that can be added to one of these (usually ZF) include: • axiom of choice, axiom of dependent choice • Generalized continuum hypothesis • Martin's axiom (usually together with the negation of the continuum hypothesis), Martin's maximum • ◊ and ♣ • Axiom of constructibility (V=L) • proper forcing axiom • analytic determinacy, projective determinacy, Axiom of determinacy • Many large cardinal axioms See also • Glossary of areas of mathematics • List of mathematical theories References 1. Goldrei, Derek (2005), Propositional and Predicate Calculus: A Model of Argument, Springer, p. 265, ISBN 9781846282294. 2. Szmielew, W. (1955), "Elementary properties of Abelian groups", Fundamenta Mathematicae, 41 (2): 203–271, doi:10.4064/fm-41-2-203-271, MR 0072131. 3. Ax, James; Kochen, Simon (1965), "Diophantine problems over local fields. II. A complete set of axioms for p-adic number theory", Amer. J. Math., The Johns Hopkins University Press, 87 (3): 631–648, doi:10.2307/2373066, JSTOR 2373066, MR 0184931. Further reading • Chang, C.C.; Keisler, H. Jerome (1989), Model Theory (3 ed.), Elsevier, ISBN 0-7204-0692-7 • Hodges, Wilfrid (1997), A shorter model theory, Cambridge University Press, ISBN 0-521-58713-1 • Marker, David (2002), Model Theory: An Introduction, Graduate Texts in Mathematics, vol.
217, Springer, ISBN 0-387-98760-6
Exceptions to the octet rule Noble gases are stable because their outermost shell holds eight paired electrons, and elements that bond so that the resulting compound has a full outer shell of paired electrons are said to obey the octet rule. Three classes of molecules do not follow it. First, some molecules have fewer than eight electrons around the central atom: boron compounds such as BH3 and BF3 leave boron with only six valence electrons, and beryllium compounds are similarly electron deficient. Second, molecules with an odd number of valence electrons, such as NO, NO2 and ClO2, cannot give every atom an octet; the highly reactive OH radical is another example. Third, many molecules have a central atom that shares more than eight valence electrons, an "expanded octet": in PCl5 and PF5 the phosphorus atom has a share in ten valence electrons, in SF4 and SF6 sulfur has a share in ten and twelve respectively, and in xenon fluorides such as XeF4 the xenon atom holds twelve. Expanded octets occur only for central atoms from the third period or below, which have energetically accessible d orbitals, and typically only when the central atom is much less electronegative than the atoms bonded to it; such species are called hypervalent compounds. The traditional description invokes sp3d and sp3d2 hybrid orbitals, but ab initio calculations indicate that d-orbital participation is small, and hypervalent bonding is better described by three-center four-electron (3c-4e) interactions; the IUPAC nevertheless recommends drawing expanded-octet structures for functional groups such as sulfones and phosphoranes, in order to avoid a large number of formal charges or partial single bonds. Interhalogen compounds, with general formula XYn (n = 1, 3, 5 or 7, where X is the less electronegative halogen), provide further examples of expanded octets. By contrast, molecules such as NF3, CF4, OF2 and AsH3 obey the octet rule, while species such as PF5, SF4, SF6, I3− and XeF2 do not.
\begin{document} \title{Spectrum Leasing as an Incentive towards Uplink Macrocell and Femtocell Cooperation} \author{\authorblockN{Francesco Pantisano$^{1,2}$, Mehdi Bennis$^{1}$, Walid Saad$^\textbf{3}$ and M{\'e}rouane Debbah$^\textbf{4}$\\} \authorblockA{\small $^\textbf{1}$ Centre for Wireless Communications - CWC, University of Oulu, 90570, Finland, email: \url{{fpantisa,bennis}@ee.oulu.fi}\\ $^\textbf{2}$ Dipartimento di Ingegneria dell'Energia Elettrica e dell'Informazione - DEI, University of Bologna, 40135, Italy, email:\url{[email protected]}\\ $^\textbf{3}$ Electrical and Computer Engineering Department, University of Miami, Coral Gables, FL, USA, email:\url{[email protected]}.\\ $^\textbf{4}$ Alcatel-Lucent Chair in Flexible Radio, SUP{\'E}LEC, Gif-sur-Yvette, France email: \url{[email protected] } } \thanks{ The authors would like to thank the Finnish funding agency for technology and innovation, Elektrobit, Nokia and Nokia Siemens Networks for supporting this work. This work has been performed in the framework of the ICT project ICT-4-248523 BeFEMTO, which is partly funded by the EU.}} \date{} \maketitle \thispagestyle{empty} \begin{abstract} The concept of femtocell access points underlaying existing communication infrastructure has recently emerged as a key technology that can significantly improve the coverage and performance of next-generation wireless networks. In this paper, we propose a framework for macrocell-femtocell cooperation under a closed access policy, in which a femtocell user may act as a relay for macrocell users. In return, each cooperative macrocell user grants the femtocell user a fraction of its superframe. We formulate a coalitional game with macrocell and femtocell users being the players, which can take individual and distributed decisions on whether to cooperate or not, while maximizing a utility function that captures the cooperative gains, in terms of throughput and delay. We show that the network can self-organize into a partition composed of disjoint coalitions which constitutes the \emph{recursive core} of the game representing a key solution concept for coalition formation games in partition form. Simulation results show that the proposed coalition formation algorithm yields significant gains in terms of average rate per macrocell user, reaching up to $239\%$, relative to the non-cooperative case. Moreover, the proposed approach shows an improvement in terms of femtocell users' rate of up to $21\%$ when compared to the traditional closed access policy. \end{abstract} \indent \indent {\bf \small Index terms:} {\small spectrum leasing; femtocell networks; coalitional game theory; Device-to-Device~(D2D); cooperation; recursive core.} \setcounter{page}{1} \section{Introduction} The new shifts in wireless communication paradigms, the need for energy-efficient communication, and the ever-increasing demands for ubiquitous wireless access and higher data rates led to increased research tackling the problem of deploying low cost, low power, femtocells. In fact, the challenges of introducing femtocell access points have attracted an increased attention from both academia as well as standardization bodies such as the Third Generation Partnership Program~(3GPP). It is envisioned that by introducing small cells, i.e., femtocells, serviced by dedicated femtocell access points, high quality indoor coverage can be achieved without any extra investments in network infrastructure, such as adding base stations or deploying advanced antenna systems. 
In addition, due to their ability to connect to existing backhaul networks (e.g., DSL), femtocells are an enabler for offloading traffic from existing wireless systems (e.g., cellular networks) and, subsequently, they can improve both spectrum efficiency and network capacity. As a result, two-tier femtocell networks, which consist of a macrocell network underlaid on femtocell access points~(FAPs), are expected to lie at the heart of emerging wireless systems~\cite{industry,industry2,LTE-A2,zhangg,befemto}. The deployment of femtocells underlaid with an existing macrocell wireless network is accompanied by numerous technical challenges at different levels, such as spectrum allocation, handover, interference management and access policy. From a cross-tier interference standpoint, orthogonal spectrum allocation can entirely eliminate interference but is inefficient in terms of spectrum utilization \cite{Guvenc}. As a result, from a network operator's perspective, deploying co-channel femtocells is of great interest \cite{JA1,HC1}. Through co-channel operations, femtocell access points are able to reuse the spectral resources of the macrocell network and, hence, improve the spectral efficiency, especially when traffic loads are high. However, in co-channel operations, cross-tier (i.e., macro- to femtocell, and vice versa) as well as intra-tier (i.e., inter-femtocell) interference can seriously degrade the system performance \cite{choi,lp}. In order to mitigate both types of interference, a variety of decentralized solutions have been proposed in~\cite{sundeep,reed,garcia,dong,FP1,FP2,yangg,yook2,gustavo} and can be divided into two categories: self-organization and cooperative strategies. The former class includes non-cooperative mechanisms of adaptation to the operating scenario and the interference environment, and deals with dynamic spectrum occupation \cite{DSA}, power control \cite{yook2, Andrews2} and interference cancelation \cite{chan}. Self-organization relies on context awareness capabilities and on the paradigm of learning through trial and error~\cite{mehditao,gustavo,yangg}. Clearly, its key benefit is the scalability and the timeliness of the solutions, since the intelligence lies at the lower levels of the network architecture. An alternative approach is to leverage cooperation for interference management, as done in \cite{FP2,Mischa,DP1}. Nevertheless, both self-organizing and cooperative solutions are associated with a cost or effort which can limit their benefits and, therefore, obtaining an optimal approach is quite challenging. In two-tier femtocell networks, different limitations can be observed at the mobile users' side. On the one hand, macrocell users~(MUEs) are generally bandwidth limited and suffer from low signal to interference plus noise ratios~(SINRs), especially when located at the cell boundary area. This is often reflected by a large number of outage events and a consequent increase in the user-plane delay. On the other hand, femtocell access points~(FAPs) are interference limited; therefore, a cross-tier cooperative scheme has to provide a benefit from both perspectives. In essence, some of the main open issues faced when designing cooperative schemes for femtocell networks are: \begin{itemize} \item How can the cooperation among users belonging to different tiers be modeled? \item What is the price for cooperation and when is cooperation beneficial? \item How to provide incentives to encourage cooperation?
\end{itemize} In femtocell deployment, three main access control policies for femtocells can be recognized \cite{dlr}: closed, open, and hybrid. In the closed access policy, femtocell subscribers constitute the closed subscribers group~(CSG) and are the only ones allowed to connect to the corresponding femtocell. In the open access policy, non-subscribers may also connect to any femtocell, without any restriction. Lastly, in hybrid access policies, non-subscribers may connect to a femtocell only under particular circumstances, depending on the resource availability. One promising approach for cross-tier interference management is to enable open or hybrid access policies at the femtocells, so that the effect of macro- to femtocell interference is reduced, as shown in~\cite{Andrews1,ping} for open access. A general introduction to the issues of coexistence between macrocells and femtocells is provided in \cite{DP1,JA1}, in which the authors present various practical scenarios. Alternatively, the MBS can coordinate or direct the main operations at the FAP by means of information exchange over the X2 interface or through the femtocell gateway \cite{ZGUO}. However, in large networks, the computational effort resulting from this procedure at the MBS can be high, as it requires excessive traffic on the control channel. Hence, there is a need to develop cooperative strategies at the FAP level, as has been proposed recently in~\cite{JINJIN,LEE,bennis1}. A novel form of distributed compress-and-forward scheme with decoder side information is studied in \cite{som}, while further mechanisms of cooperation have been studied in the context of providing a reliable backhaul to the FAPs, such as in~\cite{Osvaldo2}. The main contribution of this paper is to propose, within the context of wireless femtocell networks, a framework for macrocell-femtocell cooperation which makes it possible to alleviate the uplink interference at the FAP and reduce the transmission delay at the macrocell user. Unlike existing network architectures, we propose a model in which macrocell users are granted femtocell access using a device-to-device~(D2D) link \cite{d2d} that enables them to communicate with a femtocell user~(FUE) that, in a second phase, acts as a relay for macrocell traffic. In essence, whenever an MUE and an FUE cooperate, the MUE forwards its own traffic to the FUE which, in turn, combines the MUE's traffic with its own data and relays it to its serving FAP. This proposed concept allows the MUEs to explore nearby femtocells by cooperating with the FUEs, even when the FAPs adopt a closed-access policy and have a limited coverage area. Clearly, this scheme is beneficial for any MUE located at the cell boundary area that is experiencing a low performance at its serving MBS and is unable to connect to a nearby FAP due to the closed-access policy or the limited FAP coverage. Therefore, the rationale behind the proposed approach is to capture an important mutual benefit in co-channel femtocell networks. On the one hand, in such an underlay femtocell network, the availability of the spectral resources depends on the utilization of the macrocell tier, which is performed without cross-tier coordination. As a result, the number of users (both FUEs and guest MUEs) that can be served by the FAP is ultimately limited by the interference produced by the MUEs. On the other hand, the performance of MUEs located at the cell edge is essentially limited by the achievable SINR, and large delays result from numerous outage events.
In this respect, MUEs and FUEs have a mutual benefit in cooperating through the proposed approach in order to improve their performance and overcome their respective limitations. For instance, one of the main benefits of the proposed scheme is the possibility of separating in time the uplink transmissions of cooperating MUEs and FUEs, allowing for cross-tier interference avoidance at the FAP's side. In other words, the MUE exclusively grants the helping FUE a portion of its superframe to transmit in exchange for cooperation \cite{Osvaldo1}. Establishing a D2D link between the MUE and the FUE comes with a number of advantages, notably due to the low transmit range. In detail, when MUEs and FUEs are close, it is possible to leverage D2D communication at high rates. Clearly, the low transmit range also implies a low average transmit power, which allows energy savings at the MUE side. Moreover, direct MUE-FUE connectivity can also lead to the introduction of novel services and applications that require such a link. Therefore, in this paper, we introduce a holistic approach in which we study cross-tier cooperation in a macrocell-femtocell network, accounting for delay, power constraints, and the optimization of the rewarding mechanism. In summary, our key contributions are the following: \begin{itemize} \item We design a framework for macro-femto cooperation in which the end user benefit is quantified in terms of both throughput and delay. \item We tackle the macro-femto coexistence using a cooperative game theoretical approach, by formulating a coalitional game in which MUEs and FUEs are the players. We show that the game is in partition form as it takes into account the external interference between the formed coalitions. \item A distributed coalition formation algorithm is proposed through which MUEs and FUEs self-organize to reach the recursive core of the game. \item Within each coalition, we apply a generalized optimization algorithm so as to maximize the FUE's revenue, by adequately partitioning the available superframe and setting the transmit power for serving the MUEs in the coalition.
\section{System Model}\label{sec:sm} \noindent Consider the \emph{uplink} direction of an Orthogonal Frequency Division Multiple Access (OFDMA) macrocell network (e.g., an LTE-Advanced or WiMAX macrocell) in which $N$ FAPs are deployed. These FAPs are underlaid to the macrocell frequency spectrum, and, within the femtocell tier, neighboring FAPs are allocated over orthogonal frequency subchannels\footnote{We assume that upon startup each femtocell senses the spectrum occupation of the adjacent FAPs and, based on that, it occupies a disjoint set of subchannels, thus, avoiding interference from the FAPs in proximity \cite{garcia,lp,gustavo,yangg}.}. Let $\mathcal{N}= \left \{ 1,\dots,n,\dots,N \right \}$ and $\mathcal{M}= \left \{ 1,\dots,m,\dots,M \right \}$ denote, respectively, the sets of all FAPs and MUEs in the network. Every FAP $n \in \mathcal{N}$ serves $L_n$ FUEs. Let $\mathcal{L}_n= \left \{ 1,\dots,l,\dots,L_n \right \}$ denote the set of FUEs served by an FAP $n \in \mathcal{N}$. The packet generation process at each MUE-MBS link is modeled as an M/D/1 queuing system\footnote{Other queue types, e.g., M/G/1 can be considered, without loss of generality.}, in which packets of constant size are generated using a Poisson arrival process with an average arrival rate of $\lambda_m$, in bits/s. Similarly, the link between FUE $l$ and its belonging FAP is modeled as an M/D/1 queuing system with Poisson arrival rate of $\lambda_l$. In the non cooperative approach, the MBS offers MUE $m$ a link transmission capacity (measured in bits/s) of: \begin{equation}\label{eq:R_m_nc} \mu_m^{NC} = B \log\Big(1+\frac{\left | H_{m,0} \right |^2 P_m}{\sum_{l\in \Phi_{l}^{0}}\left | H_{l,0} \right |^2 P_l +\sigma^{2}}\Big), \end{equation} \noindent where $B$ is the bandwidth of a subchannel, $\left | H_{m,0} \right |^2$ is the channel gain between MUE $m$ and the MBS denoted by subscript $0$, $P_m$ is the power used at MUE $m$, $\Phi_{l}^{0}$ is the set of FUEs operating on the same subchannel as MUE $m$, $\left | H_{l,0} \right |^2$ is the channel gain between FUE $l$ and the MBS, $P_l$ is the power used at FUE $l$ and $\sigma^{2}$ is the noise variance of the symmetric additive white Gaussian noise~(AWGN). The quality of the signal received at the MBS is generally limited by the signal strength, since the MUE-MBS link is often in non line-of-sight~(NLOS) and corrupted by the channel fluctuations and interference from FUEs. In contrast, the femtocell coverage is characterized by higher signal to noise ratio, resulting from the shorter distance between FUE and FAP, and more favorable channel conditions. However, due to the nature of underlay spectrum access, FAPs are limited by the interference from nearby MUEs and by capacity in terms of number of available spectral resources. As a matter of fact, each FAP $n$ provides a generic FUE $l \in \mathcal{L}_n$ with a link transmission capacity of : \begin{equation}\label{eq:R_i_nc} \mu_l^{NC} = B \log\Big(1+\frac{\left | H_{l,n} \right |^2 P_l}{\sum_{m\in \Phi_{m}^{n}}\left | H_{m,n} \right |^2 P_m +\sigma^{2}}\Big), \end{equation} \noindent where $B$ is the bandwidth of one assigned subchannel, $\left | H_{l,n} \right |^2$ is the channel gain between FUE $l \in \mathcal{L}_n$ and its belonging FAP $n$, $\Phi_{m}^{n} \subset \mathcal{M}$ is the set of MUEs operating on the same subchannel as FUE $l\in \mathcal{L}_n$, $\left | H_{m,n} \right |^2$ is the channel gain between MUE $m$ and FAP $n$. 
One of the aims of this work is to evaluate the effects of cross-tier interference, thus, the transmission capacity in (\ref{eq:R_i_nc}) only accounts for the interfering MUEs. However, the proposed solution can still be applied with some modifications whether a central or distributed frequency planning is adopted. The probability of successful transmission can be computed as the probability of maintaining the SINR above a target level $\gamma_m$ and $\gamma_l$, respectively for a MUE or a FUE, and expressed as: \begin{equation}\label{eq:Ps} \begin{matrix} Pt_m=\Pr \left \{\frac{\left | H_{m,0} \right |^2 P_m}{\sum_{l\in \Phi_{l}^{0}}\left | H_{l,0} \right |^2 P_l +\sigma^{2}} \geq \gamma_m\right \}, \\ Pt_l=\Pr \left \{\frac{\left | H_{l,n} \right |^2 P_l}{\sum_{m\in \Phi_{m}^{n}}\left | H_{m,n} \right |^2 P_m +\sigma^{2}} \geq \gamma_l \right \}. \end{matrix} \end{equation} To reduce the outage in MUE-MBS transmissions, a Hybrid Automatic-Repeat-ReQuest protocol with Chase Combining~(HARQ-CC) is employed at the medium access control layer \cite{parkvall}. In this scheme, erroneous packets at the destination are preserved so that they can be soft-combined with retransmitted ones. In general, this procedure, carried out at the MUE side, is highly costly since the MUE has to spend additional power for packet retransmission. Consequently, the effective input traffic $\tilde{\lambda}_m $ from an MUE $m$, accounting for a maximum of $D$ retransmissions is: \begin{equation}\label{eq:lambda_m} \tilde{\lambda}_m=\lambda_m \sum_{d=1}^{D}Pt_m(1-Pt_m)^{d-1}. \end{equation} \noindent We consider M/D/1 queueing delay for the MUEs $m \in \mathcal{M}$, and thus the average waiting time can be expressed by Little`s law \cite{little} as: \begin{equation}\label{eq:delay_m} D_m^{NC}= \frac{\tilde{\lambda}_m}{2\mu_m^{NC}(\mu_m^{NC} - \tilde{\lambda}_m)}. \end{equation} \noindent Note that once a transmission on a MUE-MBS link drops due to an outage event, it is reiterated up to $D$ times (otherwise dropped), and the increased congestion represented in (\ref{eq:lambda_m}) produces an average higher delay at the end user, as expressed in (\ref{eq:delay_m}). \section{Femtocell Cooperation as a Coalitional Game in Partition Form}\label{sec:GF} \noindent In this section, we formulate the problem of cooperation between FUEs and MUEs as a coalitional game in partition form, whose solution is the concept of the recursive core. The aim of the proposed cooperative approach is to minimize the delay of the MUE transmissions through FUE assisted traffic relay, considering bandwidth exchange as a mechanism of reimbursement for the cooperating FUEs. In existing wireless networks, FUEs and MUEs are typically scheduled independently, regardless of the access policy used at the FAP side. However, the objectives of the FAPs and the MUEs are intertwined from different viewpoints. At the FAP side, high interference level can be due to MUEs operating over the same subchannel which, consequently limits the achievable rates. At the MUE side, poor signal strength reception may result in a high number of retransmissions and, hence, higher delays. To overcome this, we propose that upon retransmissions, an MUE might deliver its packets to the core network by means of FUE acting as relay terminal. In this case, at each relay FUE, the incoming packets are stored and transmitted in a First-In First-Out (FIFO) fashion on the access line through its own FAP. 
We model each relay FUE as a M/D/1 queue and use the Kleinrock independence approximation \cite{KL00}. For the relay FUE, cooperation incurs significant costs in terms of delay and spectral resources, since the FUE relays the combined traffic $\tilde{\lambda}_l$ over its originally assigned subchannels. Therefore, it is reasonable to assume that FUEs will willingly bear the cooperation cost only upon a reimbursement from the serviced MUEs. We propose that, upon cooperation, the MUE autonomously delegates a fraction $\alpha$ ($0< \alpha \leq 1$) of its own superframe to the serving FUE $l$. At the relay FUE $l$, the portion $\alpha$ is further decomposed into two subslots according to a parameter $0 < \beta_l \leq 1$. The first subslot $\alpha\beta_l$ is dedicated to relay MUE`s traffic. The second subslot of duration $\alpha(1-\beta_l)$ represents a reward for the FUE granted by the serviced MUE, and it is used by the FUE for transmitting its own traffic. This method is known in the literature as spectrum leasing \cite{Osvaldo1} or bandwidth exchange \cite{manrelay} and it represents a natural choice for such kind of incentive mechanisms. The above approach is applied to MUEs with one assigned subchannel, nevertheless, it could be extended to the case of multiple assigned subchannels with some modifications in the negotiating phase. In that case, the relay FUE should be initially informed on the subchannels the MUE can potentially lease. Then, the FUE would communicate its preference, according to the highest gain it can achieve, for a given $\alpha$ and $\beta_l$. It is worth emphasizing that the proposed solution could still be applied in conjunction with an open, hybrid or closed access policy. Moreover, MUE transmissions would not require additional spectral resources, as the entire proposed scheme operates without changing the original spectrum allocation in both the femtocell and the macrocell tiers (since it occurs on a D2D link). Figure~\ref{fig:scenario} illustrates the considered scenario compared to the traditional transmission paradigm. \begin{figure} \caption{A concept model of the proposed solution compared to the traditional non-cooperative approach.} \label{fig:scenario} \end{figure} Note that, this concept solution allows to align and separate in time the transmissions allowing to avoid interference at the FAP from the MUEs within the coalition. In order to do that, we assume that operations are synchronized at the system level through IP based synchronization techniques such as IEEE 1588 \cite{synch} in standard or enhanced form. In order to increase their throughput and reduce MUE-to-FAP interference, the FAPs have an incentive to cooperate and relay the MUEs` traffic. In this respect, FUEs may decide to service a group of MUEs, and thus form a coalition $S_l$ in which transmissions from FUE $l$ and MUEs within the same coalition are separated in time. The proposed cooperation scheme can accommodate any relaying scheme such as decode-and-forward or compress-and-forward schemes. In this work, we use a decode-and-forward relay scheme, assuming that a packet is successfully received if the respective SINR satisfies the conditions in (\ref{eq:Ps}). Finally, the achievable service rates for MUEs and FUEs in the cooperative approach become: \begin{equation}\label{eq:R_c} \left\{\begin{matrix} & \mu_m^{C}(\alpha,\beta_l) &=& \min \{(1-\alpha)\mu_m^{R},\: \alpha\beta_l\mu_l^{R}\}, \\ & \mu_l^{C}(\alpha,\beta_l) &=& \alpha(1-\beta_l)\mu_l^{R}, \end{matrix}\right. 
\end{equation} \noindent with, \begin{equation}\label{eq:R_m_r} \mu_m^{R} = \log\Big(1+\frac{\left | H_{m,l} \right |^2 P_m}{ \sigma^{2}}\Big), \end{equation} \begin{equation}\label{eq:R_i_r} \mu_l^{R} = \log\Big(1+\frac{\left | H_{l,n} \right |^2 P_l}{\sum_{m\in \Phi_{m}^{n}\setminus S_l}\left| H_{m,n} \right |^2 P_m +\sigma^{2}}\Big). \end{equation} \noindent where $\left|H_{m,l}\right |^2$ denotes the channel gain of the relay link between MUE $m \in S_l$ and FUE $l$. Note that the factor $(1-\alpha)$ in the first term of (\ref{eq:R_c}) is due to the fraction of the superframe occupied by the D2D transmission, while the second factor $\alpha\beta_l$ accounts for the fraction occupied by the forward transmission by the FUE. Since MUEs are originally assigned orthogonal subchannels, the first hop of the relay transmission is not affected by interference. Moreover, note that by separating the transmissions from the MUE and the FUE within the superframe, the FUE forward transmissions are affected only by interference from non-cooperative MUEs outside the coalition. At this point, since the FUE may have to transmit independent packets of its own, the input traffic generation (or the packet arrival at the queue of the FAP) has to account for the packets generated at the FUE and at the MUEs for which the FUE is relaying. Consequently, the effective traffic $\tilde{\lambda}_{l} $ generated by FUE $l$, accounting for the retransmissions, becomes: \begin{equation}\label{eq:lambda_f} \tilde{\lambda}_{l}= \big(\lambda_l+\sum_{m \in S_l}\tilde{\lambda}_m\big) \sum_{d=1}^{D} Pt_l(1-Pt_l)^{d-1}, \end{equation} \noindent where $D$ is the maximum number of retransmissions before the packet is dropped, $Pt_l$ and $\tilde{\lambda}_m$ are computed as in (\ref{eq:Ps}), (\ref{eq:lambda_m}) considering that the SINR this time refers to the FUE-FAP link and using the Kleinrock approximation to combine traffic arrival rates from queues in sequence. We model every D2D link as an M/D/1 queue and investigate the average delay incurred per serviced MUE. For a given MUE $m$ served by FUE $l$, we express the average delay as: \begin{equation}\label{eq:delay_c} D_m^{R}=\frac{\tilde{\lambda}_m}{2\mu_{m}^{R}(\mu_m^{R} - \tilde{\lambda}_m)}. \end{equation} \noindent It is important to underline that, to guarantee the stability of the queues, for any MUE $m$ serviced by an FUE in the network, the following condition must hold: \begin{equation}\label{eq:cond} \tilde{\lambda}_m <\mu_{m}^{R}. \end{equation} In the event that this condition is violated, the system is considered unstable and the delay is considered infinite. In this regard, the analysis presented in the remainder of this paper will take into account this condition and its impact on the coalition formation process (as seen later, a coalition where $\tilde{\lambda}_m \geq \mu_{m}^{R}$ will never form). Having considered this, we now define $\tilde{\lambda}_r = \sum_{m \in S_l}\tilde{\lambda}_m \sum_{d=1}^{D} Pt_l(1-Pt_l)^{d-1}$ and $D_l^{C}= \frac{\tilde{\lambda}_r}{2\mu_{l}^{C}(\mu_{l}^{C}-\tilde{\lambda}_r)}$ as the delay at the FUE for transmitting the traffic of the MUEs in the coalition. Finally, we can compute the average delay for an MUE as a sum over the MUE-FUE and FUE-FAP hops, as: \begin{equation}\label{eq:delay_c2} D_m^{C}= D_m^{R} + D_l^{C}.
\end{equation} We assume that the relay FUE performs half-duplex operations, i.e., they first receive the MUE`s packets in a transmission window wide $(1-\alpha)$ in the subchannel originally utilized at the MUE. Successively, each FUE forwards the MUE`s packets in the next transmission window wide ($\alpha\beta_l$) in a FIFO policy. We further foresee that, once the packets are forwarded towards the core network, they can be traced back to the original source by means of a small packet header which include the mobile user ID. \subsection{Coalitional Game Concepts} Coalitional games involve a set of players, who seek to form cooperative groups, i.e., coalitions, in order to improve their performance or gains. A coalitional game is defined by a set of players, i.e., the decision makers seeking to cooperate and a \emph{coalitional value} (which is either a function or a set of vectors) which quantifies the worth of a coalition in a game, i.e., the overall benefit achieved by the coalition. Classical coalitional problems are typically modeled in the characteristic form, in which the utility of a coalition is not affected by the formation of other distinct coalitions \cite{CF00,Game_theory2,GT00}. In contrast, for coalitional games in \emph{partition form}~\cite{GT00}, the value of any coalition strongly depends on how the players \emph{outside} $S_l$ have organized themselves, i.e., which coalitions they formed. Although coalitional games in partition form are inherently complex to solve, they capture realistic inter-coalition effects that arise in many problems, notably in wireless and communication networks. In this context, finding an optimal solution for games in partition form is a challenging task and is currently a topic of high interest in game theory \cite{CF00,Mac1,WS00,MAC,GT00} (and references therein). In this section, we mathematically model the problem of macrocell-femtocell cooperation as a coalitional game with the FUEs and MUEs being the players. In particular, having defined $\Psi = \mathcal{M} \cup \: \big\{\bigcup_{n \in \mathcal{N}} \mathcal{L}_n\big\}$ as the set of the players in the proposed game, the rate and the delay achieved by the members of any coalition $S_l \subseteq \Psi$ that forms in the network is affected by the cooperative behavior of the users outside $S_l$, i.e., FUEs and MUEs in $\Psi \setminus S_l$, and thus, we remark the following: \begin{remark} The proposed game $(\Psi,U)$ is in partition form. \end{remark} This property stems mainly from two reasons. First, under the non cooperative approach, MUEs fully utilize the assigned superframe and, hence, transmit for its whole duration. In consequence, non-cooperative FUEs and MUEs allocated over the same subchannel can collide for the whole transmission duration. In contrast, when an MUE and an FUE belong to the same coalition, the MUE transmits for a fraction $(1-\alpha)$, while the remaining fraction is granted at the FUE in exchange for relaying, hence avoiding collisions between coalitional members. Second, cooperating MUEs transmit over a D2D link which is locally established and has a low transmission range. Therefore, when cooperating, the transmit power at the MUEs are sensibly lower when compared to the non-cooperative scheme and the consequent level of interference suffered at the FAPs outside the coalition is generally lower. As a result, the performance of a coalition depends on the partition of the network $\Pi_{\Psi}$ ($\Pi_{\Psi}$ is a partition of $\Psi$). 
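The spectrum-leasing split just described, i.e., the cooperative service rates in (\ref{eq:R_c}) and the two-hop delay in (\ref{eq:delay_c2}), can be sketched as follows (Python; the rates and arrival rates are treated as plain inputs, and the helper names are ours):

```python
def md1_waiting_time(lam, mu):
    """M/D/1 waiting time; infinite when the stability condition lam < mu fails."""
    return float("inf") if lam >= mu else lam / (2.0 * mu * (mu - lam))

def cooperative_rates(alpha, beta_l, mu_m_R, mu_l_R):
    """Service rates of Eq. (R_c): the MUE is limited by the slower of the
    D2D hop (fraction 1 - alpha of the superframe) and the FUE forward hop
    (fraction alpha * beta_l); the FUE keeps alpha * (1 - beta_l) for its
    own traffic as a reward for relaying."""
    mu_m_C = min((1.0 - alpha) * mu_m_R, alpha * beta_l * mu_l_R)
    mu_l_C = alpha * (1.0 - beta_l) * mu_l_R
    return mu_m_C, mu_l_C

def mue_two_hop_delay(lam_m_eff, lam_relayed, mu_m_R, mu_forward):
    """Total MUE delay of Eq. (delay_c2): waiting time on the MUE->FUE hop
    plus the waiting time of the relayed traffic at the FUE, with the two
    queues treated as independent (Kleinrock approximation)."""
    return md1_waiting_time(lam_m_eff, mu_m_R) + md1_waiting_time(lam_relayed, mu_forward)
```

In this sketch the rates $\mu_m^{R}$ and $\mu_l^{R}$ are fixed inputs; in the actual game they depend on the interference produced by the users outside the coalition, i.e., on the current network partition.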
We will henceforth include this dependence in the definition of the achievable rate in (\ref{eq:R_c}) : $\mu_i^{C}(\alpha,\beta_l,\Pi_{\Psi})$, where $i\in\{m,l\}$ (i.e., MUE or FUE). Given this property, one suitable framework for modeling the macrocell-femtocell cooperation is that of a coalitional game in \emph{partition form} with \emph{non transferable utility} which is defined as follows~\cite{Game_theory2}: \begin{definition}\label{def:partform} A coalitional game in \emph{partition form} with \emph{non transferable} utility~(NTU) is defined by a pair $(\Psi,U)$ where $\Psi$ is the set of players (i.e., MUEs and FUEs), and $U$ is a mapping such that for every coalition $S_l \subseteq \Psi$, $U(S_l,\Pi_{\Psi})$ is a closed convex subset of $\mathbb{R}^{\left | S_l \right |}$ that contains the payoff vectors that players in $S_l$ can achieve. \end{definition} As discussed in the previous section, it is clear that MUEs and FUEs have a strong incentive to cooperate to improve their performance using advanced techniques such as relaying and spectrum leasing. Since MUEs and FUEs exhibit a tradeoff between the achievable throughput and the transmission delay, we use a suitable metric to quantify the benefit of cooperation defined as \emph{power} of the network. Indeed, the power is defined as the ratio of maximum achievable throughput and delay (or a power of the delay)~\cite{KL00,PW01,PW02}. Thus, given a coalition $S_l$, composed by a set of $|S_l|-1$ MUEs and a serving relay FUE $l$, we define a mapping function $U(S_l,\Pi_\Psi)$ as: \begin{equation}\label{eq:payoff} U(S_l,\Pi_{\Psi})=\Big\{\boldsymbol{x}\in \mathbb{R}^{\left | S_l \right |} \:|\: x_i(S_l,\Pi_{\Psi})= \frac{\mu_i^{C}(\alpha,\beta_l,\Pi_{\Psi})^{\delta}}{D_{i}^{C\,(1-\delta)}}, \: \forall i \in S_l \Big\}, \end{equation} \noindent where $\delta \in(0,1)$ is a transmission capacity-delay tradeoff parameter to model the service tolerance to the delay. The set $U(S_l,\Pi_{\Psi})$ is a \emph{singleton set} and, hence, closed and convex. Note that, the player's payoff denoted by $x_i(S_l,\Pi_{\Psi})$ directly refers to a ratio between the achievable throughput and the average delay for player $i$ in coalition $S_l$ and quantifies the benefit of \emph{being a member} of the coalition. In consequence, the game $(\Psi,U)$ is an NTU game in partition form and, within each coalition, the utility of the players is univocally assigned. \subsection{Recursive core}\label{sec:Rec} In order to solve the proposed coalition formation game in partition form, we will use the concept of a \emph{recursive core} as introduced in \cite{K01} and further investigated in \cite{K02,Lazlo2009,Lazlo2}. The recursive core is one of the key solution concepts for coalitional games that have dependence on externalities, i.e., in partition form. Due to the challenging aspect of NTU games in partition form, as discussed in \cite{K02,Lazlo2009,Lazlo2} the recursive core is often defined for games with transferable utility where the benefit of a coalition is captured by a real function rather than a mapping. 
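For reference, the individual payoff in (\ref{eq:payoff}) amounts to the following computation (a minimal Python sketch; purely illustrative):

```python
def power_payoff(throughput, delay, delta=0.5):
    """'Power' metric of Eq. (payoff): throughput**delta / delay**(1 - delta).
    A small delta emphasises delay-sensitive traffic, a large delta
    emphasises the achievable rate."""
    if throughput <= 0.0 or delay <= 0.0 or delay == float("inf"):
        return 0.0
    return throughput ** delta / delay ** (1.0 - delta)
```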
By exploiting the fact that the set in (\ref{eq:payoff}) is, for the proposed game, a \emph{singleton}, we can define an adjunct coalitional game $(\Psi,v)$ in which we use, for any coalition $S_l$, the following function over the real line (i.e., similar to games with transferable utility), which represents the sum of the users' payoffs: \begin{equation}\label{eq:value} v(S_l,\Pi_{\Psi})= \left\{\begin{matrix} &\sum_{i=1}^{\left | S_l \right |} x_i(S_l,\Pi_{\Psi}), & \text{if } |S_l|>1 \text{ and } \alpha>0, \\ &0, & \text{otherwise}, \end{matrix}\right. \end{equation} as the \emph{value of the game}. Then, for every coalition achieving (\ref{eq:value}), the individual payoffs of the users are given uniquely by the mapping in (\ref{eq:payoff}). By doing so, we are able to exploit the recursive core as a solution concept for the original game $(\Psi,U)$ by solving the game $(\Psi,v)$ while \emph{restricting} the transfer of payoffs to be according to the unique mapping in (\ref{eq:payoff}). Further, given two payoff vectors $\boldsymbol{x},\boldsymbol{y} \in\mathbb{R}^{|S_l|}$, we let $\boldsymbol{x}>_{S_l} \boldsymbol{y}$ if $x_i \geq y_i$ for all $i \in S_l$ and $x_j >y_j$ for at least one $j \in S_l$. We also define an \emph{outcome} as a couple ($\textbf{x},\Pi_{\Psi}$), where $\textbf{x}$ is a payoff vector resulting from a partition $\Pi_{\Psi}$. Further, let $\Omega(\Psi,v)$ denote the set of all possible outcomes of $\Psi$. Essentially, the recursive core is a natural generalization of the well-known \emph{core solution} for games in characteristic form to games with externalities, i.e., in partition form~\cite[Lemma~10]{K01}. In fact, when applied to a game in characteristic function form, the recursive core coincides with the original characteristic form core~\cite{CF00}. The recursive core is a suitable outcome of a coalition formation process that takes into account externalities across coalitions, which, in the considered game, are represented by the effects of mutual interference between coalitions. Before delving into the definition of the recursive core, we need to introduce the concept of a \emph{residual game}: \begin{definition} A \emph{residual game} ($\mathcal{R},v$) is a coalitional game in partition form defined on a set of players $\mathcal{R}$, after the players in $\Psi \setminus \mathcal{R}$ have already organized themselves in a certain partition. The players outside $\mathcal{R}$ are called \emph{deviators}, while the players in $\mathcal{R}$ are called \emph{residuals}. \end{definition} \noindent Consider a coalitional game $(\Psi,v)$ and let $S_l$ be a certain coalition of deviators. Then, let $\mathcal{R}=\Psi \setminus S_l$ denote the set of residual players. The residual game $(\mathcal{R},v)$ is defined as a game in partition form over the set $\mathcal{R}$. Clearly, a residual game is still in partition form and it can be solved as an independent game, regardless of how it was generated, as discussed in \cite{K01}. To better present this concept, we provide an intuitive illustration. When some deviators reject an existing partition and decide to reorganize themselves into a different partition, their decisions will, in general, affect the payoff of the residual players. As a result, the residual players form a new game that is part of the original game (e.g., the game over the whole set $\Psi$), but with a certain part of the partition (composed of the deviators) already fixed.
In consequence, one of the main attractive properties of a residual game is its consistency as well as the possibility of dividing any coalitional game in partition form into a number of residual games which, in essence, are easier to solve. In fact, any game in partition form can be seen as a collection of residual games, and each one of those can be solved as if it was the original one. The solution of a residual game is known as \emph{the residual core} which is defined as follows: \begin{definition} The \emph{residual core} of a residual game $(\mathcal{R},v)$ is a set of possible game outcomes, i.e., partitions of $\mathcal{R}$ that can be formed. \end{definition} One can see that given any coalitional game $(\Psi,v)$, residual games are smaller than the original one and therefore computationally easier to analyze. Given any coalitional game $(\Psi,v)$, the recursive core solution can be found by recursively playing residual games, which, in fact, yields the following definition as per \cite[Definition~2]{K01}: \begin{definition}\label{def:Reccore} The \emph{recursive core} of a coalitional game $(\Psi,v)$ is inductively defined in four main steps: \begin{enumerate} \item \emph{Trivial Partition}. Let ($\Psi,v$) be a coalitional game. The recursive core of a coalitional game where $\Psi\!\!=\!\!\mathcal{M} \cup \mathcal{L}_n$ is composed by the only outcome with the trivial partition composed by the single player $i$: $C(\left \{ i \right \}, v )$ = $(v({i}),i) $. \item \emph{Inductive Assumption}. Proceeding recursively, consider a larger network and suppose the recursive core $C(\mathcal{R},v)$ for each game with at most $|\Psi|-1 $ players has been defined. Now, we define the assumption $A(\mathcal{R},v)$ about the game $(\mathcal{R},v)$ as follows: $A(\mathcal{R},v)=C(\mathcal{R},v)$, if $C(\mathcal{R},v) \neq \emptyset $ ; $A(\mathcal{R},v)=\Omega(\mathcal{R},v)$, otherwise. \item \emph{Dominance.} An outcome $(\textbf{x},\Pi_{\Psi})$ is \emph{dominated} via a coalition $S_l$ if for at least one \\ $(\textbf{y}_{\Psi\setminus S_l},\Pi_{\Psi \setminus S_l}) \in A(\Psi \setminus S_l,v)$ there exists an outcome $((\textbf{y}_{S_l}, \textbf{y}_{\Psi \setminus S_l}), \Pi_{S_l} \cup \Pi_{\Psi \setminus S_l}) \in \Omega(\Psi,v)$ such that $(\textbf{y}_{S_l}, \textbf{y}_{\Psi \setminus S_l})>_{S_l} \textbf{x}$. \item \emph{Core Generation.} The recursive core of a game of $\left | \Psi \right |$ players is the set of undominated outcomes and we denote it by $C(\Psi,v)$. \end{enumerate} \end{definition} Note that, in Definition~\ref{def:Reccore}, the concept of dominance in step 3) inherently captures the fact that the value of a coalition depends on the belonging partition. Hence, it can be expressed in the following way: given a current partition $\Pi_{\Psi}$ and the respective payoff vector $\textbf{x}$, an undominated coalition $S_l$ represents a deviation from $\Pi_{\Psi}$ in such a way that the resulting outcome $((\textbf{y}_{S_l}, \textbf{y}_{\Psi \setminus S_l}), \Pi_{S_l} \cup \Pi_{\Psi \setminus S_l})$ is more rewarding for the players of $S_l$, compared to $\textbf{x}$. Since a partition uniquely determines the payoffs of all the players in the game, the recursive core can be seen as a set of partitions that allow the players to organize in a way that provides them with the highest payoff. It is important to underline that the recursive core is achieved by verifying relevant properties of \emph{rationality, well-definition and efficiency} as discussed in~\cite{K01}. 
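Before turning to these properties in detail, the dominance test in step 3) can be made concrete with a simplified sketch (Python). The sketch deliberately ignores the full residual-game recursion, assuming that the players outside a deviating coalition remain as singletons, and takes the payoff function as an external input, so it is only an approximation of the actual test:

```python
from itertools import combinations

def deviations(players):
    """All non-empty subsets of players that could jointly deviate."""
    for r in range(1, len(players) + 1):
        yield from (frozenset(c) for c in combinations(players, r))

def is_undominated(partition, payoff, players):
    """Simplified dominance check: keep `partition` only if no coalition S
    can deviate and make every member of S at least as well off, with at
    least one member strictly better off. `payoff(i, partition)` returns
    player i's payoff under the given partition (a list of sets)."""
    current = {i: payoff(i, partition) for i in players}
    for S in deviations(players):
        # Hypothetical partition after S deviates; residual players are
        # left as singletons instead of re-solving the residual game.
        new_partition = [set(S)] + [{j} for j in players if j not in S]
        weakly_better = all(payoff(i, new_partition) >= current[i] for i in S)
        strictly_better = any(payoff(i, new_partition) > current[i] for i in S)
        if weakly_better and strictly_better:
            return False  # dominated via S
    return True
```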
In detail, by \emph{rationality} it is meant that players never choose an inferior (i.e., dominated) strategy and, therefore, always pursue a profitable strategy. The recursive core is also \emph{well-defined} because, when it exists, its solution is unique. Furthermore, \emph{efficiency} is a consequence of the fact that there is no preferred partition within the recursive core and, thus, all the included partitions are equivalent in terms of individual payoff. Given these properties, once a partition in the recursive core takes place, the players have no incentive to abandon it, because any deviation would be detrimental. As a result, a partition in the recursive core is also \emph{stable}, since it ensures the highest possible payoff for each of the players, none of whom has an incentive to leave it. Similar to many game-theoretic concepts such as the core or the Nash equilibrium, the existence of a recursive core for a coalitional game is a key issue. In \cite{K01}, the author shows that the existence of the recursive core requires \emph{at least one residual core} (and not all of them) to be nonempty. In particular, this means that at least a subset of the players in the network must have defined a preference on how to organize themselves, i.e., how to partition the network. Moreover, an empty residual core reflects a case in which the players of the corresponding residual game do not identify any preferred network partition, or, in our proposed cooperation scenario, can equivalently choose between cooperating or not. Therefore, for the proposed coalitional game, the emptiness of a residual core does not occur, which can be justified as follows. As per Definition~\ref{def:Reccore}, the recursive core is evaluated through a sequence of residual games over subsets of players (i.e., MUEs and FUEs, in our case) in the network. When a given residual core is empty, it is still possible to solve a larger game which contains it as a residual game, in a nested fashion. Hence, the existence of the recursive core is in fact guaranteed as long as one can find at least one residual core that is nonempty. Thus, the recursive core is a solution concept that exists for any game in partition form, unless all the residual cores are empty. In practice, for the proposed coalitional game, the case in which all residual cores (and, thus, the recursive core) are empty is unlikely to emerge. As a matter of fact, this would represent a network in which any partition of mobile users is \emph{equally likely} to form. In a practical wireless network, this would imply that the MUEs and FUEs are indifferent (i.e., achieve the same payoff) between states in which they are actually suppressing interference and relaying their transmissions (e.g., cooperatively using a D2D link with an FUE) and states in which they are actually suffering from this interference and transmitting to the MBS. In a nutshell, for the proposed coalitional game, one can use the concept of residual cores in order to find a partition in the recursive core, i.e., a stable and efficient partition, as will be further described in the next section. \subsection{Distributed implementation of the Recursive core} \noindent Once a coalition $S_l$ has formed, the FUE $l$ optimizes its own payoff by deciding upon $\beta_l$ and the transmit power. At the FUE's side, relaying traffic for a set of MUEs incurs a cost that must be taken into account by the FUE before making any cooperation decision.
In this paper, we consider a cost in terms of the transmit power that each FUE spends to transmit for MUEs within the same coalition. Namely, an FUE spends $\beta_l P_l^{(R)}$ to relay the MUEs' traffic and $(1-\beta_l) P_l^{(T)}$ for its own transmissions, while the overall transmit power is limited by $P_{max}$ as: \begin{align}\label{eq:con} \beta_l P_l^{(R)} + (1-\beta_l)P_l^{(T)}<P_{max}. \end{align} Leased spectrum and transmit power can be finely tuned in order to maximize the payoff of each member in the coalition. Accordingly, after a coalition has formed, given a value of $\alpha$, FUE $l \in \mathcal{L}_n$ jointly optimizes the transmit power and the parameter $\beta_l$, by solving the following problem: \begin{align}\label{eq:opt} &\max_{\beta_l, P_{l}} \:\:\: x_i(S_l,\Pi_{\mathcal{N}})\\ &\text{s.t.} \:\:\: 0<\alpha,\beta_l \leq 1 ; \: \beta_l P_l^{(R)} + (1-\beta_l) P_l^{(T)}<P_{max}. \end{align} \noindent Mainly, the FUE is fed back with the estimated aggregate interference from the MUEs $m$ outside the coalition (and included in $\Phi_{m}^{n}$), which can be either measured by its serving FAP or estimated by considering the MUEs in its proximity \cite{zhangg}. The problem in (\ref{eq:opt}) can be solved using well-known optimization techniques such as those in \cite{BO00}. \begin{algorithm}[!t] \footnotesize \flushleft \begin{minipage}{\linewidth} \caption{{\small Distributed coalition formation algorithm for uplink interference management in two-tier femtocell networks}} \label{ALG:recursivecore} \begin{algorithmic} \STATE{\textbf{Initial State:} The network is partitioned by $\Pi_{\Psi}$ = $\mathcal{M} \cup \mathcal{L}_n$ with non-cooperative MUEs and FUEs.} \STATE{\textbf{repeat}} \STATE\hspace{\algorithmicindent}{\emph{Phase I - Interferer Discovery}} \STATE\hspace{\algorithmicindent}{a) Through RSSI measurements, each FUE detects nearby MUEs, active on the same subchannel, and vice versa.} \STATE\hspace{\algorithmicindent}{b) For each of the occupied subchannels, each FUE sorts the interfering MUEs from the strongest to the weakest.} \STATE\hspace{\algorithmicindent}{c) Based on the measured RSSIs, each MUE $m$ in $\Psi$ sorts the sensed FUEs from the supposedly closest to the farthest.} \STATE\hspace{\algorithmicindent}{\emph{Phase II - Coalition Formation}} \STATE\hspace{\algorithmicindent}\textbf{for all} mutually interfering MUEs and FUEs in $\Psi$ \textbf{do} \STATE\hspace{\algorithmicindent}{a) Each MUE and FUE sequentially engages in pairwise negotiations with the strongest discovered interferer, to identify} \STATE\hspace*{2.5em}{potential coalition partners.} \STATE\hspace{\algorithmicindent}{b) Each MUE and FUE in $\Psi$ estimates the achieved rate and delay and computes its utility $x_i({S},\Pi_{\Psi})$ as in (\ref{eq:payoff}).} \STATE\hspace{\algorithmicindent}{c) FUEs and MUEs engage in a coalition formation which ensures the maximum payoff.} \STATE\hspace{\algorithmicindent}\textbf{end for} \STATE{\textbf{until}}{ any further growth of the coalition does not result in a payoff enhancement of at least one player \textbf{or} decreases the other coalitional members' payoffs.} \STATE{\textbf{Outcome of this phase:} Convergence to a stable partition in the recursive core.} \STATE\hspace{\algorithmicindent}{\emph{Phase III - Spectrum Leasing and Cooperative Transmission}} \STATE\hspace{\algorithmicindent}{a) Within each coalition, the MUEs notify the serving MBS, and connect to the serving FUE through the D2D operations} \STATE\hspace*{2.5em}{described in
Section \ref{sec:GF}.} \STATE\hspace{\algorithmicindent}{b) Each FUE $l \in S_l$ optimizes its payoff by balancing the transmit power and the transmission window $\beta_l$ by solving} \STATE\hspace*{2.5em}{the optimization problem in (\ref{eq:opt}). } \end{algorithmic} \end{minipage} \end{algorithm} To reach a partition in the recursive core, the players in $\Psi$ use Algorithm~\ref{ALG:recursivecore}. This algorithm is composed mainly of three phases: interferer discovery, recursive core coalition formation, and coalition-level cooperative transmission. Initially, the network is partitioned into $ \left |\Psi\right |$ singleton coalitions (i.e., non-cooperating mobile users). The MBS periodically requests Received Signal Strength Indicator~(RSSI) measurements from its MUEs to identify the presence of femtocells which might cooperatively provide higher throughput and lower delays through D2D communication. A similar measurement campaign is carried out at the FUE, as requested by the respective FAP. Successively, for each of the potential coalitional partners, the potential payoffs in (\ref{eq:payoff}) are computed, considering the mechanisms of spectrum leasing captured in (\ref{eq:R_c}). Ultimately, each MUE or FUE sends a request for cooperation to the counterpart which ensures the highest payoff. If both MUEs and FUEs mutually approve the cooperation request, they form a coalition, set up a D2D connection, and the MUE acknowledges its MBS about the established connection. Even during the D2D transmission, the MUEs still maintain a connection to the radio resource control of their original MBS. Being limited by interference, the most eligible partners for FUEs are the dominant interfering MUEs, while, conversely, for an MUE the highest utilities are granted by FUEs in the vicinity or experiencing good channel gains. The recursive core is reached by considering that only the payoff-maximizing coalitions are formed. Clearly, this algorithm is distributed since the FUEs and MUEs make their individual decisions to join or leave a coalition, while ultimately reaching a stable partition, i.e., a partition where players have no incentive to leave their coalition. Those stable coalitions are in the recursive core at the end of the second stage of the algorithm. Finally, once the coalitions have formed, the members of each coalition proceed to construct a D2D link and perform the operations described in Section \ref{sec:GF}. As a result, intra-coalition uplink interference at the respective FAPs is suppressed and the MUEs achieve lower delays. The proposed distributed solution significantly reduces the intrinsic complexity of the coalition formation problem, as it leverages the formulation of reduced games among mutual interferers within transmit range, which reduces the search space. Moreover, as per step $(b)$ in Phase I, since the dominant interferers are the most eligible to join the FUE's coalition, they are sorted by descending values of the estimated interference they produce and processed accordingly, which further reduces the number of algorithm iterations. With regard to the convergence of Algorithm~\ref{ALG:recursivecore}, note that the limitation on the cost for cooperation as per (\ref{eq:con}) limits the number of potential coalitional partners and, thus, the number of combinations that the algorithm has to evaluate. Moreover, by choosing the coalitional partners from an ordered list of interfering MUEs, the resulting FUE's payoff is non-decreasing after each iteration of the algorithm.
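As an illustration of how the per-coalition optimization in (\ref{eq:opt}) can be carried out once a coalition is fixed, the following sketch performs a plain grid search over $\beta_l$ and the two power levels under the constraint in (\ref{eq:con}); the payoff function is supplied externally, and this stand-in does not replace the standard optimization techniques of \cite{BO00}:

```python
import itertools

def optimize_beta_and_power(payoff_fn, p_max, grid=50):
    """Grid search for Eq. (opt): choose beta_l and the relay/own transmit
    powers maximising the FUE's payoff subject to the power budget of
    Eq. (con): beta_l*P_R + (1 - beta_l)*P_T < P_max.
    `payoff_fn(beta_l, p_relay, p_own)` is assumed to return the payoff."""
    best_args, best_val = None, -float("inf")
    betas = [i / grid for i in range(1, grid + 1)]        # 0 < beta_l <= 1
    powers = [p_max * i / grid for i in range(grid + 1)]
    for beta, p_r, p_t in itertools.product(betas, powers, powers):
        if beta * p_r + (1.0 - beta) * p_t >= p_max:
            continue                                      # budget violated
        val = payoff_fn(beta, p_r, p_t)
        if val > best_val:
            best_args, best_val = (beta, p_r, p_t), val
    return best_args, best_val
```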
Finally, Algorithm~\ref{ALG:recursivecore} terminates at the first iteration in which an FUE cannot further increase its payoff without being detrimental to the other coalition partners. By cooperatively resolving the strongest interference, the FUEs achieve the maximum achievable payoff and, therefore, have no incentive to break away from their coalitions, since this would lead to lower payoffs. Thus, the formed coalitions represent a stable network partition which lies in the recursive core. The proposed Algorithm~\ref{ALG:recursivecore} converges to a stable partition which is undominated as per Definition~\ref{def:Reccore}. Although the recursive core might include more than one undominated partition, they are all equivalent, in the sense that they provide the same average payoff per player. Furthermore, due to the concept of dominance, since a deviation occurs only towards a coalition which guarantees a strictly higher payoff, as per step 3) in Definition~\ref{def:Reccore}, a player (MUE or FUE) has no incentive to deviate towards equivalent partitions in the recursive core, as they provide equivalent average payoffs. \section{Simulation Results and Analysis}\label{sec:res} \begin{table}[!t] \scriptsize \caption{System Parameters} \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{|c | c || c| c|} \hline Macrocell radius & 1 km & Max TX power at MUE and FUE: $P_{max}$ & 20 dBm \\ Femtocell radius ($r$) & 10-50 m & Max number of retransmissions ($D$) & 4 \\ Carrier frequency & 2.0 GHz & Forbidden drop radius (femto) & 0.2 m \\ Number of FAPs & 1-360 & Total bandwidth & 100 MHz\\ Number of FUEs per femtocell & 1 & Subcarrier bandwidth $B$ & 180 kHz \\ Number of MUEs per macrocell & 1-500 & Thermal noise density & -174 dBm/Hz \\ Input traffic macro: $\lambda_m$ (femto: $\lambda_l$) & 150 kbps & Path loss model [dB] (indoor) & $37+ 30\log_{10}$(d[m]) \\ Min required SINR at the MBS: $\gamma_m$ (FAP: $\gamma_l$) & 10 dB (15 dB) & Path loss model [dB] (outdoor)& $15.3+ 37.6\log_{10}$(d[m])\\ FAP antenna gain & 0 dBi & External wall penetration loss & 12 dB\\ Forbidden drop radius (macro) & 50 m & Lognormal shadowing st. deviation & 10 dB \\ Number of antennas at the MBS (FAP) & 1 (1) & Shadowing correlation between FAPs & 0 \\ \hline \end{tabular}\label{table:par} \end{table} \noindent For system-level simulations, we consider a single hexagonal macrocell with a radius of $1$~km within which $N$ FAPs are underlaid with $M$ MUEs. Each FAP $n \in \mathcal{N}$ serves $L_n=1$ FUE scheduled over an orthogonal subchannel, adopting a closed access policy. We set the maximum transmit power at MUEs and FUEs to $P_{max}=20$~dBm, which includes the power for both the serviced MUE's and its own transmissions as in (\ref{eq:con}). Transmissions are affected by distance-dependent path loss and shadowing according to the 3GPP specifications \cite{3GPP3}. For both FUEs and MUEs, we assume that power control fully compensates for the path loss. Moreover, a wall penetration loss of $12$~dB affects MUE-to-FUE transmissions. The considered macrocell has $500$ available subcarriers, each one having a bandwidth of $180$~kHz, and dedicates one OFDMA subchannel to each transmission. As a matter of fact, assigning multiple subchannels to an MUE would extend the produced interference to more than one FAP, and lead to the formation of overlapping coalitions.
However, performing coalition formation with multiple memberships yields a combinatorial complexity, due to the need for distributing the capabilities of a user among multiple coalitions. Thus, assigning one subchannel enables the formation of disjoint coalitions and optimizes the tradeoff between the benefits from cooperation and the accompanying complexity \cite{CF00,Chali}. Further simulation parameters are included in Table~\ref{table:par}. To account for channel variations, statistical results are averaged over $10000$ simulation rounds. In Fig.~\ref{fig:network}, we show a snapshot of a femtocell network resulting from the proposed coalition formation algorithm with $N=200$ FAPs that are randomly deployed in the network. The partition in Fig.~\ref{fig:network} lies in the recursive core of the game and is, thus, stable (both FUEs and MUEs have no incentive to deviate). In this figure, note that although the MUEs are located outside the femtocells, they might be in the proximity of a femtocell and potentially interfere with it. If this is the case, the FUE has an incentive to form a coalition with the interfering MUE, since this would neutralize its interference. Furthermore, note how the cooperative MUEs are located within the transmission range of an FUE, benefiting from a smaller distance-dependent path loss. Conversely, spatially separated MUEs and FUEs are most likely to form singleton coalitions, and hence not to cooperate. In a nutshell, Fig.~\ref{fig:network} shows how, using the proposed algorithm, the FUEs and the MUEs in a network can self-organize into a partition composed of disjoint and independent coalitions which is stable, i.e., lies in the recursive core of the game. \begin{figure} \caption{A snapshot of the two-tier femtocell network. The FAPs are modeled by a Poisson point process (squares) and they serve a disc of radius $20$~meters. Triangles represent non-cooperating MUEs which communicate with the main base station, represented by a diamond. Stars represent cooperating MUEs which are serviced by the FUE in the coalition (dots).} \label{fig:network} \end{figure} In Fig.~\ref{fig:MUEpayoff}, we evaluate the performance of the proposed coalition formation game by showing the average gain of achievable payoff per MUE over the whole transmission time scale as a function of the number of MUEs $M$. We compare the performance of the proposed algorithm to that of the non-cooperative case, for a network with $50,100,200$ FAPs using a closed access policy. The curves are normalized to the performance of the non-cooperative solution. For small network sizes, MUEs do not cooperate with FUEs due to spatial separation. Thus, the proposed algorithm has a performance that is close to the non-cooperative case for $M < 60$. As the number of MUEs grows, the probability of being in proximity of an FUE gradually increases and forming coalitions becomes more desirable. Hence, the MUEs become connected to a nearby FUE which allows for a higher SINR and, thus, higher values of payoff. For example, Fig.~\ref{fig:MUEpayoff} shows that a cooperating MUE can gain up to $75\%$ with respect to the non-cooperative case in a network with $N=200$ FAPs and $M=160$ MUEs. For larger sizes of the macrocell tier, the coalition formation process eventually saturates and the average gains of cooperation decrease.
Further, note that as FUEs in the network not only represent an opportunity of cooperation for the MUEs, but also sources of cross-tier interference, the maximum achievable gains translate towards larger sizes of macrocell tier. In fact, Fig.~\ref{fig:MUEpayoff} clearly shows that the average gain of payoff per MUE increases in the cooperative case as the number of femtocells is large, until each coalition reaches its maximum size. It is also demonstrated that the proposed coalitional game model has a significant advantage over the non-cooperative case, which increases with the probability of having FUEs and MUEs in proximity, and resulting in an improvement of up to $239 \%$ for $M=285$ MUEs. \begin{figure} \caption{Average gain of individual payoff per MUE, for a network having $N = 50,100,200$ FAPs, $\delta=0.5$, $r=20$m.} \label{fig:MUEpayoff} \end{figure} In Fig.~\ref{fig:FUEpayoff}, we show the average gain of achievable payoff per FUE as a function of the number of FAPs in the network $N$, for different number of MUEs $M=300, 400, 500$ and normalize the curves to the performance of the non-cooperative solution. As previously seen in Fig.~\ref{fig:MUEpayoff}, cooperation seldom occurs in cases where MUEs and FUEs are spatially separated, as for low numbers of FUEs in the network. Nevertheless, as the density of FAPs increases, coalitions start to take place yielding to higher gains for the FUEs. For instance, Fig.~\ref{fig:FUEpayoff} shows that the average gain of payoff per FUE resulting from the coalition formation can achieve an additional $15\%$ gain with respect to the non-cooperative case, in a network with $N=200$ FAPs and $M=500$ MUEs. However, for larger numbers of FAPs in the network, the average gain in terms of FUE's payoff decreases, as the spectrum becomes more congested and the MUEs in the network have already joined the most rewarding coalitions. Fig.~\ref{fig:FUEpayoff} also shows the comparison with the optimal solution obtained through centralized exhaustive search. For example, Fig.~\ref{fig:FUEpayoff} shows that the performance gap between the centralized and the proposed solution does not exceed $2.6 \%$ for a network of $N=10$ FAPs, while networks with more than $N=10$ FAPs are computationally and mathematically intractable, due to the exponentially increasing number of combinations to be evaluated \cite{CF00}. Therefore, we demonstrated how cooperation can be beneficial to the FUEs in highly populated areas where the density of interferers (i.e., potential coalitional partners) is high and that the proposed algorithm yields a near optimal performance at a much lower complexity. Finally note that, since the femtocells are orthogonally scheduled, the performance of each FUE in the non cooperative approach is transparent to the density of femtocells in the network. \begin{figure} \caption{Average gain of individual payoff per FUE, for a network having $M = 300,400,500$ MUEs, $\delta=0.5$, $r=20$m.} \label{fig:FUEpayoff} \end{figure} The performance of the proposed coalition formation approach is further assessed in Fig.~\ref{fig:cross}, where we show the average gain of payoff per FUE as the number of MUEs in the network varies, under different access policies. Here, the curves are normalized to the performance of the closed access policy. Under the open access policy, each FAP has to select a secondary subchannel among the least interfered to schedule the guest MUEs \cite{dlr}. 
Fig.~\ref{fig:cross} shows that, as $M$ increases, the performance of the FUEs is undermined by the increasing level of interference and a closed access policy may result in a loss of up to $30\%$. An open access policy is more robust to this effect, but it cannot neutralize interfering MUEs which are not in the transmission range. Conversely, our proposed algorithm allows to solve the interference from the dominant neighboring interferers, which are more likely to be in the FUE's transmission range and resulting in a higher gain with respect to the open access policy of $7.6\%$ for a network with $300$ MUEs. \begin{figure} \caption{Performance assessment of the proposed network formation algorithm, in terms of average gain of payoff per FUE, for a network having $N=200$ FAPs, under different access policies. $\delta=0.5$, $r=50$m.} \label{fig:cross} \end{figure} In Fig.~\ref{fig:coalsize}, we show the average size of the coalitions in the recursive core for a QoS parameter $\delta= 0.5$, in a network in which femtocells are extensively deployed ($N=200$). Fig.~\ref{fig:coalsize} shows that due to the high number of cooperation opportunities, the network witnesses an exponential growth of number of MUE-FUE coalitions when the delay constraints are stringent ($\delta=0.2$). For instance, the average coalition size for a network with $M=200$ MUEs is $2.87$. In a less delay constraining case ($\delta= 0.8$), the incentives in cooperation are smaller but still tangible, as demonstrated by a network with $M=200$ MUEs where the average coalition size is $1.39$. \begin{figure} \caption{Average coalition size as function of the number of MUEs, for different degrees of delay tolerance, expressed by $\delta=0.2, 0.5, 0.8$. $N=200$, $r=20$m.} \label{fig:coalsize} \end{figure} \begin{figure} \caption{Average number of iterations till convergence and average number of coalitions as a function of the number of MUEs in the network. The bisectrix delimits the area of cooperation and non-cooperation, therefore, the points on the bisectrix represent full non-cooperative MUEs, denoted by singleton coalitions. $N=200$, $\delta=0.5$.} \label{fig:numcoal} \end{figure} Fig.~\ref{fig:numcoal} shows the growth of the number of coalitions, i.e., the size of a partition in the recursive core, while the number of MUEs increases. Additionally, the average number of iteration in the proposed algorithm is observed. The network is initially organized in a non-cooperative structure where each player (i.e., MUE or FUE) represents a singleton coalition, therefore the number of coalitions equals the number of players in the network (grey dotted line in Fig.~\ref{fig:numcoal}) and, since interferers are out of range of cooperation, the number of iterations is minimum. Initially, for $M<40$ cooperation seldom occurs, due to the large distance between potential coalitional partners. As $M$ increases, the network topology changes with the emergence of new coalitions. For example, when $N=200$ FAPs and $M=200$ MUEs are deployed, $138$ coalitions take place, requiring an average number of algorithm iterations of $6.9$. Therefore, Fig.~\ref{fig:coalsize} and Fig.~\ref{fig:numcoal} show that the incentive towards cooperation becomes significant when the femtocells' spectrum becomes more congested and femtocells are densely deployed in the network. Eventually, for larger $M$, the process of coalition formation is limited by the number of MUEs which a relay FUE can service, given the mechanism of reimbursement in (\ref{eq:R_c}). 
Fig.~\ref{fig:cdf} shows the cumulative distribution function of the distances between the MBS, at the cell center, and the coalitions formed in the network, for $N=200$, $M=200$. This figure shows that the requirement on the QoS, represented by the parameter $\delta$, plays a key role in the coalition formation. In essence, when the delay is more stringent ($\delta =0.2$) than the throughput, as in real time services, cooperation takes place even in the vicinity of the MBS, where higher values of SINR are averagely possible. In contrast, when throughput is more relevant ($\delta=0.8$), coalition formation generally occurs at the cell boundary area, whereas the SINRs at the MBS are limited by the received power. For instance, in Fig.~\ref{fig:cdf} the expected value of the distance from the MBS for a coalition with $\delta =0.2$ is $d=212$ meters, while for $\delta =0.8$ is $d=703$ meters. \begin{figure} \caption{Cumulative distribution function of the distances where the coalitions with more than one user are located from the MBS, for different QoS parameters $\delta=0.2, 0.5, 0.8$.} \label{fig:cdf} \end{figure} \begin{figure} \caption{Probability distribution function of coalition formation vs. the superframe fraction $\alpha$ granted to the relay FUE. $M=200$, $N=200$.} \label{fig:alpha} \end{figure} \begin{figure} \caption{Performance assessment of the proposed network formation algorithm, in terms of average gain of payoff per MUE and FUE at the cell boundary area, in case of a MUE moving on the x-axis in positive direction towards a femtocell. $\delta=0.5$.} \label{fig:mobb} \end{figure} Fig.~\ref{fig:alpha} shows the probability distribution function of coalition formation as a function of the portion of superframe granted to the relay FUEs, for different $\delta = 0.2, 0.5, 0.8$. This figure demonstrates that, when delay and throughput are equally relevant, an average value of $\alpha= 62\%$ is required by each FUE, for serving an MUE. In contrast, for delay-constrained services, represented by $\delta =0.2$, cooperation becomes more demanding and MUEs have to reimburse the serving FUE for an average value of $\alpha = 78\%$. As a result, we show that the reimbursement mechanism highly depends on the type of service that is required, and the network power is a metric which plays a key role in the coalition formation. Fig.~\ref{fig:mobb} provides a comparison of the average individual payoff of both cooperative and non-cooperative approaches as a function of the mobility range of a MUE. We consider, from different positions, an MUE close to the macrocell boundary and interfering with a femtocell which adopts a closed access policy. While the MUE is out of the transmission range of the FUE, cooperation cannot be established, thus, the interference from the MUE is unresolved. Conversely, although being located outside of the femtocell and behind a wall, the MUE is serviced by the FUE when the mutual distance is approximately $9.5$ meters, yielding to a significant improvement in terms of respective payoffs. Fig.~\ref{fig:mobb} demonstrates that the proposed solution can lead to an improvement at the MUE side of up to $41\%$. Note that our solution applies not only to the closed access policy, but to all the general cases where an MUE cannot be served by a FAP, although it is harmfully interfering with it (for instance, when the MUE is within interference but out of the FAP transmission range). 
\begin{figure} \caption{Comparison between the proposed solution and the existing access policies in terms of FUE's payoff vs the size of the femtocell. $M=200$, $N=200$, $\delta=0.5$.} \label{fig:radius} \end{figure} In Fig.~\ref{fig:radius} we compare our approach to different access policies in terms of average individual FUE's payoff as a function of a femtocell transmission range. The curves are normalized to the performance of the closed access policy for $r=10$ meters. For small femtocell radius, which are currently included in 3GPP specifications\cite{3GPP3}, an open access policy can better protect the FAP from cross-tier interference with respect to a closed access policy. However, when the femtocell radius increases, the FAP is more insulated from the outer interference when located at the cell center. Thus, the closed and open access policies gradually converge. Our proposed solution becomes more beneficial in those cases where, despite the access policy being open, the MUE cannot reach the FAP, leading to a maximum gain of $6\%$ with respect to the open access policy and $14\%$ to the closed access policy, for a femtocell radius of $50$ meters. \section{Conclusions}\label{sec:conc} \noindent In this paper, we have introduced a novel framework of cooperation among FUEs and MUEs, which has a great potential for upgrading the performance of both classes of mobile users in next generation wireless femtocell systems. We formulated a coalitional game among FUEs and MUEs in a network adopting a closed access policy at each femtocell. Further we have introduced a coalitional value function which accounts for the main utilities in a cellular network, namely transmission delay and achievable throughput. To form coalitions, we have proposed a distributed coalition formation algorithm that enables MUEs and FUEs to autonomously decide on whether to cooperate or not, based on the tradeoff between the cooperation gains, in form of increased throughput to delay ratio, and the costs in terms of leased spectrum and transmit power. We have shown that the proposed algorithm reaches a stable partition which lies in the recursive core of the studied game. Results have shown that the performance of MUEs and FUEs are respectively limited by delay and interference, therefore, the proposed cooperative strategy can provide significant gains, when compared to the non-cooperative case as well as to the closed access policy. \def0.80{0.80} \end{document}
arXiv
Triakis icosahedron In geometry, the triakis icosahedron is an Archimedean dual solid, or a Catalan solid, with 60 isosceles triangle faces. Its dual is the truncated dodecahedron. It has also been called the kisicosahedron.[1] It was first depicted, in a non-convex form with equilateral triangle faces, by Leonardo da Vinci in Luca Pacioli's Divina proportione, where it was named the icosahedron elevatum.[2] The capsid of the Hepatitis A virus has the shape of a triakis icosahedron.[3] Triakis icosahedron: Catalan solid; Conway notation kI; face type V3.10.10 (isosceles triangle); 60 faces, 90 edges, 32 vertices (by type: 20{3}+12{10}); symmetry group Ih, H3, [5,3], (*532); rotation group I, [5,3]+, (532); dihedral angle 160°36′45″ = arccos(−(24 + 15√5)/61); properties: convex, face-transitive; dual polyhedron: truncated dodecahedron. As a Kleetope The triakis icosahedron can be formed by gluing triangular pyramids to each face of a regular icosahedron. Depending on the height of these pyramids relative to their base, the result can be either convex or non-convex. This construction, of gluing pyramids to each face, is an instance of a general construction called the Kleetope; the triakis icosahedron is the Kleetope of the icosahedron.[2] This interpretation is also expressed in the name, triakis, which is used for the Kleetopes of polyhedra with triangular faces.[1] Non-convex triakis icosahedron drawn by Leonardo da Vinci in Luca Pacioli's Divina proportione The visible parts of a small triambic icosahedron have the same shape as a non-convex triakis icosahedron The great stellated dodecahedron, with 12 pentagram faces, has a triakis icosahedron as its outer shell When depicted in Leonardo's form, with equilateral triangle faces, it is an example of a non-convex deltahedron, one of the few known deltahedra that are isohedral (meaning that all faces are symmetric to each other).[4] In another of the non-convex forms of the triakis icosahedron, the three triangles adjacent to each pyramid are coplanar, and can be thought of as instead forming the visible parts of a convex hexagon, in a self-intersecting polyhedron with 20 hexagonal faces that has been called the small triambic icosahedron.[5] Alternatively, for the same form of the triakis icosahedron, the triples of coplanar isosceles triangles form the faces of the first stellation of the icosahedron.[6] Yet another non-convex form, with golden isosceles triangle faces, forms the outer shell of the great stellated dodecahedron, a Kepler–Poinsot polyhedron with twelve pentagram faces.[7] Each edge of the triakis icosahedron has endpoints of total degree at least 13. By Kotzig's theorem, this is the most possible for any polyhedron. The same total degree is obtained from the Kleetope of any polyhedron with minimum degree five, but the triakis icosahedron is the simplest example of this construction.[8] Although this Kleetope has isosceles triangle faces, iterating the Kleetope construction on it produces convex polyhedra with triangular faces that cannot all be isosceles.[9] As a Catalan solid The triakis icosahedron is a Catalan solid, the dual polyhedron of the truncated dodecahedron. The truncated dodecahedron is an Archimedean solid, with faces that are regular decagons and equilateral triangles, and with all edges having unit length; its vertices lie on a common sphere, the circumsphere of the truncated dodecahedron.
The polar reciprocation of this solid through this sphere is a convex form of the triakis icosahedron, with all faces tangent to the same sphere, now an inscribed sphere, with coordinates and dimensions that can be calculated as follows. Let $\varphi $ denote the golden ratio. The short edges of this form of the triakis icosahedron have length ${\frac {5\varphi +15}{11}}\approx 2.099$, and the long edges have length $\varphi +2\approx 3.618$.[10] Its faces are isosceles triangles with one obtuse angle of $\cos ^{-1}{\frac {-3\varphi }{10}}\approx 119^{\circ }$ and two acute angles of $\cos ^{-1}{\frac {\varphi +7}{10}}\approx 30.5^{\circ }$.[11] As a Catalan solid, its dihedral angles are all equal, $\cos ^{-1}\left(-{\frac {\varphi ^{2}(1+2\varphi (2+\varphi ))}{\sqrt {(1+5\varphi ^{4})(1+\varphi ^{2}(2+\varphi )^{2})}}}\right)\approx $ 160°36'45.188". One possible set of 32 Cartesian coordinates for the vertices of the triakis icosahedron centered at the origin (scaled differently from the one above) can be generated by combining the vertices of two appropriately scaled Platonic solids, the regular icosahedron and a regular dodecahedron (a numerical check of these coordinates appears at the end of the article):[12] • Twelve vertices of a regular icosahedron, scaled to have a unit circumradius, with the coordinates ${\frac {(0,\pm 1,\pm \varphi )}{\sqrt {\varphi ^{2}+1}}},{\frac {(\pm 1,\pm \varphi ,0)}{\sqrt {\varphi ^{2}+1}}},{\frac {(\pm \varphi ,0,\pm 1)}{\sqrt {\varphi ^{2}+1}}}.$ • Twenty vertices of a regular dodecahedron, scaled to have circumradius ${\frac {2+\varphi }{3+2\varphi }}{\sqrt {\frac {3}{2-1/\varphi }}}={\frac {1}{11}}{\sqrt {75+6{\sqrt {5}}}}\approx 0.8548,$ with the coordinates $(\pm 1,\pm 1,\pm 1){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}}$ and $(0,\pm \varphi ,\pm {\frac {1}{\varphi }}){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}},(\pm {\frac {1}{\varphi }},0,\pm \varphi ){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}},(\pm \varphi ,\pm {\frac {1}{\varphi }},0){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}}.$ Symmetry In any of its standard convex or non-convex forms, the triakis icosahedron has the same symmetries as a regular icosahedron.[4] The three types of symmetry axes of the icosahedron, through two opposite vertices, edge midpoints, and face centroids, become respectively axes through opposite pairs of degree-ten vertices of the triakis icosahedron, through opposite midpoints of edges between degree-ten vertices, and through opposite pairs of degree-three vertices. See also • Triakis triangular tiling for other "triakis" polyhedral forms. • Great triakis icosahedron References 1. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). The Symmetries of Things. AK Peters. p. 284. ISBN 978-1-56881-220-5. 2. Brigaglia, Aldo; Palladino, Nicla; Vaccaro, Maria Alessandra (2018). "Historical notes on star geometry in mathematics, art and nature". In Emmer, Michele; Abate, Marco (eds.). Imagine Math 6: Between Culture and Mathematics. Springer International Publishing. pp. 197–211. doi:10.1007/978-3-319-93949-0_17. 3. Zhu, Ling; Zhang, Xiaoxue (October 2014). "Hepatitis A virus exhibits a structure unique among picornaviruses". Protein & Cell. 6 (2): 79–80. doi:10.1007/s13238-014-0103-7. PMC 4312766. 4. Shephard, G. C. (1999). "Isohedral deltahedra". Periodica Mathematica Hungarica. 39 (1–3): 83–106. doi:10.1023/A:1004838806529. 5. Grünbaum, Branko (2008). "Can every face of a polyhedron have many sides?". Geometry, games, graphs and education. The Joe Malkevitch Festschrift.
Papers from Joe Fest 2008, York College–The City University of New York (CUNY), Jamaica, NY, USA, November 8, 2008. Bedford, MA: Comap, Inc. pp. 9–26. hdl:1773/4593. ISBN 978-1-933223-17-9. Zbl 1185.52009. 6. Cromwell, Peter R. (1997). Polyhedra. Cambridge University Press. p. 270. ISBN 0-521-66405-5. 7. Wenninger, Magnus (1974). "22: The great stellated dodecahedron". Polyhedron Models. Cambridge University Press. pp. 40–42. ISBN 0-521-09859-9. 8. Zaks, Joseph (1983). "Extending Kotzig's theorem". Israel Journal of Mathematics. 45 (4): 281–296. doi:10.1007/BF02804013. hdl:10338.dmlcz/127504. MR 0720304. 9. Eppstein, David (2021). "On polyhedral realization with isosceles triangles". Graphs and Combinatorics. 37 (4): 1247–1269. arXiv:2009.00116. doi:10.1007/s00373-021-02314-9. 10. Weisstein, Eric W. "Triakis icosahedron". MathWorld. 11. Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. p. 89. ISBN 0-486-23729-X. 12. Koca, Mehmet; Ozdes Koca, Nazife; Koc, Ramazon (2010). "Catalan Solids Derived From 3D-Root Systems and Quaternions". Journal of Mathematical Physics. 51 (4). arXiv:0908.3272. doi:10.1063/1.3356985. Catalan solids Tetrahedron (Dual) Tetrahedron (Seed) Octahedron (Dual) Cube (Seed) Icosahedron (Dual) Dodecahedron (Seed) Triakis tetrahedron (Needle) Triakis tetrahedron (Kis) Triakis octahedron (Needle) Tetrakis hexahedron (Kis) Triakis icosahedron (Needle) Pentakis dodecahedron (Kis) Rhombic hexahedron (Join) Rhombic dodecahedron (Join) Rhombic triacontahedron (Join) Deltoidal dodecahedron (Ortho) Disdyakis hexahedron (Meta) Deltoidal icositetrahedron (Ortho) Disdyakis dodecahedron (Meta) Deltoidal hexecontahedron (Ortho) Disdyakis triacontahedron (Meta) Pentagonal dodecahedron (Gyro) Pentagonal icositetrahedron (Gyro) Pentagonal hexecontahedron (Gyro) Archimedean duals Tetrahedron (Seed) Tetrahedron (Dual) Cube (Seed) Octahedron (Dual) Dodecahedron (Seed) Icosahedron (Dual) Truncated tetrahedron (Truncate) Truncated tetrahedron (Zip) Truncated cube (Truncate) Truncated octahedron (Zip) Truncated dodecahedron (Truncate) Truncated icosahedron (Zip) Tetratetrahedron (Ambo) Cuboctahedron (Ambo) Icosidodecahedron (Ambo) Rhombitetratetrahedron (Expand) Truncated tetratetrahedron (Bevel) Rhombicuboctahedron (Expand) Truncated cuboctahedron (Bevel) Rhombicosidodecahedron (Expand) Truncated icosidodecahedron (Bevel) Snub tetrahedron (Snub) Snub cube (Snub) Snub dodecahedron (Snub) Convex polyhedra Platonic solids (regular) • tetrahedron • cube • octahedron • dodecahedron • icosahedron Archimedean solids (semiregular or uniform) • truncated tetrahedron • cuboctahedron • truncated cube • truncated octahedron • rhombicuboctahedron • truncated cuboctahedron • snub cube • icosidodecahedron • truncated dodecahedron • truncated icosahedron • rhombicosidodecahedron • truncated icosidodecahedron • snub dodecahedron Catalan solids (duals of Archimedean) • triakis tetrahedron • rhombic dodecahedron • triakis octahedron • tetrakis hexahedron • deltoidal icositetrahedron • disdyakis dodecahedron • pentagonal icositetrahedron • rhombic triacontahedron • triakis icosahedron • pentakis dodecahedron • deltoidal hexecontahedron • disdyakis triacontahedron • pentagonal hexecontahedron Dihedral regular • dihedron • hosohedron Dihedral uniform • prisms • antiprisms duals: • bipyramids • trapezohedra Dihedral others • pyramids • truncated trapezohedra • gyroelongated bipyramid • cupola • bicupola • frustum • bifrustum • rotunda • 
birotunda • prismatoid • scutoid Degenerate polyhedra are in italics.
Wikipedia
Beta-model In model theory, a mathematical discipline, a β-model (from the French "bon ordre", well-ordering[1]) is a model which is correct about statements of the form "X is well-ordered". The term was introduced by Mostowski (1959)[2][3] as a strengthening of the notion of ω-model. In contrast to the notation for set-theoretic properties named by ordinals, such as $\xi $-indescribability, the letter β here is only denotational. In set theory It is a consequence of Shoenfield's absoluteness theorem that the constructible universe L is a β-model. In analysis β-models appear in the study of the reverse mathematics of subsystems of second-order arithmetic. In this context, a β-model of a subsystem of second-order arithmetic is a model M where for any Σ¹₁ formula φ with parameters from M, (ω,M,+,×,0,1,<)⊨φ iff (ω,P(ω),+,×,0,1,<)⊨φ.[4]p.243 Every β-model of second-order arithmetic is also an ω-model, since working within the model we can prove that < is a well-ordering, so < really is a well-ordering of the natural numbers of the model.[2] Axioms based on β-models provide a natural finer division of the strengths of subsystems of second-order arithmetic, and also provide a way to formulate reflection principles. For example, over ATR₀, Π¹₁-CA₀ is equivalent to the statement "for all $X$ [of second-order sort], there exists a countable $\beta $-model M such that $X\in M$".[4]p.253 (Countable ω-models are represented by their sets of integers, and their satisfaction is formalizable in the language of analysis by an inductive definition.) Also, the theory extending KP with a canonical axiom schema for a recursively Mahlo universe (often called $KPM$)[5] is logically equivalent to the theory Δ¹₂-CA+BI+(Every true Π¹₃-formula is satisfied by a β-model of Δ¹₂-CA).[6] Additionally, there's a connection between β-models and the hyperjump, provably in ACA₀: for all sets $X$ of integers, $X$ has a hyperjump iff there exists a countable β-model $M$ such that $X\in M$.[4]p.251 References 1. C. Smoryński, "Nonstandard Models and Related Developments" (p. 189). From Harvey Friedman's Research on the Foundations of Mathematics (1985), Studies in Logic and the Foundations of Mathematics vol. 117. 2. K. R. Apt, W. Marek, "Second-order Arithmetic and Some Related Topics" (1973), p. 181 3. J.-Y. Girard, Proof Theory and Logical Complexity (1987), Part III: Π¹₂-proof theory, p. 206 4. S. G. Simpson, Subsystems of Second-Order Arithmetic (2009) 5. M. Rathjen, Proof theoretic analysis of KPM (1991), p.381. Archive for Mathematical Logic, Springer-Verlag. Accessed 28 February 2023. 6. M. Rathjen, Admissible proof theory and beyond, Logic, Methodology and Philosophy of Science IX (Elsevier, 1994). Accessed 2022-12-04.
Wikipedia
Subbundle In mathematics, a subbundle $U$ of a vector bundle $V$ on a topological space $X$ is a collection of linear subspaces $U_{x}$of the fibers $V_{x}$ of $V$ at $x$ in $X,$ that make up a vector bundle in their own right. In connection with foliation theory, a subbundle of the tangent bundle of a smooth manifold may be called a distribution (of tangent vectors). If a set of vector fields $Y_{k}$ span the vector space $U,$ and all Lie commutators $\left[Y_{i},Y_{j}\right]$ are linear combinations of the $Y_{k},$ then one says that $U$ is an involutive distribution. See also • Frobenius theorem (differential topology) – On finding a maximal set of solutions of a system of first-order homogeneous linear PDEs • Sub-Riemannian manifold – Type of generalization of a Riemannian manifold Manifolds (Glossary) Basic concepts • Topological manifold • Atlas • Differentiable/Smooth manifold • Differential structure • Smooth atlas • Submanifold • Riemannian manifold • Smooth map • Submersion • Pushforward • Tangent space • Differential form • Vector field Main results (list) • Atiyah–Singer index • Darboux's • De Rham's • Frobenius • Generalized Stokes • Hopf–Rinow • Noether's • Sard's • Whitney embedding Maps • Curve • Diffeomorphism • Local • Geodesic • Exponential map • in Lie theory • Foliation • Immersion • Integral curve • Lie derivative • Section • Submersion Types of manifolds • Closed • (Almost) Complex • (Almost) Contact • Fibered • Finsler • Flat • G-structure • Hadamard • Hermitian • Hyperbolic • Kähler • Kenmotsu • Lie group • Lie algebra • Manifold with boundary • Oriented • Parallelizable • Poisson • Prime • Quaternionic • Hypercomplex • (Pseudo−, Sub−) Riemannian • Rizza • (Almost) Symplectic • Tame Tensors Vectors • Distribution • Lie bracket • Pushforward • Tangent space • bundle • Torsion • Vector field • Vector flow Covectors • Closed/Exact • Covariant derivative • Cotangent space • bundle • De Rham cohomology • Differential form • Vector-valued • Exterior derivative • Interior product • Pullback • Ricci curvature • flow • Riemann curvature tensor • Tensor field • density • Volume form • Wedge product Bundles • Adjoint • Affine • Associated • Cotangent • Dual • Fiber • (Co) Fibration • Jet • Lie algebra • (Stable) Normal • Principal • Spinor • Subbundle • Tangent • Tensor • Vector Connections • Affine • Cartan • Ehresmann • Form • Generalized • Koszul • Levi-Civita • Principal • Vector • Parallel transport Related • Classification of manifolds • Gauge theory • History • Morse theory • Moving frame • Singularity theory Generalizations • Banach manifold • Diffeology • Diffiety • Fréchet manifold • K-theory • Orbifold • Secondary calculus • over commutative algebras • Sheaf • Stratifold • Supermanifold • Stratified space
Wikipedia
Fluorine is so reactive that it forms compounds with several of the noble gases. (a) When 0.327 g of platinum is heated in fluorine, 0.519 g of a dark red, volatile solid forms. What is its empirical formula? (b) When 0.265 g of this red solid reacts with excess xenon gas, 0.378 g of an orange-yellow solid forms. What is the empirical formula of this compound, the first to contain a noble gas? (c) Fluorides of xenon can be formed by direct reaction of the elements at high pressure and temperature. Under conditions that produce only the tetra- and hexafluorides, $1.85 \times 10^{-4}$ mol of xenon reacted with $5.00 \times 10^{-4}$ mol of fluorine, and $9.00 \times 10^{-6} \mathrm{mol}$ of xenon was found in excess. What are the mass percents of each xenon fluoride in the product mixture? Hemoglobin is 6.0$\%$ heme $\left(\mathrm{C}_{34} \mathrm{H}_{32} \mathrm{FeN}_{4} \mathrm{O}_{4}\right)$ by mass. To remove the heme, hemoglobin is treated with acetic acid and $\mathrm{NaCl},$ which forms hemin $\left(\mathrm{C}_{34} \mathrm{H}_{32} \mathrm{N}_{4} \mathrm{O}_{4} \mathrm{FeCl}\right)$. A blood sample from a crime scene contains 0.65 $\mathrm{g}$ of hemoglobin. (a) How many grams of heme are in the sample? (b) How many moles of heme? (c) How many grams of Fe? (d) How many grams of hemin could be formed for a forensic chemist to measure? Manganese is a key component of extremely hard steel. The element occurs naturally in many oxides. A 542.3-g sample of a manganese oxide has an Mn/O ratio of 1.00$/ 1.42$ and consists of braunite $\left(\mathrm{Mn}_{2} \mathrm{O}_{3}\right)$ and manganosite $(\mathrm{MnO})$. (a) How many grams of braunite and of manganosite are in the ore? (b) What is the $\mathrm{Mn}^{3+} / \mathrm{Mn}^{2+}$ ratio in the ore? The human body excretes nitrogen in the form of urea, $\mathrm{NH}_{2} \mathrm{CONH}_{2}.$ The key step in its biochemical formation is the reaction of water with arginine to produce urea and ornithine: (a) What is the mass $\%$ of nitrogen in urea, in arginine, and in ornithine? (b) How many grams of nitrogen can be excreted as urea when 135.2 $\mathrm{g}$ of ornithine is produced? Aspirin (acetylsalicylic acid, $\mathrm{C}_{9} \mathrm{H}_{8} \mathrm{O}_{4}$) is made by reacting salicylic acid $\left(\mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}\right)$ with acetic anhydride $\left[\left(\mathrm{CH}_{3} \mathrm{CO}\right)_{2} \mathrm{O}\right]$: $$\mathrm{C}_{7} \mathrm{H}_{6} \mathrm{O}_{3}(s)+\left(\mathrm{CH}_{3} \mathrm{CO}\right)_{2} \mathrm{O}(l) \longrightarrow \mathrm{C}_{9} \mathrm{H}_{8} \mathrm{O}_{4}(s)+\mathrm{CH}_{3} \mathrm{COOH}(l)$$ In one preparation, 3.077 g of salicylic acid and 5.50 $\mathrm{mL}$ of acetic anhydride react to form 3.281 $\mathrm{g}$ of aspirin. (a) Which is the limiting reactant $(d \text { of acetic anhydride }=1.080 \mathrm{g} / \mathrm{mL})$? (b) What is the percent yield of this reaction? (c) What is the percent atom economy of this reaction? Nitrogen monoxide reacts with elemental oxygen to form nitrogen dioxide. The scene at right represents an initial mixture of reactants. If the reaction has a 66$\%$ yield, which of the scenes below $(\mathrm{A},$ $\mathrm{B},$ or $\mathrm{C}$) best represents the final product mixture? Nitrogen monoxide and oxygen react to form nitrogen dioxide. Consider the mixture of NO and O $_{2}$ shown in the accompanying diagram.
The blue spheres represent $\mathrm{N},$ and the red ones represent $\mathrm{O}$ . (a) How many molecules of $\mathrm{NO}_{2}$ can be formed, assuming the reaction goes to completion? (b) What is the limiting reactant? (c) If the actual yield of the reaction was 75$\%$ instead of 100$\%$ , how many molecules of each kind would be present after the reaction was over?
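For part (a) of the fluorine problem at the top of this set, the empirical formula follows from a simple mass-to-mole conversion. The short Python sketch below is only an illustration of that arithmetic (the atomic masses are standard values; rounding the mole ratio to the nearest whole number is the usual empirical-formula convention):

```python
# Part (a): 0.327 g of platinum gains fluorine to give 0.519 g of a dark red solid.
M_PT = 195.08   # molar mass of platinum, g/mol
M_F = 19.00     # molar mass of fluorine, g/mol

mass_pt = 0.327
mass_product = 0.519
mass_f = mass_product - mass_pt        # mass of fluorine taken up

mol_pt = mass_pt / M_PT
mol_f = mass_f / M_F

print(f"F : Pt mole ratio = {mol_f / mol_pt:.2f}")  # about 6, pointing to PtF6
```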
CommonCrawl
Stanley–Wilf conjecture The Stanley–Wilf conjecture, formulated independently by Richard P. Stanley and Herbert Wilf in the late 1980s, states that the growth rate of every proper permutation class is singly exponential. It was proved by Adam Marcus and Gábor Tardos (2004) and is no longer a conjecture. Marcus and Tardos actually proved a different conjecture, due to Zoltán Füredi and Péter Hajnal (1992), which had been shown to imply the Stanley–Wilf conjecture by Klazar (2000). Statement The Stanley–Wilf conjecture states that for every permutation β, there is a constant C such that the number $|S_{n}(\beta )|$ of permutations of length n which avoid β as a permutation pattern is at most $C^{n}$. As Arratia (1999) observed, this is equivalent to the convergence of the limit $\lim _{n\to \infty }{\sqrt[{n}]{|S_{n}(\beta )|}}.$ The upper bound given by Marcus and Tardos for C is exponential in the length of β. A stronger conjecture of Arratia (1999) had stated that one could take C to be $(k-1)^{2}$, where k denotes the length of β, but this conjecture was disproved for the permutation β = 4231 by Albert et al. (2006). Indeed, Fox (2013) has shown that C is, in fact, exponential in k for almost all permutations. Allowable growth rates The growth rate (or Stanley–Wilf limit) of a permutation class is defined as $\limsup _{n\to \infty }{\sqrt[{n}]{a_{n}}},$ where $a_{n}$ denotes the number of permutations of length n in the class. Clearly not every positive real number can be a growth rate of a permutation class, regardless of whether it is defined by a single forbidden pattern or a set of forbidden patterns. For example, numbers strictly between 0 and 1 cannot be growth rates of permutation classes. Kaiser & Klazar (2002) proved that if the number of permutations in a class of length n is ever less than the nth Fibonacci number then the enumeration of the class is eventually polynomial. Therefore, numbers strictly between 1 and the golden ratio also cannot be growth rates of permutation classes. Kaiser and Klazar went on to establish every possible growth constant of a permutation class below 2; these are the largest real roots of the polynomials $x^{k+1}-2x^{k}+1$ for an integer k ≥ 2. This shows that 2 is the least accumulation point of growth rates of permutation classes. Vatter (2011) later extended the characterization of growth rates of permutation classes up to a specific algebraic number κ≈2.20. From this characterization, it follows that κ is the least accumulation point of accumulation points of growth rates and that all growth rates up to κ are algebraic numbers. Vatter (2019) established that there is an algebraic number ξ≈2.31 such that there are uncountably many growth rates in every neighborhood of ξ, but only countably many growth rates below it. Pantone & Vatter (2020) characterized the (countably many) growth rates below ξ, all of which are also algebraic numbers. Their results also imply that in the set of all growth rates of permutation classes, ξ is the least accumulation point from above. In the other direction, Vatter (2010) proved that every real number at least 2.49 is the growth rate of a permutation class. That result was later improved by Bevan (2014), who proved that every real number at least 2.36 is the growth rate of a permutation class. See also • Enumerations of specific permutation classes for the growth rates of specific permutation classes.
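The counts $|S_{n}(\beta )|$ and their growth can be explored directly for small cases by brute force. The following Python sketch is a naive enumeration (feasible only for very small n, and not part of any of the proofs cited here); for β = 123 the resulting counts are the Catalan numbers, whose nth roots approach the Stanley–Wilf limit 4, although the convergence is slow:

```python
from itertools import combinations, permutations

def contains_pattern(perm, beta):
    """True if perm contains beta as a classical pattern (an order-isomorphic subsequence)."""
    k = len(beta)
    for positions in combinations(range(len(perm)), k):
        window = [perm[i] for i in positions]
        if all((window[i] < window[j]) == (beta[i] < beta[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def count_avoiders(n, beta):
    """Number of permutations of length n that avoid the pattern beta."""
    return sum(1 for p in permutations(range(1, n + 1)) if not contains_pattern(p, beta))

for n in range(1, 8):
    a_n = count_avoiders(n, (1, 2, 3))
    print(n, a_n, round(a_n ** (1 / n), 3))  # 1, 2, 5, 14, 42, ... (Catalan numbers)
```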
Notes References • Albert, Michael H.; Elder, Murray; Rechnitzer, Andrew; Westcott, P.; Zabrocki, Mike (2006), "On the Stanley–Wilf limit of 4231-avoiding permutations and a conjecture of Arratia", Advances in Applied Mathematics, 36 (2): 96–105, doi:10.1016/j.aam.2005.05.007, MR 2199982. • Arratia, Richard (1999), "On the Stanley–Wilf conjecture for the number of permutations avoiding a given pattern", Electronic Journal of Combinatorics, 6: N1, MR 1710623. • Bevan, David (2018), "Intervals of permutation class growth rates", Combinatorica, 38: 279–303, arXiv:1410.3679, Bibcode:2014arXiv1410.3679B, doi:10.1007/s00493-016-3349-2. • Fox, Jacob (2013), Stanley–Wilf limits are typically exponential, arXiv:1310.8378, Bibcode:2013arXiv1310.8378F. • Füredi, Zoltán; Hajnal, Péter (1992), "Davenport–Schinzel theory of matrices", Discrete Mathematics, 103 (3): 233–251, doi:10.1016/0012-365X(92)90316-8, MR 1171777. • Kaiser, Tomáš; Klazar, Martin (March 2002), "On growth rates of closed permutation classes", Electronic Journal of Combinatorics, 9 (2): Research paper 10, 20, MR 2028280. • Klazar, Martin (2000), "The Füredi–Hajnal conjecture implies the Stanley–Wilf conjecture", Formal Power Series and Algebraic Combinatorics (Moscow, 2000), Springer, pp. 250–255, MR 1798218. • Klazar, Martin (2010), "Some general results in combinatorial enumeration", Permutation patterns, London Math. Soc. Lecture Note Ser., vol. 376, Cambridge: Cambridge Univ. Press, pp. 3–40, doi:10.1017/CBO9780511902499.002, MR 2732822. • Marcus, Adam; Tardos, Gábor (2004), "Excluded permutation matrices and the Stanley–Wilf conjecture", Journal of Combinatorial Theory, Series A, 107 (1): 153–160, doi:10.1016/j.jcta.2004.04.002, MR 2063960. • Pantone, Jay; Vatter, Vincent (2020), "Growth rates of permutation classes: categorization up to the uncountability threshold", Israel Journal of Mathematics, 236 (1): 1–43, arXiv:1605.04289, doi:10.1007/s11856-020-1964-5, MR 4093880. • Vatter, Vincent (2019), "Growth rates of permutation classes: from countable to uncountable", Proc. London Math. Soc., Series 3, 119 (4): 960–997, arXiv:1605.04297, doi:10.1112/plms.12250, MR 3964825. • Vatter, Vincent (2010), "Permutation classes of every growth rate above 2.48188", Mathematika, 56 (1): 182–192, arXiv:0807.2815, doi:10.1112/S0025579309000503, MR 2604993. • Vatter, Vincent (2011), "Small permutation classes", Proc. London Math. Soc., Series 3, 103 (5): 879–921, arXiv:0712.4006, doi:10.1112/plms/pdr017, MR 2852292. External links • How Adam Marcus and Gabor Tardos divided and conquered the Stanley–Wilf conjecture – by Doron Zeilberger. • Weisstein, Eric W. "Stanley–Wilf conjecture". MathWorld.
Wikipedia
In most cases, cognitive enhancers have been used to treat people with neurological or mental disorders, but there is a growing number of healthy, "normal" people who use these substances in hopes of getting smarter. Although there are many companies that make "smart" drinks, smart power bars and diet supplements containing certain "smart" chemicals, there is little evidence to suggest that these products really work. Results from different laboratories show mixed results; some labs show positive effects on memory and learning; other labs show no effects. There are very few well-designed studies using normal healthy people. The question of whether stimulants are smart pills in a pragmatic sense cannot be answered solely by consideration of the statistical significance of the difference between stimulant and placebo. A drug with tiny effects, even if statistically significant, would not be a useful cognitive enhancer for most purposes. We therefore report Cohen's d effect size measure for published studies that provide either means and standard deviations or relevant F or t statistics (Thalheimer & Cook, 2002). More generally, with most sample sizes in the range of a dozen to a few dozen, small effects would not reliably be found. I've been actively benefitting from nootropics since 1997, when I was struggling with cognitive performance and ordered almost $1000 worth of smart drugs from Europe (the only place where you could get them at the time). I remember opening the unmarked brown package and wondering whether the pharmaceuticals and natural substances would really enhance my brain. This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side-effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seems to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability. Taken together, the available results are mixed, with slightly more null results than overall positive findings of enhancement and evidence of impairment in one reversal learning task.
As the effect sizes listed in Table 5 show, the effects when found are generally substantial. When drug effects were assessed as a function of placebo performance, genotype, or self-reported impulsivity, enhancement was found to be greatest for participants who performed most poorly on placebo, had a COMT genotype associated with poorer executive function, or reported being impulsive in their everyday lives. In sum, the effects of stimulants on cognitive control are not robust, but MPH and d-AMP appear to enhance cognitive control in some tasks for some people, especially those less likely to perform well on cognitive control tasks. So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4 which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance at almost identical to a coin flip: -5.49 vs -5.5419. (The bright side is that I didn't do worse than a coin flip: I was at least calibrated.) 20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM TODO: maybe take a look at the HRV data? looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 
30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –> Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. 
If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8). Began double-blind trial. Today I took one pill blindly at 1:53 PM. at the end of the day when I have written down my impressions and guess whether it was one of the Adderall pills, then I can look in the baggy and count and see whether it was. there are many other procedures one can take to blind oneself (have an accomplice mix up a sequence of pills and record what the sequence was; don't count & see but blindly take a photograph of the pill each day, etc.) Around 3, I begin to wonder whether it was Adderall because I am arguing more than usual on IRC and my heart rate seems a bit high just sitting down. 6 PM: I've started to think it was a placebo. My heart rate is back to normal, I am having difficulty concentrating on long text, and my appetite has shown up for dinner (although I didn't have lunch, I don't think I had lunch yesterday and yesterday the hunger didn't show up until past 7). Productivity wise, it has been a normal day. All in all, I'm not too sure, but I think I'd guess it was Adderall with 40% confidence (another way of saying placebo with 60% confidence). When I go to examine the baggie at 8:20 PM, I find out… it was an Adderall pill after all. Oh dear. One little strike against Adderall that I guessed wrong. It may be that the problem is that I am intrinsically a little worse today (normal variation? come down from Adderall?). Power-wise, the effects of testosterone are generally reported to be strong and unmistakable. Even a short experiment should work. I would want to measure DNB scores & Mnemosyne review averages as usual, to verify no gross mental deficits; the important measures would be physical activity, so either pedometer or miles on treadmill, and general productivity/mood. The former 2 variables should remain the same or increase, and the latter 2 should increase. Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, mental endurance, and memory. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules.  A natural option would be to use an excellent green tea brand which constitutes of tea grown in the shade because then Theanine would be abundantly present in it. Vitamin B12 is also known as Cobalamin and is a water-soluble essential vitamin. A (large) deficiency of Vitamin B12 will ultimately lead to cognitive impairment [52]. Older people and people who don't eat meat are at a higher risk than young people who eat more meat. And people with depression have less Vitamin B12 than the average population [53]. That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩ None of that has kept entrepreneurs and their customers from experimenting and buying into the business of magic pills, however. 
In 2015 alone, the nootropics business raked in over $1 billion dollars, and web sites like the nootropics subreddit, the Bluelight forums, and Bulletproof Exec are popular and packed with people looking for easy ways to boost their mental performance. Still, this bizarre, Philip K. Dick-esque world of smart drugs is a tough pill to swallow. To dive into the topic and explain, I spoke to Kamal Patel, Director of evidence-based medical database Examine.com, and even tried a few commercially-available nootropics myself. According to clinical psychiatrist and Harvard Medical School Professor, Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products which she says are more accurate regarding dosage and less likely to be contaminated. This looks interesting: the Noopept effect is positive for all the dose levels, but it looks like a U-curve - low at 10mg, high at 15mg, lower at 20mg, and even lower at 30mg 48mg and 60mg aren't estimated because they are hit by the missingness problem: the magnesium citrate variable is unavailable for the days the higher doses were taken on, and so their days are omitted and those levels of the factor are not estimated. One way to fix this is to drop magnesium from the model entirely, at the cost of fitting the data much more poorly and losing a lot of R2: In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil on the other hand, is heavily regulated throughout the United States. It is being used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead. Manually mixing powders is too annoying, and pre-mixed pills are expensive in bulk. So if I'm not actively experimenting with something, and not yet rich, the best thing is to make my own pills, and if I'm making my own pills, I might as well make a custom formulation using the ones I've found personally effective. And since making pills is tedious, I want to not have to do it again for years. 3 years seems like a good interval - 1095 days. Since one is often busy and mayn't take that day's pills (there are enough ingredients it has to be multiple pills), it's safe to round it down to a nice even 1000 days. What sort of hypothetical stack could I make? What do the prices come out to be, and what might we omit in the interests of protecting our pocketbook? Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup. But while some studies have found short-term benefits, Doraiswamy says there is no evidence that what are commonly known as smart drugs — of any type — improve thinking or productivity over the long run. 
"There's a sizable demand, but the hype around efficacy far exceeds available evidence," notes Doraiswamy, adding that, for healthy young people such as Silicon Valley go-getters, "it's a zero-sum game. That's because when you up one circuit in the brain, you're probably impairing another system." Since dietary supplements do not require double-blind, placebo-controlled, pharmaceutical-style human studies before going to market, there is little incentive for companies to really prove that something does what they say it does. This means that, in practice, nootropics may not live up to all the grandiose, exuberant promises advertised on the bottle in which they come. The flip side, though? There's no need to procure a prescription in order to try them out. Good news for aspiring biohackers—and for people who have no aspirations to become biohackers, but still want to be Bradley Cooper in Limitless (me). Two studies investigated the effects of MPH on reversal learning in simple two-choice tasks (Clatworthy et al., 2009; Dodds et al., 2008). In these tasks, participants begin by choosing one of two stimuli and, after repeated trials with these stimuli, learn that one is usually rewarded and the other is usually not. The rewarded and nonrewarded stimuli are then reversed, and participants must then learn to choose the new rewarded stimulus. Although each of these studies found functional neuroimaging correlates of the effects of MPH on task-related brain activity (increased blood oxygenation level-dependent signal in frontal and striatal regions associated with task performance found by Dodds et al., 2008, using fMRI and increased dopamine release in the striatum as measured by increased raclopride displacement by Clatworthy et al., 2009, using PET), neither found reliable effects on behavioral performance in these tasks. The one significant result concerning purely behavioral measures was Clatworthy et al.'s (2009) finding that participants who scored higher on a self-report personality measure of impulsivity showed more performance enhancement with MPH. MPH's effect on performance in individuals was also related to its effects on individuals' dopamine activity in specific regions of the caudate nucleus. More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life. The use of cognition-enhancing drugs by healthy individuals in the absence of a medical indication spans numerous controversial issues, including the ethics and fairness of their use, concerns over adverse effects, and the diversion of prescription drugs for nonmedical uses, among others.[1][2] Nonetheless, the international sales of cognition-enhancing supplements exceeded US$1 billion in 2015 when global demand for these compounds grew.[3]
CommonCrawl
Intuitive perspective of eigenvalues and rank of a matrix Assume a matrix $A$, $n\times n$, with $n$ distinct non-zero eigenvalues. If we calculate the matrix $A-\lambda I$ for one of its $n$ eigenvalues, we see that its rank has been decreased by one. If the eigenvalue has multiplicity $k$, then the rank decreases by $k$. What would be an intuitive explanation for this? By $Ax=\lambda x$ one could argue that we try to find the values of $\lambda$ for which an $n\times n$ matrix with $\text{rank}(A)=n$ has the same impact on $x$ as: 2a. a scalar $\lambda$? 2b. an $n\times n$ diagonal matrix of rank $n$? In the relationship $(A-\lambda I)x=0$, given that we want a nontrivial solution for the vector $x$, could we declare the matrix $A-\lambda I$ to be zero, without using the determinant, following directly from the above relationship? linear-algebra matrices eigenvalues-eigenvectors intuition matrix-rank edited Jun 3 at 7:41 Rodrigo de Azevedo $\begingroup$ The eigenvectors of $\lambda$ are exactly the kernel of $A-\lambda I$. What does this say about the rank of this matrix? $\endgroup$ – amd Jun 2 at 19:23 $\begingroup$ The answer to 2 is a. We normally think about it the other way around. We are trying to find a vector such that A operating on that vector simply scales it. $\endgroup$ – NicNic8 Jun 2 at 19:30 $\begingroup$ "The eigenvectors of $\lambda$ are exactly the kernel of $A-\lambda I$. What does this say about the rank of this matrix?" I understand that the kernel consists of the vectors that $A-\lambda I$ sends to zero. However, I unfortunately cannot see for the moment the causal relationship between the kernel and how the repetition of eigenvalues influences the rank. I can see it on paper when I calculate it by hand, but I am missing something, I guess. I would be grateful if you developed your thought. $\endgroup$ – Alex Jun 2 at 21:23 For 1, it depends on what you mean by "intuitive", but here's a shot at it. A matrix either sends a vector to 0 or it doesn't. The number of vectors that it sends to 0 is related to the number of vectors that it doesn't by the rank–nullity theorem. The more vectors it sends to 0, the fewer it sends to non-zero values. If A has an eigenvector then $A-\lambda I$ sends a vector to 0. Therefore, it can't send as many to non-zero values (meaning its rank has been reduced). EDIT: This is not always true as Widawensen points out in another answer. For 2, the answer is a. But we normally think about it the other way around. We are trying to find a vector such that A operating on that vector simply scales it. For 3, if $A-\lambda I$ is 0 then $A=\lambda I$. This is the special case where A has a single eigenvalue of multiplicity equal to its rank. But there are cases where A has an eigenvector but A is not diagonal. Those cases can be found with the determinant because we know that the determinant of a singular matrix is 0 and we want $A-\lambda I$ to be singular. edited Jun 4 at 15:52 answered Jun 2 at 19:37 NicNic8 $\begingroup$ Helpful! About the first part, you concluded: "it can't send as many to non-zero values (meaning its rank has been reduced)". This implies that the rank of a square matrix, that is: - the number of linearly independent columns - the number of vectors that form the basis - the number of nonzero eigenvalues is the same as the number of vectors that the matrix can send to 0. Is this correct? Could you please name the theorem concerning the rank you talked about?
$\endgroup$ – Alex Jun 2 at 21:04 $\begingroup$ @Alex It's the rank-nullity theorem. $\endgroup$ – user326210 Jun 2 at 21:18 $\begingroup$ @Alex. It's called the "Rank-Nullity Theorem". The theorem states what I said above in words: rank(A) + Nullity(A) = number of columns of A. $\endgroup$ – NicNic8 Jun 2 at 23:19 Consider a matrix in so-called Jordan normal form, for example: $$A=\begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$ Then, calculating $A-2I$, you'll see that "If the eigenvalue has multiplicity $k$, then the rank decreases by $k$" does not always hold: the eigenvalue $2$ has algebraic multiplicity $2$, but the rank of $A-2I$ is $2$, so it drops only by one. Widawensen
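To make the answer and the counterexample above concrete, here is a small numerical check (a sketch using NumPy; the matrices are the ones discussed in this thread). The rank of $A-\lambda I$ drops by the geometric multiplicity of $\lambda$, i.e. the dimension of its eigenspace, which can be smaller than the algebraic multiplicity:

```python
import numpy as np

def rank_drop(A, lam):
    """How far the rank of (A - lam*I) falls below the full rank n."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Jordan-block example from the second answer:
# eigenvalue 2 has algebraic multiplicity 2 but only one independent eigenvector.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(rank_drop(A, 2.0))  # 1 -> drop equals the geometric multiplicity, not 2

# Diagonalizable comparison: the repeated eigenvalue 2 has two independent eigenvectors.
B = np.diag([2.0, 2.0, 3.0])
print(rank_drop(B, 2.0))  # 2 -> here geometric and algebraic multiplicity coincide
```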
CommonCrawl
Sustainability assessment of the German energy transition Christine Rösch1, Klaus-Rainer Bräutigam1, Jürgen Kopfmüller1, Volker Stelzer1 & Annika Fricke1 The goal of the energy transition in Germany is to achieve a sustainable supply of energy. Providing advice for decision-makers to either continue the current transition pathway or implement strategic adjustments requires a comprehensive assessment tool. The authors have developed a Sustainability Indicator System (SIS) consisting of 45 indicators to assess if policy measures implemented so far by the Federal Government are appropriate and sufficient to achieve the energy policy targets and, furthermore, the sustainability targets defined for the German energy system. The assessment is carried out applying the SIS. For each indicator, a linear projection was calculated, based on the past 5 years for which data were available, assuming that this trend will continue in a linear way until 2020. Then, the projected value for 2020 resulting from the trend was compared to the political or defined target for 2020. The assessment was based on distance-to-target considerations, i.e. to which degree the set, proposed or desirable target will be met within the framework of the existing energy policy. The results are illustrated using a traffic light colour code. Indicators with less than 5 years of data available were given a white traffic light since no assessment was possible. A profound view on eight selected sustainability indicators that are not already part of the German monitoring process 'Energy of the Future' and a comprehensive overview on the sustainability assessment of the German energy system are presented. The results show that 24% of the assessed indicators are rated with a green, 7% with a yellow, 45% with a red and 24% with a white traffic light. This means that it cannot be expected that the sustainability targets defined for the German energy system will be achieved by 2020 without substantial modifications of political strategies and measures implemented so far. The developed SIS is a comprehensive decision support and navigation tool with respect to long-term governance of the German energy transition. It aims to assess and monitor the overall sustainability performance of the energy system, to identify unsustainable energy strategies and measures as well as trade-offs and to evaluate the achievements or failures of policies regarding the energy transition. It can also be adapted to assess the sustainability of the energy systems in other European countries. The transformation of the German energy system is considered as key element to achieve sustainability at the national scale. This is according to the Brundtland report claiming that 'a safe and sustainable energy pathway is crucial to sustainable development' [1] and particularly to the latest and most relevant framework in this respect, the 17 sustainable development goals (SDGs) defined by the United Nations [2]. Goal 7 refers to the energy topic by demanding universal access to affordable, reliable and modern energy services for everybody. This includes, among others, a substantial increase of the renewable energy share in the global energy mix, doubling global energy efficiency rates, as well as according infrastructure expansion and modernization and technology upgrades for supplying sustainable energy services. 
Given that, it is obvious that planning and design of the transformation process requires a holistic understanding of sustainable development (SD), including environmental, economic, social and institutional issues, and a deliberate monitoring and evaluation of possible implications of possible pathways to achieve the goals. While the goal of a nuclear-free energy supply in Germany is widely shared, the transition pathway and the required specifications of the future energy system are lively and controversially debated in science, politics and society. The debate focuses on strategies and measures towards a more sustainable energy system including a secure, environmental-friendly and economically affordable energy supply and a high public acceptance. In particular, the design of transition measures that suitably consider the socio-technical characteristics and interfaces of the energy system, and their several interdependencies, are debated. The question, to which degree steadily increasing electricity prices for private customers due to the National Renewable Energy Law (EEG) lead to 'energy poverty', is one example for this. Thus, one essential precondition for both, a coherent energy transition policy, and a sufficient support of a critical public is that the consequences of political decisions for a complex socio-technical energy system are taken into account appropriately. The monitoring process 'Energy of the Future' established by the Federal Government continuously reviews if the current trend is on track to attain the goals and targets set out in the German Energy Concept, and if additional measures should be implemented. In this process, indicators are used to take annual stock of the progress made in achieving the quantitative targets [3,4,5,6,7]. The strategies and measures taken for the energy transition, however, have impacts also on other sustainability issues and, thus, can trigger interactions and trade-offs with respect to and between sustainability criteria that are not included in the monitoring system. Therefore, a more comprehensive set of sustainability criteria is needed. To give an example: While higher shares of renewable energy sources are necessary to achieve a carbon-free energy system, the construction, operation and disposal of renewable energy technologies require a substantial amount of resources (e.g. land, water, nutrients, rare materials) including possible strong impacts on natural and social systems. In particular, social aspects, such as fair social distribution of benefits and burdens due to the energy transition, or participation of citizens in relevant decisions within the transformation process are to a large extent missing in the German monitoring process. To fill this knowledge gap, the authors have developed a comprehensive Sustainability Indicator System (SIS) within the Helmholtz Alliance project 'Energy-Trans' to improve the assessment of the energy transition process in Germany [8]. In this paper, selected results of this assessment are presented and discussed. The assessment of the sustainability performance of the German energy system was carried out using the SIS, which was developed based on the integrative concept of sustainable development. More information about this concept and how the indicators have been selected can be found in [8]. The SIS consists of 45 indicators (Table 1), including mainly objective indicators but also a few subjective, survey-based indicators (nos. 34, 35 and 36). 
The indicator assessment includes three methodological steps: (1) collection, selection and analysis of facts and figures and preparation of data series; (2) definition of targets for each indicator for the years 2020, 2030 and 2050; (3) calculation of a trendline and assessment of the extrapolated values by the distance-to-target method. Table 1 The Sustainability Indicator System [8] Sustainability indicator targets for 2020, 2030 and 2050 Since a distance-to-target (DTT) approach was applied in this project for the indicator-based assessment of the energy system and its transition, targets obviously have a key function. The targets defined are important reference lines for indicator values to be compared with. Strategically, they should allow for higher planning reliability of actors, in particular if targets are designed stepwise over time, and help decision-makers to design political measures. From the DTT approach, the necessity arose to define targets for all indicators in the SIS. However, politically justified and binding targets were not available for all of the defined indicators, since the indicators selected to cover the socio-technical interface of the energy system are rather new. Thus, political discussions and processes of target setting in these cases are still ongoing or even missing. Therefore, we have carried out a comprehensive and profound review of documents from policy consulting institutions, such as the German Advisory Council on Global Change, science, NGOs, unions and other stakeholders and the media as well as the target agreements of other comparable countries to identify and adopt appropriate proposals for binding or non-binding targets. The objective of that wide-ranging investigation was to define target values for all indicators of the SIS in a comprehensive and reliable way. As a result, the present work comprises a mixture of set, proposed or desirable targets with different degrees of justification by politics and society: Some of them have been derived from policy-based targets in 2020, both binding and non-binding, some were adopted from political targets or good examples in other countries, some from policy consulting institutions, some from science, and other targets have been abstracted from public debates.
This was done for the indicators 'SAIDI for electricity' and 'federal expenditures for energy research'. The research spending in Germany in relation to its GDP and the research spending of the country with the highest value in this category (South Korea) are used as reference point for future expenditures. For those indicators where no targets were available or discussed so far, conclusion by analogy was chosen as method, e.g. for the indicator 'final energy consumption of private households per capita' where the trend of the official target for national primary energy use was adopted. A similar procedure was applied for the indicator 'number of university graduates in energy sciences', assuming that this indicator develops proportionally to the volume of investments in Germany given in the DLR-Report [9], which provided the key basis for all model-based analyses in the project. For the indicator 'number of start-ups in renewable energy and energy efficiency sector', targets were defined in accordance with the indicators 'number of German patents in the field of renewable energy and energy efficiency' and 'federal expenditures for energy research'. Table 2 gives an overview on the targets defined for 2020, 2030 and 2050, briefly describes the origin of the targets and gives the main reference for the targets. Table 2 Sustainability Indicator System targets for 2020, 2030 and 2050 Sustainability assessment based on the distance-to-target approach The performance of the sustainability indicators is assessed based on a combined linear extrapolation and distance-to-target approach used also in the German monitoring report 'Energy of the Future' [7]. Accordingly, a linear projection of the performance trend for each indicator was calculated based on the previous 5 years for which data were available, assuming that this trend will continue in a linear way until 2020. Then, this projected trend was compared to the targets for 2020, in order to assess to which degree the target will be met within the framework of the existing energy policy. The near-term target 2020 was chosen because here a linear projection is regarded as feasible since it can be assumed that the framework conditions influencing the energy system will remain relatively constant within this short time period and that effects of measures previously implemented will support the trend until 2020. For the period until 2050, however, it can be expected that due to the unpredictable nature of the complex and dynamic energy system, as well as changing political and institutional framework conditions, indicator performance trends will change accordingly and, thus, extrapolation is not a valid methodology any more. The traffic light symbol was used to visualize the assessment results (Fig. 1). 
The assessment includes the following steps: Defining a 'reference value' by calculating the average value of the last 5 years with data Calculation of a 'projected value' for 2020 by extrapolating the trendline, covering the past 5 years with data, until 2020 Calculation of the relation between the necessary change (relation between 'reference value' and 'target value') and the expected change (relation between 'reference value' and 'projected value') according to the following formula: Sustainability indicator assessment with the distance-to-target approach $$ \left(1-\frac{1-{\mathrm{PV}}_{2020}/{\mathrm{AV}}_5}{1-{\mathrm{TV}}_{2020}/{\mathrm{AV}}_5}\right)\times 100\% $$ PV2020: projected value for 2020 TV2020: target value for 2020 AV5: average value of the past 5 years with available data The traffic light colours are defined as follows: Green traffic light: the deviation is < 10% or the projected value exceeds the target value. Yellow traffic light: the deviation is between 10 and 40%. Red traffic light: the deviation is > 40% or the calculated trend goes in the 'wrong' direction (indicator value increase instead of decrease or decrease instead of increase). White traffic light: no distance-to-target evaluation can be carried out due to the lack of data series. The assessment results are part of elaborated fact sheets worked out for each of the 45 indicators composing the Sustainability Indicator System (SIS). These fact sheets include information on the justification and definition of the indicator, the unit, data sources, previous data trends, targets for 2020, 2030 and 2050, comments on data and targets, the result of the assessment applying the traffic colour code and the references used. In this paper, only some selected indicators are described in detail. The selection of the indicators is based on the innovativeness of the indicators for science and politics and if the indicators are 'new' and not (yet) used in the German monitoring process 'Energy of the Future'. The following indicators will be presented: Share of employees in the renewable energy sector in relation to total number of employees Monthly energy expenditure of households with a monthly net income less than 1300 Euros Area under cultivation of energy crops Number of start-ups in the renewable energy and energy efficiency sector Gender pay gap in the highest salary group in the energy sector Acceptance of renewable energies in the neighbourhood Degree of internalization of energy-related external costs Number of energy cooperatives engaged in renewable energy plants An overview on the assessment results of all indicators comprised by the SIS is given afterwards in Fig. 10 including the figures showing the assessment results for the eight indicators mentioned above. According to the UN Sustainable Development Goal 8, sustained, inclusive and sustainable economic growth and full and productive employment and decent work are required to achieve sustainable development at different scale. This goal is integral part of the German sustainability strategy [10]. In the light of this and due to the threat of increasing underfunding of the social security systems, the German Federal Government wants to make better use of the existing workforce potential. The political target is to increase the employment rate, i.e. the proportion of the workforce in the population of working age (20 to 64 years old) to 78% and the employment rate of older (60- to 64-year-olds) to 60% by 2030 [11]. 
To achieve these employment targets, labour demand from private companies and the public sector is of crucial importance. The energy sector is an important employer, and the continuing growth of jobs in the renewable energy sector is significant. This increase is driven by declining renewable energy technology costs and enabling policy frameworks. Labour demand and employment in the renewable energy sector depend mainly on economic growth, but also on changes in labour productivity (real gross domestic product per hour of employment) and working hours. Additional demand for labour can be compensated for by a higher yield of the individual working hour (productivity) or by additional work of the existing employees. Thus, real growth of the renewable energy sector does not necessarily mean that the share of employees rises too. For the actual demand for labour, macro-economic labour productivity plays a decisive role. For example, if growth is about 3% and labour productivity, due to automation and digitization, also increases by about 3%, the growth-induced increase in demand for labour and the productivity-related decline in labour demand balance each other out. Only when the growth of production exceeds the increase in productivity will the volume of work increase and additional jobs be created.

In order to define an indicator that can be communicated and understood easily, we agreed to use this comprehensive perspective, assuming that the relationship between labour demand, productivity, overtime work and other influencing factors in the renewable energy sector remains unchanged. The indicator 'share of employees in the renewable energy sector in relation to total number of employees' was selected, although we were fully aware that jobs in this new sector will reduce employment in the 'old' fossil fuel-based energy sector. Besides, employment in other sectors could decline due to increasing energy costs caused by a higher share of expensive renewable energy. Furthermore, employment could decrease in the future if the new energy sector turns out to be very efficient over time; on the other hand, an increase in the efficiency of electricity production is linked with a decrease in labour costs, which could improve the overall employment rate. In view of these considerations, the defined indicator is regarded as a provisional indicator that needs to be improved, or even replaced by a more comprehensive one covering all direct and indirect employment effects of the energy transition, once data are available.

The provisional indicator 'share of employees in the renewable energy sector in relation to total number of employees' includes employment due to domestic production for domestic use and for exported renewable energy components, as well as employees responsible for the maintenance and operation of renewable energy plants. However, the indicator excludes employment due to production in other countries, e.g. the production of photovoltaic modules in China, since the sustainability analysis focuses on Germany. A decline in employment in the conventional energy sector and other sectors as a direct consequence of the energy transition is not taken into account, nor are higher energy costs resulting from subsidies for renewable energies (indirect effects), due to the lack of reliable data series. This indicator shows continuously increasing values from 2007 to 2012 (Fig. 2), mainly because the number of employees in the renewable energy sector steadily increased from 277,300 in 2007 to 399,800 in 2012.
Then, the number decreased to 371,400 in 2013, 355,000 in 2014 and 330,000 in 2015 [7, 12, 13]. The share of employees from 2007 to 2015 was calculated based on these data and on data on total employment given in [14]. The number of employees in the renewable energy sector mainly depends on the volume of investments into this sector in Germany, the export of renewable energy technologies, and the maintenance and operation intensity of renewable energy plants. Model-based information on the volume of investments in Germany until 2050 is given in [9]. Data on future exports and on employees responsible for the maintenance and operation of renewable energy plants are not available. Therefore, the authors estimated the number of employees for the years 2020, 2030 and 2050 based on the estimated volume of investments in the field of renewable energy. In 2015, investments in the construction and maintenance of renewable energy plants (not investment in general) amounted to 15 billion euros [7] and the number of employees was 330,000. The yearly volume of future investments has been taken from [9]. It amounts to 18.4 billion euros until 2020, 17.2 billion euros until 2030, 18.7 billion euros until 2040 and 19.9 billion euros until 2050 [9]. Based on these numbers, 416,000 employees for 2020, 387,000 employees for 2030 and 449,000 employees for 2050 were calculated. However, an even larger increase of gross employment to between 530,000 and 640,000 people in 2030 would be possible, assuming that a global technological leadership of the German industry also leads to a considerable competitive advantage on the growing future world energy market [9].

According to [14], the total number of employees was 41.5 million in 2011 and 43 million in 2015. Starting from the average value of 0.87% over the past 5 years (2011–2015), the following targets for the share of employees in the renewable energy sector in relation to total employment can be calculated, using the data given in [9] for the renewable energy investments and the total number of employees:

- Target for 2020: 0.94% (361,925 employees in relation to 38.6 million employees in total)
- Target for 2050: 1.19% (391,004 employees in relation to 32.8 million employees in total)

The increase in employment derived from the renewable energy investments required to achieve the energy targets of the Federal Government already comprises assumptions on productivity increases. A decoupling of economic growth and employment, in general and in the renewable energy sector respectively, due to automation and digitization was not considered. Under these assumptions, the calculated trendline to 2020 shows a decrease of about 34%, whereas the target requires an increase of about 8%. This leads to the assignment of a red traffic light for this indicator.

Energy expenditure of low-income households

Experience in Germany shows that the energy transition leads to growing energy expenditures of households, because the costs of increasing the share of renewable energies are allocated to customers through the EEG surcharge. This allocation system has been discussed controversially. The impact of this financial burden on the energy expenditures of low-income households has been associated with terms such as 'energy poverty' or 'fuel poverty'. However, there is little agreement even on the problem definition and the measuring method. Moreover, evidence exists that the assessment of whether and to what extent 'fuel poverty' exists strongly depends on the measuring method used [15].
Hence, the authors refrained from evaluating the data without mathematical methods, as done in [16], and instead propose to determine 'essential expenditures' of low-income households for an adequate energy supply for electricity and heating, following the recommendation of [16]. The statistically surveyed data on the energy expenditures of low-income households should then be compared to these 'essential expenditures'. Not surprisingly, these values have not been determined (or even discussed) in Germany or other countries for different household types, since this is a highly normative decision that is hardly justifiable in 'objective' terms. In fact, such expenditures have to date only been raised within the English Household Survey and were used in the BREDEM model to investigate 'energy poverty' in the United Kingdom (UK). Given the lack of appropriate poverty thresholds from other countries, we chose the threshold from the UK, because the climatic and economic conditions in the UK are similar to those in Germany. Since this approach, while suitable in principle, is not yet operational, the authors propose to refer to the indicator 'monthly energy expenditures of households with a monthly net income of less than 1300 euros' as a first approach to monitoring whether the energy transition leads to an undesirable additional financial burden. Whether this should be associated with the term 'energy poverty' remains open to discussion.

The monthly net income of households is categorized according to the German Federal Statistical Office and calculated by subtracting income and wage taxes, church tax and the solidarity surcharge as well as the mandatory social security contributions from gross household income, which consists of the total income of the household from employment, property, public and private transfers and subletting. Data for the monthly energy expenses from 2002 to 2012 for the income class below 1300 € have been taken from [17]. They include electricity, fees, fuel costs for heating and taxes or levies on heating plants. To derive a data series of 5 years, data for 2013 have been calculated from information given in [18] as the weighted average of the income classes below 500 € (2.6% of this household group), 500 to 900 € (39.5% of households) and 900 to 1300 € (57.9% of households). Data for 2014 and 2015 are taken from [19, 20].

In principle, the target for this indicator would have to be adjusted over time, considering the development of the income of the group concerned, the development of energy prices and the inflation rate. Since these values are not known, no prediction was made by the authors. Instead, the authors used research results on 'energy poverty' from the UK, where most of the EU research on this issue is carried out. According to [21], in the UK the expenditure of low-income households on heating should not exceed 10% of their income; a higher percentage would indicate 'fuel poverty'. Despite the critical view of [22] on the data from [21], the authors decided to use this percentage to determine the target, simply because no other valid data were available to define a 'German standard'. On average, German households spend 70% of their energy expenditure on heating and 30% on electricity [23]. Weighting the 10% heating threshold with these two shares, the expenditures for heating and electricity together should not exceed 15% of the net disposable household income of low-income households in Germany (see the short calculation below).
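The 15% figure can be reproduced as follows (our reading of the weighting; the sources cited above do not spell out the arithmetic): if heating accounts for roughly 70% of household energy expenditure and heating alone should not exceed 10% of net income, then total energy expenditure should not exceed

$$ \frac{10\%}{0.7}\approx 14.3\% $$

of net income, which is rounded up to the 15% threshold used here.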
Households in the category 'net income below 1300 €' had an average net income of 901 € in 2011 [17] and 916 € in 2015 [19]. Based on these data, maximum expenditures for heating and electricity of 135 € in 2011 and 137 € in 2015 (Fig. 3) were derived.

Fig. 3 Monthly energy expenditures of households with a net income below 1300 €

The data for the period 2011 to 2015 show that households with a net income below 1300 € spend on average 89 € per month on energy (Fig. 3). Based on the data for the past 5 years, values for the net income in 2020, 2030 and 2050 have been calculated. For the target values, 15% of these net income values have been assumed, corresponding to 139 € in 2020, 142 € in 2030 and 147 € in 2050. Since the trendline shows decreasing monthly expenditures that remain below the target value for 2020, a green traffic light was assigned to this indicator. Despite the green traffic light, however, there might be households that suffer from 'energy poverty' because their income is below the average of all households with incomes below 1300 €, which was used as the data basis here.

For the cultivation of energy crops, agricultural land is required. Land, however, is a finite and increasingly scarce resource. This leads to competition or even conflicts with other land uses, such as food, feed and fibre production. Land is also needed for the installation of renewable energy plants, such as biogas plants, open-space PV systems or wind energy plants, as well as for power transmission lines. Compared to the land use requirements of conventional energy production with fossil fuels, for example for the installation of power plants or the mining of brown coal, the energy transition towards renewable sources is associated with a higher land use. Land use data for the cultivation of energy crops are given in [24]. However, the different kinds of land use listed in [25] should not be summed up, because they are associated with different sustainability-related impacts. In addition, parts of the land occupied by energy production can still be used for other purposes or can be re-cultivated after the energy production phase. Therefore, the authors have decided to take into consideration only the land used for the cultivation of energy crops.

The cultivation of energy crops requires agricultural land and will therefore further increase the competition for land [1]. This growing demand can be satisfied by extending cropland and pastures into new areas, thereby replacing natural ecosystems, and/or by improving the productivity of existing cultivated land through an increased or more efficient use of inputs, improved agronomic practices, better crop varieties, etc. Both options have negative environmental impacts, for example on the conservation of biodiversity. The import of biomass for food, feed, fuels and industrial applications is regarded as an unsustainable strategy for reducing land use conflicts, because it only shifts such conflicts to other countries. The land footprint abroad needed to satisfy the German (bio)energy demand has not been taken into account here, because the system boundaries defined for the SIS only comprise processes located in Germany, and because valid data are lacking. The trend calculated based on data for the past 5 years (2011–2015) shows an increase for this indicator of about 11% by 2020 compared to the average value for 2011 to 2015 (Fig. 4).
According to [26], it is necessary to determine limits for the area dedicated to energy cropping in order to minimize land use conflicts. The authors derived these limits from two general principles based on the Sustainable Development model. First, to reach SDG 2 (end hunger and all forms of malnutrition by 2030), the production of food must be given priority over the production of renewable energy sources or the use of land for terrestrial CO2 storage. Thus, it is hardly justifiable to convert arable land from food production to energy cropping. Second, land use for energy crops should not jeopardize the nature conservation target determined by the German Advisory Council on Global Change (WBGU). The WBGU has proposed that 10–20% of the total land area should be reserved for nature conservation to protect, restore and promote a sustainable use of terrestrial ecosystems and to minimize biodiversity loss. Since worldwide only 8.8% of the total land area is designated as protected areas (category I–VI areas), the conversion of natural ecosystems to land cultivated for energy crops has to be rejected as a matter of principle. As a global benchmark, the WBGU recommends allocating no more than 3% of the terrestrial area to energy cropping in order to avoid conflicts with nature conservation. Consideration of particular regional conditions and possibilities is indispensable for translating this global target to the national scale.

As recommended in [26], a maximum of 10% of arable land and 10% of pasture land should be used for the cultivation of energy crops in Europe. According to [26], these two percentages correspond to an area of 22 million ha, or 4.5% of the total land area, available for the cultivation of energy crops in the European Union, given the decline in agricultural land. This percentage is applied to the total land area of Germany of 34.9 million ha [27], resulting in a target of about 1.57 million ha as the maximum area used for energy crops by the year 2050 (a short back-calculation is given at the end of this passage). The targets for the years 2020 and 2030 were derived by interpolation from the target for 2050. Based on the average value of 2.13 million ha of energy crops over the years 2011 to 2015 and the target for 2050, the following targets were derived by linear interpolation: for the year 2020 a target of 2.0 million ha (5.6% of the land area of Germany) and for the year 2030 a target of 1.9 million ha (5.4% of the land area of Germany). In order to achieve the target of 2.0 million ha for 2020, a reduction of the energy crop area by 4.7% compared to the mean value of 2.13 million ha for the years 2011 to 2015 is required. Since the trendline shows a further increase in the area under cultivation of energy crops, this indicator is assigned a red traffic light.

The traffic light evaluation has to be discussed against the background of the defined target value for 2020 and the ongoing debate on bioenergy. Although bioenergy contributes to the Renewable Energy Directive 2009/28/EC (which sets a target of 10% renewable energy in transport) and only biofuels meeting the binding sustainability requirements may be counted towards this obligation, the cultivation of energy crops, and even the energetic use of biomass in general, is the subject of an increasingly controversial debate in Germany. The reason is that energy crops compete with other biomass uses, such as food and feed, and can be associated with negative effects on humans and the environment.
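The area targets quoted above can be reproduced as follows (our own back-calculation; we assume that the 2011–2015 mean of 2.13 million ha is anchored at its mid-year 2013 for the interpolation):

$$ 0.045\times 34.9\ \text{million ha}\approx 1.57\ \text{million ha (target 2050)} $$

$$ 2.13-\frac{2.13-1.57}{2050-2013}\times(2020-2013)\approx 2.0\ \text{million ha},\qquad 2.13-\frac{2.13-1.57}{2050-2013}\times(2030-2013)\approx 1.9\ \text{million ha} $$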
The negative effects associated with energy crops include a change in global land use, driven mainly by the expansion of bioenergy use in industrialized countries but also by an increasing demand for animal products and correspondingly high feed requirements in emerging markets. In addition, the increased biomass demand is triggering an expansion of the agricultural production area, which could lead to the loss of valuable ecosystems such as forests and species-rich grassland. An intensification of agricultural production through an increasing use of synthetic fertilizers and pesticides can also be associated with ecological drawbacks, e.g. the loss of weeds and landscape elements that are valuable for biodiversity. In view of these challenges and risks, it cannot be ruled out that the science-based target defined for the area under cultivation of energy crops in 2020 could be set more ambitiously by society (less or even no area under energy crops), since the success of the energy transition is not tied to the expansion of bioenergy.

While innovation is widely considered to be an important engine of the energy transition in Germany and a basic prerequisite for the general sustainability goal of 'maintaining society's productive potential', measuring innovation is not easy, since knowledge about innovation processes and results is often limited. Different approaches are available, and various attempts have been made to measure innovation. For instance, asking experts in their respective fields to identify major innovations can be one method. However, this provides a rather subjective perspective, and it is difficult to gain an overall and continuous picture of innovation. Therefore, the authors propose to use more than one indicator to properly assess the energy-related innovation process at its different stages on a quantitative basis, encompassing both the input into the innovation process and its outcome. The selected indicators are, first of all, 'number of university graduates in the field of energy sciences' and 'federal expenditures for energy research'. Research and development (R&D) expenditures are often used as a proxy for innovation or technological progress. However, expenditure is an input into R&D rather than an outcome of it, which should be innovation. Therefore, the authors additionally propose the indicator 'number of German patents in the field of renewable energy and energy efficiency', since patent data and statistics on new technologies are increasingly used to measure innovation, using e.g. European Patent Office (EPO) data, which provide long-term data series. Although patent data are frequently used as an innovation indicator, their application is discussed controversially due to the constraints associated with this approach [28]. The key argument is that not all patents represent innovations, nor are all innovations patented. Besides, there are a small number of highly valuable patents and a large number of patents with little value. Scherer and Harhoff showed in their survey of German patents that the roughly 10% most valuable patents account for more than 80% of the economic value of all patents [29]. Against this background, the authors decided to also select the indicator 'start-ups in the renewable energy and energy efficiency sector', since entrepreneurial activity can be seen as an outcome of innovation processes and as the seizing of opportunities opening up in the changing energy market.
Niche actors, such as start-ups, play an important role in the energy transition process because they can support the implementation of shifts in the socio-technical landscape [30] and explore, develop or advance innovative products and processes that are required to shape the transition [5]. Particularly when it comes to the commercialization of new energy technologies, start-ups may capture entrepreneurial opportunities or provide complementary niche innovations to the current regime players [31, 32].

Data on 5000 business start-ups used to describe and analyse the indicator are derived from [33]. The data were classified according to the 'environmental goods and services sector' framework. Thus, the start-ups could be assigned to eight distinct sectors of the green economy: climate protection, renewable energies, energy efficiency, emission prevention, recycling economy, resource efficiency, renewable resources and biodiversity. Only the firms in the renewable energy and energy efficiency sectors were considered for this indicator, in order to avoid duplicates, e.g. firms that are active in more than one sector (Fig. 5). The numbers of start-ups taken from [33] differ significantly from those presented in [4] (based on [34]). One reason is that the Centre for European Economic Research [34] uses a more conservative method of ascribing start-ups to the renewable energy sector, based on a keyword search within the company name and description. The Borderstep Institute, however, uses individual Internet-based research to classify the firms within the sample. In general, this indicator has the problem that the data series ends in 2013.

To determine targets for this indicator, it is assumed that the number of start-ups develops in proportion to the number of registered patents in the renewable energy and energy efficiency sector (indicator no. 28, see Table 1). Patents are regarded as crucial for companies to generate benefits as pioneers. For start-ups, however, there is little information on their patenting behaviour and on any influence of patents on a company's success [35]. Some studies on the functionality of the patent system suggest that this system, although intended to support smaller companies and start-ups, is more likely to be driven by the strategic patenting behaviour of large companies and the rapid growth of patent applications overall [36, 37]. Furthermore, uncertainty in patent enforceability leads to discrimination against small businesses and start-ups. Despite these concerns about the functionality of the patent system for start-ups, it is repeatedly argued that start-ups can generate competitive advantages above all through patents [38]. The main argument is that start-ups can capitalize on innovation only if the innovation is protected and potential competitors are excluded from the potential gains associated with it [35]. The number of newly registered patents, in turn, is assumed to depend on the expenditures for energy R&D (indicator no. 27, see Table 1). A study by [39] shows that R&D expenditure (in % of GDP) in the OECD countries correlates significantly and positively with the so-called patent intensity. This indicates that countries with high R&D expenditures also have a high patent intensity. High expenditure on R&D seems to be one of the most important prerequisites for a high level of invention activity.
The German Government's Expert Commission on Research and Innovation [40] comes to the same conclusion: the commission states that R&D promotes the emergence of new knowledge and thus innovation, and describes R&D as a key driver of international competitiveness and of the long-term growth opportunities of economies. Based on these findings, the target for the number of start-ups was assumed to develop in relation to the number of patents in the renewable energy and energy efficiency sector and to the R&D expenditures for energy in Germany, respectively. The target for total research expenditure in Germany was assumed to increase from 2.92% of GDP in 2013 to 4.36% in 2050. This corresponds to an increase by a factor of 1.49 by the year 2050 compared to 2013. For the definition of this target, the goal of the sustainability strategy of the Federal Government to spend 3% of GDP on R&D was not adopted, because it was not considered ambitious enough [10]. Instead, the target was defined by using as reference point the OECD country with the highest research spending in relation to GDP, which was South Korea with 4.36% in 2013 [41]. Research spending for the energy sector is assumed to increase by the same factor of 1.49, so that the share of energy research in total research spending remains the same. The same factor is applied to define the target for the number of start-ups in 2050 (24,515). The average number of start-ups over the past 5 years for which data were available (16,420) was used as the initial value for deriving the targets. The targets for the years 2020 and 2030 were interpolated accordingly, resulting in 18,288 start-ups in 2020 and 20,363 in 2030 (Fig. 5). The trendline calculated on the basis of the past 5 years (2009–2013) shows a decrease in the number of start-ups of approximately 48% by 2020 compared to the average value over the years 2009 to 2013. Since the target for 2020 is 11% higher than the average value for the years 2009 to 2013, a red traffic light is assigned to this indicator.

The pay gap between women and men is a relevant national sustainability indicator because it reflects equality in society [10]. Wage differences between women and men are a sign of social inequality in modern employment societies. The reduction of the gender pay gap is thus an indicator of progress towards equality and sustainable development. Still, women in Germany earn on average 23% less than their male colleagues [42]. In an EU-wide comparison, Germany ranks seventh from the bottom. With respect to university graduates and management positions, the gap is even wider. One main reason for this gap is that women are still very rarely represented in certain professions and sectors and at the upper end of the career ladder. As the wage gap is a key indicator of persistent gender inequality in working life, used in political and scientific debates, we chose it for the SIS. The ratio between women's and men's gross yearly earnings addresses nearly all of the problems women are still confronted with in their working lives: women's limited access to certain jobs, obstacles they face in their professional development, traditional gender roles and mental patterns that hamper the reconciliation of family and working life, including obstacles to re-entering the labour market after a career break due to childcare. Ultimately, each of these factors contributes to the pay gap.
An EU-wide comparison reveals that in Germany the sector electricity, gas, heat and cold supply is among the economic sectors with the highest gender pay gap [43]. Official statistics distinguish five performance groups representing a rough categorization of employees' activities according to the qualification profile of workplaces. This categorization was narrowed down to the 'highest salary group' for a clearer visualization, to focus on the most relevant group and to ensure reliable data series from the Federal Statistical Office. This 'performance group 1' includes employees in leading positions with supervisory and discretionary authority, such as employed managers, provided their earnings include, at least partially, non-performance-related payments. Employees in larger management areas who perform dispatching or management tasks are included, as are employees with activities that require comprehensive business or technical expertise. In general, this specialist knowledge is acquired through university studies. The selected indicator is defined with respect to the gross yearly income of full-time employees in the energy supply sector, including special payments, according to the German Federal Statistical Office category 'D–Energy supply', which covers the electricity, gas, heat and cold supply sector [44].

In 2015, women's salary amounted to 84% of men's salary, with an annual salary difference of around 16,000 euros (Fig. 6). The target is defined as eliminating this gender pay gap by 2030. The indicator and the target refer to the unadjusted gender pay gap, relating only the gross earnings to each other without considering their causes. This also includes the part of the pay gap that results from factors such as career choice and the employment biographies of the respective cohorts. The defined target is more ambitious than the objective stated in the sustainability strategy (2016) of the Federal Government to reduce the gender pay gap to 10% by 2030, which refers to gross hourly earnings at all salary levels and in all sectors [10]. The target for 2020 is determined by interpolating between the average value of the last 5 years (2011–2015) and the complete closing of the gender pay gap by 2030. The extrapolated trend calculated for 2011–2015 shows an increase of the gap by 24% in 2020 compared to the average value over the years 2011 to 2015. This means that the indicator is assigned a red traffic light and that measures are required to reduce the gender pay gap in the highest salary group in the energy sector. Since the indicator is regarded as representative of a variety of pay grades, measures are also needed to close the gender pay gap for other pay grades in the energy sector, in line with the sustainability principle of equal pay for equal work or work of equal value.

While there are ambitious government targets to increase the share of renewable energy in Germany, it is increasingly recognized that the social acceptance of renewable energy technologies may be a constraining factor in achieving this target, especially due to the changes in land use and landscape that are associated with these technologies. The far-reaching changes in energy technology infrastructure and in the landscape associated with the energy transition are increasingly provoking intense resistance among the population. This is particularly apparent in the case of wind energy, which has become the subject of contested debates, mainly due to the visual impacts of the plants on characteristic landscapes.
Apparently, contradictions exist between public support for renewable energy innovation on the one hand and obstruction of, or even resistance against, the realization of specific projects in the neighbourhood on the other hand. In this context, the question arises of how it can be determined whether the energy transition towards renewable energies and the associated changes in resources, technologies and infrastructures are really accepted by citizens. Since general opinions on renewable energies usually reveal little about the social issues arising from the introduction of new renewable energy technologies and infrastructures and their retroactive effects on citizens, we have chosen the acceptance of renewable energies in the neighbourhood as an indicator for the SIS. With this indicator, we can measure whether citizens not only agree with the expansion of renewable energy in general but would also accept having a renewable energy plant in their backyard. This indicator addresses the socio-technical interface of the energy system, since it measures whether the technical energy transition conforms to political and social ideas and individual values. Social acceptance is crucial for a successful energy transition, but it is difficult to assess with indicators, because exploring the view of a subject on an object and measuring the different dimensions of acceptance and their influencing factors are rather complex tasks, and the field of renewable energies is highly diverse. In the present work, we have decided to use the results of different surveys from various years on the acceptance of renewable energies, which were analysed on behalf of the German Renewable Energies Agency [45], since survey results are commonly used to measure social acceptance and can give an impression of acceptance trends if the same questions are asked over time. Measuring acceptance faces the problem of gathering reliable and accessible data for the impact assessment, and thus the assessment is quite often driven by the availability of data. For the selected indicator, data are available for Germany for the years 2010 to 2016 [46,47,48,49].

As the desirable target for 2050, total acceptance of renewable energy in the neighbourhood was assumed. Based on a linear interpolation between 100% in 2050 and the average value for the past 5 years (2011–2015), the targets for 2020 (72%) and 2030 (82%) were determined (Fig. 7). Compared to the average value for 2012 to 2016, the extrapolated trend calculated for the past 5 years (2012–2016) shows a decrease in the acceptance of renewable energy in the neighbourhood of 7.3% by 2020. However, the target for 2020 requires an increase of 8.7% compared to the average value for 2012 to 2016. Consequently, the indicator is rated with a red traffic light.

Fig. 7 Acceptance of renewable energy in the neighbourhood

Since the reactive acceptance of renewable energy is strongly influenced by the technology used to produce it, it is important to also measure the acceptance of the different renewable energy technologies specifically. The data in Table 3 show the values for the acceptance of specific renewable energy technologies, such as wind turbines, biomass plants and photovoltaic systems (solar parks), as well as nuclear and coal-fired power plants. The percentages listed there are based on regular surveys and represent the sum of the positive answer options 'I like that' and 'I like that very much'.
Looking at renewable energy technologies in more detail, biomass and wind energy plants experience the lowest level of social acceptance, whereas solar energy used to produce electricity with photovoltaic panels in solar parks receives the highest level of acceptance (Table 3).

Table 3 Acceptance of renewable energy technologies in the neighbourhood (data from [46,47,48,49])

In principle, acceptance issues cannot be fully covered by only one or two indicators. The acceptance of key energy transition technologies does not cover all issues that are relevant for assessing to what extent energy transition paths are acceptable and will be accepted. Therefore, another indicator addressing grid extension is part of the indicator set. It should be emphasized at this point that, despite the uncertainties about how to operationalize and measure the acceptance of the energy transition in a relevant, robust and scientifically sound way, we believe that acceptance is a highly important research field for addressing the socio-technical interface of the energy system. Further research is needed to develop a reliable and meaningful set of acceptance indicators that can be sufficiently supported with data over a time series, in order to improve the transformation process constructively and with a view to all actors and citizens.

Activities related to the energy system often cause environmental impacts and corresponding costs. External costs occur if producing or consuming energy services imposes costs upon third parties, such as air pollution-related damage to ecosystems or to the health of individuals and the corresponding clean-up costs borne by society. Internalization of external costs therefore aims at making such effects part of the decision-making process of energy providers and users, reducing market failures and minimizing the negative impacts of the energy system on society's welfare. In order to estimate these costs, the external effects of the energy system have to be identified, assessed and monetized as far as possible. The internalization of external costs can be implemented by various policy measures, including market-based instruments (e.g. charges, taxes or tradable permits). Accordingly, fair and 'true' energy pricing is assumed to make it economically more attractive both to use energy services with fewer negative environmental and health effects and to reduce energy use in total, in order to bridge the gap between the private and societal costs of energy production and use. This is why the authors have chosen this indicator for the SIS.

The degree of internalization of energy-related external costs is defined here as the ratio between taxes on energy use (energy taxes, electricity taxes, motor vehicle taxes, air transport taxes, nuclear fuel taxes and road taxes) and the environmental and healthcare costs due to electricity production and energy use for heating and transportation (see the formula below). Data are given for the years 2008 to 2010 and are calculated based on the methodological guidance given in [50]. For this reason, taxes on air transport and on nuclear fuels, which have only existed since 2011, are not yet included in the methodology and the numbers presented. Data on energy taxes, electricity taxes and motor vehicle taxes are taken from [51, 52], data on road taxes for trucks from [53] and data on environmental costs from [50]. According to [54], the environmental costs resulting from the production of electricity in Germany include environmental and healthcare costs that result from direct emissions.
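Expressed as a formula (our notation; the text above defines the indicator only verbally), the degree of internalization for a given year is

$$ \text{degree of internalization}=\frac{\text{taxes on energy use}}{\text{environmental and healthcare costs}}\times 100\% $$

With the values reported below for 2010 (environmental costs of about 120.6 billion € and a degree of internalization of 48.9%), this would imply energy-related tax revenues of roughly 59 billion €; this is a back-of-the-envelope consequence of the two reported figures, not a number taken from the sources.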
In addition to these direct emissions, costs resulting from indirect emissions over the entire life cycle of energy production have also been taken into consideration. Since indirect emissions arise not only in Germany, EU cost rates have been considered as well. The costs of greenhouse gas emissions are set at 80 € per tCO2, including damage as well as abatement costs. Estimates of the environmental and healthcare costs of nuclear energy differ widely within the available literature. Following the requirements of the methodological convention used here [54], the cost rates of the most expensive technology should then be applied; in the case considered here, this is electricity production from lignite. The environmental costs of transportation include health effects, climate change effects, noise and impacts on nature and landscape, as well as effects caused by indirect emissions (construction, maintenance and disposal, fuel supply). Total environmental costs, defined as described, amounted to 122.4 billion € in 2008, 115.2 billion € in 2009 and 120.6 billion € in 2010 [50]. In principle, data for other years can also be calculated by taking into consideration the mix of electricity production and heat energy consumption, as well as the relevant data for the transport sector in the different years. However, this is only reasonable if both the related environmental costs and the technologies (e.g. emission factors) do not change, which is not a realistic assumption. Thus, calculations for other years are only valid if they take such changes into account. Based on the methodology described, the degree of internalization of external costs amounted to 48.9% in 2010 [50,51,52,53] (Fig. 8). An update beyond 2010 was not calculated because the results strongly depend on the development of emissions and the related healthcare costs. As the target for 2050, a complete internalization of energy-related external costs was assumed. Based on a linear interpolation between 100% in 2050 and the average value for the 3 years with available data (2008–2010), the targets for 2020 and 2030 were determined as shown in Fig. 8. A white traffic light was assigned to this indicator because no trendline and no distance-to-target were calculated due to the lack of a sufficient data series.

Fig. 8 Internalization of energy-related external costs

The external costs of the energy system and its transition can be calculated by determining the social costs, which have so far been borne by the public, and integrating them into microeconomic cost accounting. The aim of this method is to attribute the external costs associated with environmental pollution to the polluter with the help of prices (polluter-pays principle). In this way, a market-based and therefore system-compatible and effective solution to the environmental problem is provided. It has to be noted, however, that in environmental policy it is regarded as impossible to fully internalize externalities, because of the problems of economically assessing environmental damage and identifying the polluters. That is why the desirable target defined here, the complete internalization of energy-related external costs, is quite ambitious.

In recent decades, thousands of people have joined citizen groups, city and local councils or local business enterprises to set up renewable energy projects. Energy cooperatives enjoy great popularity as a form of organization, since Germany has a long tradition of cooperatives.
The organizational form of the cooperative is based on the sustainability principles of solidarity, democracy, identity and membership promotion and has a high potential for democracy [55]. With their economic-democratic approach of involving the members in their entrepreneurial orientation, forming a solidarity economy and moving away from the maxim of profit maximization, cooperatives are, at least ideally, counterparts to capitalistically organized companies and blueprints for sustainable organizational forms [56]. Moreover, energy cooperatives can play a central role in a participation-oriented energy transition owing to their design as prosumer organizations. They represent a model that tries to respond to the social and environmental challenges of modern societies with alternative business, economic and social models [55]. In energy cooperatives, citizens work together for the production and distribution of renewable and clean energy [57]. Not only their ecological claim but also their democratically oriented logic of action suggests that energy cooperatives should be included in the discourse on sustainability. This discourse emphasizes their central role in the context of the energy transition, their transformative potential for social development processes and their potential for the self-organization of a society pursuing the decentralized transition to clean energy, through which they can become main actors of the energy transition [55]. Besides, energy cooperatives for local energy projects can contribute to a higher public acceptance of new systems for providing renewable energy. In the light of these considerations, we decided to select the indicator 'number of energy cooperatives engaged in renewable energy plants' for the SIS.

Various forms of energy cooperatives have been founded in Germany for more than a decade, allowing citizens to directly support the energy transition through their own investments in, and ownership of, large-scale renewable energy plants that would be too expensive for single individuals alone, such as solar parks or wind turbines. To date, most energy cooperatives have been formed at the local level, for example by villagers investing in a nearby wind farm. Information about energy cooperatives is taken from [58,59,60] and includes local and regional citizens' cooperatives. Here, only energy cooperatives under the umbrella of the Deutscher Genossenschafts- und Raiffeisenverband e.V. are taken into consideration. According to these studies, the accumulated number of energy cooperatives was 8 in 2006, 272 in 2010 and 812 in 2015 (Fig. 9). According to these figures, the number of energy cooperatives in Germany has risen steadily in recent years. At the same time, however, annual growth rates have been falling sharply. This can be explained above all by the changing conditions under the EEG. Thus, 129 new energy cooperatives were founded in 2013, compared to only 56 in 2014 and 40 in 2015. These figures may vary, since some sources are based on the year of establishment and others on the year of registration. The business activities of these energy cooperatives include electricity production (87% of all cooperatives in 2012 and 95% in 2013), heat production (19% in 2012, 16% in 2013), grid operation (4% in 2012 and 2013) and the operation of district heating systems (20% in 2012, 16% in 2013). Since the results are based on a survey in which multiple answers were possible, the individual percentages add up to more than 100% [59, 60].
Civil power plants produced approximately 580 million kWh of renewable electricity in 2012 and approximately 830 million kWh in 2013 [6, 47]. No data series are available for the number of people belonging to these cooperatives. Only for 2011 is it confirmed that more than 80,000 citizens were engaged in energy cooperatives.

Fig. 9 Number of energy cooperatives

To preserve the capacity for self-organization in the field of renewable energies, we derived the targets for 2020, 2030 and 2050 by assuming that the number of energy cooperatives should rise in proportion to the increase in the 'share of renewable energy in gross final consumption of energy' (indicator no. 10, see Table 1). The extrapolated trend calculated on the basis of the past 5 years (2011–2015) leads to an almost doubling by 2020 compared to the average value for 2011 to 2015. The target for 2020 (1415 cooperatives) requires an increase of 112% compared to the average value for the years 2011 to 2015 (666 cooperatives). This results in a deviation of 13%, so the indicator is assigned a yellow traffic light. Above all, the framework conditions of support via the German Renewable Energy Act (EEG) are crucial for the number of energy cooperatives. The EEG amendment that came into force in 2017 switched from fixed feed-in tariffs to competitive tenders. As a result, the projects of energy cooperatives are systematically disadvantaged. With the aim of preserving the important diversity of actors involved in the energy transition in general, and the organizational model of energy cooperatives in particular, facilitated participation conditions have to be defined for citizens' energy projects. Since the share of renewable energy in gross final consumption of energy is still rising significantly and continuously while fewer and fewer energy cooperatives are being founded, a drastic change in the framework conditions of the EEG is required to achieve the targets for 2020 and beyond.

Sustainability assessment of the German energy system

Figure 10 gives an overview of the evaluation results for all 45 indicators selected for the sustainability assessment of the German energy system. Only for 12 indicators can it be assumed that the sustainability targets for 2020 will be achieved without additional policy measures or changes to existing ones (green traffic light). Four indicators are assigned a yellow traffic light. Political action is needed to reach the targets for the 18 indicators assigned a red traffic light. Another 11 indicators are assigned a white traffic light due to the lack of available data series. It can be noted that the indicators related to the maintenance of society's productive potential with regard to the use of renewable and non-renewable resources as well as environmental pollution (nos. 10 to 22) are all rated with a red traffic light, except the indicators 'final energy productivity of the industry' (no. 19) and 'energy-related emissions of acid-forming gases' (no. 22). The indicators assessing the sustainable development of human capital (nos. 26 to 29), however, are mainly evaluated with a green traffic light. Here, action is only required to improve the performance of the indicator 'number of start-ups' (no. 29). As described in [8], it was not possible to define suitable indicators for all sustainability aspects affected by the energy transition. This was the case, for example, for the issue of preserving biodiversity.
However, biodiversity could be measured indirectly by using several indicators of the Sustainability Indicator System (SIS), as some of them measure driving forces considered to be mainly responsible for the loss of biodiversity [61]. Some driving forces, such as the extent of land use, are listed in the SIS or can be translated into adequate indicators. This was done for the load of nutrients and pollutants, which is covered by the indicators for eutrophication and acidification and for the discharge of heavy metals (Fig. 11). Only one main driving force, the occurrence of invasive species, is not reflected in the SIS at all.

Fig. 11 Indirect sustainability assessment of the impact of the energy system and its transition on biodiversity

As shown in the overview of results in Fig. 11, seven indicators are regarded as relevant for the preservation of biodiversity. Of these, four are rated with a red traffic light and two with a white traffic light. These results indicate that the transition of the energy system will rather contribute to the loss of biodiversity than help to stop it. However, the targets for these indicators were not derived to address biodiversity aspects explicitly. The statement is therefore provisional and uncertain. Regarding the pollution of ecosystems due to the discharge of heavy metals, the critical load concept should be used for the assessment rather than the emission values relevant to human health. For Germany, critical loads are available for lead (Pb), cadmium (Cd) and mercury (Hg), taking into account both potential health effects and ecotoxic effects by measuring the maximum load ecosystems can bear. According to the European mapping, critical load exceedances in Germany are widespread for Pb and Hg, but hardly occur for Cd [62]. A review of these findings based on the results of German deposition measurement networks in combination with dispersion models is not yet possible. For this reason, there are no spatially differentiated representations of critical load exceedances for heavy metals from atmospheric deposition. Against this background, we recommend further research and empirical studies aimed at overcoming these limitations in measuring the impacts of the energy system on biodiversity.

The quality and reliability of assessments based on the Sustainability Indicator System (SIS), such as the one presented in this paper, depend on the appropriateness of the selected indicators, the availability of valid data series, the targets determined and the evaluation method applied, e.g. the distance-to-target approach. These factors, their relationships and their impacts on the assessment results are discussed in the following. The discussion focuses on the comparison of our results with those of the German monitoring report 'Energy of the Future', as this is the only official and the most elaborate and regularly revised approach for monitoring the German Energiewende. Besides, it applies a similar procedure for the selection of indicators for economic and ecological impacts and for the assessment of indicator performance. Other studies, such as the indicator report of the German Federal Office of Statistics or the Energiewende-Navigator developed by the Federal Association of German Industry (BDI), are not considered here (see [8]), because they are not as comprehensive and regularly updated as the German monitoring report. Besides, the BDI applies a different assessment procedure, resulting in another traffic light system that is not comparable with the approach described here.
The discussion focuses on those indicators that are used both in the SIS and in the German monitoring report but show divergent assessment results. Such differences occur in the case of four indicators addressing key targets of the energy transition: share of renewable energies in gross final energy consumption (SI no. 10), primary energy use (SI no. 13), final energy productivity of the German economy (SI no. 18) and greenhouse gas emissions (SI no. 21). In our assessment, these indicators are all assigned a red traffic light. Although the monitoring report also uses the distance-to-target approach and the same data series (except for the greenhouse gas emissions, where we included only the energy-related emissions), the two assessment results differ. To understand the differences, it must be explained that the monitoring report applies an assessment scoring system ranging from 5 points, for the fulfilment of a target or a deviation of up to 10%, down to 1 point for a deviation of more than 60%. Using this scoring method, three of these four indicators (SI nos. 13, 18 and 21) were awarded 3 points, whereas indicator SI no. 10 was awarded 5 points. Overall, the monitoring report's assessment results for these four indicators are thus much more positive than the results presented here.

A further difference between our approach and the monitoring report, which is also responsible for the diverging results, is the methodology chosen to assess the deviation between the projected values and the targets for the year 2020. As described before (see formula I in the 'Sustainability assessment based on the distance-to-target approach' section), we compare the projected percentage change with the required percentage change to calculate the percentage deviation, which is then evaluated using the traffic light colour code. In contrast, the monitoring report compares the absolute projected value with the absolute target value. We chose the percentage deviation because it provides information on the deviation of both the present and the projected value from the present and future target. Besides, absolute values could lead to misleading conclusions. This applies particularly to cases where the distance between the current value and the target is large, because comparing absolute values would then lead to an overestimation of the degree of target achievement. On the other hand, using percentage values as the basis for the assessment can lead to an underestimation of the degree of target achievement in cases where the distance between the current value and the target is small.

Another methodological difference concerns the reference value used for the calculation of the projected value for 2020. In the monitoring report, the projected value was derived by a linear projection starting from the year 2008, which is fixed for all indicators. In our assessment, however, we use the average value of the past 5 years with available data. Although data series up to the year 2015 or 2016 were available for many indicators, this approach has the drawback that the indicators can have different reference periods. Despite this drawback, we chose this approach in order to better capture and integrate recent changes in trend development, e.g. due to modifications of societal framework conditions such as regulatory approaches.
To give an example: with just 40 new energy cooperatives being set up in 2015, the number of newly founded cooperatives fell by another 25% compared to the already low level of the previous year. Such recent shifts may be masked in the monitoring report, as has already been noted in [63]. Löschel et al. criticize that, with its methodological approach, the monitoring report is not able to suitably take into account the more or less stagnating greenhouse gas emissions since 2009. In contrast, we assigned a red traffic light to this indicator, based on the probability of reaching the target set. It has to be noted that the delimitation of the 5-year period and the calculation of the reference value depend on the availability of data series. Consequently, the number of years remaining for political measures to achieve the 2020 target can differ. Considering a period closer to the target year, e.g. from 2012 to 2016, would require stronger measures to achieve the target than an earlier time period, e.g. 2008 to 2012, because fewer years remain for interventions and measurable impacts. Thus, it may be reasonable to adjust the reference lines used to assign the traffic light code over time. Moving closer to the target year 2020, the need for action becomes more urgent; a red traffic light should then turn, for example, into a dark red one, and accordingly a green traffic light could turn into a yellow one. Compared to the approach chosen, such a modification could better fulfil fairness considerations in the distance-to-target approach, but it would definitely make the assessment more complicated and require difficult decisions on how to adjust the traffic light colour code in detail. On the other hand, a green traffic light based on the past and extrapolated trend may lead to the conclusion that the distance to the target is so small that the target will easily be reached, so that efforts could be slowed down, possibly even reversing earlier progress. Action to achieve the targets at least by 2030 or 2050 would then again be necessary. This phenomenon can be tackled by focusing on rates of improvement rather than on distances to target. Dynamic assessments can also suggest the degree of effort required to meet a target, and how this varies across targets: where there is a long distance to travel but recent progress has been rapid, it may be easier to close the gap than where the initial distance is short but recent progress has been slow or negative.

A further reason for the differences between the results of our assessment and those of the monitoring report is the targets determined for the indicators. Löschel et al. assessed the indicator SI no. 13 ('use of primary energy') with a yellow traffic light and the indicator SI no. 10 ('share of renewable energy in gross final consumption of energy') with a green traffic light, meaning that it is likely that the targets for 2020 can be achieved with current policies and strategies. For SI no. 10, we chose a more ambitious target for 2020: instead of an 18% share of renewable energy, a share of 23%, based on [9], was determined to ensure better consistency with other assumptions also taken from [9]. Hence, we assigned SI no. 10 a red traffic light, in contrast to the green traffic light in the monitoring report. This example shows the influence of target setting on the assessment results; the effect is illustrated with hypothetical numbers below.
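The sensitivity to the chosen target can be illustrated with the sketch introduced in the 'Sustainability assessment based on the distance-to-target approach' section. The numbers below are purely hypothetical and are only meant to show how the same data series can yield different traffic light colours under the two targets; they are not the actual values of SI no. 10.

```python
# Hypothetical data series for a renewable energy share (in %), for illustration only
years = [2011, 2012, 2013, 2014, 2015]
values = [12.5, 13.0, 13.8, 14.6, 15.1]

for target in (18.0, 23.0):
    pv, dev, colour = assess_indicator(years, values, target)
    print(f"target {target}%: projected {pv:.1f}%, colour {colour}")

# With these illustrative numbers, the 18% target yields a green light,
# whereas the more ambitious 23% target yields a red light.
```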
Our approach of defining targets for each indicator of the SIS, regardless of whether these are already politically or legally anchored, in order to carry out the DTT assessment comprehensively, has strengths and limitations. Its strength is that it provides a preliminary, comprehensive overview of the sustainability of the energy system in Germany and its transition. Its limitation is that the assessment results have to be interpreted with care, since those targets that do not reflect politically binding targets remain provisional as long as they are not legitimized by politics. Furthermore, it has to be noted that even where binding political targets exist, these targets can be revised if it becomes likely that the objectives will not be met. A current example is the agreement between the largest parties in Germany to officially give up the already unattainable climate targets for 2020. Another limitation is that translating targets, irrespective of their origin, into quantitative numbers for 2020, 2030 and 2050 turned out to be not a straightforward but a complex and rather difficult task, for several reasons. One challenge is that not all targets can easily be expressed in quantitative terms or translated into quantitative reductions and modifications of existing numbers. In cases where the policy target refers to a year other than 2020, e.g. a point in time beyond 2020, the target for 2020 had to be re-scaled through linear interpolation. This necessary procedure is regarded as a second source of uncertainty. Despite these restrictions and uncertainties, the DTT assessments can clearly help to identify the need for political priority setting and action in those areas that are highly relevant for the sustainable development of the energy system and its transition but have so far been excluded or overlooked. As outlined above, we applied existing policy targets wherever possible in order to remain compatible with the perspective of political decision-makers and to provide applicable information. In view of the influence of the target definition on the assessment result, it can be argued that targets should be defined according to scientific evidence rather than political feasibility. The debate on climate protection shows that this would probably lead to more ambitious targets and to a worse rating of the transformation strategies implemented. In our assessment, however, for many indicators this would not have changed the already red traffic lights or the recommendation that action is required to reach the quite ambitious political targets. For the new indicators that are not yet on the political agenda of the energy transition, we applied a scientific approach to derive appropriate targets for and beyond the year 2020. In view of these findings, we consider it important for future research and corresponding policy consultation to better account for the strengths and weaknesses of sustainability assessments based on distance-to-target calculations, and for the impact of the selected reference values, the targets defined and the scoring systems applied on results and recommendations. One possibility to check and reveal the quality and robustness of assessment results is to carry out sensitivity analyses, which would support decision-makers in becoming more aware of how changes in reference values, distance-to-target calculations and targets can influence assessment results and policy recommendations. 
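The re-scaling of targets that refer to a year other than 2020 amounts to a simple linear interpolation; the numbers below are hypothetical and serve only to illustrate the procedure described above.

```python
# Hypothetical illustration of re-scaling a policy target to the year 2020
# by linear interpolation between a base year and the year the target refers to.

def rescale_target(base_year, base_value, target_year, target_value, year=2020):
    """Linearly interpolate the target trajectory to the requested year."""
    slope = (target_value - base_value) / (target_year - base_year)
    return base_value + slope * (year - base_year)

# Assumed example: an indicator at 100 units in 2010 with a politically set
# target of 60 units for 2030; the interpolated 2020 target is then 80 units.
print(rescale_target(2010, 100.0, 2030, 60.0))  # -> 80.0
```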
As already discussed in [8], the SIS includes several new indicators addressing important socio-technical aspects of the energy system and its transition that have so far not been considered in the German monitoring report. This concerns most of the indicators listed in Table 1 from SI no. 32 to 45. For those indicators, only few data exist, and it is not yet possible to create data series of at least 5 years. Since the distance-to-target method applied here requires such series, no assessment is possible for most of these indicators. Therefore, white traffic lights were assigned, indicating the need to collect more comparable data over time. Since this is the case for 11 out of 45 indicators, it is difficult to assess the social and socio-economic impacts of the energy system and its transition, which from our point of view is the most interesting field of investigation. Among the indicators related to the socio-technical interface, only one indicator is assigned a green traffic light (SI no. 38), whereas three indicators (SI nos. 32, 35 and 38) are assigned a red traffic light. This indicates the need for action to close the gender pay gap in the energy sector, to increase public acceptance of renewable energies in the neighbourhood, and to increase the volume of publicly funded loans for energy-related investments. The relatively large number of indicators included in the SIS may evoke the idea, most frequently expressed by decision-makers, of aggregating the single indicator assessment results into a 'sustainability index' for the energy system. The main argument behind this demand is to obtain quick information that can be communicated more easily. However, there is no scientifically proven approach for summing up such heterogeneous indicators into a single sustainability score. Beyond that, an aggregated index would be of limited value for decision-makers, because recommendations for action have to address particular fields of action, which cannot be identified from an aggregated index but require disaggregated information in terms of specific indicators and targets. The assessment with the SIS presents such information in a transparent format. In any case, users of the SIS may select indicators according to the specific context they are acting in. The developed Sustainability Indicator System (SIS) is a comprehensive tool to assess progress towards a more sustainable energy system and is thus useful to support decision-making. It includes new indicators to assess the socio-technical interface of the system that are lacking in existing indicator sets such as the German monitoring report 'Energy of the Future'. As no assessment is possible for over one quarter of the SIS due to the lack of data series, research and monitoring are recommended to fill these gaps in order to carry out a truly comprehensive sustainability assessment. Since the distance-to-target methodology entails some uncertainties and limitations, it is crucial to check and display the quality and robustness of the assessment results by carrying out sensitivity analyses. The SIS is considered a relevant contribution to sustainability research and practice for the further development of the energy transition. It can be used as a monitoring system by politics, administration, NGOs and society. 
As no other scientific approach provides a similar comprehensive tool for the sustainability assessment of energy systems, our work is a milestone that contributes both, to the academic discourse and the improvement of already existing indicator-based assessments such as the German monitoring report. However, both the determination of indicators and targets as well as the assessment methodology should be seen as a continuous process in which scientists, decision-makers, stakeholders and citizens should be integrated. In particular, target setting is a process, which is subject to social value patterns and thus needs political agreement and legitimation. The SIS has the potential to provide information beyond the mere assessment of single indicators. For example, it is applicable to assess the impact on biodiversity in an indirect way and to identify trade-offs between sustainability issues. The assessment tool bears the potential for studying a wide range of questions concerning the future sustainability of the energy system. Besides, the SIS could be used to assess the sustainability of the energy system at different scales, at the state level as well as in other European countries if data series are available. With respect to the methodological challenges, applying the SIS for monitoring and decision-making in different contexts and at different scales would be beneficial to gain experiences about the adaptability of the SIS assessment tool and to get valuable clues how to elaborate our approach. BDI: Bundesverband der Deutschen Industrie DTT: Distance-to-target EEG: EPO: Hg: R&D: SAIDI: System Average Interruption Duration Index SI: Sustainable indicator SIS: Sustainable Indicator System WGBU: Wissenschaftlicher Beirat der Bundesregierung Globale Veränderungen World Commission on Environment and Development (WCED) (1987) Our common future. Oxford University Press, Oxford United Nations (UN) (2016) The sustainable development goals report 2016. United Nations, New York Bundesministerium für Wirtschaft und Energie (BMWi). 2012. Erster Monitoring-Bericht "Energie der Zukunft". https://www.bmwi.de/Redaktion/DE/Publikationen/Energie/erster-monitoring-bericht-energie-der-zukunft.html. Accessed: 17 April 2017 Bundesministerium für Wirtschaft und Energie (BMWi). 2014. Zweiter Monitoring-Bericht "Energie der Zukunft". https://www.bmwi.de/Redaktion/DE/Publikationen/Energie/zweiter-monitoring-bericht-energie-der-zukunft.html. Accessed 28 June 2016 Bundesministerium für Wirtschaft und Energie (BMWi). Ein gutes Stück Arbeit–Die Energie der Zukunft. https://www.bmwi.de/Redaktion/DE/Publikationen/Energie/fortschrittsbericht.html. Accessed 20 Mar 2017 Bundesministerium für Wirtschaft und Energie (BMWi). 2015. Ein gutes Stück Arbeit–Die Energie der Zukunft. https://www.bmwi.de/Redaktion/DE/Publikationen/Energie/vierter-monitoring-bericht-energie-der-zukunft.html. Accessed 20 Mar 2017 Federal Ministry for Economic Affairs and Energy (BMWi). 2016. Fifth monitoring report "the energy of the future" 2015. https://www.bmwi.de/Redaktion/EN/Publikationen/monitoring-report-2016.html. Accessed 12 Apr 2017 Rösch C, Bräutigam K, Kopfmüller J, Stelzer V, Lichtner P (2017) Indicator system for the sustainability assessment of the German energy system and its transition. Energy, Sustainability and Society 7:1 Deutsches Zentrum für Luft- und Raumfahrt (DLR), Fraunhofer Institut für Windenergie und Energiesystemtechnik (IWES), Ingenieurbüro für neue Energien (IfnE). 2012. 
Langfristszenarien und Strategien für den Ausbau der erneuerbaren Energien in Deutschland bei Berücksichtigung der Entwicklung in Europa und global. http://www.dlr.de/dlr/Portaldata/1/Resources/bilder/portal/portal_2012_1/leitstudie2011_bf.pdf. Accessed 10 Apr 2017 The Federal Government. Germany's national sustainable development strategy. https://www.bundesregierung.de/Content/EN/StatischeSeiten/Schwerpunkte/Nachhaltigkeit/Anlagen/2017-06-20-langfassung-n-en.pdf?__blob=publicationFile&v=5. Accessed 25 Jan 2018 Statistisches Bundesamt (Destatis). 2017. Nachhaltige Entwicklung in Deutschland. https://www.destatis.de/DE/Publikationen/Thematisch/UmweltoekonomischeGesamtrechnungen/Umweltindikatoren/IndikatorenPDF_0230001.pdf?__blob=publicationFile. Accessed 25 Jan 2018 GWS, DIW, DLR, Prognos, ZSW. 2015. Beschäftigung durch erneuerbare Energien in Deutschland: Ausbau und Betrieb, heute und morgen. https://www.zsw-bw.de/uploads/media/Studie_Beschaeftigung_durch_EE_2015.pdf. Accessed 20 Mar 2017 GWS, Fraunhofer ISI, DIW Berlin, DLR, Prognos, ZSW. Bruttobeschäftigung durch erneuerbare Energien in Deutschland und verringerte fossile Brennstoffimporte durch erneuerbare Energien und Energieeffizienz. https://www.bmwi.de/Redaktion/DE/Downloads/S-T/bruttobeschaeftigung-erneuerbare-energien-monitioringbericht-2015.pdf?__blob=publicationFile&v=11. Accessed 12 Apr 2017 Statistisches Bundesamt (Destatis). Number of employees 2006 to 2015. https://www.destatis.de/DE/ZahlenFakten/Indikatoren/Konjunkturindikatoren/Arbeitsmarkt/karb811.html. Accessed 20 Mar 2017 Heindl, P. Measuring fuel poverty:general considerations and application to German household data. ftp://ftp.zew.de/pub/zew-docs/dp/dp13046.pdf. Accessed 20 Mar 2017 Tews, K. Energiearmut definieren, identifizieren und bekämpfen. http://www.polsoz.fu-berlin.de/polwiss/forschung/systeme/ffu/aktuelle-publikationen/13-tews-energiearmut/index.html. Accessed 25 Sept 2017 Statistisches Bundesamt (Destatis). Laufende Wirtschaftsrechnungen–Einnahmen und Ausgaben privater Haushalte 2012. https://www.destatis.de/DE/Publikationen/Thematisch/EinkommenKonsumLebensbedingungen/EinkommenVerbrauch/EinnahmenAusgabenprivaterHaushalte2150100127004.pdf?__blob=publicationFile. Accessed 20 Mar 2017 Statistisches Bundesamt (Destatis) (2015) Wirtschaftsrechnungen: Einkommens- und Verbrauchsstichprobe–Einnahmen und Ausgaben privater Haushalte für den Privaten Konsum, p 2013 https://www.destatis.de/DE/Publikationen/Thematisch/EinkommenKonsumLebensbedingungen/Konsumausgaben/EVS_AufwendungprivaterHaushalte2152605139004.pdf?__blob=publicationFile. Accessed 20 Mar 2017 Statistisches Bundesamt (Destatis). Laufende Wirtschaftsrechnungen–Einkommen, Einnahmen und Ausgaben privater Haushalte 2014. https://www.destatis.de/DE/Publikationen/Thematisch/EinkommenKonsumLebensbedingungen/EinkommenVerbrauch/EinnahmenAusgabenprivaterHaushalte2150100147004.pdf?__blob=publicationFile. Accessed 20 Mar 2017 Statistisches Bundesamt (Destatis). Laufende Wirtschaftsrechnungen-Einkommen, Einnahmen und Ausgaben privater Haushalte 2015. https://www.destatis.de/DE/Publikationen/Thematisch/EinkommenKonsumLebensbedingungen/EinkommenVerbrauch/EinnahmenAusgabenprivaterHaushalte2150100157004.pdf?__blob=publicationFile. Accessed 20 Mar 2017 Boardman B (1991) Fuel poverty. Belhaven Press, London Pye, S.; Dobbins, A. 2015. Energy poverty and vulnerable consumers in the energy sector across the EU: analysis of policies and measures Umweltbundesamt. 2017. Private Haushalte und Konsum. 
https://www.umweltbundesamt.de/daten/private-haushalte-konsum/energieverbrauch-privater-haushalte. Accessed 20 Mar 2017 Statista. Anbaufläche von Energiepflanzen in Deutschland nach Art in den Jahren 2007 bis 2015 (in 1.000 Hektar). http://de.statista.com/statistik/daten/studie/153072/umfrage/anbauflaeche-von-energiepflanzen-in-deutschland-nach-sorten-seit-2007/. Accessed 20 Mar 2017 Löschel, A.; Erdmann, G; Staiß, F.; Ziesing, H. Expertenkommission zum Monitoring-Prozess "Energie der Zukunft"–Stellungnahme zum vierten Monitoring-Bericht der Bundesregierung für das Berichtsjahr 2014. https://www.bmwi.de/Redaktion/DE/Downloads/M-O/monitoringbericht-energie-der-zukunft-stellungnahme-zusammenfassung-2014.pdf?__blob=publicationFile&v=5. Accessed 20 Mar 2017 Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (WBGU). 2013. Welt im Wandel–Energiewende zur Nachhaltigkeit. http://www.wbgu.de/fileadmin/user_upload/wbgu.de/templates/dateien/veroeffentlichungen/hauptgutachten/jg2003/wbgu_jg2003.pdf. Accessed 12 Feb 2015 Lexas. Flächendaten aller Staaten der Erde. http://www.laenderdaten.de/geographie/flaeche_staaten.aspx. Accessed 30 Nov 2016 Nagaoka S, Motohashi K, Goto A (2010) Patent statistics as an innovation indicator. In: Hall BH, Rosenberg N (eds) Handbook of the economics of innovation (Vol. 2), pp. 1083–1127. Harhoff D, Scherer F (2000) Technology policy for a world of skew-distributed outcomes. Res Policy 29(4–5):559–566 Wuebker R, Wüstenhagen R (2011) The handbook of research on energy entrepreneurship. Edward Elgar, Cheltenham, UK, Northampton, MA Farla J, Markard J, Raven R, Coenen L (2012) Sustainability transitions in the making: a closer look at actors, strategies and resources. Technol Forecast Soc Chang 79(6):991–998 Santos F, Eisenhardt K (2009) Constructing markets and shaping boundaries: entrepreneurial power in nascent fields. Acad Manag J 52(4):643–671 Borderstep Institut für Innovation und Nachhaltigkeit. 2015. Green economy Gründungsmonitor 2014–Grüne Wirtschaft als Gründungs- und Beschäftigungsmotor in Deutschland. http://www.borderstep.de/wp-content/uploads/2015/05/Green_Economy_Gruendungsmonitor_20141.pdf. Accessed 20 Mar 2017 Zentrum für Europäische Wirtschaftsforschung (ZEW). 2014. Potenziale und Hemmnisse von Unternehmensgründungen im Vollzug der Energiewende. http://www.bmwi.de/DE/Mediathek/publikationen,did=639222.html. Accessed 20 Mar 2017 Helmers, C.; Schautschick, P.; Rogers, M. 2011. Intellectual property at the firm-level in the UK: the Oxford firm-level intellectual property database. https://www.economics.ox.ac.uk/department-of-economics-discussion-paper-series/intellectual-property-at-the-firm-level-in-the-uk-the-oxford-firm-level-intellectual-property-database. Accessed 25 Jan 2018 Jaffe AB, Lerner J (2004) Innovation and its discontents how our broken patent system is endangering innovation and progress, and what to do about it. Princeton University Press Bessen JE, Meurer MJ (2008) The private costs of patent litigation. SSRN Electronic Journal. http://www.bu.edu/law/workingpapers-archive/documents/bessen-ford-meurer-no-11-45rev.pdf. Graham S High technology entrepreneurs and the patent system: results of the 2008 Berkeley patent survey. Berkeley Technology Law Journal 24:255–327 Dehio, J., Engel, D.; Graskamp, R. 2006. Forschung und innovation: Wo steht Deutschland? Wirtschaftsdienst 86(8):517-523 Expertenkommission Forschung und Innovation (2012) Gutachten 2012. Technische Universität Berlin, Berlin Statista. 
Ausgaben für Forschung und Entwicklung in Prozent des BIP in ausgewählten Ländern im Jahr 2013. https://de.statista.com/statistik/daten/studie/158150/umfrage/ausgaben-fuer-forschung-und-entwicklung-2008/. Accessed 30 Mar 2016 Ministry of Family Affairs, Senior Citizens, Women and Youth (BMFSFJ). Pay inequality between women and men in Germany. https://www.bmfsfj.de/blob/94442/efbd528467e361882848c23486fcc8d8/pay-inequality-data.pdf. Accessed 20 Mar 2017 Eurostat. 2017. Gender pay gap statistics. http://ec.europa.eu/eurostat/statistics-explained/index.php/Gender_pay_gap_statistics. Accessed 20 Mar 2017 Statistisches Bundesamt (Destatis). Verdienste und Arbeitskosten. 2. 1. Arbeitnehmerverdienste 2007–2016. https://www.destatis.de/GPStatistik/receive/DESerie_serie_00000297?list=all. Accessed 20 Mar 2017 Agentur für Erneuerbare Energien. Bundesländer in der Übersicht. http://www.foederal-erneuerbar.de/uebersicht/bundeslaender/BW|BY| B|BB|HB|HH|HE|MV|NI|NRW|RLP|SL|SN|ST|SH|TH|D/kategorie/akzeptanz/. Accessed 20 Mar 2017 Agentur für Erneuerbare Energien:. Umfrage 2013: Bürger befürworten Energiewende und sind bereit, die Kosten dafür zu tragen. http://www.unendlich-viel-energie.de/themen/akzeptanz2/akzeptanz-umfrage/umfrage-2013-buerger-befuerworten-energiewende-und-sind-bereit-die-kosten-dafuer-zu-tragen. Accessed 20 Mar 2017 Agentur für Erneuerbare Energien. Umfrage Akzeptanz Erneuerbarer Energien 2014. http://www.unendlich-viel-energie.de/mediathek/grafiken/akzeptanzumfrage-erneuerbare-energie-2014. Accessed 20 Mar 2017 Agentur für Erneuerbare Energien. Umfrage zur Akzeptanz Erneuerbarer Energien 2015. http://www.unendlich-viel-energie.de/mediathek/grafiken/umfrage-akzeptanz-erneuerbare-energien-2015. Accessed 20 Mar 2017 Agentur für Erneuerbare Energien. Umfrage 2016: Bürger befürworten Energiewende und sind bereit, die Kosten dafür zu tragen. https://www.unendlich-viel-energie.de/mediathek/grafiken/akzeptanz-umfrage-2016. Accessed 20 Mar 2017 Umweltbundesamt. Daten zur Umwelt 2015. https://www.umweltbundesamt.de/sites/default/files/medien/376/publikationen/daten_zur_umwelt_2015.pdf. Accessed 20 Mar 2017 Bundesfinanzministerium. Kassenmäßige Steuereinnahmen nach Steuerarten in den Kalenderjahren 2006–2009. http://www.bundesfinanzministerium.de/Content/DE/Standardartikel/Themen/Steuern/Steuerschaetzungen_und_Steuereinnahmen/2017-05-05-steuereinnahmen-nach-steuerarten-2010-2016.pdf?__blob=publicationFile&v=5. Accessed 22 Mar 2017 Statista. Mauteinnahmen in Deutschland von 2005 bis 2015* (in Milliarden Euro). http://de.statista.com/statistik/daten/studie/75600/umfrage/mauteinnahmen-in-deutschland-seit-2005/. Accessed 20 Mar 2017 Umweltbundesamt. 2013. Ökonomische Bewertung von Umweltschäden – Methodenkonvention 2.0 zur Schätzung von Umweltkosten. https://www.umweltbundesamt.de/publikationen/oekonomische-bewertung-von-umweltschaeden-0. Accessed 20 Mar 2017 Klemisch H, Boddenberg M (2016) Energiegenossenschaften und Nachhaltigkeit. Aktuelle Tendenzen und soziologische Überlegungen. Soziologie und Nachhaltigkeit. Soziologie und Nachhaltigkeit - Beiträge zur sozial-ökologischen Transformationsforschung 2:6 Blome-Drees, J. 2012. Wirtschaftliche Nachhaltigkeit statt Shareholder Value. http://library.fes.de/pdf-files/wiso/08964.pdf. Accessed 25 Jan 2018 Elsen S (2014) Genossenschaften als transformative Kräfte auf dem Weg in die Postwachstumsgesellschaft. In: Genossenschaften und Klimaschutz. Akteure für eine zukunftsfähige Stadt. 
Springer VS, Wiesbaden Deutscher Genossenschafts- und Raiffeisenverband e.V. (DGRV). 2015. Energiegenossenschaften. Ergebnisse der DGRV-Umfrage (zum 31.12.2015). https://www.dgrv.de/webde.nsf/7d5e59ec98e72442c1256e5200432395/418a5acd4479ba4ec1257e8400272bec/$FILE/DGRV-Jahresumfrage.pdf. Accessed 10 Apr 2017 Deutscher Genossenschafts- und Raiffeisenverband e.V. (DGRV). Energiegenossenschaften. Ergebnisse der Umfrage des DGRV und seiner Mitgliedsverbände. Frühjahr 2013. https://www.dgrv.de/webde.nsf/7d5e59ec98e72442c1256e5200432395/dd9db514b5bce595c1257bb200263bbb/$FILE/Umfrageergebnisse%20Energiegenossenschaften.pdf. Accessed 10 Apr 2017 Deutscher Genossenschafts- und Raiffeisenverband e.V. (DGRV). 2014.Energiegenossenschaften. Ergebnisse der Umfrage des DGRV und seiner Mitgliedsverbände. https://www.dgrv.de/webde.nsf/7d5e59ec98e72442c1256e5200432395/418a5acd4479ba4ec1257e8400272bec/$FILE/DGRV-Jahresumfrage.pdf. Accessed 20 Mar 2017 Umweltbundesamt. 2014. Biodiversität. https://www.umweltbundesamt.de/das-uba/was-wir-tun/forschen/umwelt-beobachten/biodiversitaet#textpart-1. Accessed 29 Mar 2017 Umweltbundesamt. Critical loads für Schwermetalle. http://www.umweltbundesamt.de/themen/luft/wirkungen-von-luftschadstoffen/wirkungen-auf-oekosysteme/critical-loads-fuer-schwermetalle#textpart-1. Accessed 2 Aug 2017 Löschel, A.; Erdmann G; Staiß, F.; Ziesing, H. 2016. Expertenkommission zum Monitoring-Prozess, Energie der Zukunft" - Stellungnahme zum fünften Monitoring-Bericht der Bundesregierung für das Berichtsjahr 2015 Council of European Energy Regulators. 2015. CEER benchmarking report 5.2 on the continuity of electricity supply Bundesministerium für Wirtschaft und Technologie, Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit. 2010. Energiekonzept für eine umweltschonende, zuverlässige und bezahlbare Energieversorgung. https://www.bmwi.de/Redaktion/DE/Downloads/E/energiekonzept-2010.html. Accessed 19 Oct 2017 Zumach, A. Volksabstimmung in der Schweiz. Topverdiener werden nervös. taz Umweltbundesamt. Strategien zur Emissionsminderung von Luftschadstoffen. https://www.umweltbundesamt.de/daten/luftbelastung/massnahmen-zur-emissionsminderung-von. Accessed 09 Nov 2017 Hirschl B, Heinbach K, Prahl A, Salecki S, Schröder A, Aretz A, Weiß J (Dezember 2015) Wertschöpfung durch erneuerbare Energien. Institut für ökologische Wirtschaftsforschung Berlin, Berlin Prahl, A. 2014. Renewable energies' impact on value added and employment in Germany—model results for 2012. Conference presentation Lang, C. 2008. Marktmacht und Marktmachtmessung im deutschen Großhandelsmarkt für Strom. Gabler Verlag. s.l. The authors gratefully acknowledge financial support by the Helmholtz Alliance ENERGY-TRANS 'Future infrastructures for meeting energy demands—towards sustainability and social compatibility'. For additional information, see https://www.energy-trans.de/ Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT), Karlstraße 11, 76133, Karlsruhe, Germany Christine Rösch, Klaus-Rainer Bräutigam, Jürgen Kopfmüller, Volker Stelzer & Annika Fricke Christine Rösch Klaus-Rainer Bräutigam Jürgen Kopfmüller Volker Stelzer Annika Fricke All authors designed the objectives and methods of the study and contributed to the development of the indicator system. CR prepared the manuscript with contributions from all co-authors. All authors read and approved the final manuscript. Correspondence to Christine Rösch. Rösch, C., Bräutigam, KR., Kopfmüller, J. et al. 
Sustainability assessment of the German energy transition. Energ Sustain Soc 8, 12 (2018). https://doi.org/10.1186/s13705-018-0153-4 The Transition of Energy Systems towards Sustainability: Challenges and Lessons Learnt from the German 'Energiewende'
CommonCrawl
# Understanding the NumPy library

NumPy is a fundamental library in Python for working with arrays and numerical computations. It provides a high-level interface to optimized linear-algebra libraries such as BLAS and LAPACK, which are tuned for numerical operations. NumPy is widely used in scientific computing and signal processing due to its efficiency and ease of use.

To get started with NumPy, you'll need to install it. You can do this using pip:

```
pip install numpy
```

Once installed, you can import NumPy into your Python script:

```python
import numpy as np
```

NumPy provides a wide range of functions for working with arrays, including mathematical operations, linear algebra, and random number generation. Some common functions include:

- `np.array()`: Create a NumPy array from a Python list or other iterable.
- `np.zeros()` and `np.ones()`: Create arrays filled with zeros or ones.
- `np.arange()`: Create an array of evenly spaced values within a specified range.
- `np.linspace()`: Create an array of evenly spaced values over a specified interval.
- `np.random.rand()`: Generate an array of random values between 0 and 1.

## Exercise

Instantiate a NumPy array using the following Python code:

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5])
print(arr)
```

What is the output of this code?

```python
[1 2 3 4 5]
```

NumPy also provides functions for performing element-wise operations on arrays, such as addition, subtraction, multiplication, and division. These operations are performed on arrays of the same shape. For example:

```python
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
arr3 = arr1 + arr2
print(arr3)
```

This code will output:

```python
[5 7 9]
```

# FFT and its applications

The Fast Fourier Transform (FFT) is an algorithm for computing the discrete Fourier transform (DFT) of a sequence. It is an efficient method for transforming a sequence from the time domain to the frequency domain, and vice versa. FFT has numerous applications in signal processing, including filtering, compression, and spectral analysis.

The DFT of a sequence is defined as:

$$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j2\pi kn/N}$$

where $X[k]$ is the $k$-th frequency component, $x[n]$ is the $n$-th time sample, $N$ is the sequence length, and $j$ is the imaginary unit.

The FFT algorithm efficiently computes the DFT by factorizing the sequence length $N$ and using a recursive divide-and-conquer approach. The Cooley-Tukey algorithm is a commonly used FFT algorithm, which has a time complexity of $O(N\log(N))$, compared with $O(N^2)$ for evaluating the DFT directly.

# Convolution and its significance

Convolution is a mathematical operation that combines two functions to produce a third function. In signal processing, convolution is used to model the effect of a system on a signal, such as filtering or modulation. It is widely used in image processing, data compression, and speech recognition.

The convolution of two functions $f(t)$ and $g(t)$ is defined as:

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)g(t-\tau)d\tau$$

In the context of signals, convolution is often evaluated via the frequency domain: by the convolution theorem, the Fourier transform of $f * g$ equals the product of the Fourier transforms of $f$ and $g$ (verified numerically in the short sketch below).

Convolution has several important properties:

- Commutativity: $f * g = g * f$.
- Linearity: convolution is linear in each of its arguments.
- Shift-invariance: delaying one of the inputs simply delays the output by the same amount.
- Identity: convolving a signal with a unit impulse (delta function) returns the original signal.
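The convolution theorem mentioned above can be checked numerically in a few lines; the two signals are arbitrary example values, and the zero-padding length is chosen as the full linear-convolution output length.

```python
import numpy as np

# Two short example signals
f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])

# Linear convolution in the time domain
direct = np.convolve(f, g)

# Convolution via the frequency domain: zero-pad both signals to the
# full output length, multiply their FFTs, and transform back.
n = len(f) + len(g) - 1
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real

print(direct)                        # [ 4. 13. 28. 27. 18.]
print(np.allclose(direct, via_fft))  # True
```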
# Implementing FFT using Python

To implement the FFT using Python, we can use the `numpy.fft` module, which provides functions for computing the FFT and inverse FFT. Here's an example of how to compute the FFT of a signal using NumPy:

```python
import numpy as np

signal = np.array([1, 2, 3, 4, 5])
fft_signal = np.fft.fft(signal)
print(fft_signal)
```

This code will output the FFT of the signal (values rounded for readability):

```python
[15. +0.j        -2.5+3.4409548j -2.5+0.8122992j -2.5-0.8122992j
 -2.5-3.4409548j]
```

## Exercise

Compute the inverse FFT of the FFT of the signal:

```python
inverse_signal = np.fft.ifft(fft_signal)
print(inverse_signal)
```

What is the output of this code?

```python
[1.+0.j 2.+0.j 3.+0.j 4.+0.j 5.+0.j]
```

# Implementing convolution using Python

To implement convolution using Python, we can use the `numpy.convolve` function, which computes the discrete convolution of two one-dimensional sequences. Here's an example of how to compute the convolution of two signals using NumPy:

```python
import numpy as np

signal1 = np.array([1, 2, 3])
signal2 = np.array([4, 5, 6])
convolved_signal = np.convolve(signal1, signal2)
print(convolved_signal)
```

This code will output the convolution of the two signals:

```python
[ 4 13 28 27 18]
```

## Exercise

Compute the convolution of the signal with itself:

```python
convolved_signal = np.convolve(signal1, signal1)
print(convolved_signal)
```

What is the output of this code?

```python
[ 1  4 10 12  9]
```

# Applications of FFT and convolution in signal processing

FFT and convolution have numerous applications in signal processing, including:

- Spectral analysis: FFT is used to analyze the frequency components of a signal, which can be useful for detecting specific frequencies or patterns in the signal.
- Filtering: Convolution is used to apply filters to signals, such as low-pass, high-pass, or band-pass filters, to remove unwanted frequencies or attenuate specific frequency ranges.
- Image processing: Convolution is used to apply various filters to images, such as edge detection, blurring, or sharpening.
- Speech recognition: Convolution is used to model the effect of filters on speech signals, which can be used to improve the performance of speech recognition systems.

# Real-world examples and case studies

FFT and convolution are widely used in various fields, including:

- Audio processing: FFT is used to analyze the frequency components of audio signals, while convolution is used to apply filters and effects to audio.
- Image processing: Convolution is used to apply various filters to images, such as edge detection, blurring, or sharpening.
- Communications: FFT and convolution are used to model the effects of communication channels on signals, which can be used to design efficient communication systems.
- Medical imaging: FFT and convolution are used to process medical images, such as MRI or CT scans, to improve image quality or detect specific patterns.

These applications demonstrate the versatility and importance of FFT and convolution in signal processing. By understanding and implementing these concepts, you'll be able to tackle a wide range of signal processing problems and develop innovative solutions.
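As a small, self-contained illustration of the filtering application listed above, the following sketch smooths a noisy sine wave with a moving-average kernel applied through `np.convolve`; all signal parameters are arbitrary example choices.

```python
import numpy as np

# Arbitrary example: a 5 Hz sine wave sampled at 500 Hz with added noise
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# A length-11 moving-average kernel acts as a simple low-pass filter
kernel = np.ones(11) / 11
smoothed = np.convolve(noisy, kernel, mode="same")

# The smoothed signal stays much closer to the clean sine wave
print(np.mean(np.abs(noisy - clean)))     # roughly the noise level
print(np.mean(np.abs(smoothed - clean)))  # noticeably smaller
```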
Textbooks
\begin{document}
\title{Energy transport and optimal design of noisy Platonic quantum networks}
\maketitle
\section*{Abstract}
Optimal transport is one of the primary goals in designing efficient quantum networks. In this work, the maximum transport is investigated for three-dimensional quantum networks with Platonic geometries affected by dephasing and dissipative Markovian noise. The network and environmental characteristics corresponding to the optimal design are obtained and investigated for five Platonic networks with 4, 6, 8, 12, and 20 sites, in which one of the sites is connected to a sink site through a dissipative process. Such optimal designs could have various applications, such as switching and multiplexing in quantum circuits.
\section*{Introduction}
Transport is an essential phenomenon in atomic-scale devices and networks. The structure that hosts the energy or information carriers could be a continuous medium like metallic nanorods and waveguides \cite{Javaherian_2009, Javaherian_2009_2}, or a site-based structure like metallic nanoparticle arrays \cite{LI2018213, maier_2003}, quantum dot arrays \cite{Braakman2013, Wang2020}, and many more. Discrete or site-based transport --- considered in this work --- has many applications such as quantum state transport through spin chains \cite{PhysRevLett.92.187902, PhysRevA.72.012319, PhysRevA.69.052315, PhysRevA.71.052315}, quantum energy transport in chains of trapped ions, environment-assisted transport in networks of sites \cite{PhysRevA.90.042313, PhysRevA.83.013811}, and switching with qubit arrays \cite{PhysRevLett.105.167001}. While much is known for ideal networks, the effect of noise on the desired properties is less understood. We study the case of noisy quantum networks here. A specific type of quantum network that has been proven to have exact theoretical solutions for site-based energy excitation transfer is the class of Fully Connected Networks (FCNs) \cite{J.Chem.Phys.2009}. FCNs are defined by the property that all sites are equally connected to each other and the last site is dissipatively connected to a sink site. In \cite{Javaherian2015} we studied some three-dimensional Platonic configurations with distance-dependent couplings, and proved that they share some properties with an $N$-site FCN, where in the corresponding Platonic network $N-1$ is the number of nearest neighbours of each site. For example, it was shown that the sink population --- the energy excitation accumulated in the sink site --- of the ``noiseless'' Platonic quantum networks and FCNs is the same at the steady state (or infinite time). These identities are convenient and powerful tools for the study of quantum networks, and we find similar relations in the more relevant case of \emph{noisy} quantum networks. In this work we prove that in different noisy Platonic quantum networks the analytical solution of the steady-state sink population is the same as that of the equivalent FCN. In addition, we provide some relations among the couplings and noise rates corresponding to the maximum transport. These relations will be useful for optimally designing such quantum networks for various applications such as transport through three-dimensional Platonic networks, or switching and multiplexing in quantum circuits. 
The latter could be achieved by changing the environmental noise rates (electrical or magnetic) around the last site connected to the sink site, in quantum dot or photonic qubit implementations, so that the incoming excitation would be transferred to the planned sink site. Before presenting our results, we comment on two advantages of using Platonic quantum networks for such purposes. First, three-dimensional networks are in general more compact than two- and one-dimensional networks, and nanoscale 3D printing methods \cite{doi:10.1021/acs.nanolett.1c02847, doi:10.1038/s42005-021-00532-4} could help to overcome the 3D construction difficulties. Three-dimensional structures might also be compatible with the physics or other constraints of some architectures. The other advantage of Platonic quantum networks is that they are proven to be equivalent to an FCN with a reduced number of sites that has exact transport solutions, and thus they are ideal benchmarks for new techniques and demonstrations. \section*{Results} A Platonic network in this work is defined as a group of interacting identical two-level systems located on the vertices of one of the five Platonic geometries depicted in Fig.\ref{Fig0}, where one additional site (sink site) is irreversibly connected to one of the main sites with rate $\Gamma$. We assume a homogeneous environment so that all sites (main qubits) are affected by equal dissipation and dephasing Markovian local noises with rates $\Gamma_{\rm diss}$ and $\gamma$, respectively. \begin{figure} \caption{The schematic of the five Platonic geometries. Left to right: Tetrahedron (4 vertices), Cube (8 vertices), Octahedron (6 vertices), Icosahedron (12 vertices), and Dodecahedron (20 vertices)} \label{Fig0} \end{figure} The Hamiltonian (in units $\hbar=1$) of the Platonic network is \begin{equation} H= \sum_{i<j}^{N} J_{ij}\sigma_i^{\dagger} \sigma_j + c.c., \end{equation} where $N$ is the total number of main sites, $J_{ij}$ is the distance-dependent coupling between qubits $i$ and $j$, $\sigma_i^{\dagger}$ ($\sigma_i$) is the creation (annihilation) operator of site $i$, and $c.c.$ denotes the complex conjugate of the first term. 
The dynamics of the total density matrix of this system is found by a Lindbladian master equation \cite{Breuer2007} as follows: \begin{align}\label{equationset1} \qquad \dot{\hat{\rho}}&=-i[\hat{H}_{\rm sys},\hat{\rho}]+{\cal{L}}_{\rm target}(\hat{\rho})+{\cal{L}}_{deph}(\hat{\rho})+{\cal{L}}_{\rm diss}(\hat{\rho}); \nonumber \\ \qquad {\cal{L}}_{\rm target}(\hat{\rho}) &= \Gamma(2\hat{\sigma}_{\rm target}^{\dagger}\hat{\sigma_{N}}^{}\hat{\rho}\hat{\sigma_{N}}^{\dagger}\hat{\sigma}_{\rm target}^{} -\left\{ \hat{\sigma_{N}}^{\dagger}\hat{\sigma}_{\rm target}^{}\hat{\sigma}_{\rm target}^{\dagger}\hat{\sigma_{N}}^{},\hat{\rho}\right\}), \nonumber \\ \qquad {\cal{L}}_{deph}(\hat{\rho})&=\gamma \sum_{i=1}^N(2\hat{\sigma_{i}}^{\dagger}\hat{\sigma_{i}}\hat{\rho}\hat{\sigma_{i}}^{\dagger}\hat{\sigma_{i}}-\left\{ \hat{\sigma_{i}}^{\dagger}\hat{\sigma_{i}},\hat{\rho}\right\}), \nonumber\\ \qquad {\cal{L}}_{\rm diss}(\hat{\rho}) &= \Gamma_{\rm diss}(2\hat{\sigma}_{\rm target}^{\dagger}\hat{\sigma_{N}}^{}\hat{\rho}\hat{\sigma_{N}}^{\dagger}\hat{\sigma}_{\rm target}^{} -\left\{ \hat{\sigma_{N}}^{\dagger}\hat{\sigma}_{\rm target}^{}\hat{\sigma}_{\rm target}^{\dagger}\hat{\sigma_{N}}^{},\hat{\rho}\right\}), \end{align} where we assumed $\hat{\rho}=\hat{\rho}_{\rm qubits} \oplus \hat{\rho}_{\rm target} \oplus \hat{\rho}_{\rm discharge}$, is the direct sum of the $N\times N$ density matrix of the Platonic network ($\hat{\rho}_{\rm qubits}$), the $1\times 1$ population matrix of the sink site ($\hat{\rho}_{\rm target}$), and the $1\times 1$ matrix representing the total population discharged to the environment by dissipation noise ($\hat{\rho}_{\rm discharged}$). $\gamma$, and $\Gamma_{\rm diss}$ are the dephasing and dissipation noise rates to the environment from the network sites to the surrounding environment. $\Gamma$ is the rate of irreversible energy transfer from the last site to the target site and $\hat \sigma_{i}^{\dagger}(\hat \sigma_{i})$ is the creation (annihilation) operator of site $i$. We aim to provide an analytical expression for the population of target sink i.e. $\rho_{\rm (N+1),(N+1)}(t)=\rho_{\rm target}(t)$ in the equilibrium state i.e. $t\rightarrow \infty$. \begin{figure}\label{N6-8} \end{figure} In order to simplify the analytical calculation, we study the dynamical characteristics of Platonic networks by numerically solving $\rho(t)$ from Eq.\eqref{equationset1}. For the simulation, the ion qubits implementation of the Platonic networks is assumed in which the coupling rate, or the interaction energy between two dipoles of ion qubits $i$ and $j$ would be inversely proportional to the cube of their interconnecting distance ($J_{ij}=v/r^3_{ij},\:v=1$). In Figs.\ref{N6-8},\ref{N12},\ref{N20} we plotted the sites' populations ($\rho_{ii}(t)$) of four Platonic networks in the noiseless case ($\gamma= \Gamma_{diss}=0$). It can be seen that the populations of some sites vary inversely in some cases, while other sites would have same population dynamics. The characteristics shown in these figures (noiseless cases) would be the same as that of noisy cases unless the fact that in the presents of dephasing and/or dissipation noises, the oscillating patterns of populations would be evanescent and the sink site would be fully populated at the equilibrium ($t \rightarrow \infty$). 
In \cite{Javaherian2015} we found that the target population of noiseless Platonic networks at steady state are independent of their size and is only related to the number of neighboring sites $(N_c-1)$ as of $\rho_{sink}=1/(N_c-1)$, i.e. $\rho_{sink}=0.25,0.33,0.2,0.33$ for $N=6,8,12,20$, respectively. Later on, we conclude that the exact solution of the dynamics of Platonic networks are the same as that of FCNs, in which all sites are equidistant from each other \cite{J.Chem.Phys.2009}. Since the tetrahedron Platonic network (N=4) is a FCN by definition, we ignored its simulation since we do not need anymore to prove that its analytical solution of target population is as that of an FCN. Figs.\ref{N6-8} show the dynamics of two Platonic networks with N=6 and 8. It can be seen that some pair of sites are oscillating inversely, while the populations of other pairs of sites vary equally. To simplify the analytical solution, we will assume that the average of sites' populations (and the coherences) of these pairs could be assumed for one of the sites, and we would only solve the dynamics of a Platonic network for $N_c$ sites. Figs.\ref{N12} and \ref{N20} show the dynamics of Platonic networks with N=12 and N=20 sites, respectively. The graphs show that in these types of networks, for the chosen initial charges of each case, the populations of different groups of sites vary similarly. So likewise the other Platonic geometries, we simplify the analytical dynamics by assuming only a specific number of sites i.e. $N_c$ sites. So if $\rho(t)$ would be the density matrix of a platonic network of $N$ sites, with rank $(N+2)$ and elements $\rho_{ij}(t)$, we define the following symbolic density matrix $\tilde{\rho}(t)$ of an equivalent network of $N_c$ sites, with rank $(N_c+2)$ and symbolic elements of $\tilde{\rho}_{ij}(t)$ defined as follows. Note that in the legends of Figs. \ref{N6-8},\ref{N12},\ref{N20} and the two following formulas, we consider the notation $\rho_{ii}(t) \equiv \rho_{i}(t)$ for simplicity. \begin{figure} \caption{This graph shows the simulated population dynamics of all sites ($\rho_{ii}\equiv \rho_{i},\:i=1...N=12$) of a regular icosahedron noiseless network with zero environmental noises ($\gamma=\Gamma_{diss}=0$), where the sink site ($i=N+1=13$) is dissipatively connected to only one site (12) with rate $\Gamma$. The left graph shows the schematic of the icosahedron network where sites are located on the vertices with coordinates $(0,\pm \phi, \pm 1),(\pm 1, 0, \pm \phi),(\pm \phi, \pm 1, 0)$ where $\phi = (1+\sqrt(5))/2$. At $t=0$ four sites (1,2,3,4) are equally charged by $1/4$ amount of excitation. The right graphs show that due to the various symmetries in Platonic geometries, some sites are oscillating similarly (sites "1,2", "3,4", "5,7", "6,8", and "10,11"), at the equilibrium i.e. $\Gamma.t \gtrsim 20$. In addition, the populations of the spherical-symmetrically positioned sites of 12 and 9, are zero, since the population of the sink site (13) is saturated to $0.2$ at equilibrium.} \label{N12} \end{figure} \begin{figure} \caption{This graph shows the simulated population dynamics of all sites ($\rho_{ii}\equiv \rho_{i},i=1...N=20$) of a regular dodecahedron noiseless network with zero environmental noises ($\gamma=\Gamma_{diss}=0$), where the sink site ($i=N+1=21$) is dissipatively connected to only one site (20) with rate $\Gamma$. 
The left graph shows the schematic of the dodecahedron network where sites are located on the vertices with coordinates $(\pm 1, \pm 1, \pm 1), (0,\pm \phi, \pm \frac{1}{\phi}),(\pm \phi, 0, \pm \frac{1}{\phi}),(\pm \phi, \pm \frac{1}{\phi}, 0),$ where $\phi = (1+\sqrt{5})/2$. At $t=0$ three sites (1,5,9) are equally charged by $1/3$ amount of excitation. The right graphs show that, due to the various symmetries of the Platonic geometries, some sites oscillate similarly (sites ``1,2'', ``3,4'', ``5,6,7,8'', and ``10,11'') at the equilibrium, i.e. $\Gamma t \gtrsim 20$. In addition, the populations of the spherically symmetric sites 19 and 20 are zero since the population of the sink site (21) is saturated to $0.33$. There are four groups of sites in the right-side schematics, marked with four different colors (violet, red, light blue, and yellow), where each group of sites possesses similar populations. Adding two sites to the red and yellow groups (sites No. 19 and 20, respectively), we create the four symbolic sites of the equivalent FCN of the dodecahedron, as introduced in Eq.\eqref{N20FCNsites}} \label{N20} \end{figure}
\begin{align}\label{assumption} \begin{split} N=6:&\\ &\tilde{\rho}_{1}(t)= \rho_{1}(t)+\rho_{2}(t),\\ &\tilde{\rho}_{i}(t)= \rho_{i}(t),\: i \ne 1,2\\ N=8,12:&\\ &\tilde{\rho}_{i}(t) = \rho_{i}(t) + \rho_{\tilde{i}}(t),\\ \end{split} \end{align}
where $\tilde{i}$ is the index of the spherically symmetrically positioned site with respect to site $i$. So the number of symbolic sites of the $N=6$ network is $N_c=5$, and those of $N=8$ and $12$ are $N_c=4$ and $N_c=6$, respectively. For $N=20$, according to Fig.\ref{N20}, we choose:
\begin{align}\label{N20FCNsites} \begin{split} N=20:&\\ &\tilde{\rho}_{1}(t)= \rho_{1}(t)+\rho_{5}(t)+\rho_{9}(t)+\rho_{20}(t),\\ &\tilde{\rho}_{2}(t)= \rho_{2}(t)+\rho_{6}(t)+\rho_{10}(t)+\rho_{19}(t),\\ &\tilde{\rho}_{3}(t)= \rho_{3}(t)+\rho_{8}(t)+\rho_{12}(t)+\rho_{13}(t)+\rho_{16}(t)+\rho_{17}(t),\\ &\tilde{\rho}_{4}(t)= \rho_{4}(t)+\rho_{7}(t)+\rho_{11}(t)+\rho_{14}(t)+\rho_{15}(t)+\rho_{18}(t),\\ \end{split} \end{align}
So the number of symbolic sites of the $N=20$ network is $N_c=4$. The above two formulas show that we assume $N_c=5,4,6,4$ symbolic sites for the four Platonic networks with $N=6,8,12,20$ sites. Note that the distances between the chosen group of sites, assumed as one symbolic site, and the other neighbouring groups of sites are all equal. Now the coherences between the sites of the equivalent reduced networks are defined according to the symbolic indices as
\begin{equation}\label{coherences} \tilde{\rho}_{ij}(t)=\sum_{p=1}^n\sum_{q=1}^m\rho_{pq}(t), \end{equation}
where $n,m$ are the numbers of sites that define the symbolic sites $i,j$, respectively, that is $\tilde{\rho}_{ii}(t) \equiv \tilde{\rho}_{i}(t)=\sum_{p=1}^n\rho_{p}(t)$ and $\tilde{\rho}_{jj}(t)\equiv \tilde{\rho}_{j}(t)=\sum_{q=1}^m\rho_{q}(t)$. As an example, in a dodecahedron network:
\begin{equation}\label{coherences-example} \tilde{\rho}_{12}(t)=\sum_{\substack{p=\{ 1,5,9,20\},\\ q=\{2,6,10,19\}}}\rho_{pq}(t). \end{equation}
In the following, for simplicity of notation, we substitute $\tilde{\rho}(t)\rightarrow \rho(t)$. 
Using the assumption of reduced networks, Eq.\eqref{equationset1} would be written as follows: \begin{equation} \label{eq1} \begin{split} \dot{\rho}_{ii} &= -2\Gamma_{\rm diss}\rho_{ii}+iJ (R_i-\bar{R}_i) ;\:\: i\ne N,\\ \dot{\rho}_{ij} &= -2(\Gamma_{\rm diss}+\gamma)\rho_{ij}+iJ (R_i-\bar{R}_j) ;\:\: (i,j)\ne N,\\ \dot{\rho}_{iN} &= -(2\Gamma_{\rm diss}+2\gamma+\Gamma)\rho_{iN}+iJ (R_i-\bar{R}_N),\\ \dot{\rho}_{\rm NN} &= -(2\Gamma_{\rm diss}+2\gamma+\Gamma)\rho_{\rm NN}+iJ (R_N-\bar{R}_N),\\ \dot{\rho}_{00} &= 2\Gamma_{\rm diss}Tr_{N_c}(\rho),\\ \dot{\rho}_{\rm target} & = 2\Gamma \rho_{\rm NN}, \end{split} \end{equation} where $\rho_{00}$ correspond to a virtual site that stores a fraction of initial excitation within the environment through dissipative noise with rate $\Gamma_{diss}$, and the population of the last site $\rho_{NN}$, is being discharged dissipatively by rate $\Gamma$ to the target site with corresponding matrix element $\rho_{(N+1),(N+1)}$. We assume the following collective variables: \begin{equation} R_i(t)=\sum_{j=f_{\rm N_c}(i)} \rho_{ij}(t),\:\:\Lambda_{i}=\sum_{j=f_{\rm N_c}(i)} R_j, \end{equation} where $f_{\rm N_c}(i)$ is the set of $N_c$ indices of the nearest neighbors of site $i$ plus itself. The equations of motion of the collective variables would be as following: \begin{equation} \label{eq2} \begin{split} \dot{R_{i}} &= -iJ\Lambda_i + i N_c J R_i - 2 (\Gamma_{\rm diss}+\gamma)R_i - \Gamma \rho_{iN} +2\gamma \rho_{ii,}\\ \dot{R_{N}} &= -iJ\Lambda_N + i N_c J R_N - (2\Gamma_{\rm diss}+2\gamma+\Gamma)R_N + (2\gamma-\Gamma) \rho_{\rm NN},\\ \dot{\Lambda_{i}} &= -(2\Gamma_{diss}+\gamma)\Lambda_i + \Gamma(R_N+\bar{R}_N) +2\gamma Tr_{\rm N_c(i)}(\rho),\\ \end{split} \end{equation} where \begin{equation} Tr_{\rm N_c(i)}(\rho)=\sum_{i=f_{\rm N_c}(i)} \rho_{ii} = Tr(\rho). \end{equation} Now from the rule of conservation of population we have: \begin{equation} \label{eq3} \begin{split} 1 = Tr(\rho) +\rho_{00} + \rho_{\rm target}. \end{split} \end{equation} It indicates that the initial population would oscillate among all network sites, and partially accumulated in the surrounding environment and the target site, through the dissipation noise rates of $\Gamma_{\rm diss},\gamma$ and $\Gamma$, respectively.\\ Considering $R_N = x+iy$, Eq.\eqref{eq2} line 2 yields two first order differential equations for $x$ and $y$ which besides Eqs.\eqref{eq1} lines 4-6, Eq.\eqref{eq2} line 3 and Eq.\eqref{eq3}, form a close set of differential equations for variables $x, y, \Lambda_N, \rho_{\rm NN}, \rho_{00},\rho_{\rm target}$ as follows: \begin{equation} \label{eq4'} \begin{split} \dot{\Lambda}_N &= -2(\Gamma_{diss}+\gamma)\Lambda_N -2\Gamma x + 2\gamma (1-\rho_{00}-\rho_{\rm target}),\\ \dot{x} &= -(2\Gamma_{diss}+2\gamma+\Gamma)x + (2\gamma-\Gamma)\rho_{\rm NN} -JN_cy,\\ \dot{y} &= -(2\Gamma_{diss}+2\gamma+\Gamma)y + JN_cx -J \Lambda_N,\\ \dot{\rho}_{\rm NN} &= -2(\Gamma_{diss}+\Gamma)\rho_{\rm NN}-2Jy,\\ \dot{\rho}_{00} &= 2\Gamma_{diss} (1-\rho_{00}-\rho_{\rm target}),\\ \dot{\rho}_{\rm target} &= 2\Gamma\rho_{\rm NN}. 
\end{split} \end{equation} According to Eq.\eqref{assumption} and Figs.\ref{N6-8},\ref{N12},\ref{N20}, the initial conditions for different Platonic networks are assumed as: \begin{align}\label{initial-conditions} \begin{split} N=6,8:&\\ &\rho_{1}(0)\equiv \rho_{11}(0)=1,\:\rho_{ij}(0)=0,i,j \ne 1\\ N=12:&\\ &\rho_{1}(0)=\rho_{2}(0)=\rho_{3}(0)=\rho_{4}(0)=1/4, \:\rho_{ij}(0)=0,\:i \ne j \ne 1,2,3,4\\ N=20:&\\ &\rho_{1}(0)=\rho_{5}(0)=\rho_{9}(0)=1/3, \:\rho_{ij}(0)=0,\:i \ne j \ne 1,5,9 \end{split} \end{align} The above initial conditions yield: \begin{align} \begin{split} R_N(0) &= \sum_{i\in f_{\rm N_c}(N)} \rho_{Ni}=0, \:x(0)=y(0)=0,\\ \Lambda_N(0) &=\sum_{i\in f_{\rm N_c}(N)} R_i(0)=R_N(0)+R_{i\ne\{1-4,N\}\:\rm or,i\ne \{1,5,9,N\}}(0)=1. \end{split} \end{align} Now by applying the Laplace transform to Eqs.\eqref{eq4'} i.e. $t\rightarrow 1/s$, $\dot{\alpha}(t) \rightarrow \Big(\tilde{\alpha}.s -\alpha(0)\Big)$, we obtain: \begin{equation} \label{eq5'} \begin{split} (s+2\Gamma_{diss}+2\gamma)\tilde{\Lambda}_N + 2\Gamma \tilde{x} + 2\gamma \rho_{\rm target} + 2\gamma \rho_{00} - 2\gamma /s -1 =0,\\ (s+2\Gamma_{diss}+2\gamma+\Gamma)\tilde{x} + (\Gamma-2\gamma)\tilde{\rho}_{\rm NN} + JN_c\tilde{y} = 0,\\ (s+2\Gamma_{diss}+2\gamma+\Gamma)\tilde{y} + J\tilde{\Lambda}_N -JN_c\tilde{x} = 0,\\ (s+2\Gamma_{diss}+2\Gamma)\tilde{\rho}_{\rm NN} +2J\tilde{y} = 0,\\ (s+2\Gamma_{diss})\tilde{\rho}_{00} + 2\Gamma_{diss} \tilde{\rho}_{\rm target} -2\Gamma_{diss}/s = 0,\\ s\tilde{\rho}_{\rm target} - 2\Gamma\tilde{\rho}_{\rm NN} = 0. \end{split} \end{equation} Solving the complete set of Eqs.\eqref{eq5'}, the target sink population will be found for Platonic networks in the presence of homogeneous local noises as following: \begin{equation} \label{eq4} \begin{split} \bar{\rho}_{\rm target}= 4 \Gamma J ^ 2 \frac{ (\Gamma_B + s) (s + \Gamma_A)}{s \Delta(s)}\\ \Delta(s)= 8 \Gamma J ^ 2 \gamma (\Gamma_B + s) + (s + 2 \Gamma_{\rm diss})\\ ( (\Gamma_A + s) (\Gamma_C + s) (\Gamma_B + s) ^ 2\\ - 4 \Gamma J ^ 2 (\Gamma - 2 \gamma) \\ - 2 J ^ 2 N_c (\Gamma_A + s) (\Gamma - 2 \gamma)\\ + 2 \Gamma J ^ 2 N_c (\Gamma_C + s) \\ + J ^ 2 N_c ^ 2 (\Gamma_A + s) (\Gamma_C + s)) . \end{split} \end{equation} where \begin{align} \begin{split} \Gamma_A &=2\gamma+2\Gamma_{\rm diss}, \\ \Gamma_B &=2\gamma+2\Gamma_{\rm diss}+\Gamma,\\ \Gamma_C &=2\Gamma_{\rm diss}+2\Gamma \end{split} \end{align} This expression is equivalent to that of an FCN network, i.e. Eqs.(A25, A26) of \cite{J.Chem.Phys.2009}, where the Platonic coordinate number ($N_c$) is equivalent to the total number of sites. Since the root of the denominator in Eq.\eqref{eq4} does not have an analytical solution, the final target population of the considered Platonic networks are as following: \begin{equation} \label{eq5} \begin{split} \rho_{\rm target}(t \rightarrow \infty) = \lim_{s \rightarrow 0} [s \bar{\rho}_{\rm target}] = 4 \Gamma J ^ 2 \frac{\Gamma_B \Gamma_A}{ \Delta(0)} \end{split} \end{equation} It can be seen that in the noiseless environment ($\gamma, \Gamma \rightarrow 0$), the steady state target population is the same as the previous expression found in \cite{Javaherian2015} i.e. $1/(N_c -1)$, which can be here achieved by first tending the local dephasing rate to zero. If first tending the local dissipation rate to zero, the target population tends to $1$. 
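For clarity, these two orders of limits can be checked directly from Eq.\eqref{eq5}. Setting $\gamma=0$ first and keeping only the leading terms for small $\Gamma_{\rm diss}$ (so that $\Gamma_A \rightarrow 2\Gamma_{\rm diss}$, $\Gamma_B \rightarrow \Gamma$, $\Gamma_C \rightarrow 2\Gamma$) gives
\begin{equation}
\rho_{\rm target}(t \rightarrow \infty) \;\rightarrow\; \frac{4 \Gamma J^2 \, \Gamma \, (2\Gamma_{\rm diss})}{2\Gamma_{\rm diss}\left[\,2\Gamma J^2 N_c (2\Gamma) - 4\Gamma^2 J^2\,\right]} = \frac{1}{N_c-1},
\end{equation}
whereas setting $\Gamma_{\rm diss}=0$ first leaves $\Delta(0)=8\Gamma J^2 \gamma \Gamma_B$, so that
\begin{equation}
\rho_{\rm target}(t \rightarrow \infty) \;\rightarrow\; \frac{4 \Gamma J^2 \, \Gamma_B \, (2\gamma)}{8\Gamma J^2 \gamma \, \Gamma_B} = 1.
\end{equation}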
This is due to the fact that the local dissipation noise would only discharge the excitation from each site to the environment and not to the other sites, however the dephasing noise provides new paths of energy transport within network sites, leading to discharge of all excitation to the target site through the $N^{th}$ site. \begin{figure} \caption{The relation of network and environment variables in a noisy cubic Platonic network with optimal transport to the target site at steady state. Graph (a) shows the relation of coupling rate $J$ and dephasing noise $\gamma$ for different amounts of dissipation rate to the sink ($\Gamma$), while dissipation rate to the local environments are fixed at $\Gamma_{\rm diss}=10$. Graph (b) shows the same relations for different values of $\Gamma_{\rm diss}$ and fixed amount of $\Gamma=10$. It could be indicated from the graphs that the coupling strength should be increases by increasing other parameters to maintain the full transport. The values of parameters could be used for optimal design of Platonic networks.} \label{Fig.2} \end{figure} \begin{figure} \caption{This graph shows the relation of sites’ couplings and the environmental dephasing rate for different Platonic networks with optimal transport at steady state. It could be seen that for networks with more $Nc$ or number of nearest neighbours, the coupling rate $J$ of each pair of sites is less. This indicates that more sites with less coupling strength are equivalent to less number of sites with higher coupling rates. } \label{Fig.3} \end{figure} From the numerical investigations we know that the maximum value of the target population in the presence of noises is one. To find the network-environment parameters corresponding the maximum excitation transport, we equate the target population of Eq.\eqref{eq5} to one, and find a relation among all network and environment parameters. The resulting equation can be used to find the optimal design variables. For example, the coupling rate $J$ could be found in terms of other parameters. Figs.\ref{Fig.2}(a),(b) show the relation between $J$, the coupling rate between the nearest neighbors in Platonic networks, and the Markovian dephasing rate $\gamma$, for different values of dissipation rates from each site to the environment ($\Gamma_{\rm diss}$), and also the different values of dissipation rates from the $N^{th}$ site to the sink site ($\Gamma$). The network of consideration for both parts (a) and (b) is a cubic lattice with $N=8$ main sites, and the constant parameters are $\Gamma_{\rm diss}=10$ and $\Gamma =10$, respectively. It can be seen in Fig.\ref{Fig.2} that for the fixed chosen parameters, by increasing the dephasing rate, the coupling strength should be increased, so that the disturbing effect of dephasing noise would be compensated on energy transport towards the sink. It can be also seen from Fig.\ref{Fig.2}(a) that for a fixed depahsing noise rate $\gamma$ and the fixed chosen dissipation noise rate $\Gamma_{\rm diss}=10$, to maintain the maximum transport, the nearest neighbors sites couplings should be increased, by increasing the dissipation rate to the target sink ($\Gamma$). In other words, in the fixed environment with the same dephasing and dissipation noise rates, the higher sites' couplings demands faster noise rates to the sink site to obtain the optimal design corresponding the maximum energy transfer. 
To understand this behaviour, note that a higher coupling rate between the sites results in stronger energy bouncing among the sites, which demands a higher coupling rate towards the sink. It should also be taken into account that the relation among the parameters for maximum transport is nonlinear. In such noisy Platonic networks, since the nonzero dissipation rate $\Gamma_{\rm diss}=10$ irreversibly transfers some energy to the environment, reaching full energy transport requires the coupling rate and the sink dissipation rate to be high or fast enough to transfer all the energy before any fraction of it is dissipated to the surrounding environment. Part of this process could be supported by dephasing-noise-assisted transport \cite{PhysRevA.90.042313, PhysRevA.83.013811}. This fact can be seen in Fig.\ref{Fig.2}(b), which shows the relation of the network-environment parameters for the fixed value $\Gamma=10$. It can be seen that for a strong dissipation rate ($\Gamma_{\rm diss}=100$), the sites' coupling rate should be increased to be able to transfer the energy fast enough before it is dissipated to the environment. Fig.\ref{Fig.3} shows the optimal design graphs of Platonic networks with different numbers of sites. The constant parameters are chosen as $\Gamma_{\rm diss}=\Gamma=10$. It can be seen that for a fixed dephasing rate, by increasing $N_c$ of a Platonic network (the number of nearest neighbours of any site plus one), the coupling rate $J$ should be decreased to obtain maximum transport. This can be understood by the fact that increasing $N_c$ increases the number of interactions of a single site. So for the given fixed initial excitation (one unit) in each Platonic network, a smaller coupling rate between each pair of sites suffices to maintain maximum transport. In summary, Figs.\ref{Fig.2} and \ref{Fig.3} provide the information needed to design 3D noisy Platonic networks that can fully transfer the energy from the designated sites towards a target sink. Using these optimal design graphs helps us to prevent the loss of energy excitation into the surrounding Markovian environment through the local dissipation and dephasing noise channels. \section*{Conclusion} Energy transport is an inevitable phenomenon in many atomic-scale networks. In this work, we numerically studied the characteristics of energy dynamics in Platonic quantum networks consisting of $4,6,8,12$, and $20$ qubits located on the vertices of highly symmetric 3D Platonic geometries. A target site was assumed to be dissipatively connected to one of the qubits. Due to the opposite or identical patterns of the qubits' population oscillations, we reduced the number of qubits of each network to an effective value equal to the size of one group of nearest-neighbour sites within each network. We found the analytical expression for the target site population in the presence of environmental Markovian dephasing and dissipation noises. In addition, we investigated the optimal design characteristics of Platonic networks for maximum energy transport from the first site towards the target site. We plotted the relation between the coupling strength and the dephasing noise rate corresponding to the maximum transport. The optimal designs of Platonic quantum networks could have several applications, such as switches or multiplexers in quantum devices. In the future, we hope to further analyse energy transport in three-dimensional Platonic devices and investigate their physical implementations. \end{document}
arXiv
# Describing the data and assumptions for regression analysis

- What is regression analysis?
- The types of data used in regression analysis
- The assumptions made in regression analysis

Regression analysis is a statistical method used to model the relationship between one dependent variable and one or more independent variables. It is widely used in various fields, including economics, finance, and the social sciences, to understand the relationships between variables and make predictions.

Regression analysis is based on the assumption that the relationship between the dependent and independent variables can be approximated by a linear equation. This equation can be represented as:

$$y = b_0 + b_1x_1 + b_2x_2 + \ldots + b_nx_n + \epsilon$$

where $y$ is the dependent variable, $x_1, x_2, \ldots, x_n$ are the independent variables, $b_0$ is the intercept, $b_1, b_2, \ldots, b_n$ are the coefficients, and $\epsilon$ is the error term.

## Exercise

What are the assumptions made in regression analysis?

Instructions: List the assumptions made in regression analysis.

### Solution

1. Linearity: The relationship between the dependent and independent variables is linear.
2. Independence of observations: The observations are independent of each other.
3. Homoscedasticity: The variance of the error term is constant.
4. Normality: The error term follows a normal distribution.
5. Lack of multicollinearity: There is no multicollinearity among the independent variables.

# Linear regression with one variable

To perform linear regression with one variable, follow these steps:

1. Collect the data: Gather the data for the dependent and independent variables.
2. Prepare the data: Clean the data by handling missing values and outliers.
3. Fit the model: Use the `GLM.jl` package in Julia to fit the linear regression model.
4. Interpret the results: Analyze the coefficients and their significance.

Here's an example of how to perform linear regression with one variable using Julia:

```julia
using GLM
using DataFrames

# Sample data
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]

# Fit the linear regression model
model = lm(@formula(y ~ x), DataFrame(x = x, y = y))

# Print the coefficients
println(coef(model))
```

In this example, we use the `GLM.jl` package (together with `DataFrames.jl` to hold the data) to fit a linear regression model with one independent variable. The `coef` function returns the coefficients of the model, which can be used to make predictions and interpret the results.

# Linear regression with multiple variables

To perform linear regression with multiple variables, follow these steps:

1. Collect the data: Gather the data for the dependent variable and multiple independent variables.
2. Prepare the data: Clean the data by handling missing values and outliers.
3. Fit the model: Use the `GLM.jl` package in Julia to fit the linear regression model.
4. Interpret the results: Analyze the coefficients and their significance.

Here's an example of how to perform linear regression with multiple variables using Julia:

```julia
using GLM
using DataFrames

# Sample data (x2 is deliberately not a multiple of x1, so the predictors are not collinear)
x1 = [1, 2, 3, 4, 5]
x2 = [1, 3, 2, 5, 4]
y = [4.1, 7.9, 9.2, 13.8, 15.1]

# Fit the linear regression model
model = lm(@formula(y ~ x1 + x2), DataFrame(x1 = x1, x2 = x2, y = y))

# Print the coefficients
println(coef(model))
```

In this example, we use the `GLM.jl` package to fit a linear regression model with multiple independent variables. The `coef` function returns the coefficients of the model, which can be used to make predictions and interpret the results.
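Beyond the raw coefficient vector, it is often useful to look at a fuller model summary and to generate predictions for new data. The following sketch reuses the two-variable model above; the new-data values are made up purely for illustration:

```julia
using GLM
using DataFrames

# Refit the two-variable model from the previous example
data = DataFrame(x1 = [1, 2, 3, 4, 5],
                 x2 = [1, 3, 2, 5, 4],
                 y  = [4.1, 7.9, 9.2, 13.8, 15.1])
model = lm(@formula(y ~ x1 + x2), data)

# Full coefficient table: estimates, standard errors, t-statistics, p-values
println(coeftable(model))

# Goodness of fit
println("R² = ", r2(model))

# Predictions for new (hypothetical) observations
newdata = DataFrame(x1 = [6, 7], x2 = [3, 6])
println(predict(model, newdata))
```

The `coeftable`, `r2`, and `predict` functions give a quick overview of how well the model fits and how it behaves on data it has not seen.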
# ANOVA for comparing multiple linear regression models

To perform an ANOVA-style comparison of nested linear regression models, follow these steps:

1. Collect the data: Gather the data for the dependent variable and multiple independent variables.
2. Prepare the data: Clean the data by handling missing values and outliers.
3. Fit the models: Use the `GLM.jl` package in Julia to fit the nested linear regression models.
4. Perform the comparison: Use the `ftest` function from `GLM.jl` to compare the nested models with an F-test.
5. Interpret the results: Analyze the F-test output and make conclusions about the models.

Here's an example of how to compare two nested linear regression models using Julia:

```julia
using GLM
using DataFrames

# Sample data (same as in the multiple-variable example)
x1 = [1, 2, 3, 4, 5]
x2 = [1, 3, 2, 5, 4]
y = [4.1, 7.9, 9.2, 13.8, 15.1]

data = DataFrame(x1 = x1, x2 = x2, y = y)

# Fit the nested linear regression models
model1 = lm(@formula(y ~ x1), data)
model2 = lm(@formula(y ~ x1 + x2), data)

# Compare the nested models with an F-test (the ANOVA-style comparison)
ftest_result = ftest(model2.model, model1.model)

# Print the F-test table
println(ftest_result)
```

In this example, we use the `GLM.jl` package both to fit the nested linear regression models and to compare them with its `ftest` function. The resulting table can be used to decide whether adding `x2` significantly improves the fit.

# t-tests for comparing means

To perform t-tests for comparing means, follow these steps:

1. Collect the data: Gather the data for the two groups.
2. Prepare the data: Clean the data by handling missing values and outliers.
3. Perform t-tests: Use the `HypothesisTests.jl` package in Julia to perform t-tests and compare the means.
4. Interpret the results: Analyze the t-test results and make conclusions about the means.

Here's an example of how to perform a t-test for comparing means using Julia:

```julia
using HypothesisTests

# Sample data
group1 = [2, 4, 6, 8, 10]
group2 = [4, 8, 12, 16, 20]

# Perform Welch's two-sample t-test (does not assume equal variances)
t_test_result = UnequalVarianceTTest(group1, group2)

# Print the t-test result
println(t_test_result)
```

In this example, we use the `HypothesisTests.jl` package to perform a two-sample t-test and compare the means of two groups. The t-test result can be used to make conclusions about the means and their relationship.

# Chi-square tests for independence

To perform chi-square tests for testing the independence of two categorical variables, follow these steps:

1. Collect the data: Gather the data for the two categorical variables.
2. Prepare the data: Clean the data by handling missing values and outliers.
3. Perform chi-square tests: Tabulate the observed counts and use the `HypothesisTests.jl` package in Julia to test the independence of the variables.
4. Interpret the results: Analyze the chi-square test results and make conclusions about the independence of the variables.

Here's an example of how to perform a chi-square test for the independence of two categorical variables using Julia:

```julia
using HypothesisTests

# Sample data: two categorical variables observed on the same subjects
variable1 = ["A", "B", "A", "B", "A"]
variable2 = ["C", "D", "C", "D", "C"]

# Build the contingency table of observed counts
levels1 = unique(variable1)
levels2 = unique(variable2)
counts = [sum((variable1 .== a) .& (variable2 .== b)) for a in levels1, b in levels2]

# Pearson chi-square test of independence on the contingency table
chi_square_result = ChisqTest(counts)

# Print the chi-square test result
println(chi_square_result)
```

In this example, we tabulate the two categorical variables into a contingency table of counts and use the `ChisqTest` function from `HypothesisTests.jl` to test their independence. With such a tiny sample the test is only illustrative, but the result can be used to make conclusions about the relationship between the variables.
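The printed test objects are informative, but in practice you usually want to extract specific quantities programmatically. The following sketch reuses the two groups from the t-test example earlier in this section and shows the generic accessors provided by `HypothesisTests.jl`:

```julia
using HypothesisTests

group1 = [2, 4, 6, 8, 10]
group2 = [4, 8, 12, 16, 20]

# Welch's two-sample t-test, as in the example above
t_test_result = UnequalVarianceTTest(group1, group2)

# p-value of the test
println("p-value: ", pvalue(t_test_result))

# 95% confidence interval for the difference in means
println("95% CI: ", confint(t_test_result))

# A simple decision rule at the 5% significance level
if pvalue(t_test_result) < 0.05
    println("Reject the null hypothesis of equal means.")
else
    println("Fail to reject the null hypothesis of equal means.")
end
```

The same `pvalue` and `confint` accessors work for most test objects in the package, which makes it easy to automate decisions across many tests.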
# Handling missing data and outliers in regression analysis

To handle missing data and outliers in regression analysis, follow these steps:

1. Detect missing data: Identify the presence of missing data in the dataset.
2. Deal with missing data: Use techniques such as deletion, imputation, or regression models that can handle missing data.
3. Detect outliers: Identify the presence of outliers in the dataset.
4. Deal with outliers: Use techniques such as deletion, transformation, or regression models that can handle outliers.

Here's an example of how to handle missing data and outliers in regression analysis using Julia:

```julia
using DataFrames
using DataFramesMeta
using Statistics
using GLM

# Sample data (x contains a missing value)
x = [1, 2, 3, 4, 5, missing, 7, 8, 9, 10]
y = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

# Create a DataFrame
data = DataFrame(x = x, y = y)

# Handle missing data: keep only the complete rows
data = data[completecases(data), :]

# Handle outliers: keep rows whose y value lies within two standard deviations of the mean
y_mean, y_std = mean(data.y), std(data.y)
data = @subset(data, abs.(:y .- y_mean) .< 2 * y_std)

# Perform regression analysis
model = lm(@formula(y ~ x), data)
```

In this example, we use the `DataFrames.jl` and `DataFramesMeta.jl` packages to handle missing data and outliers in regression analysis. The data is cleaned by deleting rows with missing values and rows whose response lies far from the mean, and then the linear regression model is fitted using the `GLM.jl` package.

# Applying regression analysis to real-world datasets

To apply regression analysis to real-world datasets, follow these steps:

1. Select appropriate variables: Choose the independent and dependent variables that are relevant to the problem.
2. Handle missing data and outliers: Clean the data by handling missing values and outliers.
3. Perform regression analysis: Use the `GLM.jl` package in Julia to fit the linear regression model.
4. Interpret the results: Analyze the coefficients and their significance to make predictions and draw conclusions.

Here's an example of how to apply regression analysis to a real-world dataset using Julia (the file name and column names are placeholders for your own dataset):

```julia
using CSV
using DataFrames
using GLM

# Load the dataset
data = DataFrame(CSV.File("dataset.csv"))

# Select the appropriate variables
data = select(data, :independent_variable, :dependent_variable)

# Handle missing data
data = data[completecases(data), :]

# Perform regression analysis
model = lm(@formula(dependent_variable ~ independent_variable), data)

# Interpret the results
println(coef(model))
```

In this example, we use the `CSV.jl`, `DataFrames.jl`, and `GLM.jl` packages to apply regression analysis to a real-world dataset. The data is cleaned by handling missing values, and then the linear regression model is fitted using the `GLM.jl` package. The coefficients of the model can be used to make predictions and draw conclusions.

# Interpreting and visualizing regression results

To interpret and visualize regression results, follow these steps:

1. Analyze the coefficients: Interpret the coefficients of the linear regression model to understand the relationship between the independent and dependent variables.
2. Analyze the significance: Determine the significance of the coefficients to make conclusions about the relationship between the variables.
3. Visualize the results: Use appropriate plots such as scatter plots, line plots, or regression plots to visualize the relationship between the variables and the results of the regression analysis.
Here's an example of how to interpret and visualize regression results using Julia:

```julia
using GLM
using DataFrames
using Plots

# Sample data
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]

# Fit the linear regression model
model = lm(@formula(y ~ x), DataFrame(x = x, y = y))

# Interpret the coefficients
coef_interpretation = coef(model)
println(coef_interpretation)

# Visualize the results
scatter(x, y, label = "Data")
x_range = range(minimum(x), maximum(x), length = 100)
y_range = predict(model, DataFrame(x = collect(x_range)))
plot!(x_range, y_range, label = "Regression line")
```

In this example, we use the `Plots.jl` package to visualize the regression results. The coefficients of the model are interpreted to understand the relationship between the independent and dependent variables. The data and the regression line are visualized using a scatter plot and a line plot, respectively.

# Extensions and advanced topics in regression analysis and statistical tests using Julia

To extend regression analysis and statistical tests using Julia, follow these steps:

1. Extend regression analysis: Apply the techniques to more complex problems such as nonlinear regression, logistic regression, or multinomial regression.
2. Explore advanced statistical tests: Use advanced statistical tests such as nonparametric tests, robust tests, or Bayesian tests.
3. Explore advanced models: Apply advanced models such as random forests, support vector machines, or neural networks to regression analysis and statistical tests.

Here's an example of how to extend regression analysis and statistical tests using Julia:

```julia
using GLM
using DataFrames
using HypothesisTests

# Sample data (same as in the earlier model-comparison example)
x1 = [1, 2, 3, 4, 5]
x2 = [1, 3, 2, 5, 4]
y = [4.1, 7.9, 9.2, 13.8, 15.1]

data = DataFrame(x1 = x1, x2 = x2, y = y)

# Fit the nested linear regression models
model1 = lm(@formula(y ~ x1), data)
model2 = lm(@formula(y ~ x1 + x2), data)

# Compare the nested models with an F-test (ANOVA)
ftest_result = ftest(model2.model, model1.model)
println(ftest_result)

# Perform a nonparametric test: Kruskal-Wallis comparison of the two samples (illustrative only)
nonparametric_test_result = KruskalWallisTest(y, float(x1))
println(nonparametric_test_result)
```

In this example, we use the `GLM.jl` and `HypothesisTests.jl` packages to extend regression analysis and statistical tests using Julia. The nested models are again compared with an F-test, and the Kruskal-Wallis test is shown as an example of a nonparametric test; its result is analyzed and interpreted in the same way as the parametric tests above.
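As a taste of the extensions listed above, logistic regression is available through the same `GLM.jl` interface by choosing a binomial family with a logit link. The data below is made up purely for illustration:

```julia
using GLM
using DataFrames

# Hypothetical data: a binary outcome and one continuous predictor
data = DataFrame(
    hours  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0],
    passed = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1],
)

# Fit a logistic regression model: Binomial family with a logit link
logit_model = glm(@formula(passed ~ hours), data, Binomial(), LogitLink())

# Coefficients are on the log-odds scale
println(coeftable(logit_model))

# Predicted probabilities of the outcome for new values of the predictor
newdata = DataFrame(hours = [1.0, 3.0, 5.0])
println(predict(logit_model, newdata))
```

Because `glm` shares the formula interface with `lm`, the same `coeftable` and `predict` workflow carries over directly; only the family and link change.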
Textbooks
Floor Area Ratio: Definition, Formula To Calculate, Example
What Is Floor Area Ratio?
The floor area ratio is the relationship between the total amount of usable floor area that a building has, or has been permitted to have, and the total area of the lot on which the building stands. A higher ratio generally indicates dense or urban construction. Local governments use the floor area ratio in zoning codes.
You may determine the ratio by dividing the total or gross floor area of the building by the gross area of the lot:
$$\text{Floor Area Ratio} = \frac{\text{Total Building Floor Area}}{\text{Gross Lot Area}}$$
What Does the Floor Area Ratio Tell You?
The floor area ratio accounts for the entire floor area of a building, not simply the building's footprint. Excluded from the square footage calculation are unoccupied areas such as basements, parking garages, stairs, and elevator shafts. Buildings with different numbers of stories may have the same floor-area-ratio value.
Every city has a limited capacity, or limited space, that can be utilized safely. Any use beyond this point puts undue stress on a city. This is sometimes known as the safe load factor. The floor area ratio is variable because population dynamics, growth patterns, and construction activities vary, and because the nature of the land or space where a building is placed varies. Industrial, residential, commercial, agricultural, and nonagricultural spaces have differing safe load factors, so they typically have differing floor area ratios. In the end, local governments establish the regulations and restrictions that determine the floor area ratio.
The floor area ratio is a key determining factor for development in any country. A low floor area ratio is generally a deterrent to construction. Many industries, largely the real estate industry, seek increases in the floor area ratio to open up space and land resources to developers. An increased floor area ratio allows a developer to complete more building projects, which typically leads to greater sales, decreased expenditures per project, and greater supply to meet demand.
The floor area ratio is the relationship of the total usable floor area of a building relative to the total area of the lot on which the building stands. A higher ratio usually indicates a dense or highly urbanized area. Floor area ratios vary based on structure type, such as industrial, residential, commercial, or agricultural.
Example of How to Use the Floor Area Ratio
The floor area ratio of a 1,000-square-foot building with one story situated on a 4,000-square-foot lot would be 0.25x. A two-story building on the same lot, where each floor was 500 square feet, would have the same floor-area-ratio value. Considered another way, suppose a lot with a square footage of 1,000 has a floor area ratio of 2.0x.
In this scenario, a developer could construct a building with as much as 2,000 square feet of floor area, for example a two-story building with a 1,000-square-foot footprint.
As a real-life example, consider an apartment building for sale in Charlotte, North Carolina. The asking price for the apartment complex is $3 million, and the building spans 17,350 square feet. The entire lot is 1.81 acres, or 78,843 square feet. The floor area ratio is 0.22x, or 17,350 divided by 78,843 (a short calculation sketch appears at the end of this article).
The Difference Between the Floor Area Ratio and Lot Coverage
Though the floor area ratio measures the size of a building's floor area relative to its lot, lot coverage takes into account the footprints of all buildings and structures on the lot. The lot coverage ratio includes structures such as garages, swimming pools, and sheds, including nonconforming buildings.
Limitations of Using the Floor Area Ratio
The impact that the floor area ratio has on land value cuts both ways. In some instances, an increased floor area ratio may make a property more valuable if, for example, an apartment complex can be built that allows for more spacious rentals or more tenants. However, a developer who can build a larger apartment complex on one piece of land may decrease the value of an adjoining property whose high sale value was bolstered by a view that is now obstructed.
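To make the arithmetic above concrete, here is a minimal sketch of the calculation (the helper function and its name are ours, introduced only for illustration):

```julia
# Floor area ratio = total building floor area / gross lot area
floor_area_ratio(total_floor_area, lot_area) = total_floor_area / lot_area

# One-story, 1,000-square-foot building on a 4,000-square-foot lot
println(floor_area_ratio(1_000, 4_000))                        # 0.25

# Charlotte apartment example: 17,350 sq ft of floor area on a 78,843 sq ft lot
println(round(floor_area_ratio(17_350, 78_843), digits = 2))   # approximately 0.22
```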
CommonCrawl